At Form3 I have recently solved an interesting problem: routing our outbound web hook calls via Nginx. Before I dive into how I achieved this, I want to set the scene as to why you would want to do this.
For web hook calls your customer has to open up an https endpoint on the internet, so you really want to present a client certificate so that the person receiving your call can verify it really is you calling them. You could add the client auth code directly inside your service (ours is written in Java), although this becomes a pain for a number of reasons. Firstly, the code for doing this in Java is pretty cumbersome. Secondly, it means you now need to deploy the client certificate inside your application, or load it in somehow, which shouldn't really be a concern of the application.
The solution to all of this is to route the outbound call through Nginx and let Nginx take care of the client auth. This is much neater as it means the application doesn't need to worry about any certificates; instead it can simply post to Nginx and let Nginx do all of the heavy TLS lifting.
The above design is what we ended up with. Note we route everything Linkerd-to-Linkerd. If you want to read more about how we set this up on AWS ECS you can read my blog post on it. If the web hook url for a customer was https://kevinholditch.co.uk/callback then the notification api posts it through Linkerd to Nginx using the url https://localhost:4140/notification-proxy/?address=https%3A%2F%2Fkevinholditch.co.uk%2Fcallback. Note we have to url encode the callback address. We then set up Nginx to proxy the request on to the address provided in the address query string parameter. This is done using the following Nginx server config:
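The original config is not shown here, so what follows is a sketch reconstructed from the description in this post. The port, location name and certificate paths are illustrative, and the block is laid out so that the line numbers referenced in the surrounding text line up (resolver on line 6, proxy_pass on line 12, the client cert directives on lines 13 and 14):

```nginx
server {
    listen 4140;

    location /notification-proxy/ {
        # tell Nginx which DNS server to use to resolve the proxied address
        resolver 127.0.0.11;

        # url decode the address query string parameter
        set_by_lua $notification_url "return ngx.unescape_uri(ngx.var.arg_address)";

        # proxy the request on to the decoded address, presenting the client cert
        proxy_pass $notification_url;
        proxy_ssl_certificate /etc/nginx/ssl/client.crt;
        proxy_ssl_certificate_key /etc/nginx/ssl/client.key;
    }
}
```

Note that set_by_lua requires Nginx to be built with the lua-nginx-module (as in OpenResty), and that proxy_pass with a variable is exactly what forces Nginx to need the resolver directive.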
The set_by_lua block is a piece of Lua code that url decodes the address query string parameter. The ngx.var.arg_address variable relates to the address query string parameter; note that you can replace address with anything to read any other parameter from the query string. This address is then used on line 12 as the proxy_pass parameter. Lines 13 and 14 do the client auth signing.
The last trick to making all of this work was realising that the IP address on line 6 needs to change based on where you are running. This line tells Nginx which DNS server to use to look up the proxy address. Locally on my machine I need to use 127.0.0.11 (the docker DNS server), however on AWS this address changes. The last part of the jigsaw was working out how to find this address dynamically, which you can do by issuing the following command: cat /etc/resolv.conf | grep nameserver | cut -d' ' -f2-. Once I had cracked that, I updated the Nginx config file to be a template:
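The template itself is not shown, but the idea is that the hard-coded DNS server IP is replaced with a placeholder that gets substituted at startup. A sketch, assuming the variable is called NAMESERVER:

```nginx
location /notification-proxy/ {
    # ${NAMESERVER} is replaced by envsubst when the container starts
    resolver ${NAMESERVER};

    set_by_lua $notification_url "return ngx.unescape_uri(ngx.var.arg_address)";

    proxy_pass $notification_url;
    proxy_ssl_certificate /etc/nginx/ssl/client.crt;
    proxy_ssl_certificate_key /etc/nginx/ssl/client.key;
}
```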
We then use something like the following startup script in our docker container (which derives from an Nginx base image):
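A minimal sketch of such a startup script, assuming the template lives at /etc/nginx/nginx.conf.template and the placeholder is called NAMESERVER (both names are illustrative):

```shell
#!/bin/sh
# Look up the local DNS server; this works both locally (docker's 127.0.0.11)
# and on AWS, where the address differs
export NAMESERVER=$(cat /etc/resolv.conf | grep nameserver | cut -d' ' -f2-)

# Substitute ${NAMESERVER} in the template to produce the real config
envsubst '${NAMESERVER}' < /etc/nginx/nginx.conf.template > /etc/nginx/nginx.conf

# Hand over to Nginx in the foreground
exec nginx -g 'daemon off;'
```

Passing the variable name explicitly to envsubst matters, otherwise it would also try to substitute Nginx's own $variables in the config.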
The clever part here is that we are setting an environment variable on the fly with the IP address of the DNS server, using the command we worked out above. Then we are using envsubst to substitute any environment variables in our template config file and write the templated file out to disk. So when Nginx starts the IP address will be correct. This all happens as the container starts, so wherever the container is running (locally or on AWS) it will pick up the correct IP address and work.
I like to practice the approach of full stack component testing, where the guiding principle is that you test the entire component from as high a level as possible and only stub out third party dependencies (read: other APIs) or anything that isn't easily available as a docker container. I have recently started a golang project to write a client for Kong and I thought this would be a good opportunity to use this testing strategy in a golang project.
I love Go, but the one thing I don't like so much is the approach most people seem to take to testing. A lot of tests are written at the method level, where your tests end up tightly coupled to your implementation. This is a bad thing. You know a test is not very good if you have to change it when you change your implementation. The reason this is bad is that you are changing your test at the same time as you are changing your code, so once you have finished you have no way of knowing whether the new implementation still works, as the test has changed too. It also restricts how much you can get in and edit the implementation, as you are constantly having to update the way the tests mock everything out. By testing at the top component level the tests do not care about the implementation, and the code runs with real components, so it works how it will in the real world. By writing tests in this way I have seen a lot fewer defects and have never had to manually debug something.
Anyway back to the subject of the full stack testing approach in Go. To start I used the excellent dockertest project which gives you a great API to start and stop docker containers. I then took advantage of the fact that in a Go project there is a special test function that gets called for every test run:
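That special function is TestMain; a minimal skeleton looks like this:

```go
package kong_test

import (
	"os"
	"testing"
)

// TestMain runs once for the whole package, letting you wrap the
// entire test run with setup and teardown code.
func TestMain(m *testing.M) {
	//setup

	code := m.Run()

	//teardown

	os.Exit(code)
}
```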
In the above method you can put your test setup code where I have placed the //setup comment, and your teardown code where I have placed the //teardown comment. The code returned by m.Run() is the exit code from the test run. Go sets this to non-zero if the test run fails, so you need to exit with this code to make your build fail when the test run fails. Using this method I can start the kong docker container, run the tests and then stop the kong docker container. Here is the full TestMain code at the time of writing:
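The exact code is not reproduced here; the following is a sketch based on the description in this post, with the StartKong and StopKong helpers (which wrap dockertest to boot and remove the Kong container) left as stubs:

```go
package kong_test

import (
	"os"
	"testing"
)

// defaultKongVersion is used when KONG_VERSION is not set.
const defaultKongVersion = "0.11"

func TestMain(m *testing.M) {
	// Pick the Kong version from the environment, falling back to the default
	kongVersion := os.Getenv("KONG_VERSION")
	if kongVersion == "" {
		kongVersion = defaultKongVersion
	}

	// Boot the Kong docker container (helper wrapping dockertest, body omitted)
	StartKong(kongVersion)

	code := m.Run()

	// Remove the Kong docker container (body omitted)
	StopKong()

	os.Exit(code)
}
```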
I have wrapped the starting and stopping of the kong container in a method to abstract away the detail. Notice how the StartKong method takes the Kong version as a parameter. It gets the Kong version either from the KONG_VERSION environment variable or, if that is not set, falls back to the default Kong version, which I had set to the latest version (0.11) at the time of writing. The cool thing about this is that if I want to run my tests against a different version of Kong I can do so easily by changing this value. Even better, I can run the build against multiple versions of Kong on travis-ci by taking advantage of the env matrix feature: if you list multiple values for an environment variable, travis-ci will automatically run a build for each entry. This makes it really easy to run the whole test pack against multiple versions of Kong, which is pretty neat. You can check out the gokong build to see this in action!
The one part you may be wondering about is how I get the url of the container that Kong is running on for use in my tests. That is done by setting the KONG_ADMIN_ADDR environment variable. The client uses that environment variable if set, and if not then it defaults to
With all of this in place I can test the client by hitting a real running Kong in a container, no mocks in sight! How cool is that? Plus I can run against any version of Kong that is built as a docker container with the flick of a switch!
Here is an example of what a test looks like so you can get a feel:
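The original test is not shown here; the following sketch conveys the shape of such a test (the client constructor and Status call are hypothetical names for illustration, not necessarily the real gokong API):

```go
package kong_test

import "testing"

func Test_StatusGet(t *testing.T) {
	// NewClient and NewDefaultConfig are illustrative; the default config
	// is where KONG_ADMIN_ADDR gets picked up
	client := NewClient(NewDefaultConfig())

	// This hits the real Kong container started in TestMain
	status, err := client.Status().Get()

	if err != nil {
		t.Fatalf("could not get status: %v", err)
	}
	if status == nil {
		t.Fatal("expected a status result")
	}
}
```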
I think that is really clean and readable. All of the code that boots up and tears down Kong is out of sight and you can just concentrate on the test. Again with no mocks around 🙂
If you want to see the rest of the code or help contribute to my gokong project that would be great. I look forward to any feedback you have on this.
I wanted to add a short post to describe how to automate the downloading of releases from private github repositories using a bash script or in a Docker build.
To start you need to create a Github token that has access to your repository. Once you have your token you can use the following bash script filling in the relevant details:
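The original script is not shown, so here is a sketch of the standard approach using the GitHub releases API, with jq to pick the asset id out of the release metadata (the owner, repo, version and file names are placeholders to fill in):

```shell
#!/bin/bash
set -euo pipefail

# Fill these in with your own values
GITHUB_TOKEN="<your-token>"
OWNER="your-org"
REPO="your-repo"
VERSION="v1.0.0"
FILE="release.tar.gz"

# Look up the release by tag and extract the id of the asset we want
ASSET_ID=$(curl -s -H "Authorization: token ${GITHUB_TOKEN}" \
  "https://api.github.com/repos/${OWNER}/${REPO}/releases/tags/${VERSION}" \
  | jq -r ".assets[] | select(.name == \"${FILE}\") | .id")

# Download the asset; the octet-stream Accept header makes the API
# return the binary instead of the asset's JSON metadata
curl -sL -H "Authorization: token ${GITHUB_TOKEN}" \
  -H "Accept: application/octet-stream" \
  -o "/tmp/${FILE}" \
  "https://api.github.com/repos/${OWNER}/${REPO}/releases/assets/${ASSET_ID}"
```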
This script will download your release to the /tmp/ directory; from there you can untar it, move it and so on.
To take this a stage further if you want to download your release as part of a docker build you can use the
Dockerfile snippet below to give you a starting point:
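The snippet itself is not reproduced here; a sketch of the idea, with an illustrative base image and placeholder repo details:

```dockerfile
# Base image is illustrative
FROM alpine:3.7

# The token is supplied at build time via --build-arg rather than
# being baked into the image
ARG GITHUB_TOKEN

RUN apk add --no-cache curl jq

# Same two API calls as the bash script: find the asset id, then download it
RUN ASSET_ID=$(curl -s -H "Authorization: token ${GITHUB_TOKEN}" \
      "https://api.github.com/repos/your-org/your-repo/releases/tags/v1.0.0" \
      | jq -r '.assets[0].id') \
 && curl -sL -H "Authorization: token ${GITHUB_TOKEN}" \
      -H "Accept: application/octet-stream" \
      -o /tmp/release.tar.gz \
      "https://api.github.com/repos/your-org/your-repo/releases/assets/${ASSET_ID}"
```

Be aware that a build arg used in a RUN command can still be recovered from the image history, so treat images built this way as private.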
The trick here is that we are passing in the GITHUB_TOKEN using a docker build arg. This allows you to build the container on travis by setting a secure env variable and then passing that into your docker build script as the docker build arg parameter. For example:
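A sketch of such a build script (the image name is a placeholder):

```shell
#!/bin/bash
set -eu

# Fail fast with a non-zero exit code if no token is available,
# halting the build
if [ -z "${GITHUB_TOKEN:-}" ]; then
  echo "GITHUB_TOKEN must be set" >&2
  exit 1
fi

# Pass the token through to the Dockerfile's ARG
docker build --build-arg GITHUB_TOKEN="${GITHUB_TOKEN}" -t myorg/myservice .
```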
In the script above we check that the GITHUB_TOKEN env variable is set; if it isn't, we terminate with a non-zero exit code, halting the build. This allows developers to run the build with their own GITHUB_TOKEN, and you can run the same build on travis by setting a secure env variable (or the equivalent in whichever build server you are using).