At Form3 I recently solved an interesting problem: routing our outbound webhook calls via Nginx. Before I dive into how I achieved this, I want to set the landscape as to why you would want to do this.
For webhook calls your customer has to open up an HTTPS endpoint on the internet, so you really want to present a client certificate so that the person receiving your call can verify it really is you calling them. You could add the client auth code directly inside your service (ours is written in Java), but this becomes a pain for a couple of reasons. Firstly, the code for doing this in Java is pretty cumbersome. Secondly, it means you now need to deploy the client certificate inside your application, or load it in somehow, which shouldn't really be the concern of the application.
The solution to all of this is to route the outbound call through Nginx and let Nginx take care of the client auth. This is much neater, as the application doesn't need to worry about any certificates; it can simply post to Nginx and let Nginx do all of the heavy TLS lifting.
The design above is what we ended up with. Note that we route everything Linkerd to Linkerd; if you want to read more about how we set this up on AWS ECS, you can read my blog post on it. If the webhook URL for a customer was https://kevinholditch.co.uk/callback, then the notification API would post this through Linkerd to Nginx using the URL https://localhost:4140/notification-proxy/?address=https%3A%2F%2Fkevinholditch.co.uk%2Fcallback. Note that we have to URL-encode the callback address. We then set up Nginx to proxy the request on to the address provided in the address query string parameter. This is done using the following Nginx server config:
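A minimal sketch of what that server config looks like, assuming Nginx is built with Lua support (the listen port, location name and certificate paths here are illustrative assumptions, not our production values):

```nginx
server {
    listen 8080;  # port is an assumption

    location /notification-proxy/ {
        # Which DNS server Nginx uses to resolve the callback host.
        # 127.0.0.11 is Docker's embedded DNS server; this changes on AWS,
        # as explained below.
        resolver 127.0.0.11;

        # URL-decode the "address" query string parameter into $target.
        set_by_lua $target "return ngx.unescape_uri(ngx.var.arg_address)";

        # Proxy the request on to the customer's callback URL.
        proxy_pass $target;

        # Present our client certificate so the receiver can verify the caller.
        proxy_ssl_certificate     /etc/nginx/certs/client.crt;
        proxy_ssl_certificate_key /etc/nginx/certs/client.key;
    }
}
```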
The set_by_lua block is a piece of Lua code that URL-decodes the address query string parameter. The ngx.var.arg_address variable refers to the address query string parameter; note that you can replace address with any name to read that parameter from the query string. The decoded address is then used as the proxy_pass target, and the proxy_ssl_certificate and proxy_ssl_certificate_key directives take care of the client auth.
The last trick to making all of this work was realising that the IP address passed to the resolver directive needs to change based on where you are running. That directive tells Nginx which DNS server to use to look up the proxy address. Locally on my machine I need to use 127.0.0.11 (the Docker DNS server), however on AWS this address changes. The last part of the jigsaw was working out how to find this address dynamically. You can do that by issuing the following command: cat /etc/resolv.conf | grep nameserver | cut -d' ' -f2-. Once I had cracked that, I then updated the Nginx config file to be a template:
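The relevant line of the template swaps the hard-coded IP for a placeholder that gets substituted when the container starts (the variable name NAMESERVER is an assumption):

```nginx
# ${NAMESERVER} is replaced with the real DNS server IP at container startup.
resolver ${NAMESERVER};
```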
We then use something like the following startup script in our Docker container, which derives from the official Nginx image:
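A sketch of such an entrypoint script (the template and config paths are assumptions; this only runs meaningfully inside the container):

```shell
#!/bin/sh
set -e

# Discover this host's DNS server from resolv.conf and export it for envsubst.
export NAMESERVER=$(cat /etc/resolv.conf | grep nameserver | cut -d' ' -f2-)

# Fill in ${NAMESERVER} in the template and write the real config to disk.
envsubst '${NAMESERVER}' < /etc/nginx/nginx.conf.template > /etc/nginx/nginx.conf

# Hand over to Nginx in the foreground so it stays PID 1 in the container.
exec nginx -g 'daemon off;'
```

Note that envsubst is given an explicit variable list so it only touches ${NAMESERVER} and leaves any literal $ characters elsewhere in the config alone.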
The clever part here is that we set an environment variable on the fly containing the IP address of the DNS server, using the command we worked out above. We then use envsubst to substitute any environment variables in our template config file and write the templated file out to disk, so by the time Nginx starts the IP address is correct. All of this happens as the container starts, so wherever the container is running (locally or on AWS) it picks up the correct DNS server address and just works.
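As a quick sanity check, you can run the extraction pipeline against a sample resolv.conf to confirm it pulls out just the nameserver IP (the file contents below are made up for the demo):

```shell
# Write a sample resolv.conf (contents are an assumption for the demo).
printf 'search eu-west-1.compute.internal\nnameserver 10.0.0.2\n' > /tmp/resolv.conf.sample

# Same pipeline as above, pointed at the sample file.
NAMESERVER=$(cat /tmp/resolv.conf.sample | grep nameserver | cut -d' ' -f2-)

echo "$NAMESERVER"   # prints: 10.0.0.2
```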