Routing outbound webhook calls through Nginx on AWS

At Form3 I recently solved an interesting problem: routing our outbound webhook calls via Nginx. Before I dive into how I achieved this, I want to set the scene as to why you would want to do this.

For webhook calls your customer has to open up an https endpoint on the internet, so you really want to present a client certificate so that the person receiving your call can verify it really is you calling them. You could add the client auth code directly inside your service (ours is written in Java), but this becomes a pain for a couple of reasons. Firstly, the code for doing this in Java is pretty cumbersome. Secondly, it means you now need to deploy the client certificate inside your application or load it in somehow, which shouldn't really be a concern of the application.

The solution to all of this is to route the outbound call through Nginx and let Nginx take care of the client auth. This is much neater: the application doesn't need to worry about any certificates; it can simply post to Nginx and let Nginx do all of the heavy TLS lifting.

The design above is what we ended up with. Note that we route everything Linkerd to Linkerd; if you want to read more about how we set this up on AWS ECS, you can read my blog post on it. If the webhook url for a customer was https://kevinholditch.co.uk/callback, the notification API posts it through Linkerd to Nginx using the url https://localhost:4140/notification-proxy/?address=https%3A%2F%2Fkevinholditch.co.uk%2Fcallback. Note that we have to URL-encode the callback address. We then set up Nginx to proxy the request on to the address provided in the address query string parameter.
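This is done with an Nginx server config along the lines of the following sketch (assuming OpenResty, which bundles the Lua module; the port, location name and certificate paths are illustrative):

server {
  server_name notification_proxy;
  listen 8080;

  # DNS server Nginx uses to resolve the proxied address
  # (127.0.0.11 is the Docker DNS server when running locally)
  resolver 127.0.0.11;

  location /notification-proxy/ {
    # url decode the address query string parameter
    set_by_lua $decoded_address "return ngx.unescape_uri(ngx.var.arg_address)";

    # forward the request on to the decoded callback address
    proxy_pass                $decoded_address;

    # present our client certificate so the receiver can verify it is us
    proxy_ssl_certificate     /etc/nginx/certs/client.cert.pem;
    proxy_ssl_certificate_key /etc/nginx/certs/client.key.pem;
  }
}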

The set_by_lua block is a piece of Lua code that URL-decodes the address query string parameter. The ngx.var.arg_address variable refers to the address query string parameter; note that you can replace address with any name to read any parameter from the query string. The decoded address is then used as the proxy_pass target, and the proxy_ssl_certificate and proxy_ssl_certificate_key directives take care of the client auth signing.

The last trick to making all of this work was realising that the IP address in the resolver directive needs to change based on where you are running. That directive tells Nginx which DNS server to use to look up the proxy address. Locally on my machine I need to use 127.0.0.11 (the Docker DNS server), however on AWS this address changes. The last part of the jigsaw was working out how to find it dynamically. You can do that by issuing the following command: cat /etc/resolv.conf | grep nameserver | cut -d' ' -f2-. Once I had cracked that, I updated the Nginx config file to be a template.
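The only line that needs templating is the resolver (the variable name DNS_SERVER_IP is my choice):

# nginx.conf.template - the real DNS server IP is substituted in at container startup
resolver ${DNS_SERVER_IP};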

We then use something like the following startup script in our Docker container, which derives from openresty/openresty.
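A minimal sketch of that script (the template path assumes the default OpenResty config location):

#!/bin/sh
# discover the DNS server for this environment (Docker DNS locally, the VPC resolver on AWS)
export DNS_SERVER_IP=$(cat /etc/resolv.conf | grep nameserver | cut -d' ' -f2-)

# substitute environment variables in the template and write the real config to disk
envsubst '${DNS_SERVER_IP}' < /usr/local/openresty/nginx/conf/nginx.conf.template \
  > /usr/local/openresty/nginx/conf/nginx.conf

# run OpenResty in the foreground so it is PID 1 in the container
exec /usr/local/openresty/bin/openresty -g 'daemon off;'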

The clever part here is that we set an environment variable on the fly, containing the IP address of the DNS server, using the command we worked out above. We then use envsubst to substitute any environment variables in our template config file and write the templated file out to disk, so that when Nginx starts the IP address is correct. All of this happens as the container starts, so wherever the container is running (locally or on AWS) it will pick up the correct IP address and work.

Mutual client SSL using Nginx on AWS

An interesting problem I recently had to solve was integrating with a third party who wanted to communicate with our services over the internet and use mutual client auth (SSL) to lock down the connection. The naive way to solve this would have been to put the code directly into the Java service itself. This, however, is quite limiting and restrictive: for example, if you want to update the certificates that are allowed, you need to re-release your service, and your service is now tightly coupled to that customer's way of doing things.

A neater solution is to offload this concern to a separate service (in this case Nginx running in a Docker container). This means the original service can talk plain http/s to Nginx, and Nginx can do all of the hard work of the mutual client auth and of proxying the request on to the customer.

When implementing this solution I couldn't find a full example of how to set it up using Nginx, so I wanted to go through it here. I will split the explanation into two halves: outgoing and incoming. First, let's go through the outgoing config in Nginx:

http {
  server {
    server_name outgoing_proxy;
    listen 8888;
    location / {
      # forward everything to the third party's API over https,
      # presenting our client certificate during the TLS handshake
      proxy_pass                    https://thirdparty.com/api/;
      proxy_ssl_certificate         /etc/nginx/certs/client.cert.pem;
      proxy_ssl_certificate_key     /etc/nginx/certs/client.key.pem;
    }
  }
}

This is a pretty simple block to understand. It says we are hosting a server on port 8888 at the root. We proxy all requests on to https://thirdparty.com/api/ and present the specified client certificate during the TLS handshake so the third party can verify it is us (note the proxy_pass target must be https for the proxy_ssl_* directives to apply). Pretty simple so far.
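For example, a service on the same box can now call the third party without knowing anything about the certificates (the path below is made up):

curl http://localhost:8888/v1/payments
# Nginx forwards this to https://thirdparty.com/api/v1/payments,
# presenting client.cert.pem during the TLS handshake

The harder part is the configuration for the inbound: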

http {
  # map the client certificate's subject distinguished name to yes/no,
  # so we can explicitly allow individual certificates
  map $ssl_client_s_dn $allowed_ssl_client_s_dn {
      default no;
      "CN=inbound.mycompany.com,OU=My Company,O=My Company Limited,L=London,C=GB" yes;
  }

  server {
    listen       443 ssl;
    server_name  inbound.mycompany.com;

    # CA bundle used to verify client certificates, plus our own server certificate
    ssl_client_certificate  /etc/nginx/certs/client-ca-bundle.pem;
    ssl_certificate         /etc/nginx/certs/server.cert.pem;
    ssl_certificate_key     /etc/nginx/certs/server.key.pem;
    ssl_verify_client       on;
    ssl_verify_depth        2;

    location / {

      # the certificate verified against the CA bundle, but is its
      # subject DN one we have explicitly allowed?
      if ($allowed_ssl_client_s_dn = no) {
        add_header X-SSL-Client-S-DN $ssl_client_s_dn always;
        return 403;
      }

      # forward allowed requests to myservice via the local Linkerd instance
      proxy_pass http://localhost:4140/myservice/;
      proxy_set_header Host $host;
      proxy_set_header 'Content-Type' 'text/xml;charset=UTF-8';
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
  }
}

Before I explain the config block above, I want to point out that in practice you would place the code from both snippets inside the same http block.

Starting at the server block, we can see that we are listening on port 443 (the normal https port) using the hostname inbound.mycompany.com. This is where the SSL terminates; we use an AWS ELB to load balance requests across a pool of Nginx servers that handle the SSL termination. The ssl_client_certificate is the CA bundle PEM for all of the certificates we trust (intermediate and root authorities). The ssl_certificate and ssl_certificate_key host the server certificate (for the SSL endpoint, i.e. a certificate with the subject name "inbound.mycompany.com"). ssl_verify_client on means check that the client is trusted, and ssl_verify_depth is how far down the certificate chain to check.

The if statement says: if we have verified that the presented client certificate is indeed one that we trust, then check that its subject distinguished name is one we explicitly allow. This is done by looking it up in the map above. In my example config the only subject distinguished name that will be allowed is "CN=inbound.mycompany.com,OU=My Company,O=My Company Limited,L=London,C=GB". If the subject distinguished name is not allowed, Nginx returns HTTP 403 Forbidden (echoing the presented DN back in the X-SSL-Client-S-DN header to aid debugging); if it is allowed, we proxy the request on to myservice through Linkerd.
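You can check the inbound side with curl by presenting a client certificate (the file names here are illustrative):

# allowed certificate: the request is proxied through to myservice
curl --cert client.cert.pem --key client.key.pem https://inbound.mycompany.com/

# a certificate that verifies against the CA bundle but whose subject DN is
# not in the map gets a 403, with the DN echoed in the X-SSL-Client-S-DN header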

By using a map to define the allowed subject distinguished names we can easily generate the list programmatically, and it keeps all of the allowed names in a single place.
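For example, a config generation step could render one map entry per onboarded customer (the distinguished names below are made up):

map $ssl_client_s_dn $allowed_ssl_client_s_dn {
    default no;
    "CN=customer-a.example.com,OU=Customer A,O=Customer A Ltd,L=London,C=GB" yes;
    "CN=customer-b.example.com,OU=Customer B,O=Customer B Ltd,L=Leeds,C=GB" yes;
}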

I really like this solution. All of our services talk to their local Linkerd container (see my Linkerd post on running Linkerd on AWS ECS), Linkerd takes care of talking https across server boundaries, and Nginx takes care of the mutual SSL auth with the customer. The service does not need to worry about any of that; as far as it is concerned, it is talking to a service running on the box (the local Linkerd instance). This is much more flexible: if another service needs to talk to that customer using mutual SSL, it can just talk through the same channel through Linkerd and Nginx.

If instead you put the mutual SSL code directly in your services, you would have two copies of (or two references to) a library handling the mutual auth, both of which you would have to keep up to date with your customer's allowed certificates. That quickly explodes as you write more services or take on more customers who want mutual client auth, and it would become a nightmare to maintain. By solving the problem in a single service dedicated to this job, all of the SSL configuration lives in one place and every other service is much simpler to write.