I recently solved an interesting problem of configuring linkerd to run on an AWS ECS cluster.
Before I explain the linkerd configuration, I think it would help to go through a diagram of our setup:
A request comes in at the top and is routed to a Kong instance. Kong is configured to route the request to a local linkerd instance. That linkerd uses its local Consul agent to find out where the service is, then rewrites the request to call the linkerd on the destination server where the service resides (the one discovered in Consul). The linkerd on the service box receives the request and uses its own local Consul to find the service. At this point we apply a filter so that it only considers the service instance located on the same box, since service discovery has effectively already been done by the calling linkerd. Finally the service is called and the response is returned.
The problem to solve when running on AWS ECS is how to bind only to services on your local box. The normal way of doing this is to use the interpreter "io.l5d.localhost", which filters the services in Consul down to those registered on the local host. When linkerd runs in a Docker container this won't work: the container's local IP address is not the IP address of the server it is running on, so the query against Consul will return no matches.
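For reference, the usual localhost-based setup looks roughly like this. This is a sketch, not our final config; the exact router and namer layout will depend on your deployment:

```yaml
# Sketch of the standard approach. Inside a Docker container this fails,
# because the container's IP differs from the host IP registered in Consul.
namers:
- kind: io.l5d.consul

routers:
- protocol: http
  interpreter:
    kind: default
    transformers:
    - kind: io.l5d.localhost   # keeps only Consul results matching the local IP
```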
To solve this problem we can use the specificHost filter (added recently), which lets us provide the IP address of the server to filter on. But now we run into another problem: we do not know the server's IP address until runtime. There is a neat solution to this. First, we build our own Docker image based on the official one. Next, we define a templated config file like this:
- kind: io.l5d.specificHost
  host: $LOCAL_IP
Notice that I have used $LOCAL_IP instead of the actual IP. This is because at runtime we can run a simple script that sets the LOCAL_IP environment variable to the IP of the box the container is running on, substitutes all environment variables in the config, and then starts linkerd.
To do this we use the following code inside an entrypoint.sh file:
#!/bin/sh
export LOCAL_IP=$(curl -s 169.254.169.254/latest/meta-data/local-ipv4)
envsubst < /opt/linkerd/conf/linkerd.conf.template > /opt/linkerd/conf/linkerd.conf
echo "starting linkerd on $LOCAL_IP..."
./bundle-exec /opt/linkerd/conf/linkerd.conf
The trick here is to use the AWS instance metadata endpoint (169.254.169.254) to grab the local IP of the machine. We then use the envsubst program to substitute all environment variables in our template and produce the real linkerd.conf file. From there we can simply start linkerd.
To make this work you need a Dockerfile like the following:
FROM buoyantio/linkerd:1.1.0
RUN wget -O /usr/bin/dumb-init https://github.com/Yelp/dumb-init/releases/download/v1.2.0/dumb-init_1.2.0_amd64
RUN chmod +x /usr/bin/dumb-init
RUN apt-get update && apt-get install gettext -y
ADD entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/entrypoint.sh"]
This Dockerfile uses dumb-init to manage the linkerd process. It installs the gettext package, which is where the envsubst program lives, and then kicks off the entrypoint.sh script, where we do our environment substitution and start linkerd.
The nice thing about this is that we can put any environment variables we want into our linkerd config template and supply them at runtime in our task definition file; they get substituted when the container starts. For example, I am supplying the common name to use for SSL this way in my container.
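In the ECS task definition, those runtime values go into the container's environment block. A sketch of what that might look like (the image name and the SSL_COMMON_NAME variable are invented for illustration):

```json
{
  "name": "linkerd",
  "image": "myrepo/linkerd-ecs:latest",
  "environment": [
    { "name": "SSL_COMMON_NAME", "value": "*.service.internal" }
  ]
}
```

Any variable listed here is visible to entrypoint.sh, so envsubst will swap it into the config template exactly like LOCAL_IP.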
Although this solution works it would be great if linkerd supported having environment variables inside its config file and swapped them out automatically. Maybe a pull request for another time…