I like to practice the approach of full stack component testing, where the guiding principle is that you test the entire component from as high a level as possible and only stub out third party dependencies (read: other APIs) or anything that isn't easily available as a docker container. I recently started a golang project to write a client for Kong, and I thought this would be a good opportunity to use this testing strategy in a Go project.
I love Go, but the one thing I don't like so much is the approach most people seem to take to testing. A lot of tests are written at the method level, where they end up tightly coupled to the implementation. This is a bad thing: you know a test is not very good when you have to change it every time you change your implementation. The reason this is bad is firstly that, because you are changing the test at the same time as the code, once you have finished you have no way of knowing whether the new implementation still works, as the test has changed too. It also restricts how freely you can get in and edit the implementation, as you are constantly having to update the way the tests mock everything out. By testing at the top component level, the tests do not care about the implementation, and the code runs with real components, so it works the way it will in the real world. By writing tests in this way I have seen a lot fewer defects and have never had to manually debug something.
Anyway, back to the subject of the full stack testing approach in Go. To start, I used the excellent dockertest project, which gives you a great API for starting and stopping docker containers. I then took advantage of the fact that in a Go project there is a special test function that gets called for every test run:
In the above method you put your test setup code where I have placed the //setup comment and your teardown code where I have placed the //teardown comment. The value returned by m.Run() is the exit code from the test run. Go sets this to non-zero if the test run fails, so you need to exit with this code to make your build fail when your test run fails. Using this hook I can start the Kong docker container, run the tests and then stop the Kong docker container. Here is the full
TestMain code at the time of writing:
I have wrapped the starting and stopping of the Kong container in a couple of methods to abstract away the detail. Notice how the StartKong method takes the Kong version as a parameter. It reads the Kong version from the environment variable KONG_VERSION, or, if that variable is not set, falls back to a default, which I have set to the latest version (0.11 at the time of writing). The cool thing about this is that if I want to run my tests against a different version of Kong I can do so easily by changing this value. The really cool thing is that I can run the build against multiple versions of Kong on travis-ci by taking advantage of the env matrix feature: if you list multiple values for an environment variable, travis-ci will automatically run a build for each entry. This makes it really easy to run the whole test pack against multiple versions of Kong, which is pretty neat. You can check out the gokong build to see this in action!
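In .travis.yml the env matrix amounts to just listing the versions (the version numbers here are illustrative):

```yaml
env:
  - KONG_VERSION=0.10
  - KONG_VERSION=0.11
# travis-ci runs the full build once per line above
```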
The one part you may be wondering about in all of this is how I get the URL of the container Kong is running on for use in my tests. That is done by setting the environment variable KONG_ADMIN_ADDR. The client uses that environment variable if it is set, and otherwise falls back to a default address.
With all of this in place I can test the client by hitting a real Kong running in a container, with no mocks in sight! How cool is that? Plus I can run against any version of Kong that is published as a docker container with the flick of a switch!
Here is an example of what a test looks like so you can get a feel:
I think that is really clean and readable. All of the code that boots up and tears down Kong is out of sight, and you can just concentrate on the test. Again, with no mocks around 🙂
If you want to see the rest of the code, or would like to help contribute to my gokong project, that would be great. I look forward to any feedback you have.
As part of the crane open source project that I talked about in this post, I am setting up a Team City Server in Azure.
I ran into a few problems so I wanted to document the process from start to finish here:
- Create a new VM on Azure using the Azure management portal, selecting a Windows Server 2012 R2 machine
- Click on “all items” on the left, select the new virtual machine then click ‘connect’ in the bar at the bottom. This will download an rdp file configured to remote in to the desktop.
- Log in to the new vm using the username and password you setup in step 1
- At this point I wanted to download and install Team City, but I had real trouble getting Team City to download in IE. I then tried to download Chrome and could not get that to download either. So I ended up installing Chocolatey and then installing Chrome through Chocolatey. If anyone reading this knows a better way, please let me know.
- Open powershell as admin
- Run the command "Set-ExecutionPolicy Unrestricted"
- Run the command "iex ((New-Object Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))"
- Chocolatey should now be installed. Next we need to install Chrome using Chocolatey. Run the command "choco install GoogleChrome"
- Open up google chrome and download the Team City 9 EAP (or whichever version of Team City you want to run)
- Run the Team City installer, for ease I would use port 80 for the port. Select the local system account both for the team city server and agent service
- Team City should now be up and running, feel free to configure your builds using crane 🙂
- Now at this point I thought (naively) that it would all just work, but that's not the case. You need to set up some firewall rules to allow traffic through
- Open Server Manager (by default it's the first item in the task bar, with the picture of the suitcase)
- Click “Tools > Windows firewall with advanced security” to open up the firewall rules window
- Select “inbound rules” then “add new rule”, this will open up the new rule wizard.
- Select “port” as the rule type, click next.
- Type 80 in the port box (or whichever port you used for Team City), click next
- Click “allow the connection”, click next
- Leave the profile as is, click next
- Then click next on the last screen
- Now that we have configured Windows Server to allow the connection, we just have to open up the load balancer in Azure…
- Go back to the azure management portal
- Click on all items on the left and select your virtual machine
- Select endpoints at the top
- Click add at the bottom
- Select add a “stand alone endpoint” click next
- If you have used port 80 then you can simply select “http” from the drop down list and click next. If you have used a custom Team City port then you will need to set up the private port to the port you put Team City on and the public port to the port you want to use on the internet. For example if you put Team City on port 8000 but you want to access it on port 80 you would select port 8000 for the private port and 80 for the public port. Click next.
- Once Azure finishes updating you should now be able to access your Team City server on the internet by going to <vmname>.cloudapp.net
I hope you found this helpful and it has saved you some time. I don’t think a lot of that is obvious out of the box.
I have recently started an open source project with a colleague (Ed Wilde). The project is called crane; check it out on github. The idea behind the project is to write a tool that creates .Net builds for you automatically. For those of you who have spent time on a build server like Team City (not to pick on Team City, as all build servers suffer from this), it takes quite a bit of time just to set up a boilerplate build.
A lot of that time is spent configuring the steps of the build normally something along the lines of the following:
- Get latest source
- Update assembly build versions
- Build software
- Run tests
- Pack into NuGet packages
- Publish to NuGet
It can be confusing, even for an experienced developer, to set the build up in the right way, and not only that, but in a way that scales well as more applications are built.
This is where crane comes in. The idea is that you can point crane at your solution and it will generate a build for you automatically, with all of the steps in place, by convention. We believe we can use our experience of configuring builds to give you a great starting point, either for a brand new project or to create a build for an existing project.
We are also planning to give crane the ability to create build chains by analysing your solutions. This will give you the ability to split up your code base and work on smaller projects without the headache of trying to build it.
We would love to hear your initial thoughts on crane and whether it would be a useful tool for you.