Running SNS & SQS locally in docker containers supporting fan out

On AWS, using SNS to fan out messages to multiple SQS queues is a common pattern. SNS fan-out means creating an SQS queue for each consumer of an SNS message and subscribing each queue to the SNS topic, so that when a message is published to the topic a copy arrives in every consumer's queue. This gives you multicast messaging, lets each consumer process messages at its own pace and means consumers do not have to be online when a notification occurs.

I wanted to use SNS fan-out in one of our components, and as our testing model tests at the component level, I needed to get an SNS/SQS solution working in Docker. Step forward ElasticMQ and the s12v/sns project.

Inside the example folder of the SNS repository was the following docker-compose file, provided as an example of getting the SNS and SQS containers working together in fan-out mode:

services:
  sns:
    image: s12v/sns
    ports:
      - "9911:9911"
    volumes:
      - ./config:/etc/sns
    depends_on:
      - sqs
  sqs:
    image: s12v/elasticmq
    ports:
      - "9324:9324"

When started with docker-compose up the containers spun up fine. The problem came when publishing a message to the SNS topic using the following command:

aws sns publish --topic-arn arn:aws:sns:us-east-1:1465414804035:test1 --endpoint-url http://localhost:9911 --message "hello"

The error received was:

sns_1 | com.amazonaws.http.AmazonHttpClient executeHelper
sns_1 | INFO: Unable to execute HTTP request: Connection refused (Connection refused)
sns_1 | java.net.ConnectException: Connection refused (Connection refused) 

So not a great start. For some reason the SNS container could not forward the message on to the SQS container. Time to debug why…

The first step in working out why was to go onto the SNS container and send a message directly to the SQS container, which tells us whether or not the containers can talk to each other. When I ran this test the message arrived on the SQS queue successfully.
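In case it is useful, this is roughly what that check looked like. The exact commands depend on what tools are available inside the image, and the queue name comes from the ElasticMQ config, so treat these as placeholders:

docker-compose exec sns sh
# then, from inside the container, hit ElasticMQ over the compose network using the SQS query API:
curl "http://sqs:9324/queue/queue1?Action=SendMessage&MessageBody=hello"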

The next step was to look at the code for the SNS library to see whether it logs the SQS queue it is trying to send to. On inspection I realised that the SNS library uses Apache Camel to connect to SQS, and from the Apache Camel source code I noticed that it logs a lot more information when the log level is set to trace. Back in the SNS library there is the following logback.xml file:

<configuration>
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>

    <logger name="org.apache.camel" level="INFO"/>

    <root level="DEBUG">
        <appender-ref ref="STDOUT" />
    </root>
</configuration>

I simply cloned the SNS repository from GitHub, updated the level from DEBUG to TRACE and recompiled the SNS code using the command sbt assembly. Once this finished it was simply a matter of copying the new jar into the root of the folder where I had cloned the SNS repo and updating the Dockerfile to use my newly compiled jar. The last change needed was updating the docker-compose.yml file in the example directory to:

services:
  sns:
    build: ../
    ports:
      - "9911:9911"
    volumes:
      - ./config:/etc/sns
    depends_on:
      - sqs
  sqs:
    image: s12v/elasticmq
    ports:
      - "9324:9324"

The important line is the build entry: we are now building the SNS container from the local clone rather than pulling the published image. To build it I ran docker-compose build and then docker-compose up. This time the SNS container started with trace logging enabled, and when I sent a message to SNS I got a much more informative error message:

TRACE o.a.c.component.aws.sqs.SqsEndpoint - Queue available at 'http://localhost:9324/queue/queue1'.

It's clear now that the URL for the queue is being set incorrectly. It should be http://sqs:9324/queue/queue1, as sqs is the name of the container; the reason we were getting connection refused before was that the messages were being sent to the host instead. To work out how to change this we had to dig through the Apache Camel code to see how it configures its queue URLs. We found that it queries SQS using the list queues command, and running the same list queues command against our running container revealed that the queues were being bound to localhost and not sqs.
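For reference, the check against the running ElasticMQ container looked something like this (the region and signing flags are dummy values that ElasticMQ ignores, and the queue URL shown is just an example of the kind of output you get back):

aws sqs list-queues --endpoint-url http://localhost:9324 --region elasticmq --no-sign-request
# returned queue URLs bound to the host, e.g. http://localhost:9324/queue/queue1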

To change this we simply had to use the following config file for ElasticMQ:

include classpath("application.conf")

node-address {
  host = sqs
}

queues {
  queue1 {}
}

The key line is "host = sqs". The last part of making everything work was updating the docker-compose.yml file to mount the ElasticMQ config file:

services:
  sns:
    image: s12v/sns
    ports:
      - "9911:9911"
    volumes:
      - ./config/db.json:/etc/sns/db.json
    depends_on:
      - sqs
  sqs:
    image: s12v/elasticmq
    ports:
      - "9324:9324"
    volumes:
      - ./config/elasticmq.conf:/etc/elasticmq/elasticmq.conf

Once I tore down the containers and started them up again, the list queues command returned queues bound to sqs: http://sqs:9324/queue/queue1. I then ran the command to publish a message to SNS and could see it arrive in SQS by receiving it with the following command:

aws sqs receive-message --queue-url http://localhost:9324/queue/queue1  --region elasticmq --endpoint-url http://localhost:9324 --no-verify-ssl --no-sign-request --attribute-names All --message-attribute-names All

And there we have it: a working SNS fan-out to SQS using Docker containers. The author of the SNS container has accepted a PR from my colleague sam-io to update the example docker-compose.yml with the fixes described here, meaning you can simply clone the SNS repository from GitHub, cd into the example directory, run docker-compose up and everything should work. A big thanks to the open source community and people like Sergey Novikov for providing such great tooling. It's great to be able to give something back!


Mutual client ssl using nginx on AWS

An interesting problem I have recently had to solve was integrating with a third party who wanted to communicate with our services over the internet and use mutual client auth (ssl) to lock down the connection. The naive way to solve this would have been to put the code directly into the Java service itself. That, however, is quite limiting and restrictive: if you want to update the certificates that are allowed you need to re-release your service, and your service is now tightly coupled to that customer's way of doing things.

A neater solution is to offload this concern to a separate service (in this case nginx running in a Docker container). The original service can then talk plain http/s to nginx, and nginx does all of the hard work of the mutual client auth and of proxying the request on to the customer.

When implementing this solution I couldn't find a full example of how to set it up in nginx, so I wanted to go through it here. I will split the explanation into two halves: outgoing and incoming. First, let's go through the outgoing config in nginx:

http {
  server {
    server_name outgoing_proxy;
    listen 8888;
    location / {
      proxy_pass                    https://thirdparty.com/api/;
      proxy_ssl_certificate         /etc/nginx/certs/client.cert.pem;
      proxy_ssl_certificate_key     /etc/nginx/certs/client.key.pem;
    }
  }
}

This block is pretty simple to understand. It says we are hosting a server on port 8888, and at the root location we proxy all requests to https://thirdparty.com/api/, presenting the specified client certificate and key when nginx makes the TLS connection to the third party. Pretty simple so far. The harder part is the configuration for the inbound side:


http {
  map $ssl_client_s_dn $allowed_ssl_client_s_dn {
      default no;
      "CN=inbound.mycompany.com,OU=My Company,O=My Company Limited,L=London,C=GB" yes;
  }

  server {
    listen       443 ssl;
    server_name  inbound.mycompany.com;

    ssl_client_certificate  /etc/nginx/certs/client-ca-bundle.pem;
    ssl_certificate         /etc/nginx/certs/server.cert.pem;
    ssl_certificate_key     /etc/nginx/certs/server.key.pem;
    ssl_verify_client       on;
    ssl_verify_depth        2;

    location / {

      if ($allowed_ssl_client_s_dn = no) {
        add_header X-SSL-Client-S-DN $ssl_client_s_dn always;
        return 403;
      }

      proxy_pass http://localhost:4140/myservice/;
      proxy_set_header Host $host;
      proxy_set_header 'Content-Type' 'text/xml;charset=UTF-8';
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
  }
}

Before I explain the config block above, I should point out that in practice you would place both of the snippets above inside the same http block.

Starting at the server block, we are listening on port 443 (the normal https port) with the hostname inbound.mycompany.com. This is where the ssl terminates; we use an AWS ELB to load balance requests across a pool of nginx servers that handle the ssl termination. The ssl_client_certificate is the CA bundle pem for all of the certificates we trust (intermediate and root authorities). The ssl_certificate and ssl_certificate_key are the server certificate and key for the ssl endpoint (i.e. a certificate with the subject name "inbound.mycompany.com"). ssl_verify_client on means the presented client certificate must be verified as trusted, and ssl_verify_depth is how far down the certificate chain to check. The if statement says: if we have verified that the presented client certificate is one that we trust, then also check that its subject distinguished name is one we explicitly allow, by looking it up in the map above. In my example config the only subject distinguished name that will be allowed is "CN=inbound.mycompany.com,OU=My Company,O=My Company Limited,L=London,C=GB". If the subject distinguished name is not allowed then nginx returns http code 403 forbidden; if it is allowed then we proxy the request on to myservice through linkerd.

By using a map to define the allowed subject distinguished names we can easily generate the list programmatically, and it keeps all of the allowed subject distinguished names in a single place.
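A quick way to sanity check the inbound side is to call it with and without the client certificate; something along these lines, where the certificate and key file names are placeholders for whatever your client was issued:

# with a trusted client certificate whose subject DN is in the map, the request should reach the service
curl -v https://inbound.mycompany.com/ --cert client.cert.pem --key client.key.pem
# without a client certificate (or with one whose DN is not in the map) the request should be rejected
curl -v https://inbound.mycompany.com/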

I really like this solution. All of our services can talk to their local linkerd container (see my post on running linkerd on AWS ECS), linkerd takes care of talking https across server boundaries, and nginx takes care of the ssl mutual auth with the customer. The service does not need to worry about any of that; as far as it is concerned it is talking to a service running on the box (the local linkerd instance). It also makes things much more flexible: if another service needs to talk to that customer using mutual ssl, it can just go through the same channel, through linkerd and nginx. If you put the mutual ssl code directly in your services you would end up with two copies of (or two references to) a library handling the mutual ssl, each of which has to be kept up to date with your client's allowed certificates. That quickly explodes as you write more services or have more customers that want mutual client auth, and would become a nightmare to maintain. By solving the problem in a single service dedicated to this job, all of the ssl configuration lives in one place and the other services are much simpler to write.

Running Linkerd in a docker container on AWS ECS

I recently solved an interesting problem of configuring linkerd to run on an AWS ECS cluster.

Before I explain the linkerd configuration I think it helps to walk through how a request flows through our setup:

A request comes in at the top and gets routed to a Kong instance. Kong is configured to route the request to a local linkerd instance. The local linkerd uses its local Consul to find out where the service is, then rewrites the request to call another linkerd on the destination server where the service resides (the one that was discovered in Consul). The linkerd on the service box receives the request and uses its local Consul to find the service. At this point we use a filter so that it only considers instances located on the same box, as the service discovery has essentially already been done by the calling linkerd. We then call the service and reply.

The problem to solve when running on AWS ECS is how to bind to only the services on your local box. The normal way of doing this is to use the io.l5d.localhost (localhost) transformer, which filters the services registered in Consul down to those on the local host. When running in a Docker container this won't work, because the local IP address of linkerd inside the container is not the IP address of the server it is running on, so when it queries Consul it gets no matches.

To solve this problem we can use the specificHost filter (added recently), which lets us provide the IP address of the server to filter on. Now we run into another problem: we do not know the IP address of the server until runtime. There is a neat solution to this. Firstly, we build our own Docker image based on the official one. Next, we define a templated config file like this:

interpreter:
  kind: default
  transformers:
  - kind: io.l5d.specificHost
    host: $LOCAL_IP

Notice that I have used $LOCAL_IP instead of an actual IP address. At runtime a simple script sets the LOCAL_IP environment variable to the IP of the box the container is running on, substitutes all environment variables in the config and then runs linkerd.

To do this we use the following code inside an entrypoint.sh file:

#!/bin/sh
# grab the host's IP from the AWS instance metadata endpoint
export LOCAL_IP=$(curl -s 169.254.169.254/latest/meta-data/local-ipv4)
# substitute environment variables into the template to produce the real config
envsubst < /opt/linkerd/conf/linkerd.conf.template > /opt/linkerd/conf/linkerd.conf
echo "starting linkerd on $LOCAL_IP..."
./bundle-exec /opt/linkerd/conf/linkerd.conf

The trick here is to use the AWS metadata endpoint to grab the local IP of the machine. We then use the envsubst program to substitute all environment variables in our template and produce the real linkerd.conf file. From there we can simply start linkerd.

To make this work you need a Dockerfile like the following:

FROM buoyantio/linkerd:1.1.0

RUN wget -O /usr/bin/dumb-init https://github.com/Yelp/dumb-init/releases/download/v1.2.0/dumb-init_1.2.0_amd64
RUN chmod +x /usr/bin/dumb-init

RUN apt-get update && apt-get install gettext -y

ADD entrypoint.sh /entrypoint.sh

ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/entrypoint.sh"]

This Dockerfile uses dumb-init to manage the linkerd process. It installs the gettext package, which is where the envsubst program lives, and then kicks off the entrypoint.sh script, which is where we do our environment substitution and start linkerd.

The nice thing about this is that we can put any environment variables we want into our linkerd config template file and then supply them at runtime in our task definition file; they get substituted when the container starts. For example, I am supplying the common name to use for ssl this way in my container.
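As a sketch of what that looks like (the variable name and values below are made up for illustration, and the exact linkerd key depends on your config), the template just contains another placeholder and the value is supplied through the task definition's environment block, or an -e flag when testing locally:

# hypothetical extra placeholder in linkerd.conf.template:
#   commonName: $TLS_COMMON_NAME
# supplying it when running the container locally:
docker run -e TLS_COMMON_NAME=myservice.mycompany.com my-custom-linkerd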

Although this solution works, it would be great if linkerd supported environment variables inside its config file and swapped them out automatically. Maybe a pull request for another time…

SqlJuxt – Active patterns for the win

F# active patterns are awesome!! I wanted to start this blog post with that statement. I was not truly aware of F# active patterns until I read the article on F# for Fun and Profit while looking for a better way to use the .NET Int32.Parse function.

An active pattern is a function that you define which can then be used to match on an expression. For example:

let (|Integer|_|) (s:string) = 
    match Int32.TryParse(s) with
        | (true, i) -> Some i
        | _ -> None 

To explain what each line is doing: the | characters around the name are called banana clips (not sure why), and here we are defining a partial active pattern. A partial active pattern has to return an Option of some type: by definition the pattern may return a value or it may not, so it is a partial match. This pattern takes a string and returns Some int if the string parses as an integer, or None if it does not. Once this pattern is defined it can be used in a normal match expression as follows:

let printNumber (str:string) =
    match str with
        | Integer i -> printfn "Integer: %d" i
        | _ -> printfn "%s is not a number" str

Using this technique we can define a really cool active pattern that matches a regular expression and parses out the matched groups:

let (|ParseRegex|_|) regex str =
    let m = Regex(regex).Match(str)
    match m with 
        | m when m.Success -> Some (List.tail [ for x in m.Groups -> x.Value] )
        | _ -> None

This regex active pattern returns all of the matched groups if the regex matches, or None if it does not. Note that List.tail is used to skip the first element, as that is the fully matched string, which we don't want; we only want the groups.
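As a quick illustration of how the two patterns compose (the input string and function here are made up for this post, they are not from SqlJuxt itself):

let describe (s:string) =
    match s with
        | ParseRegex @"(.*)(\d+)$" [prefix; Integer i] -> printfn "prefix=%s, number=%d" prefix i
        | _ -> printfn "%s has no trailing number" s

describe "MyTable7"    // prints: prefix=MyTable, number=7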

The reason all of this came up is that the getNextAvailableName function I wrote about in my last blog post is very long winded. For those who haven't read that post, this function generates a unique name by taking a candidate name and a list of names that have already been taken. If the candidate name has been taken then the function adds a number to the end of the name and keeps incrementing it until it finds a name that is not taken. The getNextAvailableName function was defined as:

let rec getNextAvailableName (name:string) (names: string list) =

    let getNumber (chr:char) =
        match Int32.TryParse(chr.ToString()) with
            | (true, i) -> Some i
            | _ -> None

    let grabLastChar (str:string) =
        str.[str.Length-1]

    let pruneLastChar (str:string) =
        str.Substring(0, str.Length - 1)

    let pruneNumber (str:string) i =
        str.Substring(0, str.Length - i.ToString().Length)

    let getNumberFromEndOfString (s:string)  =

        let rec getNumberFromEndOfStringInner (s1:string) (n: int option) =
            match s1 |> String.IsNullOrWhiteSpace with
                | true -> n
                | false -> match s1 |> grabLastChar |> getNumber with
                            | None -> n
                            | Some m ->  let newS = s1 |> pruneLastChar
                                         match n with 
                                            | Some n1 -> let newN = m.ToString() + n1.ToString() |> Convert.ToInt32 |> Some
                                                         getNumberFromEndOfStringInner newS newN
                                            | None -> getNumberFromEndOfStringInner newS (Some m) 
        let num = getNumberFromEndOfStringInner s None
        match num with
            | Some num' -> (s |> pruneNumber <| num', num)
            | None -> (s, num)
        

    let result = names |> List.tryFind(fun x -> x = name)
    match result with
        | Some r -> let (n, r) = getNumberFromEndOfString name
                    match r with 
                        | Some r' -> getNextAvailableName (n + (r'+1).ToString()) names
                        | None -> getNextAvailableName (n + "2") names
                    
        | None -> name

With the Integer and ParseRegex active patterns defined as explained above, the new version of the getNextAvailableName function is:

let rec getNextAvailableName (name:string) (names: string list) =

    let result = names |> List.tryFind(fun x -> x = name)
    match result with
        | Some r -> let (n, r) = match name with
                                    | ParseRegex "(.*)(\d+)$" [s; Integer i] -> (s, Some i)
                                    | _ -> (name, None)
                    match r with 
                        | Some r' -> getNextAvailableName (n + (r'+1).ToString()) names
                        | None -> getNextAvailableName (n + "2") names
                    
        | None -> name

I think it is pretty incredible how much simpler this version of the function is. It does exactly the same job (all of my tests still pass). It really shows the power of active patterns and how they can simplify your code. I think they also make the code more readable: even if you didn't know the definition of the ParseRegex active pattern, you could guess what it does from the code.

Check out the full source code at SqlJuxt GitHub repository.

XBehave – compiling the test model using the Razor Engine

In the last post I left off describing how I implemented the parsing of the XUnit console runner xml in the XUnit.Reporter console app. In this post I want to talk through how you can take advantage of the excellent RazorEngine to render the model to html.

I am going to talk through the console app line by line. The first line of the app:

var reporterArgs = Args.Parse<ReporterArgs>(args);

Here we are using the excellent PowerArgs library to parse the command line arguments into a model. I love how the API for PowerArgs has been designed. It has a slew of features that I won't go into here; for example, it supports prompting for missing arguments out of the box.

Engine.Razor.Compile(AssemblyResource.InThisAssembly("TestView.cshtml").GetText(), "testTemplate", typeof(TestPageModel));

This line uses the RazorEngine to compile the view containing my html, giving it the key name "testTemplate" in the RazorEngine's internal cache. What is neat about this is that we can deploy TestView.cshtml as an embedded resource so it becomes part of our assembly, and then use the AssemblyResource class to grab the text from the embedded resource to pass to the razor engine.
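For reference, a minimal TestView.cshtml could look something like the sketch below. This is purely illustrative: the real template in XUnit.Reporter is richer, and the TestAssemblies and Name property names are assumptions about the model shape rather than the actual ones:

<html>
  <head><title>@Model.PageTitle</title></head>
  <body>
    <h1>@Model.PageTitle</h1>
    @foreach (var assembly in Model.TestAssemblies)
    {
        <h2>@assembly.Name</h2>
    }
  </body>
</html>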

var model = TestPageModelBuilder.Create()
                                .WithPageTitle(reporterArgs.PageTitle)
                                .WithTestXmlFromPath(reporterArgs.Xml)
                                .Build();

We then create the TestPageModel using the TestPageModelBuilder. Using a builder here gives very readable code. Inside the builder we use the XmlParser from earlier to parse the xml and generate the list of TestAssemblyModels. We also take the optional PageTitle argument to put into the page title in our view template.

var output = new StringWriter();
Engine.Razor.Run("testTemplate", output, typeof(TestPageModel), model);
File.WriteAllText(reporterArgs.Html, output.ToString());

The last three lines create a StringWriter, which the engine uses to write the compiled template to. Calling Engine.Razor.Run runs the template we compiled earlier, using the key we set ("testTemplate"). After this line our html has been written to the StringWriter, so all we have to do is extract it and write it out to the html file path that was passed in.

That’s all there is to it. We now have a neat way to extract the Given, When, Then gherkin syntax from our XBehave tests and export it to html in whatever shape we choose. From there you could post it to an internal wiki or email the file to someone, and all of that could be done automatically as part of a CI build.

If anyone has any feedback on any of the code, it is always welcome. Please check out the XUnit.Reporter repository for all of the source code.

XBehave – Exporting your Given, When, Then to html

For a project in my day job we have been using the excellent XBehave for our integration tests. I love XBehave in that it lets you use a Given, When, Then syntax on top of XUnit. There are a couple of issues with XBehave that I have yet to find a neat solution for (unless I am missing something, in which case please tell me). The issues are:

1) There is not a nice way to extract the Given, When, Then gherkin syntax out of the C# assembly for reporting
2) The runner treats each step in the scenario as a separate test

To solve these two problems I am writing an open source parser that takes the xml produced by the XUnit console runner and parses it into a C# model. I can then use razor to render these models as html and spit out the resulting html.

This will mean that I can take the html and post it up to a wiki, so every time a build runs the tests it can update the wiki with the latest set of tests in the code and even say whether they pass or fail. That allows a business/product owner to review the tests, see which pieces of functionality are covered and which features have been completed.

To this end I have created the XUnit.Reporter GitHub repository. This article covers the parsing of the xml into a C# model.

A neat class that I am using inside XUnit.Reporter is the AssemblyResource class, which allows easy access to embedded assembly resources. This means that I can run the XUnit console runner for a test, take the resulting output and add it to the test assembly as an embedded resource. I can then load the text from the xml file back with the following line of code:

AssemblyResource.InAssembly(typeof(ParserScenarios).Assembly, "singlepassingscenario.xml").GetText()

To produce the test xml files I set up a console app, added XBehave and created a test in the state I wanted, for example a single scenario that passes. I then ran the XUnit console runner with the -xml flag to produce the xml output, copied that output to a test file and named it accordingly.
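The runner invocation was something along these lines (the runner version and paths will vary for your setup):

xunit.console.exe .\CommandScratchpad\bin\Debug\CommandScratchpad.exe -xml singlepassingscenario.xml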

The statistics in the assembly model and test collection model are not aligned with what I think you would want for an XBehave test. For example, if you have this single XBehave test:


public class MyTest
{
    [Scenario]
    public void MyScenario()
    {
        "Given something"
            ._(() => { });

        "When something"
            ._(() => { });

        "Then something should be true"
            ._(() => { });

        "And then another thing"
            ._(() => { });
    }
}

Then the resultant xml produced by the console runner is:

<?xml version="1.0" encoding="utf-8"?>
<assemblies>
  <assembly name="C:\projects\CommandScratchpad\CommandScratchpad\bin\Debug\CommandScratchpad.EXE" environment="64-bit .NET 4.0.30319.42000 [collection-per-class, parallel (2 threads)]" test-framework="xUnit.net 2.1.0.3179" run-date="2017-01-27" run-time="17:16:00" config-file="C:\projects\CommandScratchpad\CommandScratchpad\bin\Debug\CommandScratchpad.exe.config" total="4" passed="4" failed="0" skipped="0" time="0.161" errors="0">
    <errors />
    <collection total="4" passed="4" failed="0" skipped="0" name="Test collection for RandomNamespace.MyTest" time="0.010">
      <test name="RandomNamespace.MyTest.MyScenario() [01] Given something" type="RandomNamespace.MyTest" method="MyScenario" time="0.0023842" result="Pass" />
      <test name="RandomNamespace.MyTest.MyScenario() [02] When something" type="RandomNamespace.MyTest" method="MyScenario" time="0.0000648" result="Pass" />
      <test name="RandomNamespace.MyTest.MyScenario() [03] Then something should be true" type="RandomNamespace.MyTest" method="MyScenario" time="0.0000365" result="Pass" />
      <test name="RandomNamespace.MyTest.MyScenario() [04] And then another thing" type="RandomNamespace.MyTest" method="MyScenario" time="0.000032" result="Pass" />
    </collection>
  </assembly>
</assemblies>

If you look carefully at the xml you will notice a number of things that are counter-intuitive. Firstly, look at the total in the assembly element: it says 4, when we only had a single test. This is because the runner considers each step to be a separate test. The same goes for the other totals, and for the totals in the collection element. The next thing you will notice is that the step names from the original test have had a load of junk added to the front of them.

In the parser I produce a model with the results I would expect. So for the above xml I produce an assembly model with a total of 1 test, 1 passed, 0 failed, 0 skipped and 0 errors, which I think makes much more sense for XBehave tests.
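As a rough sketch of the idea (the class and method names below are illustrative only, not the actual XUnit.Reporter types), the collapsing can be done by grouping the test elements by class and method, and counting a scenario as passed only if every one of its steps passed:

using System.Linq;
using System.Xml.Linq;

static class ScenarioCollapser
{
    // one "test" per scenario (type + method) rather than one per XBehave step,
    // e.g. ScenarioCollapser.CountScenarios(XDocument.Load("singlepassingscenario.xml"))
    public static int CountScenarios(XDocument runnerXml) =>
        runnerXml.Descendants("test")
                 .GroupBy(t => ((string)t.Attribute("type"), (string)t.Attribute("method")))
                 .Count();

    // a scenario passes only if all of its steps passed
    public static int CountPassedScenarios(XDocument runnerXml) =>
        runnerXml.Descendants("test")
                 .GroupBy(t => ((string)t.Attribute("type"), (string)t.Attribute("method")))
                 .Count(g => g.All(t => (string)t.Attribute("result") == "Pass"));
}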

Feel free to clone the repository and look through the parsing code (warning: it is a bit ugly). Next time I will talk through the remainder of this app, which renders the test results to html using razor.

SqlJuxt – Implementing the builder pattern in F#

As an imperative programmer by trade, a pattern that I often like to use is the builder pattern for building objects. This pattern is especially useful for test code.

The reasons the builder pattern is so useful are:

  • It means you only have to “new” the object up in a single place, so your test code effectively goes through an API (the builder API) to create the object
  • It makes your code really readable, so you can understand exactly what it is doing without having to look at how the builder works

In my SqlJuxt project I started by coding it in C#. I wanted a builder to make a table create script, so that in my integration tests I could fluently create a script that creates a table, run it into a real database and then run the comparison. This makes the tests very readable and easy to write. In C#, using the table builder looks like:

    var table = Sql.BuildScript()
                   .WithTableNamed("MyTable", t => t.WithColumns(c => c.NullableVarchar("First", 23)
                                                                       .NullableInt("Second")));

I think that is pretty nice code. You can easily read that code and tell that it will create a table named “MyTable” with a nullable varchar column and a nullable int column.

I wanted to achieve the same thing in F#, but the catch is that I did not just want to translate the C# into F# (which is possible); I wanted to write proper functional code, which means you should not really create classes or mutable types! The whole way the builder pattern normally works is that you store state on the builder, each method call changes that state and returns the builder, and when the build method is called at the end you use all of the state to build the object. (Note that an explicit call to the Build method is not needed in the C# code above, as I have overridden the implicit conversion operator.)
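For anyone wondering what that implicit conversion looks like on the C# side, it is roughly the following. This is only a sketch: the builder type name and internals are stand-ins, not the actual SqlJuxt ones:

public class TableScriptBuilder
{
    // ...state accumulated by the fluent WithXxx methods...

    public string Build()
    {
        // turn the accumulated state into the CREATE TABLE script
        return "CREATE TABLE ...";
    }

    // lets a builder be used anywhere the script string is expected,
    // so tests never need to call Build() explicitly
    public static implicit operator string(TableScriptBuilder builder)
    {
        return builder.Build();
    }
}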

I hunted around for inspiration and found it in the form of how the TopShelf guys had written their fluent API.

This is how using the table builder looks in F#:

    let table = CreateTable "TestTable"
                    |> WithNullableInt "Column1"
                    |> Build 

I think that is pretty sweet! The trick to making this work is to have a type that represents a database table; obviously the type is immutable. The CreateTable function takes a name and returns a new instance of the table type with the name set:

    let CreateTable name =
        {name = name; columns = []}

Then each of the column functions takes the table plus whatever it needs (just a name, in the case of a nullable int column) and returns a new immutable table instance with the column appended to the list of columns. The trick is to take the table type as the last parameter to the function, which means you do not have to pass it around explicitly when using the pipe operator, as you can see from the usage above. The Build function then takes the table and translates it into a sql script ready to be run into the database.
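A minimal sketch of how those functions might hang together is below. The record definitions and script formatting are assumptions for illustration; the real SqlJuxt types and output differ:

    type Column = { name: string; sqlType: string; nullable: bool }
    type Table = { name: string; columns: Column list }

    let WithNullableInt columnName table =
        { table with columns = table.columns @ [ { name = columnName; sqlType = "[int]"; nullable = true } ] }

    let Build table =
        let columnSql =
            table.columns
            |> List.map (fun c -> sprintf "[%s] %s %s" c.name c.sqlType (if c.nullable then "NULL" else "NOT NULL"))
            |> String.concat ", "
        sprintf "CREATE TABLE [dbo].[%s]( %s )\r\nGO" table.name columnSql

Each function returns a brand new table value rather than mutating anything, which is what lets the |> pipeline in the earlier example read so cleanly.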

Here is an example of a complete test to create a table:

    [<Fact>]
    let ``should be able to build a table with a mixture of columns``() =
       CreateTable "MultiColumnTable"
           |> WithVarchar "MyVarchar" 10
           |> WithInt "MyInt"
           |> WithNullableVarchar "NullVarchar" 55
           |> WithNullableInt "NullInt"
           |> Build 
           |> should equal @"CREATE TABLE [dbo].[MultiColumnTable]( [MyVarchar] [varchar](10) NOT NULL, [MyInt] [int] NOT NULL, [NullVarchar] [varchar](55) NULL, [NullInt] [int] NULL )
GO"

I think that test is really readable and explains exactly what it is doing. Which is exactly what a test should do.

If you want to follow along with the project then check out the SqlJuxt GitHub repository.