XBehave – compiling the test model using the Razor Engine

In the last post I left off describing how I implemented the parsing of the XUnit console runner xml in the XUnit.Reporter console app. In this post I want to talk through how you can take advantage of the excellent RazorEngine to render the model to html.

I am going to talk through the console app line by line. The first line of the app:

var reporterArgs = Args.Parse<ReporterArgs>(args);

Here we are using the excellent PowerArgs library to parse the command line arguments into a model. I love how the API for PowerArgs has been designed. It has a slew of features which I won't go into here; for example, it supports prompting for missing arguments out of the box.
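The ReporterArgs class itself isn't shown in this post; a minimal sketch of what it might look like (property names inferred from how they are used later, attribute choices assumed) is:

```csharp
using PowerArgs;

// Sketch only - the real class in XUnit.Reporter may differ.
public class ReporterArgs
{
    [ArgRequired, ArgDescription("Path to the xml produced by the XUnit console runner")]
    public string Xml { get; set; }

    [ArgRequired, ArgDescription("Path to write the html report to")]
    public string Html { get; set; }

    [ArgDescription("Optional title for the html page")]
    public string PageTitle { get; set; }
}
```

PowerArgs maps each command line switch onto the matching property, so the rest of the app only ever deals with this strongly typed model.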

Engine.Razor.Compile(AssemblyResource.InThisAssembly("TestView.cshtml").GetText(), "testTemplate", typeof(TestPageModel));

This line uses the RazorEngine to compile the view containing my html; it gives the view the key name “testTemplate” in the RazorEngine’s internal cache. What is neat about this is that we can deploy the TestView.cshtml as an embedded resource so it becomes part of our assembly. We can then use the AssemblyResource class to grab the text from the embedded resource to pass to the razor engine.
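The TestView.cshtml template itself isn't shown in this post, but to give an idea, a stripped-down sketch (model property names assumed, not the real template from the repository) might look like:

```cshtml
@* Rendered by RazorEngine; Model here is a TestPageModel *@
<html>
<head><title>@Model.PageTitle</title></head>
<body>
  @foreach (var assembly in Model.Assemblies)
  {
    <h2>@assembly.Name (@assembly.Passed/@assembly.Total passed)</h2>
  }
</body>
</html>
```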

var model = TestPageModelBuilder.Create()
                                .WithPageTitle(reporterArgs.PageTitle)
                                .WithTestXmlFromPath(reporterArgs.Xml)
                                .Build();

We then create the TestPageModel using the TestPageModelBuilder. Using a builder here gives you very readable code. Inside the builder we are using the XmlParser from earlier to parse the xml and generate the List of TestAssemblyModels. We also take the optional PageTitle argument here to use as the page title in our view template.
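A sketch of what that builder could look like (the XmlParser signature and the fallback title are assumptions, not necessarily the code in the repository):

```csharp
using System.IO;

// Sketch of the builder - assumes XmlParser from the previous post
// exposes a Parse method taking the raw xml text.
public class TestPageModelBuilder
{
    private string _pageTitle;
    private string _xmlPath;

    public static TestPageModelBuilder Create()
    {
        return new TestPageModelBuilder();
    }

    public TestPageModelBuilder WithPageTitle(string pageTitle)
    {
        _pageTitle = pageTitle;
        return this;
    }

    public TestPageModelBuilder WithTestXmlFromPath(string xmlPath)
    {
        _xmlPath = xmlPath;
        return this;
    }

    public TestPageModel Build()
    {
        return new TestPageModel
        {
            PageTitle = _pageTitle ?? "Test Results", // assumed default
            Assemblies = new XmlParser().Parse(File.ReadAllText(_xmlPath))
        };
    }
}
```

Each With method returns the builder itself, which is what makes the chained call in the snippet above read so fluently.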

var output = new StringWriter();
Engine.Razor.Run("testTemplate", output, typeof(TestPageModel), model);
File.WriteAllText(reporterArgs.Html, output.ToString());

The last 3 lines simply create a StringWriter, which the engine uses to write the compiled template to. Calling Engine.Razor.Run runs the template we compiled earlier using the key we set, “testTemplate”. After this line fires our html will have been written to the StringWriter, so all we have to do is extract it and write it out to the html file path that was passed in.

That’s all there is to it. We now have a neat way to extract the Given, When, Then gherkin syntax from our XBehave tests and export it to html in whatever shape we choose. From there you could post it to an internal wiki or email the file to someone; that could all be done automatically as part of a CI build.

If anyone has any feedback on any of the code then that is always welcomed. Please check out the XUnit.Reporter repository for all of the source code.

XBehave – Exporting your Given, Then, When to html

For a project in my day job we have been using the excellent XBehave for our integration tests. I love XBehave in that it lets you use a Given, When, Then syntax over the top of XUnit. There are a couple of issues with XBehave that I have yet to find a neat solution for (unless I am missing something please tell me if this is the case). The issues are

1) There is not a nice way to extract the Given, When, Then gherkin syntax out of the C# assembly for reporting
2) The runner treats each step in the scenario as a separate test

To solve these two problems I am writing an open source parser that takes the xml produced by the XUnit console runner and parses it into a C# model. I can then use razor to render these models as html and spit out the resultant html.

This will mean that I can take the html and post it up to a wiki, so every time a new build runs the tests it can update the wiki with the latest set of tests that are in the code and even say whether the tests pass or fail. This allows a business/product owner to review the tests, see which pieces of functionality are covered and which features have been completed.

To this end I have created the XUnit.Reporter github repository. This article will cover the parsing of the xml into a C# model.

A neat class that I am using inside the XUnit.Reporter is the AssemblyResource class. This class allows easy access to embedded assembly resources, which means that I can run the XUnit console runner for a test, take the resultant output and add it to the test assembly as an embedded resource. I can then use the AssemblyResource class to load the text back from the xml file using the following line of code:

AssemblyResource.InAssembly(typeof(ParserScenarios).Assembly, "singlepassingscenario.xml").GetText()

To produce the test xml files I simply set up a console app, added XBehave and created a test in the state I wanted, for example a single scenario that passes. I then ran the XUnit console runner with the -xml flag set to produce the xml output, copied the output to a test file and named it accordingly.

The statistics for the assembly model and test collection model are not aligned to what I think you would want from an XBehave test. For example if you have this single XBehave test:


public class MyTest
{
    [Scenario]
    public void MyScenario()
    {
        "Given something"
            ._(() => { });

        "When something"
            ._(() => { });

        "Then something should be true"
            ._(() => { });

        "And then another thing"
            ._(() => { });
    }
}

Then the resultant xml produced by the console runner is:

<?xml version="1.0" encoding="utf-8"?>
<assemblies>
<assembly name="C:\projects\CommandScratchpad\CommandScratchpad\bin\Debug\CommandScratchpad.EXE" environment="64-bit .NET 4.0.30319.42000 [collection-per-class, parallel (2 threads)]" test-framework="xUnit.net 2.1.0.3179" run-date="2017-01-27" run-time="17:16:00" config-file="C:\projects\CommandScratchpad\CommandScratchpad\bin\Debug\CommandScratchpad.exe.config" total="4" passed="4" failed="0" skipped="0" time="0.161" errors="0">
<errors />
<collection total="4" passed="4" failed="0" skipped="0" name="Test collection for RandomNamespace.MyTest" time="0.010">
<test name="RandomNamespace.MyTest.MyScenario() [01] Given something" type="RandomNamespace.MyTest" method="MyScenario" time="0.0023842" result="Pass" />
<test name="RandomNamespace.MyTest.MyScenario() [02] When something" type="RandomNamespace.MyTest" method="MyScenario" time="0.0000648" result="Pass" />
<test name="RandomNamespace.MyTest.MyScenario() [03] Then something should be true" type="RandomNamespace.MyTest" method="MyScenario" time="0.0000365" result="Pass" />
<test name="RandomNamespace.MyTest.MyScenario() [04] And then another thing" type="RandomNamespace.MyTest" method="MyScenario" time="0.000032" result="Pass" />
</collection>
</assembly>
</assemblies>

If you look carefully at the xml you will notice a number of things that are counter-intuitive. Firstly, look at the total in the assembly element: it says 4, when we only had a single test. This is because the runner considers each step to be a separate test. The same goes for the other totals and the totals in the collection element. The next thing you will notice is that the step names in the original test have had a load of junk added to the front of them.
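That junk follows a predictable shape (`Namespace.Class.Method() [NN] ` in front of the step text), so it can be stripped with a regex. This is a sketch of one way to do it, not necessarily how the parser in the repository does it:

```csharp
using System.Text.RegularExpressions;

public static class StepNameCleaner
{
    // Matches the prefix of e.g.
    // "RandomNamespace.MyTest.MyScenario() [01] Given something"
    // leaving just "Given something".
    private static readonly Regex StepPrefix =
        new Regex(@"^.*\(\)\s\[\d+\]\s", RegexOptions.Compiled);

    public static string Clean(string rawName)
    {
        return StepPrefix.Replace(rawName, string.Empty);
    }
}
```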

In the parser I produce a model with the results that I would expect. So for the above xml I produce an assembly model with a total of 1 test, 1 passed, 0 failed, 0 skipped and 0 errors, which I think makes much more sense for XBehave tests.
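One way to get totals like that is to group the test elements by their type and method attributes, so all of a scenario's steps collapse into one logical test. This is a sketch of the idea, not the exact code from the repository:

```csharp
using System.Linq;
using System.Xml.Linq;

public static class ScenarioCounter
{
    // Collapses XBehave steps back into scenarios: every step of a
    // scenario shares the same type + method attributes, and a scenario
    // only counts as passed if none of its steps failed.
    public static void Count(XElement collection,
                             out int total, out int passed, out int failed)
    {
        var scenarios = collection.Elements("test")
            .GroupBy(t => (string)t.Attribute("type") + "." +
                          (string)t.Attribute("method"))
            .ToList();

        total = scenarios.Count;
        failed = scenarios.Count(g =>
            g.Any(t => (string)t.Attribute("result") == "Fail"));
        passed = total - failed;
    }
}
```

Run against the collection element in the xml above this yields a total of 1, 1 passed and 0 failed.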

Feel free to clone the repository and look through the parsing code (warning: it is a bit ugly). Next time I will be talking through the remainder of this app, which renders the test results to html using razor.

Converting decimal numbers to Roman Numerals in C#

I decided to create a little project to implement converting decimal numbers to Roman Numerals in C#. You can solve this problem in quite a few different ways, I wanted to talk through the pattern based approach that I went for.

The Roman Numerals from 0 to 9 are as follows:

  • 0 = “” (empty string)
  • 1 = I
  • 2 = II
  • 3 = III
  • 4 = IV
  • 5 = V
  • 6 = VI
  • 7 = VII
  • 8 = VIII
  • 9 = IX

To start the project I wrote a set of tests that checked these first 10 cases. I like taking this approach as it allows you to solve for the simple base case, then you can refactor your solution to be more generic. Our first implementation that solves for the first ten numbers is:

public static string ToRomanNumeral(this int integer)
{
     var mapping = new Dictionary<int, string>
     {
         {0, ""},
         {1, "I"},
         {2, "II"},
         {3, "III"},
         {4, "IV"},
         {5, "V"},
         {6, "VI"},
         {7, "VII"},
         {8, "VIII"},
         {9, "IX"},
     };

     return mapping[integer];
}

Obviously there is no error checking etc; we are just solving the first 10 cases. I decided to implement the method as an extension method on int as it makes the code look neat, allowing you to write:

    var romanNumeral = 9.ToRomanNumeral();

The next step is to look at the Roman Numerals up to 100 and see if we can spot any patterns. We don’t want to have to manually type out a dictionary for every Roman Numeral! If we look at the Roman Numeral representation for the tens column from 0 to 100 we find they are:

  • 0 = “” (empty string)
  • 10 = X
  • 20 = XX
  • 30 = XXX
  • 40 = XL
  • 50 = L
  • 60 = LX
  • 70 = LXX
  • 80 = LXXX
  • 90 = XC

We can see straight away that it is exactly the same as the numbers from 0 to 9, except you replace I with X, V with L and X with C. So let’s pull that pattern out into something that can create a mapping dictionary given the 3 symbols. Doing this gives you the following method:

private static Dictionary<int, string> CreateMapping(string baseSymbol, string midSymbol, string upperSymbol)
{
    return new Dictionary<int, string>
    {
        {0, ""},
        {1, baseSymbol},
        {2, baseSymbol + baseSymbol},
        {3, baseSymbol + baseSymbol + baseSymbol},
        {4, baseSymbol + midSymbol},
        {5, midSymbol},
        {6, midSymbol + baseSymbol},
        {7, midSymbol + baseSymbol + baseSymbol},
        {8, midSymbol + baseSymbol + baseSymbol + baseSymbol},
        {9, baseSymbol + upperSymbol},
    };
}

We can now call the above method passing in the correct symbols for the column we want to calculate. So for the units column we would write:

    var unitMapping = CreateMapping("I", "V", "X");

Now we have this, it is straightforward to create the mapping for the hundreds column. To complete our implementation we want to add some error checks, as we are only going to support Roman Numerals from 0 to 4000. The full solution is now quite trivial. We simply check the input is in our valid range (between 0 and 4000), then loop through each column looking up the symbol for that column in the mapping dictionary generated by our function above. Finally we concatenate the symbols together using a StringBuilder and return the result.
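Put together, the finished method could look roughly like this (a sketch reusing CreateMapping from above; the version in the repository may differ in details):

```csharp
using System;
using System.Text;

public static string ToRomanNumeral(this int integer)
{
    if (integer < 0 || integer >= 4000)
        throw new ArgumentOutOfRangeException("integer",
            "Only values between 0 and 4000 are supported.");

    // One mapping per column, reusing CreateMapping from above.
    var mappings = new[]
    {
        CreateMapping("I", "V", "X"), // units
        CreateMapping("X", "L", "C"), // tens
        CreateMapping("C", "D", "M"), // hundreds
        CreateMapping("M", "", "")    // thousands (digits 0-3 only below 4000)
    };

    var result = new StringBuilder();
    for (int column = 3; column >= 0; column--)
    {
        int digit = (integer / (int)Math.Pow(10, column)) % 10;
        result.Append(mappings[column][digit]);
    }
    return result.ToString();
}
```

For example 1990 walks through the columns as M, CM, XC and an empty units symbol, giving "MCMXC".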

The full solution with all of the tests is available on my GitHub repository: https://github.com/kevholditch/RomanNumerals.

A neat query pattern written in C#

By using nuget packages such as the brilliant Dapper it is possible to create a very concise way to access your database without very much code. I particularly like the query pattern I’m going to go through today: it’s lightweight, simple and encourages composition!

Let’s work from the outside in. Clone the query pattern github repository so you can follow along. I have written the program in the repository to work against the Northwind example database. If you haven’t got the Northwind database installed you can find information on that over on msdn.

Take a look at the program.cs file the essence of which is captured in the code snippet below:

Console.WriteLine("Enter category to search for:");
var name = Console.ReadLine();

var categories = queryExector.Execute<GetCategoriesMatchingNameCriteria, GetCategoriesMatchingNameResult>(new GetCategoriesMatchingNameCriteria { Name = name }).Categories;

Console.WriteLine("categories found matching name: {0}", name);

foreach (var category in categories)
    Console.WriteLine(category);

The program above takes an input from the user and then does a like match against any category in the Northwind database that matches the user’s input. You can see how few lines of code this has taken to achieve. Note nowhere do we have reams of ADO .net code cluttering up the joint.

We are modelling a query as something that takes TCriteria and returns TResult. The interface for a query is shown below:

public interface IQuery<in TCriteria, out TResult>
{
    TResult Execute(TCriteria criteria);
}

By representing a query in this way and using Dapper the implementation is very short and to the point:

public class GetCategoriesMatchingNameQuery : IQuery<GetCategoriesMatchingNameCriteria, GetCategoriesMatchingNameResult>
{
    private readonly IDbConnection _dbConnection;

    public GetCategoriesMatchingNameQuery(IDbConnection dbConnection)
    {
        _dbConnection = dbConnection;
    }

    public GetCategoriesMatchingNameResult Execute(GetCategoriesMatchingNameCriteria matchingNameCriteria)
    {                        
        string term = "%" + matchingNameCriteria.Name.Replace("%", "[%]").Replace("[", "[[]").Replace("]", "[]]") + "%";

        var result = new GetCategoriesMatchingNameResult
        {
            Categories = _dbConnection.Query<CategoryEntity>(@"select CategoryID, CategoryName, Description from categories where categoryName like @term", new { term })                
        };


        return result;
    }
}

Note that in the class above the actual query is 3 lines of code! It could be done in one line if you wanted, but that would make the code harder to read. As the queries are created automatically using an abstract factory in Castle Windsor, we can decorate them to apply caching or logging across the board, or even both. Any cross cutting concern you can think of can be added easily. Queries can also be composed together easily: you just have a query take a dependency on another query (or multiple queries) and chain them together.
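To illustrate the decoration point, a caching decorator can wrap any query without the query knowing anything about it. This is a sketch of the idea (cache key handling deliberately simplified, names assumed; it is not code from the repository):

```csharp
using System.Collections.Generic;

// Wraps any IQuery and memoises its results by criteria.
public class CachedQuery<TCriteria, TResult> : IQuery<TCriteria, TResult>
{
    private readonly IQuery<TCriteria, TResult> _inner;
    private readonly Dictionary<TCriteria, TResult> _cache =
        new Dictionary<TCriteria, TResult>();

    public CachedQuery(IQuery<TCriteria, TResult> inner)
    {
        _inner = inner;
    }

    public TResult Execute(TCriteria criteria)
    {
        TResult result;
        if (_cache.TryGetValue(criteria, out result))
            return result;

        result = _inner.Execute(criteria);
        _cache[criteria] = result;
        return result;
    }
}
```

Registered as a decorator in the container, every query in the application picks up caching with no change to the queries themselves.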

I really love how clean and concise this code is. By letting Dapper do the heavy lifting we aren’t bogged down with lots of ADO .net code that isn’t part of the IP of your business application.

An easy way to test custom configuration sections in .Net

Due to the way the ConfigurationManager class works in the .Net framework, it doesn’t lend itself very well to testing custom configuration sections.  I’ve found a neat solution to this problem that I want to share.  You can see all of the code in the EasyConfigurationTesting repository on github.

Most of the problems stem from the fact that the ConfigurationManager class is static. A good takeaway from this is that static classes are hard to test.

One approach to testing your own custom config section would be to put your config section in an app.config inside your test project.  This would work to the extent that you could read the configuration section and test each value, but it would be hard, error prone and hacky to test anything other than what was in the app.config file when the test ran.

In the config demo code that is on github the config section we are trying to test has a range element allowing you to set the min and max.  We want to test that we can successfully read the min and max values from the config and what happens in different scenarios like if we omit the max value from the config section.  Take a look at how readable the following test is:

TestAppConfigContext testContext = BuildA.NewTestAppConfig(@"<?xml version=""1.0""?>
            <configuration>
              <configSections>
                <section name=""customFilter"" type=""Config.Sections.FilterConfigurationSection, Config""/>
              </configSections>
              <customFilter>
                <range min=""1"" max=""500"" />
              </customFilter>
            </configuration>");


var configurationProvider = new ConfigurationProvider(testContext.ConfigFilePath);
var filterConfigSection = configurationProvider.Read<FilterConfigurationSection>(FilterConfigurationSection.SectionName);

filterConfigSection.Range.Min.Should().Be(1);
filterConfigSection.Range.Max.Should().Be(500);            

testContext.Destroy();

Notice how we can pass in a string to represent the configuration we want to test, get the section based upon that string, assert the values from the section and then destroy the test context.

Under the covers this works by creating a temporary text file based upon the string you pass in. That temporary text file is then set as the config file to use for ConfigurationManager. The location of the temporary file is stored in the TestAppConfigContext class which also has a helpful destroy method to clean up the temporary file after the test is complete.
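A sketch of what that could look like under the covers (the ConfigurationProvider in the repository may be implemented differently; the key trick is ExeConfigurationFileMap):

```csharp
using System.Configuration;

// Reads configuration sections from an arbitrary config file path
// instead of the ambient app.config.
public class ConfigurationProvider
{
    private readonly string _configFilePath;

    public ConfigurationProvider(string configFilePath)
    {
        _configFilePath = configFilePath;
    }

    public T Read<T>(string sectionName) where T : ConfigurationSection
    {
        // Point ConfigurationManager at the temp file the builder wrote.
        var fileMap = new ExeConfigurationFileMap
        {
            ExeConfigFilename = _configFilePath
        };
        var configuration = ConfigurationManager.OpenMappedExeConfiguration(
            fileMap, ConfigurationUserLevel.None);

        return (T)configuration.GetSection(sectionName);
    }
}
```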

By using the builder pattern it makes the test very readable. If you read the test out loud from the top it reads “build a new test app config”. Code that describes itself in this way is easy to understand and more maintainable.

A cool trick that I use inside the builder is to implement the implicit operator to convert from TestAppConfigBuilder to TestAppConfigContext. This means that instead of writing:

var testContext = BuildA.NewTestAppConfig(@"...").Build();

You can write:

TestAppConfigContext testContext = BuildA.NewTestAppConfig(@"...");

Notice that in the second snippet you can omit the call to Build() because the implicit operator takes care of the conversion (which incidentally is implemented as a call to Build()). In this case you could argue for keeping the explicit call to Build() as it lets you use var rather than spelling out the type, but I personally prefer the implicit version.
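The operator itself is tiny; a sketch of how it might sit inside the builder (assuming Build() is what produces the context):

```csharp
public class TestAppConfigBuilder
{
    // ... fluent methods that build up the config xml string ...

    public TestAppConfigContext Build()
    {
        // Writes the temp config file and returns the context (simplified).
        return new TestAppConfigContext();
    }

    // Lets a TestAppConfigBuilder be assigned straight to a
    // TestAppConfigContext variable without an explicit Build() call.
    public static implicit operator TestAppConfigContext(TestAppConfigBuilder builder)
    {
        return builder.Build();
    }
}
```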

With this pattern we can easily add more tests with different config values:

TestAppConfigContext testContext = BuildA.NewTestAppConfig(@"<?xml version=""1.0""?>
            <configuration>
              <configSections>
                <section name=""customFilter"" type=""Config.Sections.FilterConfigurationSection, Config""/>
              </configSections>
              <customFilter>
                <range min=""1"" />
              </customFilter>
            </configuration>");


var configurationProvider = new ConfigurationProvider(testContext.ConfigFilePath);
var filterConfigSection = configurationProvider.Read<FilterConfigurationSection>(FilterConfigurationSection.SectionName);

filterConfigSection.Range.Min.Should().Be(1);
filterConfigSection.Range.Max.Should().Be(100);

testContext.Destroy();

In the test above we are making sure we get a max value of 100 if one is not provided in the configuration. This gives us the safety net of a failing unit test should someone update this code.

I think this pattern is a really neat way to test custom configuration sections. Feel free to clone the github repository with the sample code and give feedback.

BuilderFramework – Dependent steps

Last time I started to detail a new open source builder framework that I was writing. Today I wanted to speak about dependent steps.

It is important for the builder to support dependent steps. For example you might have one step to create an order and another step to create a customer. Obviously the step to create the customer will need to run before the step to create the order. When rolling the steps back they will need to run in the opposite order.

To manage this I have written an attribute, DependsOnAttribute. This attribute takes a type as its constructor parameter and allows you to annotate which steps your step depends on. For example:


public class CreateCustomerStep : IBuildStep { ...

[DependsOn(typeof(CreateCustomerStep))]
public class CreateOrderStep : IBuildStep { ...

To support this the commit method needs to sort the steps into dependency order. It also needs to check for a circular dependency and throw an exception if one is found. We are going to need a separate class for managing the sorting of the steps (remember single responsibility!). The interface is detailed as follows:

public interface IBuildStepDependencySorter
{
    IEnumerable<IBuildStep> Sort(IEnumerable<IBuildStep> buildSteps);
}

Before we implement anything we need a set of tests covering all of the use cases of the dependency sorter; that way, when all of the tests pass we know our code is good. I always like to work in a TDD style.  (The tests I have come up with can be seen in depth on the github source page or by cloning the source.)

At a high level these are the tests we need:

  • A simple case where we have 2 steps where one depends on the other
  • A simple circular reference with 3 steps throws an exception
  • A complex circular reference with 5 steps throws an exception
  • A more complex but valid 4 step dependency hierarchy gets sorted correctly
  • A multiple dependency (one step dependent on more than one other step) gets sorted correctly

It is so important to spend a decent amount of time writing meaningful tests that cover all of the use cases of your code.  Once you have done this it makes it so much easier to write the code.  I see so many people writing the code first and then retrofitting the tests.  Some developers also claim that they haven’t got time to write tests; I can’t follow this logic.  When you write tests first your code is quicker to write, as you know when the code is working.  If you write your tests last then you are just caught in a horrible debugger cycle trying to work out what’s going wrong and why.  You should rarely, if ever, need the debugger.
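To give a flavour of those tests, the simple two-step case could look like this (an xunit + FluentAssertions style sketch with assumed step classes, not the actual tests from the repository):

```csharp
using System.Linq;
using FluentAssertions;
using Xunit;

public class StepA : IBuildStep
{
    public void Commit() { }
    public void Rollback() { }
}

[DependsOn(typeof(StepA))]
public class StepB : IBuildStep
{
    public void Commit() { }
    public void Rollback() { }
}

public class BuildStepDependencySorterTests
{
    [Fact]
    public void Sort_puts_a_depended_on_step_before_its_dependant()
    {
        var stepA = new StepA();
        var stepB = new StepB();

        // Deliberately pass the steps in the wrong order.
        var sorted = new BuildStepDependencySorter()
            .Sort(new IBuildStep[] { stepB, stepA })
            .ToList();

        sorted.Should().ContainInOrder(stepA, stepB);
    }
}
```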

To implement dependency sorting we need a topological sort, as detailed on wikipedia. I have decided to implement the algorithm first described by Kahn (1962).

Here is the pseudocode for the algorithm:

L ← Empty list that will contain the sorted elements
S ← Set of all nodes with no incoming edges
while S is non-empty do
    remove a node n from S
    add n to tail of L
    for each node m with an edge e from n to m do
        remove edge e from the graph
        if m has no other incoming edges then
            insert m into S
if graph has edges then
    return error (graph has at least one cycle)
else
    return L (a topologically sorted order)

Here is that code in C#:

public class BuildStepDependencySorter : IBuildStepDependencySorter
{
    private class Node
    {
        public Node(IBuildStep buildStep)
        {
            BuildStep = buildStep;
            IncomingEdges = new List<Edge>();
            OutgoingEdges = new List<Edge>();
        }

        public IBuildStep BuildStep { get; private set; }
        public List<Edge> IncomingEdges { get; private set; }
        public List<Edge> OutgoingEdges { get; private set; }
    }

    private class Edge
    {
        public Edge(Node sourceNode, Node destinationNode)
        {
            SourceNode = sourceNode;
            DestinationNode = destinationNode;
        }

        public Node SourceNode { get; private set; }
        public Node DestinationNode { get; private set; }

        public void Remove()
        {
            SourceNode.OutgoingEdges.Remove(this);
            DestinationNode.IncomingEdges.Remove(this);
        }
    }

    public IEnumerable<IBuildStep> Sort(IEnumerable<IBuildStep> buildSteps)
    {
        List<Node> nodeGraph = buildSteps.Select(buildStep => new Node(buildStep)).ToList();

        foreach (var node in nodeGraph)
        {
            var depends = (DependsOnAttribute[])Attribute.GetCustomAttributes(node.BuildStep.GetType(), typeof(DependsOnAttribute));
            var dependNodes = nodeGraph.Where(n => depends.Any(d => d.DependedOnStep == n.BuildStep.GetType()));

            var edges = dependNodes.Select(n => new Edge(node, n)).ToArray();
            node.OutgoingEdges.AddRange(edges);

            foreach (var edge in edges)
                edge.DestinationNode.IncomingEdges.Add(edge);
        }

        var result = new Stack<Node>();
        var sourceNodes = new Stack<Node>(nodeGraph.Where(n => !n.IncomingEdges.Any()));
        while (sourceNodes.Count > 0)
        {
            var sourceNode = sourceNodes.Pop();
            result.Push(sourceNode);

            for (int i = sourceNode.OutgoingEdges.Count - 1; i >= 0; i--)
            {
                var edge = sourceNode.OutgoingEdges[i];
                edge.Remove();

                if (!edge.DestinationNode.IncomingEdges.Any())
                    sourceNodes.Push(edge.DestinationNode);
            }
        }

        if (nodeGraph.SelectMany(n => n.IncomingEdges).Any())
            throw new CircularDependencyException();

        return result.Select(n => n.BuildStep);
    }

}

Imagine how hard this code would’ve been to get right with no unit tests! When you have unit tests and nCrunch, an indicator simply goes green when it works! If you haven’t seen or heard of nCrunch before, definitely check it out. It is a fantastic tool.

Now that we have the dependency sorter in place all we need to do is add some more tests to the builder class.  These tests ensure that the steps are sorted into dependency order before they are committed and into reverse dependency order when they are rolled back. With those tests in place it is quite trivial to update the builder to sort the steps for commit and rollback (see the snippet from the builder below):

private void Execute(IEnumerable<IBuildStep> buildSteps, Action<IBuildStep> action)
{
    foreach (var buildStep in buildSteps)
    {
        action(buildStep);
    }
}

public void Commit()
{
    Execute(_buildStepDependencySorter.Sort(BuildSteps),
                buildStep => buildStep.Commit());
}

public void Rollback()
{
    Execute(_buildStepDependencySorter.Sort(BuildSteps).Reverse(),
                buildStep => buildStep.Rollback());
}

I love how clean that code is. When your code is short and to the point like this it is so much easier to read, maintain and test. That is the importance of following SOLID principles.

As always I welcome your feedback so feel free to tweet or email me.

BuilderFramework – a framework for committing and rolling back test setup

In a recent piece of work the need has come up again to write some builder code for use with tests.  I feel passionately that you should take as much care with your test code as you do with your main software code that goes out of the door.  The reason for this is that your tests are your specification.  They prove the software does what it says it is going to do.  Having well written, clean and repeatable tests is vital.  Tests that are hard to maintain and brittle get ignored when they aren’t working.  Unfortunately you hear phrases like “Oh don’t worry about that test it’s always broken” all too often.

Part of the reason that a lot of tests I see out in the real world aren’t repeatable is that they rely on 3rd party systems that they can’t control very easily.  The biggest one of these is a database.  I’ve seen acceptance tests relying on a certain set of products having certain attributes in a database.  It’s not hard to see why this isn’t a great idea.  As products change in the database the tests start breaking.

To fix this problem a nice approach is to use the builder pattern to set up your data in a certain way, run your test and then roll the data back to how it was before.  This is something that I have written various times, so I’ve decided to start an open source project on github.  The project will provide the boilerplate code so you can just concentrate on writing the steps.

The project will have an interface that you have to implement that looks like this:

public interface IBuildStep
{
    void Commit();
    void Rollback();
}

It doesn’t get any more straight forward than that. Once you have written a build step you will be able to add it to the main builder in 2 ways. The first way is you can just pass in an instance:

var builder = new Builder()
                    .With(new MyStep());

The second way is that you can provide a type that can build your step. This allows you to write a builder that can build up a class using a fluent interface and just plug it in. For example if I had a builder:

public class MyStepBuilder
{
    private MyStep _myStep;

    public MyStepBuilder()
    {
        _myStep = new MyStep();
    }
    
    public MyStepBuilder WithValue(int value)
    {
       _myStep.Value = value;
       return this;
    }

    // more methods here to set up all of the properties on _myStep
    // ...

    public MyStep Build()
    {
       return _myStep;
    }
}

Then you would be able to plug that builder into the main builder giving you a fluent interface:

    var builder = new Builder()
                        .With<MyStepBuilder>(s => s.WithValue(3)
                                                   .Build());

Either way, once you have an instance of the builder you can commit your steps by calling Commit and then roll them back by calling Rollback.
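Putting it together, a test using the builder would look something like this (step name assumed):

```csharp
var builder = new Builder()
                    .With(new MyStep());

builder.Commit();    // set the test data up

// ... exercise the system under test here ...

builder.Rollback();  // put everything back how it was
```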

Keep an eye on the github project to see how this develops. If you like the idea or have any thoughts about how it can be improved please give me a shout.

Fluent Installation

A friend of mine, Nick Hammond, has recently embarked on an interesting open source project to provide a fluent interface for setting up a web server.  What is clever about the project is that it makes use of cmdlets, meaning it is easy to use from Powershell, allowing you to plug it into your deployment pipeline.

Far too often Dev, Test and Live environments differ wildly.  They all have their little quirks and nuances.  Comments like “Oh if you use test environment B remember that the site is on drive D not drive C” are far too common.

This project attempts to solve that problem by turning the task of setting up an environment from a manual one into an automated, repeatable one.

Here is a snippet of some sample code to configure a web site using the fluent API:

context
    .ConfigureWebServer()
    .CreateWebsite(site =>
        {
            site.Named(parameters.SiteName);
            site.OnPhysicalPath(@"C:\");
            site.UseApplicationPool(parameters.SiteName);

            site.AddBinding(binding =>
            {
                binding.UseProtocol("https");
                binding.OnPort(80);
                binding.UseHostName("local.site.com");
                binding.UseCertificateWithThumbprint("8e6e3cc19bf5abfe01c7ee12ea23f20f4a1d513c");
            });

            site.AddApplication(application =>
            {
                application.UseAlias("funkyapi");
                application.OnPhysicalPath(@".\api");
            });

            site.AddVirtualDirectory(virtualDirectory =>
            {
                virtualDirectory.UseAlias("assets");
                virtualDirectory.OnPhysicalPath(@".\Assets");
            });
        })
    .Commit();

The syntax is clear and readable. The hallmark of a good syntax/API is one where you know what it is doing just from reading the code; you don’t need to dive into the implementation.

Under the covers the fluent API makes use of the Microsoft.Web.Administration assembly and in particular the ServerManager class.  In typical Microsoft fashion the ServerManager class wields a lot of power but out of the box it is hard to use and test (please start using interfaces Microsoft!!).  With Nick’s fluent installation project all of that changes!

I have already started to get involved in this project as I think it could grow into a very useful tool for automating the deployment/setup of websites. Why not check out the source code on github and get involved.

Strongly Typed ScenarioContext in SpecFlow part II

Last time I posted details on my nuget package which supplies strongly typed ScenarioContexts in SpecFlow.

In the comments Darren pointed out there is a way to do this using a generic wrapper which is built into the framework:

ScenarioContext.Current.Set<AnotherComplexObject>(new ComplexObject());
var anotherComplexObject = ScenarioContext.Current.Get<AnotherComplexObject>();

I didn’t realise this existed, and it is very useful. It is another good way to skin the same cat.

This is a great solution and will probably work for you. The only small downside is that, because it casts under the covers, you have to supply the type you are expecting or setting through the generic argument. This is probably not a problem, but it makes the code a little verbose.
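Under the covers a `Set<T>`/`Get<T>` wrapper like this can be implemented over a plain string-keyed dictionary. Here is a rough sketch of the idea (SpecFlow’s real implementation will differ in its details):

```csharp
using System;
using System.Collections.Generic;

// A rough sketch of the Set<T>/Get<T> idea: the type's full name acts as
// the dictionary key, so the cast happens once, inside the wrapper,
// rather than at every call site. SpecFlow's real implementation differs.
public class TypedContext
{
    private readonly Dictionary<string, object> store = new Dictionary<string, object>();

    public void Set<T>(T value)
    {
        store[typeof(T).FullName] = value;
    }

    public T Get<T>()
    {
        // The single cast lives here instead of being scattered through steps
        return (T)store[typeof(T).FullName];
    }
}
```

One consequence of keying on the type is that only one instance per type can be stored under the default key.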

The nuget package moves the casting further up the pipeline, into the interceptor, so you don’t have to worry about it. By the time you come to use the context, the objects have already been cast to your interface type. The downside to the nuget package, however, is that you have to derive your step definition classes from an abstract base class, BaseBinding. In some circumstances this may not be possible, which is a nuisance. The reason I had to do it this way is that it’s the only way I could hook into the object creation pipeline. If there is a better way then please let me know.

Whilst we are on the subject of the StronglyTypedContext I’d like to take this opportunity to point out one more test case for the shared context. It works across multiple step definition classes as can be seen from the code snippet below:

    public interface ISharedContext
    {
        int CustomerId { get; set; }
    }

    [Binding]
    public class MultipleStepDefinitionsOne : BaseBinding
    {
        [ScenarioContext]
        public virtual ISharedContext Context { get; set; }

        [Given(@"I have a step definition file with a scenario context property")]
        public void GivenIHaveAStepDefinitionFileWithAScenarioContextProperty()
        {
            // empty step
        }

        [When(@"I set the property in one step definition class")]
        public void WhenISetThePropertyInOneStepDefinitionClass()
        {
            Context.CustomerId = 1234;
        }


    }

    [Binding]
    public class MultipleStepDefinitionsTwo : BaseBinding
    {
        [ScenarioContext]
        public virtual ISharedContext Context { get; set; }

        [Then(@"I can read the property in a different step definition class")]
        public void ThenICanReadThePropertyInADifferentStepDefinitionClass()
        {
            Context.CustomerId.Should().Be(1234);
        }

    }

That code is now a test in the source code. If anyone has any suggestions about how the project can be improved, let me know.

Strongly Typed ScenarioContext in Specflow

I write a lot of BDD tests day to day using SpecFlow.  I really love SpecFlow as a tool, as it allows you to write clear acceptance tests that are driven by business criteria and written in a language that the business can understand.

One of the things I dislike about SpecFlow however is the way the ScenarioContext is implemented.  The ScenarioContext is a dictionary that is passed to each step in your test by the SpecFlow framework.  It is a place where you can store test data between steps.  The trouble is that the signatures to get and set an item are as follows:

// Set a value
ScenarioContext.Current.Add(string key, object value);

// Get a value
var value = ScenarioContext.Current[string key];

The problem is that when people are not disciplined you end up with magic strings scattered throughout all of your step definition files.  You also lose the compiler as a safety guard.  For example:


[Binding]
public class Steps
{
    [Given("I have done something")]
    public void GivenIHaveDoneSomething()
    {
        ScenarioContext.Current.Add("customerId", 1234);
    }

    // then further down
    [When("I do this action")]
    public void WhenIDoThisAction()
    {
        // This line compiles but fails at runtime because the key does not exist
        var customerId = (int)ScenarioContext.Current["wrongKey"];
    }
}

The two problems I mentioned are highlighted above. Firstly, you have to remember the magic string that you used when storing the value in the ScenarioContext. Secondly, you have to know the type in order to cast it. The above code compiles but throws at runtime, and runtime errors are much harder to catch than compile-time errors. Where possible it’s best to make the compiler work for you.

To aid with that I have written a nuget package called StronglyTypedContext. This nuget package gives you strongly typed scenario contexts, alleviating both of the problems above.

Example using StronglyTypedContext nuget package:


public interface IContext
{
    int CustomerId { get; set; }
}

[Binding]
public class Steps : BindingBase
{
    [ScenarioContext]
    public virtual IContext Context { get; set; }

    [Given("I have done something")]
    public void GivenIHaveDoneSomething()
    {
        Context.CustomerId = 1234;
    }

    // then further down
    [When("I do this action")]
    public void WhenIDoThisAction()
    {
        var customerId = Context.CustomerId;
    }
}

Look how much cleaner the code above is. No casting is necessary and we still have access to our ScenarioContext variables throughout the test. The heavy lifting is done by a constructor in the base class (BindingBase), which uses Castle Dynamic Proxy at runtime. Note how nowhere in the code above do we actually implement the IContext interface.

So how does this work?

1) The abstract class BindingBase has a constructor that looks for public virtual properties marked with ScenarioContextAttribute

2) Castle Dynamic Proxy is then used to generate a proxy implementation for each property and sets each property to the created proxy. The proxy is created with a custom interceptor set.

3) The custom interceptor is invoked when you call get or set on the public virtual property. For example in the code snippet above when you call Context.CustomerId you are actually invoking CustomerId on the proxy which gives the interceptor a chance to get involved.

4)  The interceptor then checks whether the property is being read or written, and retrieves or stores the value in the SpecFlow ScenarioContext using a key generated from a combination of the type and property name.
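The same mechanics can be sketched with the BCL’s System.Reflection.DispatchProxy instead of Castle Dynamic Proxy (which is what the package actually uses). Property accesses on the proxy arrive as `get_X`/`set_X` method calls, which the proxy routes to a dictionary standing in here for the ScenarioContext:

```csharp
using System;
using System.Collections.Generic;
using System.Reflection;

public interface ISharedContext
{
    int CustomerId { get; set; }
}

// A sketch of steps 2-4 using the BCL's DispatchProxy rather than the
// Castle Dynamic Proxy used by the real package. The dictionary below
// stands in for SpecFlow's ScenarioContext.
public class ContextProxy<T> : DispatchProxy where T : class
{
    // Shared store so that two proxies of the same interface see the same
    // values, mirroring how step definition classes share a context.
    private static readonly Dictionary<string, object> Store = new Dictionary<string, object>();

    protected override object Invoke(MethodInfo targetMethod, object[] args)
    {
        // Property accessors arrive as get_Name/set_Name; the key combines
        // the interface type name and the property name.
        var propertyName = targetMethod.Name.Substring(4);
        var key = typeof(T).Name + "." + propertyName;

        if (targetMethod.Name.StartsWith("set_"))
        {
            Store[key] = args[0];
            return null;
        }

        return Store[key];
    }

    public static T Create()
    {
        return DispatchProxy.Create<T, ContextProxy<T>>();
    }
}
```

With this sketch, setting `CustomerId` on one proxy and reading it from another proxy of the same interface goes through the shared store, which is exactly the behaviour the multiple-step-definition test above relies on.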

I wrote the whole library using a TDD approach.  You can see the full source code on the GitHub page. If you are interested in digging into more detail, I would suggest starting with the ProxyInterceptor class. This is where the magic happens.