BuilderFramework – Dependent steps

Last time I started to detail a new open source builder framework that I am writing. Today I want to talk about dependent steps.

It is important for the builder to support dependent steps. For example you might have one step to create an order and another step to create a customer. Obviously the step to create the customer will need to run before the step to create the order. When reversing these steps they will need to run in the opposite order.

To manage this I have written an attribute, DependsOnAttribute. This attribute takes a type as its constructor parameter and allows you to annotate which steps your step depends on. For example:

public class CreateCustomerStep : IBuildStep { ...

[DependsOn(typeof(CreateCustomerStep))]
public class CreateOrderStep : IBuildStep { ...
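For context, a minimal sketch of what the attribute itself might look like (the real implementation lives in the BuilderFramework source; the property name DependedOnStep matches how the sorter uses it later in this post):

```csharp
// Sketch only: allows a step class to declare one or more dependencies.
[AttributeUsage(AttributeTargets.Class, AllowMultiple = true)]
public class DependsOnAttribute : Attribute
{
    public DependsOnAttribute(Type dependedOnStep)
    {
        DependedOnStep = dependedOnStep;
    }

    public Type DependedOnStep { get; private set; }
}
```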

To support this, the Commit method needs to sort the steps into dependency order. It also needs to check for circular dependencies and throw an exception if one is found. We are going to need a separate class to manage the sorting of the steps (remember single responsibility!). The interface is detailed as follows:

public interface IBuildStepDependencySorter
{
    IEnumerable<IBuildStep> Sort(IEnumerable<IBuildStep> buildSteps);
}

Before we implement anything we need a set of tests to cover all of the use cases of the dependency sorter. That way, when all of the tests pass, we know that our code is good. I always like to work in a TDD style. (The tests I have come up with can be seen in depth on the GitHub source page or by cloning the source.)

At a high level these are the tests we need:

  • A simple case where we have 2 steps where one depends on the other
  • A simple circular reference with 3 steps throws an exception
  • A complex circular reference with 5 steps throws an exception
  • A more complex but valid 4 step dependency hierarchy gets sorted correctly
  • A multiple dependency (one step dependent on more than one other step) gets sorted correctly
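As a flavour of these, here is a sketch of the first test in the list, assuming NUnit and two dummy step classes (StepA, and StepB annotated with [DependsOn(typeof(StepA))]) — the real tests are in the GitHub source:

```csharp
// Sketch only: StepA and StepB are hypothetical IBuildStep implementations,
// where StepB carries [DependsOn(typeof(StepA))].
[Test]
public void Sort_WithTwoDependentSteps_ReturnsDependedOnStepFirst()
{
    var stepA = new StepA();
    var stepB = new StepB();
    var sorter = new BuildStepDependencySorter();

    // Pass the steps in the "wrong" order on purpose
    var sorted = sorter.Sort(new IBuildStep[] { stepB, stepA }).ToList();

    Assert.That(sorted[0], Is.SameAs(stepA));
    Assert.That(sorted[1], Is.SameAs(stepB));
}
```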

It is so important to spend a decent amount of time writing meaningful tests that cover all of the use cases of your code. Once you have done this it makes it so much easier to write the code. I see so many people writing the code first and then retrofitting the tests. Some developers also claim that they haven't got time to write tests. I can't follow this logic: when you write tests first your code is quicker to write, because you know when the code is working. If you write your tests last then you are just caught in a horrible debugger cycle trying to work out what's going wrong and why. You should rarely, if ever, need the debugger.

To implement dependency sorting we need a topological sort, as detailed on Wikipedia. I have decided to implement the algorithm first described by Kahn (1962).

Here is the pseudocode for the algorithm:

L ← Empty list that will contain the sorted elements
S ← Set of all nodes with no incoming edges
while S is non-empty do
    remove a node n from S
    add n to tail of L
    for each node m with an edge e from n to m do
        remove edge e from the graph
        if m has no other incoming edges then
            insert m into S
if graph has edges then
    return error (graph has at least one cycle)
else
    return L (a topologically sorted order)

Here is that code in C#:

using System;
using System.Collections.Generic;
using System.Linq;

public class BuildStepDependencySorter : IBuildStepDependencySorter
{
    private class Node
    {
        public Node(IBuildStep buildStep)
        {
            BuildStep = buildStep;
            IncomingEdges = new List<Edge>();
            OutgoingEdges = new List<Edge>();
        }

        public IBuildStep BuildStep { get; private set; }
        public List<Edge> IncomingEdges { get; private set; }
        public List<Edge> OutgoingEdges { get; private set; }
    }

    private class Edge
    {
        public Edge(Node sourceNode, Node destinationNode)
        {
            SourceNode = sourceNode;
            DestinationNode = destinationNode;
        }

        public Node SourceNode { get; private set; }
        public Node DestinationNode { get; private set; }

        public void Remove()
        {
            SourceNode.OutgoingEdges.Remove(this);
            DestinationNode.IncomingEdges.Remove(this);
        }
    }

    public IEnumerable<IBuildStep> Sort(IEnumerable<IBuildStep> buildSteps)
    {
        List<Node> nodeGraph = buildSteps.Select(buildStep => new Node(buildStep)).ToList();

        // Build an edge from each step to each step it depends on
        foreach (var node in nodeGraph)
        {
            var depends = (DependsOnAttribute[])Attribute.GetCustomAttributes(node.BuildStep.GetType(), typeof(DependsOnAttribute));
            var dependNodes = nodeGraph.Where(n => depends.Any(d => d.DependedOnStep == n.BuildStep.GetType()));

            var edges = dependNodes.Select(n => new Edge(node, n)).ToArray();

            foreach (var edge in edges)
            {
                edge.SourceNode.OutgoingEdges.Add(edge);
                edge.DestinationNode.IncomingEdges.Add(edge);
            }
        }

        var result = new Stack<Node>();
        var sourceNodes = new Stack<Node>(nodeGraph.Where(n => !n.IncomingEdges.Any()));
        while (sourceNodes.Count > 0)
        {
            var sourceNode = sourceNodes.Pop();
            result.Push(sourceNode);

            // Iterate backwards because Remove() mutates the list
            for (int i = sourceNode.OutgoingEdges.Count - 1; i >= 0; i--)
            {
                var edge = sourceNode.OutgoingEdges[i];
                edge.Remove();

                if (!edge.DestinationNode.IncomingEdges.Any())
                    sourceNodes.Push(edge.DestinationNode);
            }
        }

        // Any edges left over means the graph has at least one cycle
        if (nodeGraph.SelectMany(n => n.IncomingEdges).Any())
            throw new CircularDependencyException();

        // The stack unwinds so that depended-on steps come out first
        return result.Select(n => n.BuildStep);
    }
}

Imagine how hard this code would've been to get right with no unit tests! When you have unit tests and NCrunch, an indicator simply goes green when it works! If you haven't seen or heard of NCrunch before, definitely check it out. It is a fantastic tool.

Now that we have the dependency sorter in place, all we need to do is add some more tests to the builder class. These tests ensure that the steps are sorted into dependency order before they are committed and into reverse dependency order when they are rolled back. With those tests in place it is quite trivial to update the builder to sort the steps for commit and rollback (see the snippet from the builder below):

private void Execute(IEnumerable<IBuildStep> buildSteps, Action<IBuildStep> action)
{
    foreach (var buildStep in buildSteps)
    {
        action(buildStep);
    }
}

public void Commit()
{
    // _dependencySorter and _buildSteps are fields on the builder
    Execute(_dependencySorter.Sort(_buildSteps),
            buildStep => buildStep.Commit());
}

public void Rollback()
{
    // rollback runs the steps in reverse dependency order
    Execute(_dependencySorter.Sort(_buildSteps).Reverse(),
            buildStep => buildStep.Rollback());
}

I love how clean that code is. When your code is short and to the point like this it is so much easier to read, maintain and test. That is the importance of following SOLID principles.

As always I welcome your feedback so feel free to tweet or email me.

BuilderFramework – a framework for committing and rolling back test setup

In a recent piece of work the need has come up again to write some builder code for use with tests.  I feel passionately that you should take as much care with your test code as you do with your main software code that goes out of the door.  The reason for this is that your tests are your specification.  They prove the software does what it says it is going to do.  Having well written, clean and repeatable tests is vital.  Tests that are hard to maintain and brittle get ignored when they aren’t working.  Unfortunately you hear phrases like “Oh don’t worry about that test it’s always broken” all too often.

Part of the reason that a lot of tests I see out in the real world aren’t repeatable is that they rely on 3rd party systems that they can’t control very easily.  The biggest one of these is a database.  I’ve seen acceptance tests relying on a certain set of products having certain attributes in a database.  It’s not hard to see why this isn’t a great idea.  As products change in the database the tests start breaking.

To fix this problem a nice approach is to use the builder pattern to set up your data in a certain way, run your test and then roll the data back to how it was before. This is something that I have written various times so I've decided to start an open source project on GitHub. The project will provide the boilerplate code so you can just concentrate on writing the steps.

The project will have an interface that you have to implement that looks like this:

public interface IBuildStep
{
    void Commit();
    void Rollback();
}

It doesn't get any more straightforward than that. Once you have written a build step you can add it to the main builder in two ways. The first way is to pass in an instance:

var builder = new Builder()
                    .With(new MyStep());

The second way is that you can provide a type that can build your step. This allows you to write a builder that can build up a class using a fluent interface and just plug it in. For example if I had a builder:

public class MyStepBuilder
{
    private MyStep _myStep;

    public MyStepBuilder()
    {
        _myStep = new MyStep();
    }

    public MyStepBuilder WithValue(int value)
    {
        _myStep.Value = value;
        return this;
    }

    // more methods here to set up all of the properties on _myStep
    // ...

    public MyStep Build()
    {
        return _myStep;
    }
}

Then you would be able to plug that builder into the main builder giving you a fluent interface:

    var builder = new Builder()
                        .With<MyStepBuilder>(s => s.WithValue(3));

Either way, once you have an instance of the builder, you can commit your steps by calling Commit and then roll them back by calling Rollback.
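To make the two ways of adding a step concrete, here is a hypothetical sketch of how the two With overloads on the builder might look. This is an assumption rather than the real API (which is in the GitHub source), and IBuildStepBuilder is an invented interface name:

```csharp
// Sketch only: IBuildStepBuilder is an invented name that lets the builder
// call Build() on your step builder once it has been configured.
public interface IBuildStepBuilder
{
    IBuildStep Build();
}

public class Builder
{
    private readonly List<IBuildStep> _buildSteps = new List<IBuildStep>();

    // Way 1: add a ready-made step instance
    public Builder With(IBuildStep step)
    {
        _buildSteps.Add(step);
        return this;
    }

    // Way 2: create the step through its own fluent builder
    public Builder With<TBuilder>(Func<TBuilder, TBuilder> configure)
        where TBuilder : IBuildStepBuilder, new()
    {
        _buildSteps.Add(configure(new TBuilder()).Build());
        return this;
    }
}
```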

Keep an eye on the GitHub project to see how this develops. If you like the idea or have any thoughts about how it can be improved please give me a shout.

Fluent Installation

A friend of mine, Nick Hammond, has recently embarked on an interesting open source project to provide a fluent interface for setting up a web server. What is clever about the project is that it makes use of cmdlets, meaning that it is easy to use in PowerShell and can be plugged into your deployment pipeline.

Far too often Dev, Test and Live environments differ wildly.  They all have their little quirks and nuances.  Comments like “Oh if you use test environment B remember that the site is on drive D not drive C” are far too common.

This project attempts to solve that problem by turning the task of setting up an environment from a manual process into an automated, repeatable one.

Here is a snippet of some sample code to configure a web site using the fluent API:

    .CreateWebsite(site =>
    {
        // binding, application and virtual directory details elided
        site.AddBinding(binding => { /* ... */ });

        site.AddApplication(application => { /* ... */ });

        site.AddVirtualDirectory(virtualDirectory => { /* ... */ });
    });

The syntax is clear and readable. The hallmark of a good syntax/API is one where you know what it is doing just from reading the code.  You don’t need to dive into the implementation.

Under the covers the fluent API makes use of the Microsoft.Web.Administration assembly and in particular the ServerManager class.  In typical Microsoft fashion the ServerManager class wields a lot of power but out of the box it is hard to use and test (please start using interfaces Microsoft!!).  With Nick’s fluent installation project all of that changes!

I have already started to get involved in this project as I think it could grow into a very useful tool for automating the deployment/setup of websites. Why not check out the source code on github and get involved.

Strongly Typed ScenarioContext in SpecFlow part II

Last time I posted details of my NuGet package which supplies strongly typed scenario contexts in SpecFlow.

In the comments Darren pointed out there is a way to do this using a generic wrapper which is built into the framework:

ScenarioContext.Current.Set<AnotherComplexObject>(new AnotherComplexObject());
var anotherComplexObject = ScenarioContext.Current.Get<AnotherComplexObject>();

I didn't realise that this existed and it is very useful. This is another good way to skin the same cat.

This is a great solution and will probably work for you. The only small downside is that because it is casting under the covers you have to provide the type that you are expecting/setting through the generic argument. This is probably not a problem but makes the code a little verbose.

The NuGet package moves the casting further up the pipeline into the interceptor so you don't have to worry about it. By the time you come to use the context the objects have already been cast into your interface type. The downside to the NuGet package however is that you have to derive your step definition classes from an abstract base class, BaseBinding. In some circumstances this may not be possible, which is a nuisance. The reason I had to do it this way is that it's the only way I could hook into the object creation pipeline. If there is a better way then please let me know.

Whilst we are on the subject of the StronglyTypedContext I’d like to take this opportunity to point out one more test case for the shared context. It works across multiple step definition classes as can be seen from the code snippet below:

    public interface ISharedContext
    {
        int CustomerId { get; set; }
    }

    public class MultipleStepDefinitionsOne : BaseBinding
    {
        [ScenarioContext]
        public virtual ISharedContext Context { get; set; }

        [Given(@"I have a step definition file with a scenario context property")]
        public void GivenIHaveAStepDefinitionFileWithAScenarioContextProperty()
        {
            // empty step
        }

        [When(@"I set the property in one step definition class")]
        public void WhenISetThePropertyInOneStepDefinitionClass()
        {
            Context.CustomerId = 1234;
        }
    }

    public class MultipleStepDefinitionsTwo : BaseBinding
    {
        [ScenarioContext]
        public virtual ISharedContext Context { get; set; }

        [Then(@"I can read the property in a different step definition class")]
        public void ThenICanReadThePropertyInADifferentStepDefinitionClass()
        {
            // assertion framework assumed to be NUnit here
            Assert.That(Context.CustomerId, Is.EqualTo(1234));
        }
    }

That code is now a test in the source code. If anyone has any suggestions about how the project can be improved let me know.

Strongly Typed ScenarioContext in SpecFlow

I write a lot of BDD tests day to day using SpecFlow.  I really love SpecFlow as a tool, as it allows you to write clear acceptance tests that are driven by business criteria and written in a language that the business can understand.

One of the things I dislike about SpecFlow however is the way the ScenarioContext is implemented.  The ScenarioContext is a dictionary that is passed to each step in your test by the SpecFlow framework.  It is a place where you can store test data between steps.  The trouble is the signature to get and set an item is as follows:

// Set a value
ScenarioContext.Current.Add(string key, object value);

// Get a value
var value = ScenarioContext.Current[string key];

The trouble with this is that when people are not disciplined you end up with magic strings everywhere, scattered throughout all of your step definition files.  You also lose your compiler as a safety guard.  For example:

public class Steps
{
    [Given("I have done something")]
    public void GivenIHaveDoneSomething()
    {
        ScenarioContext.Current.Add("customerId", 1234);
    }

    // then further down
    [When("I do this action")]
    public void WhenIDoThisAction()
    {
        // This compiles but throws a KeyNotFoundException at runtime
        var customerId = (int)ScenarioContext.Current["wrongKey"];
    }
}

The two problems that I mention are highlighted above. Firstly you have to remember the magic string that you used in the ScenarioContext. You also have to know the type to cast it. The above code will compile but will throw an error at runtime. Runtime errors are much harder to catch than compile time errors. Where possible it’s best to make the compiler work for you.

To aid with that I have written a NuGet package called StronglyTypedContext. This package provides you with strongly typed scenario contexts, alleviating both of the problems above.

Here is an example using the StronglyTypedContext NuGet package:

public interface IContext
{
    int CustomerId { get; set; }
}

public class Steps : BindingBase
{
    [ScenarioContext]
    public virtual IContext Context { get; set; }

    [Given("I have done something")]
    public void GivenIHaveDoneSomething()
    {
        Context.CustomerId = 1234;
    }

    // then further down
    [When("I do this action")]
    public void WhenIDoThisAction()
    {
        var customerId = Context.CustomerId;
    }
}

Look how much cleaner the code above is. No casting is necessary and we still have access to our ScenarioContext variables throughout the test. The heavy lifting for this is done in a constructor on the base class (BindingBase) which uses Castle DynamicProxy at runtime. Note that nowhere in the code above do we actually implement the IContext interface.

So how does this work?

1) The abstract class BindingBase has a constructor that looks for public virtual properties marked with ScenarioContextAttribute

2) Castle Dynamic Proxy is then used to generate a proxy implementation for each property and sets each property to the created proxy. The proxy is created with a custom interceptor set.

3) The custom interceptor is invoked when you call get or set on the public virtual property. For example in the code snippet above when you call Context.CustomerId you are actually invoking CustomerId on the proxy which gives the interceptor a chance to get involved.

4)  The interceptor then checks to see if the property is being get or set. It then either retrieves or stores the value in the SpecFlow Scenario context based on a key that is generated from a combination of the type and property name.
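The interceptor logic in steps 3 and 4 can be sketched roughly as follows. This is an assumption about the shape of the code, not the real ProxyInterceptor implementation (which is in the source); only the Castle DynamicProxy IInterceptor/IInvocation types are real:

```csharp
using Castle.DynamicProxy;
using TechTalk.SpecFlow;

// Sketch only: intercepts get/set calls on the proxied context property and
// routes them to the SpecFlow ScenarioContext under a generated key.
public class ContextInterceptor : IInterceptor
{
    public void Intercept(IInvocation invocation)
    {
        // Property accessors compile down to get_Name / set_Name methods
        var propertyName = invocation.Method.Name.Substring(4);
        var key = invocation.Method.DeclaringType.FullName + "." + propertyName;

        if (invocation.Method.Name.StartsWith("set_"))
        {
            // Store the value in the scenario context
            ScenarioContext.Current[key] = invocation.Arguments[0];
        }
        else
        {
            // Retrieve the previously stored value
            invocation.ReturnValue = ScenarioContext.Current[key];
        }
    }
}
```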

I wrote the whole library using a TDD approach. You can see the full source code on the GitHub page. If you are interested in digging into more detail I would suggest you start by looking at the ProxyInterceptor class. This is where the magic happens.

Mocking Frameworks in .Net

As a contractor I have seen a lot of different companies' code bases and been on many different projects. Most companies now employ TDD (which is a good thing). This means that often you need a mocking framework. Mocking frameworks are ten a penny and I have used many different ones over the years.

The most common mocking framework in .Net that I have come across by far is Moq. I quite like Moq but the syntax is a bit clunky. For example, to set up a mock repository to return a fake customer:

 var mockRepository = new Mock<ICustomerRepository>();
 mockRepository.Setup(m => m.GetCustomer(It.IsAny<int>())).Returns(new Customer { Id = 1234, Name = "Fred" });
 var customerController = new CustomerController(mockRepository.Object);

What I don’t like is that the variable mockRepository is not an ICustomerRepository it is a Mock<ICustomerRepository>. The Mock is a wrapper around your type. This means that when you inject your mock into the class you are testing you need to use the .Object property as can be seen on line 3 above.

Compare this to FakeItEasy:

 var mockRepository = A.Fake<ICustomerRepository>();
 A.CallTo(() => mockRepository.GetCustomer(A<int>.Ignored)).Returns(new Customer { Id = 1234, Name = "Fred" });
 var customerController = new CustomerController(mockRepository);

This feels so much cleaner to me because this time the mockRepository variable is an ICustomerRepository. The A.Fake<T>() method is a factory method from the FakeItEasy library that does some clever work with a dynamic proxy to implement the provided interface (in this case ICustomerRepository) on the fly. This also means that when we pass the mock into our CustomerController we just pass mockRepository; we don't need to use the .Object property, which feels much cleaner (see line 3 in the 2nd snippet above).

Which mocking frameworks have you used? What are the pros/cons?

XNA Game – Rotation – Download from iTunes – Part 19

I have just finished porting the blog from one host to another. Hence the glut of posts that have come in one day. Since my last post almost one year ago I have successfully released the game on iTunes. The game is completely free so why not download it and give it a try.

When I set out on this journey I was trying to write a game using a TDD approach. I don't know much about the games industry, but in business application programming this is how we approach software. I thought it would be both a great learning exercise and an interesting experiment to see how the TDD approach would lend itself to game development. The answer: surprisingly well.

In most cases the first time I ran the code it worked. I can only remember using the debugger in earnest once, which is pretty cool. This shows the value of TDD. It also shows the value of writing good unit tests that cover business logic. Anyone who tells you they can write code faster without tests is probably lying; if they say they can write code that meets business requirements faster without tests then they definitely are.

So this brings the Rotation game coding series to a close. Stick with the blog for other coding adventures.

XNA Game – Rotation – Finishing up rotation… – Part 18

This will be the final part of my series on making an XNA game from scratch. I hope you have enjoyed the series and learnt a thing or two along the way.

The last part that I needed to do was to add a count of how many rotations the player has made, and then each time the user rotates (either left or right) to decrement that count. If the user makes a block then they get an extra rotation for every square in the block. This gives the player a goal for the game.

Again I sound like a broken record here, but hopefully this is hammering the point home for any doubters: by using the single responsibility principle and writing decoupled code this was very easy to implement. All I needed to do was add a new class called the RotationManager, whose job is to keep track of the rotations that the player has made. The RotationManager has two methods on it: rotation made and block found. The first decrements the count when a rotation is made. The other simply loops around all of the squares in all of the blocks made, counts them and adds them onto the total number of rotations that the player has left.
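The two methods described above can be sketched roughly like this. It is an assumption about the shape of the class (the real one is in the Rotation source on GitHub), and Block is a stand-in for the real type:

```csharp
// Sketch only: keeps track of how many rotations the player has left.
public class RotationManager
{
    public int RotationsLeft { get; private set; }

    public RotationManager(int startingRotations)
    {
        RotationsLeft = startingRotations;
    }

    // Called from the rotated left / rotated right event handlers
    public void RotationMade()
    {
        RotationsLeft--;
    }

    // Called from the blocks found event handler:
    // one extra rotation per square in each found block
    public void BlockFound(IEnumerable<Block> blocksFound)
    {
        RotationsLeft += blocksFound.Sum(block => block.Squares.Count());
    }
}
```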

This is again a very simple class to write. After that was done, all that was left was to call it in the correct places: when a rotation is made, either the rotated right or the rotated left event fires, and when the player finds a block, the blocks found event fires. This for me really does emphasize how, if you structure your code correctly, adding to it is very easy. The classes are like lots of little building blocks that you can stick together to make a skyscraper.

As always you can grab all of the source code from GitHub. This is the part18 branch or, at the time of writing, the master branch. If you are unsure how to get the code then check out this page.

There are many ways you could take the game from here like add menus, high scores, sounds etc. I hope I’ve given you a taster of what is possible…

I’d like to thank you for reading my series on rotation the game. As always comments welcome.

XNA Game – Rotation – Scoring & Levels – Part 17

We are nearing the home straight now for our game journey.  We have come a long way and although it might not look that pretty the game functions pretty well.  The next thing to do is program the score and the levels.

To keep track of the score I have written a class called the ScoreManager. The score manager's responsibility is solely to keep track of the score. It has a method to update the score which takes an IEnumerable of the blocks found. The score then gets updated and an immutable score object is returned. The scoring code is pretty straightforward: you get one point for each square you make within a block, and if you make multiple blocks at the same time then you get a multiplier bonus equal to the number of blocks you made simultaneously.
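The scoring rule above can be sketched like this. This is an assumption about the shape of the code rather than the real ScoreManager; Block and Score are stand-ins for the real types in the Rotation source:

```csharp
// Sketch only: one point per square in each found block, multiplied by the
// number of blocks made simultaneously.
public class ScoreManager
{
    private int _score;

    public Score UpdateScore(IEnumerable<Block> blocksFound)
    {
        var blocks = blocksFound.ToList();
        var squaresMade = blocks.Sum(block => block.Squares.Count());

        // Multiplier bonus when several blocks are made at the same time
        _score += squaresMade * blocks.Count;

        return new Score(_score); // immutable score object
    }
}
```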

Another part of the game that was missing is levels. I have decided that you have to make a set amount of squares in blocks to advance a level. The LevelManager is a class designed to keep track of this. The logic for the level manager is a little more complex as it has to calculate which level you are on based on the current score. As always it is fully unit tested. When I first wrote the level manager and its tests there was a subtle bug in the LevelManager when you got exactly the amount of squares required to get to the next level. This highlights again the importance of TDD. To fix this bug I wrote a test to reproduce it and sure enough the test failed. All I had to do then was fix the LevelManager to make the test pass; the bug was caused by a greater than inside an if instead of a greater than or equal to. The beauty of using TDD is that now this bug will never come back.
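A hypothetical illustration of that off-by-one (the real LevelManager is in the source; SquaresPerLevel is an invented constant here): with a plain greater than, a player holding exactly the threshold number of squares never advanced.

```csharp
// Sketch only: level advances each time the square count reaches a threshold.
public int CalculateLevel(int squaresInBlocksMade)
{
    const int SquaresPerLevel = 20; // invented value for illustration

    int level = 1;
    int threshold = SquaresPerLevel;

    // The bug was using > here; >= advances on exactly enough squares.
    while (squaresInBlocksMade >= threshold)
    {
        level++;
        threshold += SquaresPerLevel;
    }

    return level;
}
```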

To get the score and level information on the screen I just had to make score and level implement the IDrawableItem interface.  Then all I had to do was add an item drawer for the score and the level.  The code automatically picked the rest up.  This is the beauty of using factories to create classes and to do everything dynamically.  If you had to write all of that code manually it would’ve meant updating several places of code to “know” about the new items.  The code is completely decoupled and doesn’t have any knowledge of the items that it’s drawing.

To write the text on the screen I had to use a sprite font. As I am using MonoGame there are some nuances with how you have to do this, as you have to target the xnb file (that is produced by building content) at the correct target framework. To do this for Windows all I have done is create a real XNA project solution, added a sprite font to the content and then built it. Then I have gone back to the rotation game solution and manually added the xnb file as a content file in the content directory of the game project. When the game gets ported to iPhone this process will have to be repeated, except of course I will have to build it to target the correct framework.

One more thing that I wanted to point out in this post is that you really see the value of the single responsibility principle when you decide to make changes to how the game works. When I got the game working I decided that rotating the whole axis as far as it could reach (to the edge of the board) was making the game too easy and not much fun. So I added another implementation of the ISquareSelector interface. This time, instead of selecting squares right from the center to the nearest edge, I created another class called SingleSquareSelector that only goes one square in each direction. All I had to do (after unit testing the class of course) was to switch the registration in the container to use the SingleSquareSelector and the behaviour changed. The beauty is that to switch back all I have to do is change the registration back. This really shows the value of having one class for one purpose.

I know I won’t win any awards for design but the gameplay is really getting there.  You can get the game at this point at part17.  If you are unsure of how to do this then you can check the rotation git details page.

XNA Game – Rotation – Who needs letters? – Part 16

I recently watched the film Indie Game: The Movie, a documentary that follows two indie games for the Xbox 360 being made (Super Meat Boy and Fez). The film is very good and I would highly recommend watching it (especially if you are interested in these blog posts). During the film the developers say that the key to a great game is one that is easy to learn and difficult to master. After watching the film I went back and played Rotation and thought that the concept of rotating to make words, although an interesting idea, is fiendishly difficult in practice and would definitely have put off potential players of the game.

Instead of rotating your selection to make words I have changed the game so that you rotate colours instead. Much simpler! The idea is to match four squares together in a block (2 rows of 2) of the same colour. When you successfully create a 'block' it disappears and squares fall down from the top of the screen to fill in the hole created by the block.

This is where you really see the value of TDD and good use of the single responsibility principle. It took me a little under an hour to make the change from letters to colours: I altered the applicable tests to work with colours, then updated the code so that all of the tests passed. When I ran the game it worked first time! For anyone that doubts the value of TDD (or of having tests in general) that is a key example of why they are so good.

I want to talk a little about the problem I had to solve to make the blocks falling animation work smoothly. Here is the workflow of what happens when a block is created:

  1. Board changed event gets fired when the player rotates a selection
  2. In the board changed event handler a check is done to see if any blocks have been created; if there are new blocks, a blocks created event is raised
  3. In the blocks created event handler a new animation is started to colour in the block that has been created
  4. When the animation that colours in the blocks finishes it raises a remove blocks found event (more on this later)
  5. The remove blocks found event handler remaps the board in memory and then starts the blocks falling animation
  6. When the blocks falling animation finishes it raises a board changed event

Notice that the whole workflow is decoupled into reusable chunks. By doing it this way I achieve complete code separation, where no part of the system has to know a lot about the other parts. Each unit of the system has a specific job to do. The reason that I raise a remove blocks event, rather than putting that logic at the start of the blocks found animation, is to achieve this separation. In the future I might want to animate the falling blocks differently, but I will probably still want the blocks to be removed. I have got that segregation.

I want to explain how the blocks falling animation works as I think the logic for it is quite interesting. Below is the pseudocode of how the whole process works, with the class the code lives in shown in brackets:

  1. Colour in the blocks so that the user knows they have made a block (BlocksFoundAnimation)
  2. Set the offsets for all of the squares that need to fall (RemoveFoundBlocksEventHandler)
  3. Reorganise all of the squares to their correct positions as if they have fallen down (RemoveFoundBlocksEventHandler)
  4. Replace squares that are in blocks with new colours (RemoveFoundBlocksEventHandler)
  5. Start a falling blocks animation (RemoveFoundBlocksEventHandler)
  6. Decrease the y offset of each square until it reaches 0 (RemoveFoundBlocksEventHandler)
  7. Check to see if the falling squares have created any new blocks (BoardChangedEventHandler)
The important thing to note is that the positions of the squares never actually change. All that I do is swap the tiles between them and set a y offset on each of them to give the illusion that they are falling. The last piece of the puzzle was to update the square drawer, or to be more precise the SquareOriginCalculator, to take the y offset of the square into account.
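The falling illusion in step 6 can be sketched roughly like this. It is an assumption about the shape of the animation code (the real version is in the part16 source), and Square with its YOffset property is a stand-in:

```csharp
// Sketch only: positions never change; each update shrinks every square's
// y offset towards zero to fake the fall.
public class FallingBlocksAnimation
{
    private readonly List<Square> _squares;
    private readonly int _fallSpeed;

    public FallingBlocksAnimation(IEnumerable<Square> squares, int fallSpeed)
    {
        _squares = squares.ToList();
        _fallSpeed = fallSpeed;
    }

    // The handler watches this to know when to raise the board changed event
    public bool IsFinished
    {
        get { return _squares.All(square => square.YOffset == 0); }
    }

    public void Update()
    {
        foreach (var square in _squares.Where(s => s.YOffset > 0))
        {
            // Clamp at zero so a square never overshoots its resting place
            square.YOffset = Math.Max(0, square.YOffset - _fallSpeed);
        }
    }
}
```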

That's it for this post. As always you can get the code from the part16 branch on GitHub. Check out the instruction page if you are unsure how to do this.