Sunday, 25 September 2011

Injecting behaviour for static method calls (filesystem mocking redux)

This is a direct evolution of my last post, which dealt with decisions faced in testing code that hits the filesystem. The solution I posted was a bit clunky and inconsistent; this one is better, and works generally whenever you're faced with a static method whose behaviour you want to change but can't modify immediately.

A lot of the following process has been informed by Michael Feathers’ “Working Effectively With Legacy Code”. Go read it if you haven’t.

So, imagine you have this idealized legacy code:

using System.IO;
using System.Xml.Linq;

namespace AbstractionExamples
{
    public static class TradeFileLoader
    {
        public static ITrade Load(string rootFolder, string file)
        {
            ITrade trade = null;
            var filePath = Path.Combine(rootFolder, file + ".trade");
            if (File.Exists(filePath))
            {
                trade = Trade.Load(filePath);
            }
            return trade;
        }
    }

    public class Trade : ITrade
    {
        public static ITrade Load(string file)
        {
            var xml = XDocument.Load(file);
            return new Trade(xml);
        }

        public Trade(XDocument xml) { /* ... */ }
    }
}

I want to get it under test, changing as little of the existing code as possible in the process.

The first thing to look at is the use of Path, and then File, as they can both be treated the same way. Path and File are static classes, and I don't own the code; this means I can't just derive from them. What I want to do is create a class that provides a pass-through to the existing IO code, one that by default doesn't modify any parameters or behaviour in flight.

So, I mimic the existing Path class in a new namespace. In the new class I provide delegates for desired behaviour that defaults to the original System.IO behaviour. First I write a few simple tests like:

    [TestFixture]
    public class Tests
    {
        [Test]
        public void PathTest()
        {
            var path = @"root\file";

            var actual = IO.Abstractions.Path.Combine("root", "file");

            Assert.AreEqual(path, actual);
        }
    }

and then the new classes:

using System;
using System.IO;

namespace IO.Abstractions
{
    public static class Path
    {
        // Must be fully qualified: a bare Path.Combine here would bind to this
        // class and recurse forever.
        public static Func<string, string, string> CombineImpl = System.IO.Path.Combine;

        public static string Combine(string first, string second)
        {
            return CombineImpl(first, second);
        }
    }

    public static class File
    {
        // Again, fully qualified to reach the real System.IO implementation.
        public static Func<string, bool> ExistsImpl = System.IO.File.Exists;

        public static bool Exists(string file)
        {
            return ExistsImpl(file);
        }
    }
}

Now I can introduce these new classes along this seam by just swapping the namespace:

//using System.IO;  // no longer needed
using System.Xml.Linq;
using IO.Abstractions;

namespace AbstractionExamples
{
    public static class TradeFileLoader
    {
        public static ITrade Load(string rootFolder, string file)
        {
            ITrade trade = null;
            var filePath = Path.Combine(rootFolder, file + ".trade");
            if (File.Exists(filePath))
            {
                trade = Trade.Load(filePath);
            }
            return trade;
        }
    }

    public class Trade : ITrade
    {
        public static ITrade Load(string file)
        {
            var xml = XDocument.Load(file);
            return new Trade(xml);
        }

        public Trade(XDocument xml) { /* ... */ }
    }
}

The Trade-related changes are slightly more complicated – because we want to replace XDocument. XDocument is not a static class, and we do actually use instances of it.

If I want to avoid all changes to the existing code then I’ll need to turn my as-yet unwritten IO.Abstractions.XDocument into a proxy for System.Xml.Linq.XDocument. Unfortunately System.Xml.Linq.XDocument doesn’t implement an interface directly; fortunately it isn’t sealed, so I can derive from it. Even better, System.Xml.Linq.XDocument also provides a copy constructor so I don’t need to wrap an instance and present the correct methods myself. I just need to derive from System.Xml.Linq.XDocument, and construct using the copy constructor.

namespace Xml.Abstractions
{
    public class XDocument : System.Xml.Linq.XDocument
    {
        public XDocument(System.Xml.Linq.XDocument baseDocument) : base(baseDocument)
        {
        }

        public static Func<string, XDocument> LoadImpl = 
            f => new XDocument(System.Xml.Linq.XDocument.Load(f));

        public new static XDocument Load(string file)
        {
            return LoadImpl(file);
        }
    }
}

I’m a little concerned with the amount of work going on in construction here; I’ll have to take a look at how the copy constructor behaves.
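A quick characterization test can settle that concern: the System.Xml.Linq.XDocument copy constructor performs a deep clone, so the new document shares no nodes with the original. A minimal sketch:

```csharp
using System;

public static class CopyConstructorCheck
{
    public static void Main()
    {
        var original = System.Xml.Linq.XDocument.Parse(@"<root><leg id=""1""/></root>");
        var copy = new System.Xml.Linq.XDocument(original);

        // Same content after the copy...
        Console.WriteLine(copy.ToString() == original.ToString());   // True

        // ...but distinct nodes: mutating the copy leaves the original untouched.
        copy.Root.RemoveNodes();
        Console.WriteLine(original.Root.HasElements);                // True
    }
}
```

The cost is one full traversal of the source document per Load, which is worth knowing about but unlikely to matter next to the disk read itself.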

Finally I can introduce the new XDocument along the seam in TradeFileLoader, so we end up with exactly what we started with, except for the changed namespace directives:

//using System.IO;
//using System.Xml.Linq;
using IO.Abstractions;
using Xml.Abstractions;

namespace AbstractionExamples
{
    public static class TradeFileLoader
    {
        public static ITrade Load(string rootFolder, string file)
        {
            ITrade trade = null;
            var filePath = Path.Combine(rootFolder, file + ".trade");
            if (File.Exists(filePath))
            {
                trade = Trade.Load(filePath);
            }
            return trade;
        }
    }

    public class Trade : ITrade
    {
        public static ITrade Load(string file)
        {
            var xml = XDocument.Load(file);
            return new Trade(xml); 
        }

        public Trade(XDocument xml) { /* ... */ } 
    }
}

So, the process I've gone through when faced with these static calls is:

  1. Trace static calls until you run out of code you own (e.g. to Path.Combine, XDocument.Load).
  2. Create an abstracted class in a new namespace that mimics the existing static interface.
  3. Provide the new class with delegated behaviour that defaults to passing through to the original.
  4. (And naturally you'll have tests for this new code to verify that).
  5. If the class you’re overriding is not static, derive from the original class and hope that there’s a copy constructor, or be ready for potentially lots more work.
  6. Swap in new behaviour on the seam by just replacing old namespace directives (using System.IO, &c) with the new.

Finally, if I want to test TradeFileLoader I just need to inject my desired test behaviour into Path, File and XDocument -- after adding a method to ITrade and Trade to return the root element of the trade XML. It turns out that the arrange, act, assert flow looks quite natural:

namespace AbstractionExamples.Tests
{
    [TestFixture]
    public class Tests
    {
        [Test]
        public void ReturnsValidTrade_WhenGivenAValidFile()
        {
            File.ExistsImpl =
                f => f.Equals(@"root\file.trade");
            XDocument.LoadImpl =
                f =>
                {
                    var doc = new XDocument(
                        System.Xml.Linq.XDocument.Parse("<root></root>"));
                    return doc;
                };

            var trade = TradeFileLoader.Load("root", "file");

            Assert.AreEqual("root", trade.Root.LocalName);
        }
    }
}
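One caveat worth noting: the Impl delegates are static, so a stub injected in one test leaks into the next unless it's reset. Restoring the pass-through defaults in a [TearDown] (or equivalent) keeps tests isolated. A minimal self-contained sketch of the leak and the reset:

```csharp
using System;

namespace IO.Abstractions
{
    // Same shape as the File abstraction above.
    public static class File
    {
        public static Func<string, bool> ExistsImpl = System.IO.File.Exists;
        public static bool Exists(string file) { return ExistsImpl(file); }
    }
}

public static class DelegateResetDemo
{
    public static void Main()
    {
        // A test injects a stub...
        IO.Abstractions.File.ExistsImpl = f => true;
        Console.WriteLine(IO.Abstractions.File.Exists("anything"));  // True

        // ...and the teardown puts the pass-through default back,
        // so later tests see real filesystem behaviour again.
        IO.Abstractions.File.ExistsImpl = System.IO.File.Exists;
        Console.WriteLine(IO.Abstractions.File.Exists("no-such-file.trade"));  // False
    }
}
```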

Tuesday, 20 September 2011

Hitting the filesystem; mock it out, or abstract as a Store?

In legacy code it's entirely possible that code directly touches the filesystem, and there's no seam at which to modify behaviour. Assuming you want to get the code under test, you need to decide whether to mock out the filesystem itself, or to introduce a Store abstraction that isolates the persistence operations. Typically I'd look to isolate the filesystem at a high level with some sort of Store that abstracts access to the underlying persistence technology.

Over a long period either approach might lead to code with both solutions present.

Chris Oldwood and I have been discussing whether to start by introducing a Store, or with abstracting out the filesystem. To me the decision should be based on limiting the number of changes required to meet your goal; and limiting the number of goals.

Introducing a Store meets three goals: one, to abstract access to the technology in which you persist entities; two, to simplify the existing code; and three, to give you a seam along which to inject new behaviour.

On the other hand abstracting out the filesystem really only provides a seam.

It’s a fine point of difference. In principle both introduce a layer of abstraction under which you could swap in new technologies. However, to use a Store you change the semantics of the interface, which involves changing the existing code. If you instead abstract access to the filesystem the existing code can remain unchanged, although filesystem semantics don’t map well onto other persistence technologies, so that abstraction is leakier.

So using a Store involves more changes to existing code, and is also meeting more goals than you might immediately need – do you need to abstract the technology used at this point in time?

Take this code for example:

using System.IO;

...

public ITrade DecorateTrade(string rootFolder, string decorationType, ITrade trade)
{
    ITrade decoratedTrade = null;
    var file = String.Format("{0}.Decoration.dat", decorationType);
    file = Path.Combine(rootFolder, file);

    if (File.Exists(file))
    {
        decoratedTrade = Trade.LoadFromFile(file, trade);
    }

    return decoratedTrade;
}

Here DecorateTrade knows a lot about the filesystem – how to create a Path, what a file is and how to check that a file exists, and that it needs a filename to load from. We want to test this code but are wary because it touches the filesystem.

We can refactor by introducing a Store that abstracts access to the underlying entities. An example would be:

public class DecorationFileStore : IDecorationStore
{
    public DecorationFileStore(string rootFolder)
    {
        _rootFolder = rootFolder;
    }

    // Interface method
    public ITrade Decorate(string decoration, ITrade trade) { ... }

    private bool DecorationExists(string decorationType) { ... }

    private string _rootFolder;
}

This tends to force some changes on the existing code, however, which becomes:

//using System.IO; // no longer needed

...

public ITrade DecorateTrade(string rootFolder, string decorationType, ITrade trade)
{
    var store = new DecorationFileStore(rootFolder);

    return store.Decorate(decorationType, trade);
}

As you can see the original code must change to meet the new semantics of the Store interface. It's not bad -- it's just that as well as creating a new class, you've changed the existing code. The new code is actually much simpler; in some sense we’ve swapped local complexity for coarser type complexity. It could be simpler still, as the Store would become a member of the class, and wouldn't need to be instantiated as a local instance. Still, if your intent is to characterize the behaviour of the class you’ll want to make as few changes as possible.

Swapping local complexity for type system complexity like this might also lead to a proliferation of Store types, each operating as an abstraction of the filesystem, which may be unnecessarily complicated.

Alternatively then we could abstract the filesystem operations. Take these two new classes:

namespace IO.Abstractions
{
    public class File : IFile
    {
        public bool Exists(string path)
        {
            return System.IO.File.Exists(path);
        }    
    }

    public class Path
    {
        public string Combine(string first, string second)
        {
            return System.IO.Path.Combine(first, second);
        }
    }
}

These are intended to be direct passthroughs to the filesystem. By decorating the original class with new properties,

public IFile File { get; set; }

public Path Path { get; set; }

and by replacing references to System.IO with IO.Abstractions, we introduce a seam in the existing code without having to make any changes in that code. Instead of calling the static methods on System.IO classes, we call instance methods on IO.Abstractions classes through local properties. This means that we can now write tests for the system without changing the existing code.

// using System.IO; // no longer needed
using IO.Abstractions;

...

public IFile File { get; set; }

public Path Path { get; set; }

public ITrade DecorateTrade(string rootFolder, string decorationType, ITrade trade)
{
    ITrade decoratedTrade = null;
    var file = String.Format("{0}.Decoration.dat", decorationType);
    file = Path.Combine(rootFolder, file);

    if (File.Exists(file))
    {
        decoratedTrade = Trade.LoadFromFile(file, trade);
    }

    return decoratedTrade;
}
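A test can then swap the File property for a stub, and the code under test never touches the disk. The owning class isn't named above, so TradeDecorator, StubFile and the cut-down ITrade below are stand-ins for illustration:

```csharp
using System;

// Stand-ins for the types referenced above; the real definitions aren't shown.
public interface ITrade { }
public class Trade : ITrade { }
public interface IFile { bool Exists(string path); }

// A stub that answers Exists without touching the disk.
public class StubFile : IFile
{
    public Func<string, bool> ExistsImpl = _ => false;
    public bool Exists(string path) { return ExistsImpl(path); }
}

// Hypothetical owner of DecorateTrade, reduced to just the File seam.
public class TradeDecorator
{
    public IFile File { get; set; }

    public ITrade DecorateTrade(string rootFolder, string decorationType, ITrade trade)
    {
        ITrade decoratedTrade = null;
        var file = System.IO.Path.Combine(rootFolder, decorationType + ".Decoration.dat");
        if (File.Exists(file))
        {
            decoratedTrade = new Trade(); // stands in for Trade.LoadFromFile(file, trade)
        }
        return decoratedTrade;
    }
}

public static class SeamDemo
{
    public static void Main()
    {
        var decorator = new TradeDecorator
        {
            File = new StubFile { ExistsImpl = f => f.EndsWith(".Decoration.dat") }
        };

        var result = decorator.DecorateTrade("root", "Margin", new Trade());
        Console.WriteLine(result != null);   // True: the stub reported the file exists
    }
}
```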

That is, almost -- the story's not quite finished. We still have a call on Trade.LoadFromFile(...) which hits the filesystem -- and we don't have any seam within Trade to use to isolate the filesystem ops. In theory Trade probably shouldn't know about the filesystem at all, but typically we want to minimize the number of changes needed to get under test.

In this case I tend to heal around this dependency. There are a number of ways to do it -- extracting the access as a public method and inheriting to create a test proxy class works, if your class isn't sealed or static. If it is, then you can introduce a seam by wrapping the access in another type, or by publishing a delegate that the existing code will call; either gives you a point at which you can override behaviour.
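The delegate option is the same trick as the static abstractions above: publish a Func that defaults to the real implementation, and let tests overwrite it. A sketch, with stand-in types since Trade's real shape isn't shown here:

```csharp
using System;

public interface ITrade { }

public class Trade : ITrade
{
    // The seam: tests overwrite this; production code never notices the indirection.
    public static Func<string, ITrade, ITrade> LoadFromFileImpl =
        (file, trade) => LoadFromFileCore(file, trade);

    public static ITrade LoadFromFile(string file, ITrade trade)
    {
        return LoadFromFileImpl(file, trade);
    }

    // The original filesystem-touching implementation, extracted.
    private static ITrade LoadFromFileCore(string file, ITrade trade)
    {
        /* reads the decoration file and decorates the trade ... */
        throw new NotImplementedException();
    }
}

public static class DelegateSeamDemo
{
    public static void Main()
    {
        var expected = new Trade();
        Trade.LoadFromFileImpl = (file, trade) => expected;   // test override

        var actual = Trade.LoadFromFile("any.dat", null);
        Console.WriteLine(ReferenceEquals(actual, expected)); // True
    }
}
```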

Friday, 16 September 2011

Rhino.Mocks and manual fakes

Habitually, I use Rhino Mocks. Chris Oldwood and I were discussing fakes and mocks, and he commented that he strongly disliked the Rhino syntax. He much preferred writing manual stubs with delegates for test-specific behaviour because the syntax was clearer and more portable.

He may have a point.

To compare, I knocked up an example test; it just checks the number of times that a method is called on some mocked component:

[Test]        
public void UsingRhinoMocks()        
{
    var mockComponent = MockRepository.GenerateMock<IComponent>();
    
    int expect = 10;            
    mockComponent.Expect(x => x.Operation(null))
        .IgnoreArguments()                
        .Return(new Result())
        .Repeat.Times(expect);                        

    var svc = new Service(mockComponent);            
    svc.ServiceMethod(expect);             

    mockComponent.VerifyAllExpectations();        
}

It’s brief, but somewhat hard to parse. In contrast, here’s the same test as Chris would have it:

[Test]        
public void UsingAManualFake()        
{            
    var fakeComponent = new FakeComponent();             

    int expect = 10;            
    bool called = false;            
    int calls = 0;            
    fakeComponent.Operation = () =>              
        {                  
            called = true;                  
            calls++;                  
            return new Result();              
        };             
    
    var svc = new Service(fakeComponent);            
    svc.ServiceMethod(expect);             

    Assert.IsTrue(called);            
    Assert.AreEqual(expect, calls);        
}

The points here are that the specific tests are clear – you don’t need to reparse the expectations by eye to see what the result should be; and that the setting of expectations is clearer – it looks like a coherent body of code, rather than the notionally fluent style used by Rhino.Mocks. Clearly I’ve left out the definition of the FakeComponent class, among other things – and there lies the principal problem. Fakes can become brittle, especially when shared between different test sets. You have to maintain that class, as well as any callsites in the tests, as the interface changes.
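For reference, a FakeComponent along the lines Chris describes might look something like the following; the IComponent and Result shapes are assumed from the tests above:

```csharp
using System;

// Shapes assumed from the surrounding tests.
public class Result { }
public interface IComponent { Result Operation(string input); }

// A manual fake: the public field holds replaceable behaviour, and the
// explicit interface implementation delegates to it, so a test can plug
// in exactly the behaviour it needs.
public class FakeComponent : IComponent
{
    public Func<Result> Operation = () => new Result();

    Result IComponent.Operation(string input)
    {
        return Operation();
    }
}

public static class FakeDemo
{
    public static void Main()
    {
        var fake = new FakeComponent();
        int calls = 0;
        fake.Operation = () => { calls++; return new Result(); };

        IComponent component = fake;
        component.Operation("x");
        Console.WriteLine(calls);   // 1
    }
}
```

This is exactly the boilerplate referred to below: every member of the interface needs a field-plus-forwarder pair, kept in step with the interface by hand.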

I took the point about clarity and thought: most of the FakeComponent is actually boilerplate on top of the interface. It’d be easy to dynamically instantiate something using our own dynamic proxy.

But then, Rhino.Mocks is basically creating a dynamic proxy for us.

It turns out we can use Rhino.Mocks to do the same thing, with a helper method to instantiate the mock instance:

private static T GetMock<T,A>(Action<T> method, A del) where T : class        
{   
    // yuck ... but we don't have generic delegate constraints!         
    var action = del as Delegate;              
    
    var mockComponent = MockRepository.GenerateStub<T>();            
    mockComponent.Stub(method).IgnoreArguments().Do(action);            

    return mockComponent;        
}

and then the modified test looks like this, with specific asserts at the end, and a coherent non-fluent code body:

         
[Test]        
public void UsingRhinoMocksLikeAFake()        
{            
    int expect = 10;            
    bool called = false;            
    int calls = 0;            
    var method = new Action<IComponent>(x => x.Operation(null));
    var action = new Func<string, Result>(
        (x) => {
            called = true;
            calls++;
            return new Result();
         });             

    var mockComponent = GetMock(method, action);

    var svc = new Service(mockComponent);            
    var ret = svc.ServiceMethod(expect);             

    Assert.IsTrue(called);            
    Assert.AreEqual(expect, calls);        
}

One wrinkle is that now we need to explicitly define the method that will be called on IComponent, and also the behaviour we want to see when the method is called; it’s like a union of the Rhino and Fake approaches. It’s not quite as clear – but it’s only a line longer.

Tuesday, 13 September 2011

Collections of anonymous types

A friend at work today wondered if we could make a list of anonymously-typed objects. It feels like we should be able to -- judging by LINQ -- but there didn't seem to be a mechanism for doing it directly. However, you can create a seed array and generate a list from that:

using System;
using System.Linq;

namespace ListOfAnonymous
{
    class Program
    {
        static void Main(string[] args)
        {
            var seedArray = new[] { new { Foo = "dingo", Bar = "bingo" } };
            var theList = seedArray.ToList();
            theList.Add(new { Foo = "ringo", Bar = "zingo" });
            theList.ForEach(x => Console.WriteLine(x.Foo));
            
            Console.ReadKey();
        }
    }
}
Better, you get compile-time checking of the instance you're adding to the new list -- this doesn't work:

            theList.Add(new { Foo = "ringo" });    // fail ....

And using a similar extension method you can create a dictionary, choosing a certain property as the key:

            var theDict = seedArray.ToDictionary(x => x.Foo);
            theDict["ringo"] = new {Foo = "ringo", Bar = "zingo"};
            theDict.Keys.ToList().ForEach(x => Console.WriteLine(theDict[x]));

after which you end up with a dictionary mapping your key property to instances of the anonymous type. This is a bit fiddly -- you have to specify the key name twice -- but if you already have an anonymous instance created then you can just use its key property explicitly.
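The reason the seed-array trick type-checks at all is that the compiler unifies anonymous types: within an assembly, two anonymous object initializers with the same property names, types and order produce the same generated type. For example:

```csharp
using System;

class AnonymousTypeIdentity
{
    static void Main()
    {
        var a = new { Foo = "dingo", Bar = "bingo" };
        var b = new { Foo = "ringo", Bar = "zingo" };

        // Same property names, types and order => same compiler-generated type.
        Console.WriteLine(a.GetType() == b.GetType());   // True

        // Different order (or names) => a different anonymous type.
        var c = new { Bar = "bingo", Foo = "dingo" };
        Console.WriteLine(a.GetType() == c.GetType());   // False
    }
}
```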