Tuesday, 26 October 2010

Host TeamCity in IIS7

Up until recently I have been using the Apache ISAPI redirector to proxy from IIS through to TeamCity (which runs on Tomcat), because I had no alternative until today. Whilst searching the interwebs I came across the Application Request Routing extension for IIS7. Amongst other things, this allows IIS to proxy requests through to a service running locally or remotely on any port.

Installation

Install the extension using the Web Platform Installer; this will install a number of prerequisites as well as the Application Request Routing extension itself.

Configuration

  • Once the installation is complete, open the IIS7 management tool; you should now see a new node under the websites folder for your server called Server Farms.
  • Right click it and add a new farm named TeamCity.
  • Click on the advanced settings and enter the correct port number for TeamCity. You will not be able to change this later, so get it right or you will have to delete the server entry and start again.
  • Once you have added the server, accept the request to automatically create the rules.

That’s it, any request to your server should now proxy through to TeamCity.

Hosting more than one site

You may want to host other sites on your server via IIS, or you may already be doing so. With what we have done above, those sites will no longer be available. To resolve this, click on the server node in the IIS management tool, then click on the URL Rewrite component; this will show you the rule that was created for your server farm. Double click the rule and add a new condition:

  • Condition input: {HTTP_HOST}
  • Pattern: <Host/Domain name>, example teamcity.mydomain.com
  • Apply the rule.

This should now only push requests to the specified domain name to the proxy. Other sites can now be hosted as normal.
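If you prefer to see what this produces in configuration, the generated rule lives in the server-level applicationHost.config. The fragment below is a rough sketch of what the rule plus the host-name condition ends up looking like; the rule name, farm name and domain are placeholders matching the example above, so check your own generated rule rather than copying this verbatim:

```xml
<rewrite>
  <globalRules>
    <rule name="ARR_TeamCity_loadbalance" patternSyntax="Wildcard" stopProcessing="true">
      <match url="*" />
      <conditions>
        <!-- Only proxy requests addressed to this host name -->
        <add input="{HTTP_HOST}" pattern="teamcity.mydomain.com" />
      </conditions>
      <!-- "TeamCity" here is the server farm name, not a literal host -->
      <action type="Rewrite" url="http://TeamCity/{R:0}" />
    </rule>
  </globalRules>
</rewrite>
```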

Tuesday, 29 June 2010

Better know a framework: Flushing the System.IO.StreamWriter

I ran into an issue today where content being written to a Stream was truncated when a large amount of text was written. The code was something similar to:

using (var memoryStream = new MemoryStream())
using (var streamWriter = new StreamWriter(memoryStream, Encoding.UTF8))
{
    streamWriter.Write(someReallyLongStringValue);
    DoSomethingWithTheStream(memoryStream);
}

The consuming method was then using the Stream, but the end was truncated. However, if you take the following code from MSDN, which writes to a FileStream using the StreamWriter, it works as expected:

DirectoryInfo[] cDirs = new DirectoryInfo(@"c:\").GetDirectories();
 
using (StreamWriter sw = new StreamWriter("CDriveDirs.txt"))
{
    foreach (DirectoryInfo dir in cDirs)
    {
        sw.WriteLine(dir.Name);
    }
}

After a bit of digging I realised that you need to call Flush() before using the Stream, or turn on AutoFlush, when writing really long strings (or as good practice). The reason the MSDN example worked is because it was writing to a FileStream in a using block; when the Dispose method is called it presumably calls Flush before destroying itself, thus writing the rest of the content to the file. Whilst it would have been nice to see a comment to this effect in the StreamWriter remarks, it does make sense.
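For completeness, here is the original snippet with the fix applied (the long string and the DoSomethingWithTheStream call are placeholders carried over from the example above):

```csharp
using (var memoryStream = new MemoryStream())
using (var streamWriter = new StreamWriter(memoryStream, Encoding.UTF8))
{
    streamWriter.Write(someReallyLongStringValue);

    // Push any text still sitting in the writer's internal buffer
    // into the underlying stream before anyone consumes it.
    streamWriter.Flush();

    DoSomethingWithTheStream(memoryStream);
}
```

Alternatively, setting streamWriter.AutoFlush = true straight after construction flushes on every write, at the cost of losing the buffering.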

Wednesday, 16 June 2010

Using the IMapper interface with the System.Converter delegate

Recently I wrote a post about using a standard mapping interface for your mappers, which gave the added benefit of providing an easy way to write an extension method for mapping enumerables. As it turns out, it also fits in with the System.Converter delegate: the signature of the Map method on the IMapper interface is the same as that of the System.Converter delegate.

Whilst the extension method provides a way of mapping enumerations, there may be cases where it is preferable to use the Array.ConvertAll and List&lt;T&gt;.ConvertAll methods. The Array method is static whilst the List method is an instance method, but each takes in a System.Converter delegate and maps to a typed array and a typed list respectively. All of which means we can do the following:

IMapper<ObjectA, ObjectB> myAToBMapper = new MyAToBMapper();
 
ObjectA[] aArray = { new ObjectA(), new ObjectA() };
ObjectB[] bArray = Array.ConvertAll<ObjectA, ObjectB>(aArray, myAToBMapper.Map);
 
List<ObjectA> aList = new List<ObjectA> { new ObjectA(), new ObjectA() };
List<ObjectB> bList = aList.ConvertAll<ObjectB>(myAToBMapper.Map);
 
// Or
 
var aToBConverter = new Converter<ObjectA, ObjectB>(myAToBMapper.Map);
 
bArray = Array.ConvertAll(aArray, aToBConverter);
bList = aList.ConvertAll(aToBConverter);

Whilst I prefer the syntax of MapAll, this does show that the IMapper interface has versatility elsewhere in the core library.

Support SQLCE BinaryBlob and StringClob in NHibernate without depending on System.Data.SqlServerCe

Out of the box NHibernate 2 does not support the BinaryBlob or StringClob SQL types for SQL Server Compact 3.5. This results in NText and Image fields being truncated. One of the common workarounds you can find on the web is to override the existing SqlServerCeDriver, cast the parameter to an SqlCeParameter and manually set the parameter types to NText and Image.

using System.Data; 
using System.Data.SqlServerCe; 
using NHibernate.Driver; 
using NHibernate.SqlTypes;
 
public class MySqlServerCeDriver : SqlServerCeDriver 
{ 
    protected override void InitializeParameter(IDbDataParameter dbParam, string name, SqlType sqlType) 
    { 
        base.InitializeParameter(dbParam, name, sqlType); 
 
        var parameter = (SqlCeParameter)dbParam;
 
        if (sqlType is StringClobSqlType) parameter.SqlDbType = SqlDbType.NText; 
        if (sqlType is BinaryBlobSqlType) parameter.SqlDbType = SqlDbType.Image; 
    } 
}

The problem with this is that it couples you directly to the System.Data.SqlServerCe library. For some people this won’t be a problem, as they will only need to implement it in the project that is using SQLCE and will already be coupled to the library. However, in my case I wanted to put it in our core library where it could be reused, but didn’t want users of the library to have a dependency on SQLCE.

One way of achieving what I wanted would have been to create a separate sub-library of our core library which dragged in the SQL Server CE data library but didn’t pollute the core. That would have been fine, but you have to ask yourself how it is that the NHibernate library itself is not dependent on the SQL Server CE data library. The answer is that it uses reflection; in fact the drivers inherit from the ReflectionBasedDriver and use reflection to create the parameters and set the data types. So, assuming that the NHibernate library does it correctly, we just need to look at one of the other implementations and replicate how they set the data types, which gives rise to the following:

using System.Collections.Generic;
using System.Data;
using System.Reflection;
using NHibernate.Driver;
using NHibernate.SqlTypes;
 
public class SqlServerCe35Driver : SqlServerCeDriver
{
    private PropertyInfo dbParamSqlDbTypeProperty;
 
    public override void Configure(IDictionary<string, string> settings)
    {
        base.Configure(settings);
 
        using (IDbCommand cmd = CreateCommand())
        {
            IDbDataParameter dbParam = cmd.CreateParameter();
            dbParamSqlDbTypeProperty = dbParam.GetType().GetProperty("SqlDbType");
        }
    }
 
    protected override void InitializeParameter(IDbDataParameter dbParam, string name, SqlType sqlType)
    {
        base.InitializeParameter(dbParam, name, sqlType);
 
        if (sqlType is BinaryBlobSqlType)
        {
            dbParamSqlDbTypeProperty.SetValue(dbParam, SqlDbType.Image, null);
        }
        else if (sqlType is StringClobSqlType)
        {
            dbParamSqlDbTypeProperty.SetValue(dbParam, SqlDbType.NText, null);
        }
    }
}

As you can see there is slightly more work involved and we need to use reflection up front to get hold of the SqlDbType property of the SqlCeParameter, but the result is that there is no dependency on the System.Data.SqlServerCe library and we have conformed to how the rest of the NHibernate ReflectionBasedDriver implementations work.

Thursday, 10 June 2010

Taking the pain out of parameter validation

One of the biggest pains I find when writing API components is validating parameters. Now don’t get me wrong, I don’t mind validating a parameter and failing quickly to ensure that your component works correctly; it’s the tediousness of the code that bothers me. Take the following method for example.

public void SomeMethod(SomeObject someObject, int maxValue)
{
    if (someObject == null)
    {
        throw new ArgumentNullException("someObject", "Parameter 'someObject' cannot be null.");
    }
    if (! someObject.SupportsSomeFunction())
    {
        throw new ArgumentException("Some object does not support some function.", "someObject");
    }
    if (maxValue < 1)
    {
        throw new ArgumentOutOfRangeException("maxValue", maxValue, "max value must be greater than zero.");
    }
}

As you can see this is very tedious and laborious, there is potential duplication with other methods that will have similar error messages, the if statements create lots of noise, and my pet hate is having to put in the parameter names as strings.

Existing solutions

One solution to this that I have seen is to use a static or extension method for the type of validation that you want to do such as:
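The original example seems to have gone missing here, but such helpers typically look something along these lines (a sketch only; Guard and the method names are illustrative, not a real library):

```csharp
public static class Guard
{
    // Static helper style: Guard.AgainstNull(someObject, "someObject");
    public static void AgainstNull(object value, string parameterName)
    {
        if (value == null)
        {
            throw new ArgumentNullException(parameterName);
        }
    }
}

public static class GuardExtensions
{
    // Extension method style: someObject.ThrowIfNull("someObject");
    public static void ThrowIfNull(this object value, string parameterName)
    {
        if (value == null)
        {
            throw new ArgumentNullException(parameterName);
        }
    }
}
```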

My preferred implementation, and one that is very similar to my own, is the fluent interface. I prefer the fluent interface implementation not just because it is fluent but because it doesn’t use extension methods. That is not to say that extension methods don’t have their place, but once you introduce an extension method that can be used on any object your IntelliSense will soon become cluttered with irrelevant methods; plus the syntax of Validate.Argument is much clearer. Finally, whilst both approaches are good at tackling the laboriousness of the above code, they still require you to pass in the name of the parameter.

Getting the parameter name

In order to progress we first need to find a way of getting the name of the parameter from the method. One idea I have seen is to use an expression.

public void MyMethod(SomeObject someObject)
{
    ValidateArgumentIsNotNull(() => someObject);
}
 
public static void ValidateArgumentIsNotNull<T>(Expression<Func<T>> expr)
{
    // expression value != default of T
    if (!expr.Compile()().Equals(default(T))) return;
    var param = (MemberExpression) expr.Body;
    throw new ArgumentNullException(param.Member.Name);
}

Ignoring the fact that the code is buggy, it does the job of pulling out the parameter name from the expression, but therein lies the problem. Because we are using an expression we need to compile it before we can get anything from it. If our validation is littered throughout our code, and our constructors and methods are constantly being called, what kind of performance hit will we see?

To get round the potential of performance hits we need to go lower down into the IL. Luckily for me Rinat Abdullin had already made a start.

static Exception ValidateArgumentIsNotNull<TParameter>(Func<TParameter> argument)
{
    var il = argument.Method.GetMethodBody().GetILAsByteArray();
    var fieldHandle = BitConverter.ToInt32(il,2);
    var field = argument.Target.GetType().Module.ResolveField(fieldHandle);
    return new ArgumentNullException(field.Name, string.Format("Parameter of type '{0}' can't be null", typeof(TParameter)));
}

This does the same job as the expression but is much faster; according to Rinat it is in the magnitude of 300 times faster. Unfortunately this code cannot be used in production, as it does not handle code built in release mode (the byte position of the parameter is different) and it has trouble with generic types, so I needed to take it one step further.

internal class FieldInfoReader<TParameter>
{
    private readonly Func<TParameter> arg;
 
    internal FieldInfoReader(Func<TParameter> arg)
    {
        this.arg = arg;
    }
 
    public FieldInfo GetFieldToken()
    {
        byte[] methodBodyIlByteArray = GetMethodBodyIlByteArray();
 
        int fieldToken = GetFieldToken(methodBodyIlByteArray);
 
        return GetFieldInfo(fieldToken);
    }
 
    private FieldInfo GetFieldInfo(int fieldToken)
    {
        FieldInfo fieldInfo = null;
 
        if (fieldToken > 0)
        {
            Type argType = arg.Target.GetType();
            Type[] genericTypeArguments = GetSubclassGenericTypes(argType);
            Type[] genericMethodArguments = arg.Method.GetGenericArguments();
 
            fieldInfo = argType.Module.ResolveField(fieldToken, genericTypeArguments, genericMethodArguments);
        }
 
        return fieldInfo;
    }
 
    private static OpCode GetOpCode(byte[] methodBodyIlByteArray, ref int currentPosition)
    {
        ushort value = methodBodyIlByteArray[currentPosition++];
 
        return value != 0xfe ? SingleByteOpCodes[value] : OpCodes.Nop;
    }
 
    private static int GetFieldToken(byte[] methodBodyIlByteArray)
    {
        int position = 0;
 
        while (position < methodBodyIlByteArray.Length)
        {
            OpCode code = GetOpCode(methodBodyIlByteArray, ref position);
 
            if (code.OperandType == OperandType.InlineField)
            {
                return ReadInt32(methodBodyIlByteArray, ref position);
            }
 
            position = MoveToNextPosition(position, code);
        }
 
        return 0;
    }
 
    private static int MoveToNextPosition(int position, OpCode code)
    {
        switch (code.OperandType)
        {
            case OperandType.InlineNone:
                break;
 
            case OperandType.InlineI8:
            case OperandType.InlineR:
                position += 8;
                break;
 
            case OperandType.InlineField:
            case OperandType.InlineBrTarget:
            case OperandType.InlineMethod:
            case OperandType.InlineSig:
            case OperandType.InlineTok:
            case OperandType.InlineType:
            case OperandType.InlineI:
            case OperandType.InlineString:
            case OperandType.InlineSwitch:
            case OperandType.ShortInlineR:
                position += 4;
                break;
 
            case OperandType.InlineVar:
                position += 2;
                break;
 
            case OperandType.ShortInlineBrTarget:
            case OperandType.ShortInlineI:
            case OperandType.ShortInlineVar:
                position++;
                break;
 
            default:
                throw new InvalidOperationException("Unknown operand type.");
        }
        return position;
    }
 
    private byte[] GetMethodBodyIlByteArray()
    {
        MethodBody methodBody = arg.Method.GetMethodBody();
 
        if (methodBody == null)
        {
            throw new InvalidOperationException();
        }
 
        return methodBody.GetILAsByteArray();
    }
 
    private static int ReadInt32(byte[] il, ref int position)
    {
        return ((il[position++] | (il[position++] << 8)) | (il[position++] << 0x10)) | (il[position++] << 0x18);
    }
 
    private static Type[] GetSubclassGenericTypes(Type toCheck)
    {
        var genericArgumentsTypes = new List<Type>();
 
        while (toCheck != null)
        {
            if (toCheck.IsGenericType)
            {
                genericArgumentsTypes.AddRange(toCheck.GetGenericArguments());
            }
 
            toCheck = toCheck.BaseType;
        }
 
        return genericArgumentsTypes.ToArray();
    }
 
    private static OpCode[] singleByteOpCodes;
 
    public static OpCode[] SingleByteOpCodes
    {
        get
        {
            if (singleByteOpCodes == null)
            {
                LoadOpCodes();
            }
            return singleByteOpCodes;
        }
    }
 
    private static void LoadOpCodes()
    {
        singleByteOpCodes = new OpCode[0x100];
 
        FieldInfo[] opcodeFieldInfos = typeof(OpCodes).GetFields();
 
        for (int i = 0; i < opcodeFieldInfos.Length; i++)
        {
            FieldInfo info1 = opcodeFieldInfos[i];
 
            if (info1.FieldType == typeof(OpCode))
            {
                var singleByteOpCode = (OpCode)info1.GetValue(null);
 
                var singleByteOpcodeIndex = (ushort)singleByteOpCode.Value;
 
                if (singleByteOpcodeIndex < 0x100)
                {
                    singleByteOpCodes[singleByteOpcodeIndex] = singleByteOpCode;
                }
            }
        }
    }
}

I cannot take full credit for the above code, as it is based on some code I found trawling the web which I have stripped down to do what I want. Overly complicated though it looks, the FieldInfoReader simply parses the Func&lt;&gt;'s method body IL byte array looking for the field token from which the parameter name can be resolved, and extracts it.

Plugging it into the fluent interface

Now we know how to get the parameter name we need to plug it all together. As I said before I prefer the static class approach with a fluent interface. The first step is to specify what it is we want to validate and make it clear to anyone reading the code what is under test.

[DebuggerStepThrough]
public static class Validate
{
    public static Argument<TParameter> Argument<TParameter>(Func<TParameter> arg)
    {
        if (arg == null)
        {
            throw new ArgumentNullException("arg");
        }
 
        var fieldInfoReader = new FieldInfoReader<TParameter>(arg);

        FieldInfo fieldInfo = fieldInfoReader.GetFieldToken();
 
        if (fieldInfo == null)
        {
            throw new ValidationException("No field info found in delegate");    
        }
 
        return new Argument<TParameter>(fieldInfo.Name, arg());
    }
}
 
[DebuggerStepThrough]
public class Argument<TParameterType>
{
    internal Argument(string parameterName, TParameterType parameter)
    {
        ParameterName = parameterName;
        ParameterValue = parameter;
    }
 
    internal string ParameterName { get; private set; }
 
    internal TParameterType ParameterValue { get; private set; }
}

The Validate.Argument method takes in a delegate to the parameter, extracts the parameter name and returns an Argument object containing both the parameter name and the parameter value. The Argument object is the key to the validation process: it is used in conjunction with extension methods for various types of validation to give us our fluent interface, an example of which is below.

[DebuggerStepThrough]
public static class ArgumentValidationExtensions
{
    public static ReferenceTypeArgument<TArgumentType> IsNotNull<TArgumentType>(this Argument<TArgumentType> argument) where TArgumentType : class
    {
        if (argument.ParameterValue == null)
        {
            throw new ArgumentNullException(argument.ParameterName);
        }
 
        return new ReferenceTypeArgument<TArgumentType>(argument);
    }
 
    public static ReferenceTypeArgument<string> IsNotEmpty(this ReferenceTypeArgument<string> argument)
    {
        if (argument.ParameterValue.Length == 0)
        {
            throw new ArgumentException("Parameter cannot be an empty string.", argument.ParameterName);
        }
 
        return argument;
    }
}
 
public class ReferenceTypeArgument<TArgumentType> : Argument<TArgumentType>
{
    internal ReferenceTypeArgument(Argument<TArgumentType> argument)
        : base(argument.ParameterName, argument.ParameterValue) { }
}

The above extension method looks at an Argument that has a reference type parameter and checks it for null. If it is null an ArgumentNullException is thrown, otherwise a ReferenceTypeArgument is returned, which can then be used by the string-specific validation method that only takes in a ReferenceTypeArgument object, meaning that we can now do the following.

public void SomeMethod(string value)
{
    Validate.Argument(() => value).IsNotNull().IsNotEmpty();
}

Putting on the icing

Using the above pattern, the number of validation methods you can create is limited only by your imagination, but do you really want to create validation methods for obscure checks that will only be done in one or two places? It would be better to provide methods for the most common checks along with a way for the consumer of the validation API to supply their own logic: an interface where the consumer provides a function to run against the parameter that indicates whether the argument is valid.

public static Argument<TArgumentType> Satisfies<TArgumentType>(this Argument<TArgumentType> argument, Expression<Func<TArgumentType, bool>> expression)
{
    argument.ApplyValidation(
        expression.Compile(),
        () => string.Format("The parameter '{0}' failed the following validation '{1}'", argument.ParameterName, expression));
 
    return argument;
}
 
public static Argument<TArgumentType> Satisfies<TArgumentType>(this Argument<TArgumentType> argument, Func<TArgumentType, bool> function, string message)
{
    argument.ApplyValidation(function, () => message);
    
    return argument;
}
 
public static Argument<TArgumentType> Satisfies<TArgumentType>(this Argument<TArgumentType> argument, Func<TArgumentType, bool> function, string messageFormat, params object[] messageParameters)
{
    argument.ApplyValidation(function, () => string.Format(CultureInfo.InvariantCulture, messageFormat, messageParameters));
 
    return argument;
}
 
private static void ApplyValidation<TArgumentType>(this Argument<TArgumentType> argument, Func<TArgumentType, bool> testFunction, Func<string> messageFunction)
{
    if (!testFunction.Invoke(argument.ParameterValue))
    {
        throw new ArgumentException(messageFunction.Invoke(), argument.ParameterName);
    }
}

Here we provide two different ways a consumer can validate an argument: one takes in an expression, the other takes in a function and a custom error message. The beauty of the expression is that it is self-documenting; when you output the expression x => x.CanDoSomething() that is what you get, so the error message in your argument exception will contain the expression itself. The following code would produce something like:

public void SomeMethod(Stream streamA, Stream streamB)
{
    Validate.Argument(() => streamA).Satisfies(stream => stream.CanRead);
    Validate.Argument(() => streamB).Satisfies(stream => stream.CanWrite, "Cannot write to the stream.");
}
 
ArgumentException -> The parameter 'streamA' failed the following validation 'stream => stream.CanRead'. Parameter 'streamA'
ArgumentException -> Cannot write to the stream. Parameter 'streamB'

Both of these are acceptable, but you may prefer one over the other depending on what you are trying to convey.

Happy validating.

Configuration ignorance

A good friend of mine, Stephen Oakman, did a post on how to hide your dependencies on a particular configuration implementation (Using a configuration provider). I want to touch on the basics of that post and show how the same can be achieved with a more complex configuration structure built using the System.Configuration namespace objects; it also builds on my original post on Generic configuration element collections.

If you are not very familiar with the System.Configuration library you should check out the very good articles by Jon Rista:

The basic premise of Steve’s article is to hide how you retrieve your configuration object behind a configuration provider interface. The implementation behind it then goes off and does the nasty work of getting the configuration out of the application settings in the application configuration, if you are feeling particularly unclean and dirty. I want to show how this can be done using the cleaner approach of ConfigurationSection and ConfigurationElement implementations, whilst not exposing dependencies on the System.Configuration namespace.

A basic configuration section implementation

Let’s start by looking at a basic implementation of the configuration section.

public class BasicConfigurationSection : ConfigurationSection
{
    [ConfigurationProperty("SomeValue")]
    public string SomeValue
    {
        get { return (string)this["SomeValue"]; }
    }
}

Assuming that this is correctly configured in the application configuration (I won’t go into how this should be done as there are already many posts that do this) we can retrieve it using the following:

(BasicConfigurationSection)ConfigurationManager.GetSection("Our/Xml/Structure/BasicConfiguration");

As you can see, any component that needs to use the configuration object will be dependent on the System.Configuration namespace and has to know how to retrieve the object from the ConfigurationManager. Now you could just inject your configuration in, which means you only need to retrieve it once, but any module that uses the code is still dependent on the System.Configuration namespace, which will be dragged in as a reference.

Enter the provider

To offset the responsibility of retrieving the configuration to another component we can introduce Steve’s configuration provider.

public interface IConfigurationProvider<TConfiguration>
{
    TConfiguration GetConfiguration();
}
 
public class BasicConfigurationProvider : IConfigurationProvider<BasicConfigurationSection>
{
    public BasicConfigurationSection GetConfiguration()
    {
        return (BasicConfigurationSection)ConfigurationManager.GetSection("Our/Xml/Structure/BasicConfiguration");
    }
}

Now we can inject our configuration provider implementation into our objects removing the need for consuming code to know where our configuration comes from.

Hiding behind interfaces

Whilst we have hidden how we retrieve the configuration, we have not removed the dependency on the ConfigurationSection base class, as the consuming code has access to the base class methods. To remove this dependency we can simply hide our configuration section object behind an interface.

public interface IBasicConfiguration
{
    string SomeValue { get; }
}
 
public class BasicConfigurationSection : ConfigurationSection, IBasicConfiguration
{
    ...
}
 
public class BasicConfigurationProvider : IConfigurationProvider<IBasicConfiguration>
{
    public IBasicConfiguration GetConfiguration()
    {
        ...
    }
}

Now the only thing that is aware of our configuration implementation is our configuration provider; we can safely inject our provider into our objects without worrying about being dependent on the System.Configuration namespace.

Handling more complicated configuration sections

The previous code handles a basic flat configuration structure, but what happens when we want to use a more complicated structure with child ConfigurationElements, ConfigurationElementCollections or even a ConfigurationGroup? Take the following example, where we have a ConfigurationSection which contains a single ConfigurationElement A and a collection of ConfigurationElement Bs.

public class ComplicatedConfigurationSection : ConfigurationSection
{
    [ConfigurationProperty("ConfigurationA")]
    public ConfigurationElementA ConfigurationA { get { ... } }
 
    [ConfigurationProperty("ConfigurationB")]
    public ConfigurationElementBCollection ConfigurationB { get { ... } }
}
 
public class ConfigurationElementA : ConfigurationElement { ... }
 
public class ConfigurationElementB : ConfigurationElement { ... }
 
public class ConfigurationElementBCollection : ConfigurationElementCollection { ... }

Now we have sub elements which inherit from the ConfigurationElement and ConfigurationCollection base classes. To remove the dependency we will need to put each of the configuration elements behind an interface and use one of the collection interfaces for the collection. If you have used something similar to my post on Generic configuration element collections you can use one of the generic collection interfaces.

public interface IComplicatedConfiguration
{
    IConfigurationA ConfigurationA { get; }
 
    IList<IConfigurationB> ConfigurationB { get; }
}
 
public interface IConfigurationA { ... }
 
public interface IConfigurationB { ... }

If you do this, though, you cannot simply change the return types of your configuration properties to be those of your interfaces, otherwise you will get a ConfigurationErrorsException: Property 'ConfigurationA' is not a ConfigurationElement. This is because the ConfigurationProperty attribute is binding to the return type of your property, which is an interface and does not inherit from the ConfigurationElement base class. Instead you will need to wire up the configuration properties for the more complicated types in the configuration section or element's constructor.

public class ComplicatedConfigurationSection : ConfigurationSection, IComplicatedConfiguration
{
    public ComplicatedConfigurationSection()
    {
        Properties.Add(new ConfigurationProperty(
                "ConfigurationA",
                typeof(ConfigurationElementA),
                null));
        Properties.Add(new ConfigurationProperty(
                "ConfigurationB",
                typeof(ConfigurationElementBCollection),
                new ConfigurationElementBCollection()));
    }
 
    public IConfigurationA ConfigurationA { get { ... } }
 
    public IList<IConfigurationB> ConfigurationB { get { ... } }
}
 
public class ConfigurationElementA : ConfigurationElement, IConfigurationA { ... }
 
public class ConfigurationElementB : ConfigurationElement, IConfigurationB { ... }
 
public class ConfigurationElementBCollection : ConfigurationElementCollectionBase<ConfigurationElementB, string> { ... }

Finishing touches

The one thing that clutters up this approach is having to inject the configuration provider into the consuming classes; it would be preferable to inject just the configurations themselves. Obviously you could just new up all of your providers at the start, get the configuration objects and add them to your IoC container, which is fine but a little messy. If, however, you are lucky enough to be using Windsor, you can make use of the FactorySupportFacility, which simplifies the wiring up of your components.
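As a rough sketch of that idea (assuming Castle Windsor's fluent registration API of the time; the type names come from the earlier examples), the provider becomes a factory so that consumers receive IBasicConfiguration directly:

```csharp
var container = new WindsorContainer();

container.AddFacility<FactorySupportFacility>();

container.Register(
    Component.For<BasicConfigurationProvider>(),
    // Resolve IBasicConfiguration by asking the provider for it, so
    // consumers never see the provider or System.Configuration at all.
    Component.For<IBasicConfiguration>()
        .UsingFactoryMethod(kernel => kernel.Resolve<BasicConfigurationProvider>().GetConfiguration()));
```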

Was it worth it

So after all that, what do we have? Well, we have twice as many objects with all the new interfaces, and the configuration sections are a little more complicated, but was it all worth it? In my opinion yes, otherwise I wouldn’t have bothered posting this. Whilst the implementation is more complicated, your consuming code is kept clean and you have the ability to rip out your configuration implementation and replace it with something else without affecting the existing code that consumes the configuration.

Wednesday, 9 June 2010

Automatic collection mapping for your mappers

I was going through our code base the other day and found a large number of mapper objects and their related interfaces. They all looked something like this:

public class SomeObjectMapper
{
    public ObjectA Map(ObjectB source)
    {
        //Perform mapping and return.
    }
 
    public IEnumerable<ObjectA> MapAll(IEnumerable<ObjectB> source)
    {
        return source.Select(x => Map(x));
    }
}

Most of the mappers followed this general style but were inconsistent in their mapping method names, what type of collection they returned (ICollection, IList, etc.) and how they traversed the collection (for, foreach, LINQ).

First of all I wanted to bring some form of consistency to the mappers, so I introduced an IMapper<,> interface; this is quite common and can be seen in many examples around the blogosphere.

public interface IMapper<TInput, TOutput>
{
    TOutput Map(TInput input);
}

Secondly I wanted to remove the need for duplicate code when mapping one enumeration/list/collection to another. I could have introduced a base class, but because all the mappers now use an interface I can use an extension method, which is far more flexible than a base class.

public static class MapperExtensions
{
    public static IEnumerable<TOutput> MapAll<TInput, TOutput>(this IMapper<TInput, TOutput> mapper, IEnumerable<TInput> input)
    {
        return input.Select(x => mapper.Map(x));
    }
}

Whilst this won’t work for all mappers, it does provide a way of cleaning up the most common mappers so that they conform to the same signature and deal only with the one task of mapping object A to object B, without being cluttered with mapping collections.
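To illustrate, here is the earlier mapper reworked to implement the interface (ObjectA, ObjectB and the Name property are placeholders, not real types); the collection mapping then comes for free from the extension method:

```csharp
public class SomeObjectMapper : IMapper<ObjectB, ObjectA>
{
    public ObjectA Map(ObjectB source)
    {
        // Perform the single-object mapping and return.
        return new ObjectA { Name = source.Name };
    }
}

// Usage: MapAll is picked up from MapperExtensions.
var mapper = new SomeObjectMapper();
IEnumerable<ObjectA> mapped = mapper.MapAll(listOfBs);
```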

Fed up with writing methods to copy one stream to another

I constantly find myself writing static methods in my applications to copy one stream into another, so I finally decided to add an extension method to our internal library. There is a method on the MemoryStream class called WriteTo which does this, but I wanted it to be available to all instances of Stream.

public static void CopyTo(this Stream input, Stream output, int bufferSize)
{
    Validate.Argument(() => input).IsNotNull().Satisfies(stream => stream.CanRead);
    Validate.Argument(() => output).IsNotNull().Satisfies(stream => stream.CanWrite);
    Validate.Argument(() => bufferSize).IsGreaterThan(0);
 
    var buffer = new byte[bufferSize];
 
    int len;
 
    while ((len = input.Read(buffer, 0, buffer.Length)) > 0)
    {
        output.Write(buffer, 0, len);
    }
}
 
public static void CopyTo(this Stream input, Stream output)
{
    input.CopyTo(output, 4096);
}

The validation at the top uses a custom library, but basically it is checking that both streams are not null, that the input stream can be read and that the output stream can be written to. Apart from that, the code is just the bog standard off-the-shelf code that copies one stream to another.

Initially my method was called Copy, but I found out afterwards that Microsoft has already added a method to the Stream class in .NET 4.0 which does exactly this, so to be consistent I have given it the same name, CopyTo. This also has the benefit that any code using this extension can be ported to .NET 4.0 and will pick up the method on the Stream class without any changes, as extension methods are given the lowest priority in overload resolution.
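A quick usage sketch, using MemoryStream so there is nothing to set up (on .NET 4.0 and later this resolves to the framework's own Stream.CopyTo; on earlier versions it binds to the extension method above):

```csharp
using (var source = new MemoryStream(Encoding.UTF8.GetBytes("hello world")))
using (var destination = new MemoryStream())
{
    source.CopyTo(destination);

    // destination now holds a copy of the source bytes.
    Console.WriteLine(Encoding.UTF8.GetString(destination.ToArray()));
}
```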