Tuesday, May 18, 2010

Introduction to Rx Part 1 - Key types

STOP THE PRESS! This series has now been superseded by the online book www.IntroToRx.com. The new site/book offers far better explanations, samples and depth of content. I hope you enjoy!

Microsoft has released a new library for building “reactive” applications. Its full name is Reactive Extensions for .NET, but it is generally referred to as just “Rx”. Essentially Rx is built upon the foundations of the Observer pattern. .NET already exposes other ways to implement the Observer pattern, such as multicast delegates and events. Multicast delegates (which events are built upon), however, can be cumbersome to use, have a nasty interface, are difficult to compose and cannot be queried. Rx looks to solve these problems.
Here I will introduce you to the building blocks and some basic types that make up Rx.

IObservable<T>

IObservable<T> is one of the 2 core interfaces for working with Rx. It is a simple interface with just a Subscribe method. Microsoft is so confident that this interface will be of use to you that it has been included in the BCL as of version 4.0 of .NET. You should be able to think of anything that implements IObservable<T> as a stream of T objects. So if a method returned an IObservable<Price> I could think of it as a stream of prices.
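For reference, the interface itself is tiny. This is its shape as it appears in the System namespace:

```csharp
// The IObservable<T> interface from the BCL (System namespace).
public interface IObservable<out T>
{
    // Returns an IDisposable that can be used to cancel the subscription.
    IDisposable Subscribe(IObserver<T> observer);
}
```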

IObserver<T>

IObserver<T> is the other of the 2 core interfaces for working with Rx. It too has made it into the BCL as of .NET 4.0. Don’t worry if you are not on .NET 4.0 yet, as the Rx team have included these 2 interfaces in a separate assembly for .NET 3.5 users. IObservable<T> is meant to be the “functional dual of IEnumerable<T>” (and IObserver<T> the dual of IEnumerator<T>). If you want to know what that last statement means then enjoy the hours of videos on Channel9 where they discuss the mathematical purity of the types. For everyone else it means that where an IEnumerable<T> can effectively yield 3 things (the next value, an exception or the end of the sequence), so too can IObservable<T> via IObserver<T>’s 3 methods: OnNext(T), OnError(Exception) and OnCompleted().
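Those 3 methods map directly onto the interface definition in the System namespace:

```csharp
// The IObserver<T> interface from the BCL (System namespace).
public interface IObserver<in T>
{
    void OnNext(T value);          // the next value in the stream
    void OnError(Exception error); // the stream has terminated with an error
    void OnCompleted();            // the stream has completed normally
}
```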
Interestingly, while you will be exposed to the IObservable<T> interface a lot if you work with Rx, I find I don't often need to concern myself with IObserver<T>. Another interesting thing I have found with Rx is that I never actually implement these interfaces myself; Rx provides all of the implementations I need out of the box. Let's have a look at the simple ones.

Subject<T>

If you were to create your own implementation of IObservable<T> you may find that you need to expose methods to publish items to the subscribers, throw errors and notify when the stream is complete. Hmmm, those all sound like the methods on the IObserver<T> interface. While it may seem odd to have one type implementing both interfaces, it does make life easy. This is what subjects can do for you. Subject<T> is the most basic of the subjects. Effectively you can expose your Subject<T> behind a method that returns IObservable<T>, but internally you can use the OnNext, OnError and OnCompleted methods to control the stream.
In this (awfully basic) example, I create a subject, subscribe to that subject and then publish to the stream.
using System;
using System.Reactive.Subjects;

namespace RxConsole
{
    class Program
    {
        static void Main(string[] args)
        {
            var subject = new Subject<string>();

            WriteStreamToConsole(subject);

            subject.OnNext("a");
            subject.OnNext("b");
            subject.OnNext("c");
            Console.ReadKey();
        }

        private static void WriteStreamToConsole(IObservable<string> stream)
        {
            stream.Subscribe(Console.WriteLine);
        }
    }
}
Note that the WriteStreamToConsole method takes an IObservable<string>, as it only wants access to the Subscribe method. Hang on, doesn’t the Subscribe method need an IObserver<string>? Surely Console.WriteLine does not match that interface. Well no, it doesn’t, but the Rx team supply an extension method on IObservable<T> that just takes an Action<T>. The action will be executed every time an item is published. There are other overloads of the Subscribe extension method that allow you to pass combinations of delegates to be invoked for OnNext, OnCompleted and OnError. This effectively means I don't need to implement IObserver<T>. Cool.
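As a quick sketch of those overloads, you can pass up to three delegates, one for each IObserver<T> method:

```csharp
// Sketch of the Subscribe extension method overloads: combinations of
// delegates can be supplied for OnNext, OnError and OnCompleted.
var subject = new Subject<string>();

subject.Subscribe(
    value => Console.WriteLine(value),               // OnNext
    ex => Console.WriteLine("Error: " + ex.Message), // OnError
    () => Console.WriteLine("Stream completed"));    // OnCompleted

subject.OnNext("a");
subject.OnCompleted();
```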
As you can see, Subject<T> could be quite useful for getting started in Rx programming. Subject<T> is a basic implementation however. There are 3 siblings to Subject<T> that offer subtly different implementations which can drastically change the way your program runs.

ReplaySubject<T>

ReplaySubject<T> will remember all publications to it, so that any subscriptions made after publications have occurred will still get all of the publications. Consider this example where we have moved our first publication to occur before our subscription:
static void Main(string[] args)
{
    var subject = new Subject<string>();

    subject.OnNext("a");
    WriteStreamToConsole(subject);
    
    subject.OnNext("b");
    subject.OnNext("c");
    Console.ReadKey();
}
The result of this would be that “b” and “c” would be written to the console, but “a” would be ignored. If we make the minor change of using a ReplaySubject<T> instead, we see all of the publications again.
static void Main(string[] args)
{
    var subject = new ReplaySubject<string>();

    subject.OnNext("a");
    WriteStreamToConsole(subject);
    
    subject.OnNext("b");
    subject.OnNext("c");
    Console.ReadKey();
}
This can be very handy for eliminating race conditions.
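Note that remembering every publication can be costly for long-lived streams. If memory is a concern, ReplaySubject<T> also has constructor overloads that bound the buffer by count or by time; a quick sketch:

```csharp
// Bound the replay buffer by count: only the last 2 values are
// replayed to late subscribers.
var lastTwo = new ReplaySubject<string>(2);

// Bound the replay buffer by time: only values published within the
// last minute are replayed to late subscribers.
var lastMinute = new ReplaySubject<string>(TimeSpan.FromMinutes(1));
```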

BehaviorSubject<T>

BehaviorSubject<T> is similar to ReplaySubject<T> except it only remembers the last publication. BehaviorSubject<T> also requires you to provide it with a default value of T. This means that all subscribers will receive a value immediately (unless the stream has already completed).
In this example the value “a” is written to the console.
static void Main(string[] args)
{
    var subject = new BehaviorSubject<string>("a");
    WriteStreamToConsole(subject);
    Console.ReadKey();
}
In this example the value “b” is written to the console, but not “a”.
static void Main(string[] args)
{
    var subject = new BehaviorSubject<string>("a");
    subject.OnNext("b");
    WriteStreamToConsole(subject);
    Console.ReadKey();
}
In this example the values “b”, “c” & “d” are all written to the console, but again not “a”.
static void Main(string[] args)
{
    var subject = new BehaviorSubject<string>("a");

    subject.OnNext("b");
    WriteStreamToConsole(subject);
    subject.OnNext("c");
    subject.OnNext("d");
    Console.ReadKey();
}
Finally in this example, no values will be published as the stream has completed. Nothing is written to the console.
static void Main(string[] args)
{
    var subject = new BehaviorSubject<string>("a");

    subject.OnNext("b");
    subject.OnNext("c");
    subject.OnCompleted();
    WriteStreamToConsole(subject);
    
    Console.ReadKey();
}

AsyncSubject<T>

AsyncSubject<T> is similar to the Replay and Behavior subjects, however it will only store the last value, and only publish it when the stream is completed.
In this example no value is published, because the stream never completes, so nothing is written to the console.
static void Main(string[] args)
{
    var subject = new AsyncSubject<string>();

    subject.OnNext("a");
    WriteStreamToConsole(subject);
    subject.OnNext("b");
    subject.OnNext("c");
    Console.ReadKey();
}
In this example we invoke the OnCompleted method and the value “c” is published and therefore written to the console.
static void Main(string[] args)
{
    var subject = new AsyncSubject<string>();

    subject.OnNext("a");
    WriteStreamToConsole(subject);
    subject.OnNext("b");
    subject.OnNext("c");
    subject.OnCompleted();
    Console.ReadKey();
}
So that is the very basics of Rx. With only that under your belt it may be hard to understand why Rx is a topic of interest. To follow on from this post I will discuss further fundamentals of Rx:
  1. Extension methods
  2. Scheduling / Multithreading
  3. LINQ syntax
Once we have covered these, you should be able to really get Rx working for you to produce some tasty reactive applications. Hopefully after we have covered these background topics we can knock up some samples where Rx can really help you in your day-to-day coding.
The full source code is now available either via svn at http://code.google.com/p/rx-samples/source/checkout or as a zip file.
Related links :
IObservable<T> interface - MSDN
IObserver<T> interface - MSDN
Observer Design pattern - MSDN
Rx Home
Exploring the Major Interfaces in Rx – MSDN
ObservableExtensions class - MSDN
Using Rx Subjects - MSDN
System.Reactive.Subjects Namespace - MSDN
Subject<T> - MSDN
AsyncSubject<T> - MSDN
BehaviorSubject<T> - MSDN
ReplaySubject<T> - MSDN
Subject static class - MSDN
ISubject<TSource, TResult> - MSDN
ISubject<T> - MSDN
Back to the contents page for Reactive Extensions for .NET Introduction
Forward to next post; Part 2 - Static and extension methods

Wednesday, May 12, 2010

MergedDictionaries performance problems in WPF

I don’t normally like to blatantly plagiarise other people’s comments, but this seems to be a little-known bug that deserves to be shared.

A colleague of mine emailed our internal tech list the following email

I strongly urge everyone working with WPF to use this or at least benchmark it in your own applications if you use ResourceDictionaries.MergedDictionaries. I consider this to be a huge problem in WPF. I’m not sure if it exists in Silverlight, but I would assume it does.

I was just debugging a very long render delay in some WPF code and I came across this little tidbit:

http://www.wpftutorial.net/MergedDictionaryPerformance.html

The quote of interest is: “Each time a control references a ResourceDictionary XAML creates a new instance of it. So if you have a custom control library with 30 controls in it and each control references a common dictionary you create 30 identical resource dictionaries!”

Normally that isn’t a huge problem, but when you consider the way that I personally (and have suggested to others) that they organize their resources in Prism projects it gets to be a **serious** problem. For example, let’s say we have this project structure:

/MyProject.Resources
       /Resources
                -Buttons.xaml
                -DataGrid.xaml
                -Global.xaml
                -Brushes.xaml
                -WindowChrome.xaml
                -Icons.xaml
 
/MyProject.Module1
      /Resources
                -Module1Resources.xaml  (References all Dictionaries in /MyProject.Resources/Resources/*)
      /Views
                -View1.xaml
                -View2.xaml
      
/MyProject.Module2
      /Resources
                -Module2Resources.xaml   (References all Dictionaries in /MyProject.Resources/Resources/*)
      /Views
                -View1.xaml
                -View2.xaml
      
/MyProject.Shell
      /Resources
                -ShellResources.xaml   
      /Views
                -MainShell.xaml

If your views reference the module-level ResourceDictionary (which helps with maintainability and modularity), then every time you create an instance of, for example, View1.xaml, you have to parse all the ResourceDictionaries in /MyProject.Resources/Resources/*. This isn’t really a memory concern, but it is a huge performance concern. There can potentially be thousands of lines of XAML to parse, and the time really does add up.

I recently switched all of the MergedDictionary references:

<ResourceDictionary>
    <ResourceDictionary.MergedDictionaries>
        <ResourceDictionary Source="/SomeDictionary.xaml"/>
    </ResourceDictionary.MergedDictionaries>
</ResourceDictionary>

To use the attached SharedResourceDictionary which shadows the Source property and keeps a global cache of all ResourceDictionaries parsed:

<ResourceDictionary>
    <ResourceDictionary.MergedDictionaries>
        <SharedResourceDictionary Source="/SomeDictionary.xaml"/>
    </ResourceDictionary.MergedDictionaries>
</ResourceDictionary>

And I saw a performance increase of almost two orders of magnitude … From almost 6000ms to 200ms. I’ve attached this code; I used the basic sample implementation in the link above so this is considered public information for client purposes.

Cheers,

Charlie

Thanks to Charlie Robbins (Lab49) for expanding on Christian’s blog post and for letting me re-print your email.
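The SharedResourceDictionary itself is not reproduced in the email above. For readers without the attachment, a minimal sketch of the caching idea (based on the pattern in the linked article; the details here are illustrative, not Charlie's exact code) might look like this:

```csharp
using System;
using System.Collections.Generic;
using System.Windows;

// Illustrative sketch: shadows the Source property and keeps a static
// cache of parsed dictionaries, so each XAML file is parsed once per process.
public class SharedResourceDictionary : ResourceDictionary
{
    private static readonly Dictionary<Uri, ResourceDictionary> Cache =
        new Dictionary<Uri, ResourceDictionary>();

    private Uri _source;

    public new Uri Source
    {
        get { return _source; }
        set
        {
            _source = value;
            ResourceDictionary cached;
            if (Cache.TryGetValue(value, out cached))
            {
                // Reuse the already-parsed dictionary instead of parsing again.
                MergedDictionaries.Add(cached);
            }
            else
            {
                // First request for this Uri: let the base class parse the
                // XAML, then cache the result for subsequent lookups.
                base.Source = value;
                Cache.Add(value, this);
            }
        }
    }
}
```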

Saturday, February 13, 2010

Squeezing out performance from Charting

After my review of the charting products currently available, I decided to go with the Silverlight/WPF Data Visualization project (from the WPF Toolkit). I did end up coming up with a trick to squeeze some performance out of the charts. Quite simply, the charts don’t handle a lot of data very well. When you have rich data templates, animation and more than several hundred data points, performance is pretty poor. I decided that the first thing to compromise on would be animation. It is nice, but I would rather have speed. By turning off the animation in the charting I get a little speed up, but there is still a simple truth that must be considered: if my graph is only 500px wide, why try to render more than 500 data points? This is my first step to gaining some performance; filter out data by sampling so we never ask the chart to render more data than it could ever display.

I can achieve this by creating a custom CollectionViewSource. The new subclass is simple:

  • it has a dependency property of MaxItemCount
  • on any change to the MaxItemCount or the Source we sample the data set, and save the values we want to display into a set
  • we subscribe to the Filter event and only accept an item if it is included in our sample set
public class CollectionSizeFilter : CollectionViewSource
{
    int _count;
    ICollectionView _defaultView;
    HashSet<object> _toKeep;

    public CollectionSizeFilter()
    {
        Filter += CollectionSizeFilter_Filter;
    }

    protected virtual void CollectionSizeFilter_Filter(object sender, FilterEventArgs e)
    {
        e.Accepted = _toKeep == null || _toKeep.Contains(e.Item);
    }

    protected override void OnSourceChanged(object oldSource, object newSource)
    {
        base.OnSourceChanged(oldSource, newSource);
        _defaultView = GetDefaultView(newSource);
        _count = Count(_defaultView.SourceCollection);

        LoadHashset();
    }

    public double MaxItemCount
    {
        get { return (double)GetValue(MaxItemCountProperty); }
        set { SetValue(MaxItemCountProperty, value); }
    }
    public static readonly DependencyProperty MaxItemCountProperty = DependencyProperty.Register("MaxItemCount", typeof(double), typeof(CollectionSizeFilter), new UIPropertyMetadata(1d, MaxItemCountProperty_Changed));

    private static void MaxItemCountProperty_Changed(DependencyObject sender, DependencyPropertyChangedEventArgs e)
    {
        var self = (CollectionSizeFilter)sender;
        self.LoadHashset();
    }

    private void LoadHashset()
    {
        if (_count <= MaxItemCount)
        {
            _toKeep = null;
        }
        else
        {
            _toKeep = new HashSet<object>();
            var gap = MaxItemCount - 1;
            var spacing = _count / gap;
            double nextIndex = 0d;
            int i = 0;
            foreach (var item in _defaultView.SourceCollection)
            {
                if (i >= nextIndex)
                {
                    _toKeep.Add(item);
                    nextIndex += spacing;
                }
                i++;
            }
        }
        if (View != null)
            View.Refresh();
    }

    private static int Count(IEnumerable source)
    {
        if (source == null)
        {
            return 0;
        }
        var collection = source as ICollection;
        if (collection != null)
        {
            return collection.Count;
        }
        int num = 0;
        IEnumerator enumerator = source.GetEnumerator();
        while (enumerator.MoveNext())
        {
            num++;
        }
        return num;
    }
}

To use the new “control” we just treat it like a normal CollectionViewSource, but we specify the MaxItemCount by binding it to the width of the chart like this:

<Controls:CollectionSizeFilter x:Key="FilteredData" 
    Source="{Binding MyData}" 
    MaxItemCount="{Binding ElementName=chart1, Path=ActualWidth}"/>

This gave some performance improvements, but I thought why not follow this concept down the path a little bit more. For most data that I want to display I am mainly interested in the trend, not the minutiae. So why try to show a data point on every pixel? I could sample the data further; for example, if I have a data set of 1500 data points and a graph that is 500px wide, I get some performance gains by reducing the rendered data set to 500 data points, but why not just show one data point every, say, 10px? If this is acceptable for your data then you can reduce your rendered data set from 1500 down to 50 (30 times less data to render). Doing this is even simpler than the code above. We just need to create an implementation of IValueConverter to do some division: a DivisionConverter.

public sealed class DivisionConverter : IValueConverter
{
    #region IValueConverter Members
    public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
    {
        double numerator = ConvertToDouble(value, culture);
        double denominator = ConvertToDouble(parameter, culture);
        return numerator / denominator;
    }

    public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
    {
        throw new NotImplementedException();
    }
    #endregion

    private static double ConvertToDouble(object value, IFormatProvider culture)
    {
        var result = default(double);
        try
        {
            var source = value as IConvertible;
            if (source != null)
                result = source.ToDouble(culture);
        }
        catch
        {
        }
        return result;
    }
}

So now we can just extend our previous XAML to include the new converter and apply our sample size of 10:

<Controls:DivisionConverter x:Key="divisionConverter" />
<Controls:CollectionSizeFilter x:Key="FilteredData" 
    Source="{Binding MyData}"
    MaxItemCount="{Binding ElementName=chart1, Path=ActualWidth, 
        Converter={StaticResource divisionConverter}, 
        ConverterParameter=10}"/>

Putting it all together, I end up with code like the below. I prefer to keep my concerns near each other, so here I have defined the filters in the LineSeries Resources instead of at the top of the file where some may put them. I would, however, define the key for the DivisionConverter at the top of the file or even in App.xaml, as I will probably use it in many places. Note how I use a RelativeSource binding to find the width of the parent chart.

<chartingToolkit:Chart Name="chart1">
    <chartingToolkit:LineSeries 
        Title="{Binding Title}"
        DependentValuePath="Value"
        IndependentValuePath="Date">
        <chartingToolkit:LineSeries.Resources>
            <Controls:CollectionSizeFilter 
                x:Key="FilteredBalances" 
                Source="{Binding Balances}"
                MaxItemCount="{Binding 
                        RelativeSource={RelativeSource FindAncestor, 
                        AncestorType={x:Type chartingToolkit:Chart}}, 
                        Path=ActualWidth,
                        Converter={StaticResource DivisionConverter}, 
                        ConverterParameter=10}"/>
        </chartingToolkit:LineSeries.Resources>
        <chartingToolkit:LineSeries.ItemsSource >
            <Binding Source="{StaticResource FilteredBalances}"/>
        </chartingToolkit:LineSeries.ItemsSource>
    </chartingToolkit:LineSeries>
</chartingToolkit:Chart>

You may find your mileage varies with the sampling size. You may only be comfortable with small values like 3-5, or you may be more aggressive with values around 30. It is your data and you will know what is best for you. The real beauty of this is that the performance problem is a presentation problem. With some simple controls we are able to tame it in the presentation layer without having to compromise the purity of our ViewModels (like setting max size values that get sent to databases). Here, if we resize the chart, we already have all the data so we just render more data points. The solution is also very generic; there are no dependencies on the WPF Data Visualization assemblies, so you may find other uses for it.

Monday, January 18, 2010

My WPF Charting Comparisons

I have recently been looking for some graphing/charting functionality for a home project I am working on. My requirements are fairly simple:

  1. handle data quantities in the region of thousands and tens of thousands of rows/items
  2. be able to display line charts with or without data points (there will be so many data points that they can become noise)
  3. be able to display multiple sets of data to be able to compare data
  4. free or cheap
  5. xcopy install

Now, as the charting products I wanted to compare were all going to be in WPF, I assumed that these requirements were just a given, but apparently not, so let me specify them as well:

  1. be able to bind the data from my own view model (i.e. I don’t want to have controls littering my View Model)
  2. have the graph update as the data changes

Now to the list of contenders:

  • WPF Toolkit Charting
  • AmCharts
  • Visifire
  • Dynamic Data Display

So for the really quick review of each

WPF Toolkit Charting

This is the CodePlex project from some of the lads at Microsoft. This is presumably of a lesser quality than the rest of the Toolkit, as the charting component is in preview. The WPF Toolkit allows for great looking charts by utilising the power of WPF styles. It is one of those balancing acts that must be difficult when designing software: extensibility vs. simplicity. The WPF Toolkit leans towards the extensible option. Extending the charts to look the way you want can be done, but many will find it fiddly and frustrating; once done, however, it can be very rewarding and the graphs can look amazing. The WPF Toolkit also utilises the power of WPF binding by allowing me to bind to my ViewModel. So it looks like a good start; however, the clear and painful problem with the WPF Toolkit is performance. When loading even hundreds of rows/items the performance is fairly poor. When I tried to throw just over a thousand items at a line series the performance was completely unacceptable. One other problem I have is that I get intermittent lock ups. When updating the data, the charting code will run off into a loop and not come out of it, freezing the UI. Hmmm, another cross.

Positives:

  • Extensibility allows for beautiful graphs
  • Charts bind to ViewModel
  • Free

Negatives:

  • Woeful performance
  • Random lock ups.

AmCharts

AmCharts appears to be a charting solution aimed at the financial industry. The chart control that I thought would best fit my needs was the Stock chart. This chart had a great feature that allowed zooming on the X-axis by providing a range slider. Performance was great when I threw ~1500 items at the control. An odd problem I had was that the graph would only appear once I resized my window. I think this has to do with binding to a ViewModel, as the demo does not have this problem, but the demo also directly interacts with the control from the code-behind. I want to avoid “messing with controls” from my ViewModel. A more real problem is that while the performance is great, the binding seems to be a one-off event. Changes to the values in my collection are not reflected by any change to the chart.

Positives

  • Good performance
  • Charts bind to ViewModel
  • Zoom functionality
  • Good samples
  • Smallest DLL size (223KB)

Negatives

  • One time data binding
  • Odd problem with the chart not rendering until I resized the window.

Visifire

Visifire charts looked to be a great option. They were very easy to get up and running and had some good samples, like AmCharts. My first play with the Visifire charts gave me a good looking chart. My problems came when I went to bind the charts to my ViewModel… Visifire does not support data binding! I’m not even sure why someone would write a WPF control that does not support data binding. I wasted plenty of time writing some adapters so that I could get data binding working. Data binding is in the wish list for version 3 (how it didn't make it into the wish list for the 1st version I don’t know). Performance of Visifire was pretty good (not spectacular) and sat in between AmCharts and the hopelessly slow WPF Toolkit.

Positives

  • Easy to get up and running
  • Pretty good looking default charts
  • Moderate performance

Negatives

  • You can't bind a data series to a collection!

Dynamic Data Display

Dynamic Data Display (aka D3) is another Microsoft project on CodePlex, from a Microsoft research team in Russia. The D3 authors claim outstanding performance even with massive amounts of data. Sounds like a sure-fire winner! The control library also supports different types of charts to the other libraries, like maps and isolines (I have no idea what an isoline is). The samples show some good stuff, with smooth moving animated graphs and dynamic data points. The big fail of the project is, again, no data binding. All manipulation of the charts needs to be done in C# code and is very imperative. There are some people, however, who have made posts creating an extension to the controls to support data binding. Either way, while this looked to be a good set of controls, the authors don’t appear to have followed the pit-of-success principle. I would go into details, but it took me hours of reading forums, looking at samples and coding just to get my model showing on the screen. When it did get onto the screen it was fast, but it didn’t update when the underlying data changed. This is a very immature set of controls, but it may have a bright future if the team can get some fundamentals right.

Positives

  • High performance
  • Easy to scroll and zoom data

Negatives

  • Hardest set of controls to work with. Everything has to be done in code. The authors seem to miss the point of WPF entirely; presentation and logic feel very much coupled together.
  • After all my mucking around, the chart didn’t update with my changes to the data.

In summary, I am pretty disappointed with the state of all of these charting controls. What I did manage to get working to a satisfactory state was the WPF Toolkit. As the only real problem I had with the WPF Toolkit charting controls was their performance, I decided that an easy way to get some better performance out of the control was to only show as many data points as there were available pixels. If I only have 400 pixels to show my data, it becomes a bit silly to try to get the graph to render 1400 data points. I created a custom control that extends CollectionViewSource with a MaxItemCount property that can be set to effectively filter the amount of data the CollectionViewSource reveals to the charting controls. The performance was better, but I was able to tweak it further by adding a DivisionConverter to reduce the collection size by the parameter specified (10 in my case). This means I only show a data point for every 10 pixels of chart width. This ended up being a great compromise… except for the random lock ups. If I play with the chart for long enough, changing the data to update the chart, eventually the program just falls into a loop. If I can solve this bug I may have a winner on my hands. Ed: Playing around more, I may have got rid of this problem. It still pops up sometimes straight after a build, but a restart fixes it. This may be to do with my build of Win7 (a pre-release that I am still running). This throws the WPF Toolkit plus the 2 tiny bits of filter code clearly into the lead, as it can be made to look great and handle tens of thousands of rows.

If anyone is interested in the code I used to test/play with each of these libraries, you can find a zip of the VS2008 solution here. To see any of the spikes, just set it as the startup project and run, or right-click on the project and "Debug" -> "Start new instance". Only the MyDomain project won't run, as that is the class library that has the small part of the domain used to test the charts.

ChartingPlaygournd.zip – Source code for my tests.

Wednesday, September 23, 2009

Logging in modern .NET applications

Background - A common evolution to logging

Recently we needed to implement a decent logging solution for a new application. The application (like so many) had evolved from a proof of concept (P.O.C.) code base. Five weeks after the cut-over from P.O.C. to code that was intended to ship, we had a plethora of Debug.WriteLine, Console.WriteLine and Trace.WriteLine calls sprinkled through the code base. Most of the *.WriteLine code was there to help debug
  1. code that implemented 3rd party code
  2. code that had no tests (booo!) and F5 testing seemed to be "good enough"
After a couple of preliminary builds were sent to the client for initial evaluation, several unhandled exceptions had crept through. To be fair to the team, most of these were related to some undocumented features of the 3rd party system, and a few were my fault, where maybe I was pushing WPF a bit too far for the client's machines. The result of these exceptions was a custom popup window in the WPF application, stating that a fatal exception had occurred and the application would shut down now. We would also email the exception as a quick way to notify the dev team of any errors.
At this point in time the team had decided we had several real problems here:
  1. We had various "debugging" systems (Console, debug and trace)
  2. We had hard coded our implementations of "logging". If logging a specific piece of functionality is a requirement we really should test that we are logging the correct thing!
  3. As the WPF guy I am interested in any WPF binding errors (in the VS output window), but the service/agent guy were flooding the console screen with their pub/sub debug messages. How could I filter out their messages, and how could they filter out mine?
  4. We were catching the unhandled exception at the dispatcher, not at the point it was thrown. This meant we had a huge stack trace (noise) pointing back to the problem, and we were also missing some useful data for debugging the exception, such as the inputs to the method that threw it.

Setting the requirements

From the problems above, the team decided that our requirements for the logging system were:
  • Testable
  • Provide different levels(severity) of logging. Trace, Debug and Exception.
  • Filter on logging level
  • Log to various targets. Email, File and Console.
  • Provide different areas or categories to log as. e.g. Presentation, Agent, Services...
  • Filter on logging Category (so I don't see the agent debug messages and agent developers don't see my presentation guff)
The requirements for the implementation of the logging system are:
  • Log unhandled exceptions so we can analyse them when they occur in production
  • Log the inputs to the method that threw the exception so we can identify if it is data-related
  • Replace console/debug/trace calls with the new logger.

Journey to a solution

Logging System

The first thing I will tackle is testability. I will create my own interface called ILogger that will do my logging for me:
public interface ILogger
{
  void Write(string message);
}

Well, that was easy! Next: provide different levels of logging. I am going to do that by replacing my Write method with level-specific methods:
public interface ILogger
{
    void Trace(string message);
    void Debug(string message);
    void Error(string message);
}

Now to add categories. I will add a category parameter to each of the methods. I would advise against using enums for your categories, especially if your logger becomes a shared API; enums cannot be extended and really tie you down.

public interface ILogger
{
    void Trace(string category, string message);
    void Debug(string category, string message);
    void Error(string category, string message);
}

Hmmm. It now looks like we are making a mess for the poor developer who has to implement this interface. I think I may have been on the right track with the Write method in the first place. I am going to change that back to a Write method that takes 3 parameters. I will offer extension methods to give users the ability to make calls like the ones above (a little bit more work for me but less work for everyone else). See here for more info on extending interfaces with extension methods.

public interface ILogger
{
    void Write(LogLevel level, string category, string message);
}
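For illustration, here is a sketch of what those extension methods could look like. The LogLevel enum and the ListLogger class are my assumptions, not the project's real types, and the interface is repeated so the snippet stands alone:

```csharp
using System.Collections.Generic;

// Assumed shape of the log level; the post names Trace, Debug and Error.
public enum LogLevel { Trace, Debug, Error }

// Repeated from above so this snippet compiles on its own.
public interface ILogger
{
    void Write(LogLevel level, string category, string message);
}

public static class LoggerExtensions
{
    // Each helper forwards to the single Write method, so implementers
    // of ILogger still only have one member to provide.
    public static void Trace(this ILogger logger, string category, string message)
    {
        logger.Write(LogLevel.Trace, category, message);
    }

    public static void Debug(this ILogger logger, string category, string message)
    {
        logger.Write(LogLevel.Debug, category, message);
    }

    public static void Error(this ILogger logger, string category, string message)
    {
        logger.Write(LogLevel.Error, category, message);
    }
}

// A trivial in-memory logger; handy as a spy in unit tests.
public sealed class ListLogger : ILogger
{
    public readonly List<string> Entries = new List<string>();

    public void Write(LogLevel level, string category, string message)
    {
        Entries.Add(level + "|" + category + "|" + message);
    }
}
```

With these in place, callers write `logger.Debug("Agent", "message")` while implementers still only provide Write.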

Some may be thinking: "Hang on. Strings are bad! Surely a category enum has to be better?" To answer this I would recommend using an internal class like this:

class Category
{
    public const string Agent = "Agent";
    public const string Presentation = "Presentation";
    public const string Service = "Service";
}

If you want to expose categories in your public API, create a public version of this class that consumers can use and even inherit from to add their own categories too. If you are thinking "shouldn't they be public static readonly?", I will cover that later.
Checking off against our requirements list, we can cross some things off:

  • Testable (Interfaces make things testable)
  • Provide different levels (severity) of logging: Trace, Debug and Exception.
  • Filter on logging level
  • Log to various targets. Email, File and Console.
  • Provide different areas or categories to log as, e.g. Presentation, Agent, Services...
  • Filter on logging Category (so I don’t see the agent debug messages and agent developers don't see my presentation guff)
So that is a good start. Now we need to add filtering and targeting of various outputs. Luckily there are plenty of 3rd party logging tools out there that do all of this for us. As our project is already using Enterprise Library, we will just use its Logging Application Block. See the example at the end of the post for a complete implementation.
Great! Thanks to the 3rd party logging system we have ticked off all of our system requirements; now for our implementation requirements.

Implementation of Logging

Now that we have an interface to code against, let us now look at how we would use it.
Ed - This is completely back to front. You should normally look at how you would use it first and then create the implementation; TDD is a great methodology for this. We are approaching it back to front in this post because I think it is easier for the reader to consume.
So the main pain point we have is logging exceptions that occur in the agents (the classes that get data from services and map them to client-side entities). This is due to poor 3rd party documentation, Live being slightly different to Dev and QA, and integration testing being harder to perform than unit testing.
Prior to our new Logging system some agent code might look like this:
public void AmendPortfolio(PortfolioAmmendment portfolio)
{
    Console.WriteLine("AmendPortfolio {0}", portfolio.Id);
    var data = portfolio.MapToServiceData();    //Mapper extension method.
    _service.AmendBasket(_session, data, true);    //Call to 3rd party system
    Console.WriteLine("AmendPortfolio complete");
}

Now if we swap out the console calls with our logger and then put in some exception logging, it may look like this:
public void AmendPortfolio(PortfolioAmmendment portfolio)
{
    _logger.Debug(Category.Agent, "AmendPortfolio {0}", portfolio.Id);
    try
    {
        var data = portfolio.MapToServiceData();    //Mapper extension method.
        _service.AmendBasket(_session, data, true);    //Call to 3rd party system
    }
    catch(Exception ex)
    {
        _logger.Error(Category.Agent, ex.ToString());
        throw;
    }
    _logger.Debug(Category.Agent, "AmendPortfolio complete");
}

Oh dear. We now have more code doing logging than code doing real work. While we have satisfied our requirements, we have doubled our workload. Not good. Back to the drawing board.

AOP

Some will have heard of Aspect Orientated Programming (AOP). It seemed like, 5 years ago, it was going to change everything. Well, it mainly just changed logging. AOP is a style of programming that allows code to be injected at a given interception point. In our case the interception points would be the start of the agent method, the end of the agent method, and when the method throws an exception (which is really just another way for the method to end). As far as I know there are 2 popular ways to achieve this:
  1. at run time using a tool like Castle Windsor or Microsoft Unity
  2. at compile time using a tool like PostSharp
I have had some experience with PostSharp, as I used it before Unity gained the ability to add interceptors, so for our solution we went with PostSharp. I believe switching between any of these options would not be a huge amount of work.
First, a quick introduction to how I have previously done AOP logging with PostSharp. I would create an attribute to apply to classes or methods that I wanted logged. The attribute would satisfy the requirements of PostSharp so that code would be injected at compile time to do my logging. Code like this:
[Logged]
public void AmendPortfolio(PortfolioAmmendment portfolio)
{
    var data = portfolio.MapToServiceData();    //Mapper extension method.
    _service.AmendBasket(_session, data, true);    //Call to 3rd party system
}

which at compile time would alter the IL to represent something more like this:
public void AmendPortfolio(PortfolioAmmendment portfolio)
{
    Logger.Debug("AmendPortfolio({0})", portfolio.Id);
    try
    {
        var data = portfolio.MapToServiceData();    //Mapper extension method.
        _service.AmendBasket(_session, data, true);    //Call to 3rd party system
    }
    catch(Exception ex)
    {
    Logger.Error(ex.ToString());
        throw;
    }
    Logger.Debug("AmendPortfolio complete");
}
Well, that looks perfect doesn't it? Not really. We don't have a category specified, and we have a hard-coded reference to the static Logger class from Enterprise Library Logging. It no longer points to our _logger member variable, which was of type ILogger. This makes our testing harder to do. If testing your logging is not really something you care about (which is fine), then this AOP solution might be for you. If you do want to be more specific about logging, then we need to find a way of getting hold of the instance of ILogger. As PostSharp is a compile-time AOP framework, it is a touch harder to integrate than if we used Unity or Windsor. The main problem is: how do we get a handle on the logger? The solution we came up with was to create an ILogged interface:
public interface ILogged
{
    ILogger Logger { get; }
}
By doing this we expose the logger so we can use it in the aspect/attribute. Now if we look at our method in the greater context of the class it resides in, we can see what the implementation may look like:
[Logged]
public class PortfolioAgent : IPortfolioAgent, ILogged
{
    private readonly ILogger _logger;
    private readonly ISomeService _service;
    public PortfolioAgent(ILogger logger, ISomeService service)
    {
        _logger = logger;
        _service = service;
    }
    public void AmendPortfolio(PortfolioAmmendment portfolio)
    {
        var data = portfolio.MapToServiceData();    //Mapper extension method.
        _service.AmendBasket(_session, data, true);    //Call to 3rd party system
    }
}

That looks kinda cool to me. One thing my colleagues noted is that they would prefer property injection for the logger in this scenario. That is fine with me as long as it is easy to use and we can still test it; the ILogged interface does not preclude property injection, it is just not my preference. Another thing to note is the lack of a category. The easy fix is to add a string Category property to our LoggedAttribute.
[Logged(Category=Category.Agent)]
public class PortfolioAgent : IPortfolioAgent, ILogged
{
...
}


Earlier I mentioned public const vs public static readonly. This is why I chose const fields: attribute arguments must be compile-time constants, so only const values can be used in the attribute.
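As a quick illustration (this LoggedAttribute and SampleAgent are hypothetical stand-ins, not the sample's real types): a const field compiles as an attribute argument, while a static readonly field would not.

```csharp
using System;

// Hypothetical stand-in for the real aspect attribute.
[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method)]
public sealed class LoggedAttribute : Attribute
{
    public string Category { get; set; }
}

static class Category
{
    public const string Agent = "Agent";
    // A static readonly field could NOT be used in the attribute below:
    // attribute arguments must be compile-time constants.
    // public static readonly string Service = "Service";
}

// The const is legal as an attribute argument.
[Logged(Category = Category.Agent)]
public class SampleAgent { }
```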


I am pretty happy with this now. I think we tick off all of our requirements and have not added a lot of complexity to our code. The one last thing that bugs me is that the LoggedAttribute and the ILogged interface must be used as a couple. If I use one without the other, I either get no logging, a nasty runtime exception or, if I code the aspect correctly, a compile-time error (the most desirable). At first I coded the attribute to do the latter (compile-time error), but then realised that all of my agents were in one project and all used the same category. To make life a little easier I moved the attribute to the AssemblyInfo and had the aspect apply itself automatically to any class that implemented ILogged. This may be a step too far towards black magic for some teams, so do what fits best.
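If weaving feels like too much magic, the effect of the aspect can also be approximated by hand with the decorator pattern. This is my own simplified sketch (the types here are trimmed-down illustrations, not the sample's code):

```csharp
using System;
using System.Collections.Generic;

public enum LogLevel { Trace, Debug, Error }

public interface ILogger
{
    void Write(LogLevel level, string category, string message);
}

// Trimmed-down agent contract for illustration.
public interface IPortfolioAgent
{
    void AmendPortfolio(int portfolioId);
}

// Wraps any IPortfolioAgent and logs entry, exit and exceptions,
// mirroring what the compile-time aspect weaves in.
public sealed class LoggedPortfolioAgent : IPortfolioAgent
{
    private readonly IPortfolioAgent _inner;
    private readonly ILogger _logger;

    public LoggedPortfolioAgent(IPortfolioAgent inner, ILogger logger)
    {
        _inner = inner;
        _logger = logger;
    }

    public void AmendPortfolio(int portfolioId)
    {
        _logger.Write(LogLevel.Debug, "Agent", "AmendPortfolio " + portfolioId);
        try
        {
            _inner.AmendPortfolio(portfolioId);
        }
        catch (Exception ex)
        {
            _logger.Write(LogLevel.Error, "Agent", ex.ToString());
            throw;
        }
        _logger.Write(LogLevel.Debug, "Agent", "AmendPortfolio complete");
    }
}

// Minimal helpers for exercising the decorator.
public sealed class ListLogger : ILogger
{
    public readonly List<string> Entries = new List<string>();
    public void Write(LogLevel level, string category, string message)
    {
        Entries.Add(message);
    }
}

public sealed class NoOpAgent : IPortfolioAgent
{
    public void AmendPortfolio(int portfolioId) { }
}
```

The trade-off is obvious: no attribute magic, but one decorator class per decorated interface.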

Have a look at the sample code here. Both styles are shown in the example so take your pick of which suits you best.

Wednesday, September 16, 2009

Success, Motivation and Management

I just watched an interesting presentation from Dan Pink on The surprising science of motivation. Here he discusses the conventional ways to get productivity out of employees via a carrot-and-stick mentality. Watch the video first (18 min) so I don't take the wind from his sails.

What I found interesting, especially on reflection of my earlier post Projects – Measuring success and providing boundaries, was how he related the 20th century management style to prescriptive roles. Prescriptive roles are roles where you can give very clear guidance on how to perform a task and the boundaries of the role. Boundaries normally define measurable tasks with reward/punishment (carrot/stick) attached. These can be anything from simple things such as:

  • If you work late the company will pay for pizza delivery
  • If the project comes in on time you get a bonus
  • if you meet your KPIs you get all of your bonus. Pessimistically viewed as: miss any of your KPIs and we will dock your pay.

However, the interesting thing about prescriptive roles is that in the 21st century the game has changed. Any task that can be completed without a level of creativity, abstract thought or reasoning can generally be done:

  • cheaper by outsourced labour
  • faster by automation (mechanical or computer)

This affects the software industry massively. Outsourcing burst onto the scene at the turn of the century and appeared to be the Holy Grail for accountants across western civilisation. This also scared the hell out of any "computer guy": how was he going to make the payments on his new sports car? Outsourcing was not all it was cracked up to be, with stories of low-quality product and communication failures. Outsourcing seems to be making a small comeback, and I think we will see this see-saw rock a little more before outsourcing becomes part of our day-to-day life. See The 4-Hour Work Week for some great ideas on outsourcing your life.

Dan Pink discusses how the 20th century style of carrot/stick management worked well with prescriptive roles. But we are in the 21st century now, and I would like to think that anyone reading this is not performing a prescriptive role. I would even argue that our role is to eliminate or automate what we can; normally the things that can be eliminated or automated are prescriptive processes. Roles that add real value to any company require creativity, problem solving, communication skills, building relationships etc. These things cannot (yet) be automated.

So, moving from the production-line 20th century to the service-orientated 21st century, we are seeing a shift in the role of management from one based around compliance (carrot/stick) to self-managing, autonomous teams/employees. This is in line with what agile and lean concepts are trying to achieve. Creating a culture where

  1. autonomy
  2. mastery
  3. purpose

are values that are held dear creates an amazing shift in the positivity of the team. Instead of prescribing exactly what is to be done and creating a drone army (which could be replaced by outsourcing or automation), try setting clear expectations of the outcomes you want achieved and let the team go for it. This will give the team a sense of worth, as you are touching on Maslow's Hierarchy of Needs by giving them a channel for creativity and problem solving, but probably more importantly a sense of respect and belonging. Obviously it doesn't have to be a free-for-all, which would no doubt result in total failure, but simple things like burn-down charts and daily stand-ups can give 'management' visibility of progress.

So what can you do if you are not management?

I believe that people are attracted to like people. If you value and exercise principles like mastery, purpose, independence and interdependence, then I think you will be attracted to companies where that is the culture. The easiest thing to do is to strive for mastery. In the information age it is simple to learn new skills; simple is not the same as easy, however. Finding the information on a subject is not the same as mastering it, so you have to jump in feet first and make mistakes. Ideally this is not on mission-critical projects. I do a lot of mini projects at home to learn a new skill. Some companies like Google are now offering time at work to do this stuff. If your company does not offer this time, then you may want to consider your priorities and see if you can make time for self-development. In 30 minutes on the way to work you can read 10 pages of a book; that should allow you around 1 book per month. If you drive/cycle to work, then maybe audio books or podcasts are for you. This is only one part of the path to mastery. You then need to distil the information you are collecting. At the end of a chapter, jot down some notes in your own words on what you just learnt; the act of distilling the information will help you understand it better. To further improve your understanding, actually do something that requires your new skill. Lastly, to really see if you understand the skill, try teaching it to someone else. This really shows you where you are still weak.

Next, start automating the prescriptive parts of your role. For C# guys this is mainly done for us with tools like automated build servers and ReSharper. If you have not worn out your Tab key, then as a C# coder you probably don't use snippets/live templates enough. If you don't have a little keyboard shortcut for getting latest, building, running tests and checking in, then automate it! If part of your role can't be automated, try outsourcing it. It is amazing what people will do if you ask them nicely: "Excuse me Tony, would you mind taking this mail to the mail room for me?"

Once you are on the track to mastery by creating some sort of self-development routine*, and have automated or outsourced the prescriptive parts of your role, you can concentrate on delivering value from your role. This is where purpose comes in. As you develop more skills, reduce drone work and start delivering more value, management may feel that you can have a level of autonomy. From their perspective, giving you autonomy now is really just reducing their workload. If you constantly deliver value, management will naturally move away from compliance management to engagement.

S.M.A.R.T. goals

Projects – Measuring success and providing boundaries

*I try to read a minimum of one book per month. I also try to keep up with my blog subscriptions but constantly have hundreds of unread posts. As a guide, apparently Bill Gates reads 3 books a weekend!

Friday, August 14, 2009

How not to write an Interface in C# 3.0

While working with the IUnityContainer interface from Microsoft's Patterns and Practices team, I decided it was well worth a post on how not to design interfaces. Recent discussion amongst Rhys and Colin (here as well) has been interesting, but I imagine most would agree that both arguments are really fighting over which polish to use on their code. If these are the biggest battles you have to face at work then I am jealous.

Introducing IUnityContainer ....
42 members are defined on this interface. Fail.
With much restraint I won't go on about it and flame the guys who wrote it. Most people know of my disdain for the P&P team at Microsoft. So what tips do I give to make this usable again? Let's break the solution down into a few tips:

  • Extension methods
  • Cohesion
  • Intention revealing interfaces

How does each of these help us? Let's look at some code I was writing pre-extension methods. It was a validation interface that had 2 methods, Validate and Demand:

public interface IValidator<T>
{
  IEnumerable<ValidationErrors> Validate(T item);

  ///<exception cref="ValidationException">Thrown when the item is not valid</exception>
  void Demand(T item);
}

The problem with this interface is that all implementations of the interface would/could implement Demand the same way: make a call to Validate(T item) and, if any ValidationErrors came back, throw an exception with the validation failures. Time ticks by and I realise that I now have extension methods in C# 3.0 (.NET 3.5). I only need the Validate method on my interface now, and can provide the Demand implementation as an extension method. The code now becomes:

public interface IValidation<T>
{
  /// <summary>
  /// Validates the specified item, returning a collection of validation errors.
  /// </summary>
  /// <param name="item">The item to validate.</param>
  /// <returns>Returns a <see cref="T:ArtemisWest.ValidationErrorCollection"/> that is empty for a valid item, 
  /// or contains the validation errors if the item is not in a valid state.</returns>
  /// <remarks>
  /// Implementations of <see cref="T:ArtemisWest.IValidation`1"/> should never return null from the validate method.
  /// If no validation errors occurred then return an empty collection.
  /// </remarks>
  IEnumerable<ValidationErrors> Validate(T item);
}
public static class ValidatorExtensions
{
  /// <summary>
  /// Raises a <see cref="T:ArtemisWest.ValidationException"/> if the <paramref name="item"/> is not valid.
  /// </summary>
  /// <typeparam name="T">The type that the instance of <see cref="T:IValidation`1"/> targets.</typeparam>
  /// <param name="validator">The validator.</param>
  /// <param name="item">The item to be validated.</param>
  /// <exception cref="T:ArtemisWest.ValidationException">
  /// Thrown when validating the <paramref name="item"/> returns any errors. Only the first 
  /// validation error is raised with the exception. Any validation errors that are marked as
  /// warnings are ignored.
  /// </exception>
  public static void Demand<T>(this IValidation<T> validator, T item)
  {
    foreach (var error in validator.Validate(item))
    {
      if (!error.IsWarning)
      {
        throw new ValidationException(error);
      }
    }
  }
}

The end result is that the API feels the same, as I have access to both methods, but the cost of implementing my interface is reduced to just its core concern.
So, extension methods are one trick we have in the bag. Next: cohesion.

The recent discussion between Rhys and Colin is "how many members belong on an interface?" I think both will agree the answer is not 42. Juval Lowy gave a great presentation at TechEd 2008 on interfaces, in which he suggested we should be aiming for 3-7 members per interface. This provides a good level of cohesion and a low level of coupling. Let's look at some of the members on the IUnityContainer interface:

  • 8 overloads of RegisterInstance
  • 16 overloads of RegisterType
  • Add/Remove "Extensions" methods
  • 4 overloads of BuildUp
  • 2 overloads of ConfigureContainer
  • CreateChildContainer
  • 4 overloads of Resolve
  • 2 overloads of ResolveAll
  • A TearDown method
  • and a Parent property
Whew! Well, how can we tame this beast? When I look at this interface I see certain groups that look like they are related in usage. They would be:
  1. Register and Resolve functionality
  2. Add and Remove extensions functionality
  3. Build up and teardown functionality
  4. Container hierarchy functionality (Parent and CreateChildContainer)
  5. Container configuration


Interestingly, on our current project we only use the Register/Resolve functionality.
These five groups have some level of cohesion, which to me makes them candidates for their own interfaces. The big giveaway is that I use Unity quite successfully but never use 4/5 of the functionality defined on this interface. So our 2nd tip is to split these groups of functionality into their own interfaces.
But what do we name the new interfaces? This is our 3rd tip:

Intention revealing interfaces.

Looking at my list I would imagine some useful names could be:

  1. IContainer
  2. IExtensionContainer
  3. ILifecycleContainer
  4. INestedContainer
  5. IConfigurableContainer
To be honest, I have put little thought into these names. Normally I would put a LOT of effort into getting the naming right, but I don't work on the P&P team, so these changes will never be made; why waste my time? Edit: My laziness here really does take the wind out of the sails of this argument. Sorry.
OK, so how can we bring this all together? My proposal would be to have 6 interfaces:
  1. IContainer
  2. IExtensionContainer
  3. ILifecycleContainer
  4. INestedContainer
  5. IConfigurableContainer
  6. IUnityContainer : IContainer, IExtensionContainer, ILifecycleContainer, INestedContainer, IConfigurableContainer


Next I would create some extension methods to deal with the silly amount of duplication the multiple overloads incur. Looking at the implementation in UnityContainerBase, I would think that all of these methods are candidates for extension methods:

public abstract class UnityContainerBase : IUnityContainer, IDisposable
{
  //prior code removed for brevity
  public IUnityContainer RegisterInstance<TInterface>(TInterface instance)
  {
    return this.RegisterInstance(typeof(TInterface), null, instance, new ContainerControlledLifetimeManager());
  }
  public IUnityContainer RegisterInstance<TInterface>(string name, TInterface instance)
  {
    return this.RegisterInstance(typeof(TInterface), name, instance, new ContainerControlledLifetimeManager());
  }
  public IUnityContainer RegisterInstance<TInterface>(TInterface instance, LifetimeManager lifetimeManager)
  {
    return this.RegisterInstance(typeof(TInterface), null, instance, lifetimeManager);
  }
  public IUnityContainer RegisterInstance(Type t, object instance)
  {
    return this.RegisterInstance(t, null, instance, new ContainerControlledLifetimeManager());
  }
  public IUnityContainer RegisterInstance<TInterface>(string name, TInterface instance, LifetimeManager lifetimeManager)
  {
    return this.RegisterInstance(typeof(TInterface), name, instance, lifetimeManager);
  }
  public IUnityContainer RegisterInstance(Type t, object instance, LifetimeManager lifetimeManager)
  {
    return this.RegisterInstance(t, null, instance, lifetimeManager);
  }
  public IUnityContainer RegisterInstance(Type t, string name, object instance)
  {
    return this.RegisterInstance(t, name, instance, new ContainerControlledLifetimeManager());
  }
  //Remaining code removed for brevity
}

All of these methods just delegate to the one overload left as abstract:

public abstract IUnityContainer RegisterInstance(
    Type t, 
    string name, 
    object instance, 
    LifetimeManager lifetime);

The obvious thing to do here is to make all of these extension methods in the same namespace as the interfaces:

public static class IContainerExtensions
{
  public static IContainer RegisterInstance<TInterface>(this IContainer container, TInterface instance)
  {
    return container.RegisterInstance(typeof(TInterface), null, instance, new ContainerControlledLifetimeManager());
  }
  public static IContainer RegisterInstance<TInterface>(this IContainer container, string name, TInterface instance)
  {
    return container.RegisterInstance(typeof(TInterface), name, instance, new ContainerControlledLifetimeManager());
  }
  public static IContainer RegisterInstance<TInterface>(this IContainer container, TInterface instance, LifetimeManager lifetimeManager)
  {
    return container.RegisterInstance(typeof(TInterface), null, instance, lifetimeManager);
  }
  public static IContainer RegisterInstance(this IContainer container, Type t, object instance)
  {
    return container.RegisterInstance(t, null, instance, new ContainerControlledLifetimeManager());
  }
  public static IContainer RegisterInstance<TInterface>(this IContainer container, string name, TInterface instance, LifetimeManager lifetimeManager)
  {
    return container.RegisterInstance(typeof(TInterface), name, instance, lifetimeManager);
  }
  public static IContainer RegisterInstance(this IContainer container, Type t, object instance, LifetimeManager lifetimeManager)
  {
    return container.RegisterInstance(t, null, instance, lifetimeManager);
  }
  public static IContainer RegisterInstance(this IContainer container, Type t, string name, object instance)
  {
    return container.RegisterInstance(t, name, instance, new ContainerControlledLifetimeManager());
  }
}

This reduces our IContainer interface to just one method:

public interface IContainer
{
  IContainer RegisterInstance(Type t, string name, object instance, LifetimeManager lifetime);
}


One thing to note here is that we have broken the contract, because we now return IContainer, not IUnityContainer. We will come back to this problem later.
If we then follow suit on the other interfaces we have created, we will have 6 interfaces that look like this:

public interface IContainer
{
  IContainer RegisterType(Type from, Type to, string name, LifetimeManager lifetimeManager, params InjectionMember[] injectionMembers);
  IContainer RegisterInstance(Type t, string name, object instance, LifetimeManager lifetime);
  object Resolve(Type t, string name);
  IEnumerable<object> ResolveAll(Type t);
}
public interface IExtensionContainer
{
  IExtensionContainer AddExtension(UnityContainerExtension extension);
  IExtensionContainer RemoveAllExtensions();
}
public interface ILifecycleContainer
{
  object BuildUp(Type t, object existing, string name);
  void Teardown(object o);
}
public interface INestedContainer
{
  INestedContainer CreateChildContainer();
  INestedContainer Parent { get; }
}
public interface IConfigurableContainer
{
  object Configure(Type configurationInterface);
}
public interface IUnityContainer : IDisposable, IContainer, IExtensionContainer, ILifecycleContainer, INestedContainer, IConfigurableContainer
{}

So now we have some much more manageable interfaces to work with. However, we have broken the feature it had before of returning IUnityContainer from each method. You may ask why you would return the instance when clearly you already have it: by doing so you can create a fluent interface. See my other post on Fluent interfaces and DSLs for more information.
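To make the fluent point concrete, here is a minimal sketch. FakeContainer and this trimmed-down IContainer are illustrative only, not Unity's real types:

```csharp
using System;
using System.Collections.Generic;

// A trimmed-down registration contract for illustration.
public interface IContainer
{
    IContainer RegisterInstance(Type t, string name, object instance);
}

public sealed class FakeContainer : IContainer
{
    public readonly List<Type> Registered = new List<Type>();

    public IContainer RegisterInstance(Type t, string name, object instance)
    {
        Registered.Add(t);
        return this; // returning the container is what enables chaining
    }
}
```

Because each registration returns the container, calls chain fluently: `new FakeContainer().RegisterInstance(typeof(string), null, "conn").RegisterInstance(typeof(int), null, 42);`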
Now that we have removed all the noise from the interfaces, we actually have a reasonable number of members to work with. This makes me think that maybe we can refactor back to a single interface. Let's have a look:

public interface IUnityContainer : IDisposable
{
  IUnityContainer RegisterType(Type from, Type to, string name, LifetimeManager lifetimeManager, params InjectionMember[] injectionMembers);
  IUnityContainer RegisterInstance(Type t, string name, object instance, LifetimeManager lifetime);
  object Resolve(Type t, string name);
  IEnumerable<object> ResolveAll(Type t);
  IUnityContainer AddExtension(UnityContainerExtension extension);
  IUnityContainer RemoveAllExtensions();
  object BuildUp(Type t, object existing, string name);
  void Teardown(object o);
  IUnityContainer Parent { get; }
  IUnityContainer CreateChildContainer();
  object Configure(Type configurationInterface);
}

Well, that is 13 members, which is above my happy limit of 10 and nearly double my ideal limit of 7. However, I think this would be a vast improvement on the silly interface that currently exists with its 42 members.
Just for fun, here are the extension methods that would complete the interface to bring it back to feature-complete:

public static class IUnityContainerExtentions
{
{
  public static IUnityContainer AddNewExtension<TExtension>(this IUnityContainer container) where TExtension : UnityContainerExtension, new()
  {
    return container.AddExtension(Activator.CreateInstance<TExtension>());
  }
  public static T BuildUp<T>(this IUnityContainer container, T existing)
  {
    return (T)container.BuildUp(typeof(T), existing, null);
  }
  public static object BuildUp(this IUnityContainer container, Type t, object existing)
  {
    return container.BuildUp(t, existing, null);
  }
  public static T BuildUp<T>(this IUnityContainer container, T existing, string name)
  {
    return (T)container.BuildUp(typeof(T), existing, name);
  }
  public static TConfigurator Configure<TConfigurator>(this IUnityContainer container) where TConfigurator : IUnityContainerExtensionConfigurator
  {
    return (TConfigurator)container.Configure(typeof(TConfigurator));
  }
  public static IUnityContainer RegisterInstance<TInterface>(this IUnityContainer container, TInterface instance)
  {
    return container.RegisterInstance(typeof(TInterface), null, instance, new ContainerControlledLifetimeManager());
  }
  public static IUnityContainer RegisterInstance<TInterface>(this IUnityContainer container, string name, TInterface instance)
  {
    return container.RegisterInstance(typeof(TInterface), name, instance, new ContainerControlledLifetimeManager());
  }
  public static IUnityContainer RegisterInstance<TInterface>(this IUnityContainer container, TInterface instance, LifetimeManager lifetimeManager)
  {
    return container.RegisterInstance(typeof(TInterface), null, instance, lifetimeManager);
  }
  public static IUnityContainer RegisterInstance(this IUnityContainer container, Type t, object instance)
  {
    return container.RegisterInstance(t, null, instance, new ContainerControlledLifetimeManager());
  }
  public static IUnityContainer RegisterInstance<TInterface>(this IUnityContainer container, string name, TInterface instance, LifetimeManager lifetimeManager)
  {
    return container.RegisterInstance(typeof(TInterface), name, instance, lifetimeManager);
  }
  public static IUnityContainer RegisterInstance(this IUnityContainer container, Type t, object instance, LifetimeManager lifetimeManager)
  {
    return container.RegisterInstance(t, null, instance, lifetimeManager);
  }
  public static IUnityContainer RegisterInstance(this IUnityContainer container, Type t, string name, object instance)
  {
    return container.RegisterInstance(t, name, instance, new ContainerControlledLifetimeManager());
  }
  public static IUnityContainer RegisterType<T>(this IUnityContainer container, params InjectionMember[] injectionMembers)
  {
    return container.RegisterType(typeof(T), null, null, null, injectionMembers);
  }
  public static IUnityContainer RegisterType<TFrom, TTo>(this IUnityContainer container, params InjectionMember[] injectionMembers) where TTo : TFrom
  {
    return container.RegisterType(typeof(TFrom), typeof(TTo), null, null, injectionMembers);
  }
  public static IUnityContainer RegisterType<T>(this IUnityContainer container, LifetimeManager lifetimeManager, params InjectionMember[] injectionMembers)
  {
    return container.RegisterType(typeof(T), null, null, lifetimeManager, injectionMembers);
  }
  public static IUnityContainer RegisterType<TFrom, TTo>(this IUnityContainer container, LifetimeManager lifetimeManager, params InjectionMember[] injectionMembers) where TTo : TFrom
  {
    return container.RegisterType(typeof(TFrom), typeof(TTo), null, lifetimeManager, injectionMembers);
  }
  public static IUnityContainer RegisterType<T>(this IUnityContainer container, string name, params InjectionMember[] injectionMembers)
  {
    return container.RegisterType(typeof(T), null, name, null, injectionMembers);
  }
  public static IUnityContainer RegisterType<TFrom, TTo>(this IUnityContainer container, string name, params InjectionMember[] injectionMembers) where TTo : TFrom
  {
    return container.RegisterType(typeof(TFrom), typeof(TTo), name, null, injectionMembers);
  }
  public static IUnityContainer RegisterType(this IUnityContainer container, Type t, params InjectionMember[] injectionMembers)
  {
    return container.RegisterType(t, null, null, null, injectionMembers);
  }
  public static IUnityContainer RegisterType<T>(this IUnityContainer container, string name, LifetimeManager lifetimeManager, params InjectionMember[] injectionMembers)
  {
    return container.RegisterType(typeof(T), null, name, lifetimeManager, injectionMembers);
  }
  public static IUnityContainer RegisterType<TFrom, TTo>(this IUnityContainer container, string name, LifetimeManager lifetimeManager, params InjectionMember[] injectionMembers) where TTo : TFrom
  {
    return container.RegisterType(typeof(TFrom), typeof(TTo), name, lifetimeManager, injectionMembers);
  }
  public static IUnityContainer RegisterType(this IUnityContainer container, Type t, LifetimeManager lifetimeManager, params InjectionMember[] injectionMembers)
  {
    return container.RegisterType(t, null, null, lifetimeManager, injectionMembers);
  }
  public static IUnityContainer RegisterType(this IUnityContainer container, Type t, string name, params InjectionMember[] injectionMembers)
  {
    return container.RegisterType(t, null, name, null, injectionMembers);
  }
  public static IUnityContainer RegisterType(this IUnityContainer container, Type from, Type to, params InjectionMember[] injectionMembers)
  {
    container.RegisterType(from, to, null, null, injectionMembers);
    return container;
  }
  public static IUnityContainer RegisterType(this IUnityContainer container, Type t, string name, LifetimeManager lifetimeManager, params InjectionMember[] injectionMembers)
  {
    return container.RegisterType(t, null, name, lifetimeManager, injectionMembers);
  }
  public static IUnityContainer RegisterType(this IUnityContainer container, Type from, Type to, LifetimeManager lifetimeManager, params InjectionMember[] injectionMembers)
  {
    return container.RegisterType(from, to, null, lifetimeManager, injectionMembers);
  }
  public static IUnityContainer RegisterType(this IUnityContainer container, Type from, Type to, string name, params InjectionMember[] injectionMembers)
  {
    return container.RegisterType(from, to, name, null, injectionMembers);
  }
  public static T Resolve<T>(this IUnityContainer container)
  {
    return (T)container.Resolve(typeof(T));
  }
  public static T Resolve<T>(this IUnityContainer container, string name)
  {
    return (T)container.Resolve(typeof(T), name);
  }
  public static object Resolve(this IUnityContainer container, Type t)
  {
    return container.Resolve(t, null);
  }
  public static IEnumerable<T> ResolveAll<T>(this IUnityContainer container)
  {
    //return new <ResolveAll>d__0<T>(-2) { <>4__this = this };
    //This implementation requires more effort than I am willing to give (6 hours till my holiday!)
    throw new NotImplementedException();
  }
}

Bloody hell. Imagine trying to implement that on every class that implemented the interface!

While I know this entire post is academic, as we can't change Unity, I hope it sows some seeds for other developers so that when they design their next interface it won't have silly overloads that just make the consuming developer's job harder than it ought to be.
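To make the alternative concrete, here is a minimal sketch of the design I am arguing for. The interface and class names (`IContainer`, `ContainerExtensions`, `TrivialContainer`) are hypothetical, not Unity's actual API: the interface exposes only the single most general member, and all of the convenience overloads live in one extension class, written once.

```csharp
using System;

// Hypothetical minimal interface: one general-purpose member only.
public interface IContainer
{
    object Resolve(Type type, string name);
}

// All convenience overloads forward to the one "real" method,
// so implementers never have to write them.
public static class ContainerExtensions
{
    public static object Resolve(this IContainer container, Type type)
    {
        return container.Resolve(type, null);
    }

    public static T Resolve<T>(this IContainer container)
    {
        return (T)container.Resolve(typeof(T), null);
    }

    public static T Resolve<T>(this IContainer container, string name)
    {
        return (T)container.Resolve(typeof(T), name);
    }
}

// An implementer now writes one method instead of dozens.
public class TrivialContainer : IContainer
{
    public object Resolve(Type type, string name)
    {
        // Toy resolution strategy for illustration only.
        return Activator.CreateInstance(type);
    }
}
```

With this shape, `new TrivialContainer().Resolve<SomeType>()` works even though the class implemented a single member; every consumer still gets the friendly overloads, and every implementer is spared the boilerplate above.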