Monday, January 18, 2010

My WPF Charting Comparisons

I have recently been looking for some graphing/charting functionality for a home project I am working on. My requirements are fairly simple:

  1. handle data quantities in the region of thousands and tens of thousands of rows/items
  2. be able to display line charts with or without data points (there will be so many data points that they can become noise)
  3. be able to display multiple sets of data to be able to compare data
  4. free or cheap
  5. xcopy install

As the charting products I wanted to compare were all WPF based, I assumed the following requirements were a given, but apparently not, so let me specify them as well:

  1. be able to bind the data from my own view model (i.e. I don’t want to have controls littering my View Model)
  2. have the graph update as the data changes

Now for the list of contenders:

  • WPF Toolkit Charting
  • AmCharts
  • Visifire
  • Dynamic Data Display

So for the really quick review of each:

WPF Toolkit Charting

This is the CodePlex project from some of the lads at Microsoft. It is presumably of a lesser quality than the rest of the Toolkit, as the charting component is still in preview. The WPF Toolkit allows for great looking charts by utilising the power of WPF styles. Extensibility vs. simplicity is one of those balancing acts that must be difficult to get right when designing software, and the WPF Toolkit leans towards extensibility. Restyling the charts to look the way you want can be done, but many will find it fiddly and frustrating; once done, though, it can be very rewarding and the graphs can look amazing. The WPF Toolkit also utilises the power of WPF binding by allowing me to bind to my ViewModel. So it looks like a good start; however, the clear and painful problem with the WPF Toolkit is performance. When loading even hundreds of rows/items the performance is fairly poor. When I tried to throw just over a thousand items at a line series the performance was completely unacceptable. The other problem I have is intermittent lock ups: when updating the data, the charting code will run off into a loop and not come out of it, freezing the UI. Hmmm, another cross.

Positives:

  • Extensibility allows for beautiful graphs
  • Charts bind to ViewModel
  • Free

Negatives:

  • Woeful performance
  • Random lock ups.

AmCharts

AmCharts appears to be a charting solution aimed at the financial industry. The chart control that I thought would best fit my needs was the Stock chart. This chart had a great feature that allowed zooming on the X-axis via a range slider. Performance was great when I threw ~1500 items at the control. An odd problem I had was that the graph would only appear once I resized my window. I think this has to do with binding to a ViewModel, as the demo does not have this problem, but the demo also directly interacts with the control from the code behind, and I want to avoid "messing with controls" from my ViewModel. A more real problem is that while the performance is great, the binding seems to be a one-off event. Changes to the values in my collection are not reflected by any change to the chart.

Positives

  • Good performance
  • Charts bind to ViewModel
  • Zoom functionality
  • Good samples
  • Smallest DLL size (223KB)

Negatives

  • One time data binding
  • Odd problem with the chart not rendering until I resized the window.

Visifire

Visifire charts looked to be a great option. They were very easy to get up and running and, like AmCharts, had some good samples. My first play with the Visifire charts gave me a good looking chart. My problems came when I went to bind the charts to my ViewModel… Visifire does not support data binding! I'm not even sure why someone would write a WPF control that does not support data binding. I wasted plenty of time writing adapters so that I could get data binding working. Data binding is on the wish list for version 3 (how it didn't make it into the wish list for the 1st version I don't know). Performance of Visifire was pretty good (not spectacular) and sat in between AmCharts and the hopelessly slow WPF Toolkit.

Positives

  • Easy to get up and running
  • Pretty good looking default charts
  • Moderate performance

Negatives

  • You can't bind a data series to a collection!

Dynamic Data Display

Dynamic Data Display (aka D3) is another Microsoft project on CodePlex, this one from a Microsoft Research team in Russia. The D3 authors claim outstanding performance even with massive amounts of data. Sounds like a sure fire winner! The control library also supports chart types the other libraries don't, like maps and isolines (I have no idea what an isoline is). The samples show some good stuff, with smooth animated graphs and dynamic data points. The big fail of the project is, again, no data binding. All manipulation of the charts needs to be done in C# code and is very imperative. There are, however, some guys who have posted an extension to the controls to support data binding. Either way, while this looked to be a good set of controls, the authors don't appear to have followed the Pit of Success principle. I won't go into details, but it took me hours of reading forums, looking at samples and coding just to get my model showing on the screen. When it did get on to the screen it was fast, but it didn't update when the underlying data changed. This is a very immature set of controls, but it may have a bright future if the team can get some fundamentals right.

Positives

  • High performance
  • Easy to scroll and zoom data

Negatives

  • Hardest set of controls to work with. Everything has to be done in code. The authors seem to miss the point of WPF entirely; presentation and logic feel very much coupled together.
  • After all my mucking around, the chart didn't update with my changes to the data.

In summary, I am pretty disappointed with the state of all of these charting controls. What I did manage to get working to a satisfactory state was the WPF Toolkit. As the only real problem I had with the WPF Toolkit charting controls was their performance, I decided that an easy way to get better performance out of the control was to only show as many data points as there were available pixels. If I only have 400 pixels to show my data, it becomes a bit silly to try to get the graph to render 1400 data points. I created a custom control that extends CollectionViewSource with a MaxItemCount property, which effectively filters the amount of data the CollectionViewSource reveals to the charting controls. The performance was better, and I was able to tweak it further by adding a DivisionConverter to reduce the collection size by the parameter specified (10 in my case). This means I only show a data point for every 10 pixels of chart width. This ended up being a great compromise… except for the random lock ups. If I play on the chart for long enough, changing the data to update the chart, eventually the program just falls into a loop. If I can solve this bug I may have a winner on my hands.

Ed: Playing around more, I may have got rid of this problem. It still pops up sometimes straight after a build, but a restart fixes it. This may be to do with my build of Win7 (a pre-release that I am still running). This throws the WPF Toolkit plus the two tiny bits of filter code clearly into the lead, as it can be made to look great and handle tens of thousands of rows.
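The pixel-based reduction described above can be sketched independently of WPF. The class and method names below are hypothetical (this is not the actual MaxItemCount/DivisionConverter code from the zip), but they show the core idea of keeping only every nth point:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class ChartSampling
{
    // Returns at most maxPoints items by taking every nth item, so the
    // overall shape of the series is preserved while the render cost drops.
    public static IList<T> DownSample<T>(IList<T> source, int maxPoints)
    {
        if (source == null) throw new ArgumentNullException("source");
        if (maxPoints <= 0) throw new ArgumentOutOfRangeException("maxPoints");
        if (source.Count <= maxPoints) return source;

        // Ceiling division: the smallest step that brings the count under maxPoints.
        int step = (source.Count + maxPoints - 1) / maxPoints;
        return source.Where((item, index) => index % step == 0).ToList();
    }
}
```

With 1400 items and 400 available pixels the step works out to 4, so only 350 points reach the chart.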

If anyone is interested in the code I used to test/play with each of these libraries, you can find a zip of the VS2008 solution here. To see any of the spikes, just set it as the start up project and run, or right click on the project and "Debug" → "Start new instance". Only the MyDomain project won't run, as that is the class library that has the small part of the domain used to test the charts.

ChartingPlaygournd.zip – Source code for my tests.

Wednesday, September 23, 2009

Logging in modern .NET applications

Background - A common evolution to logging

Recently we needed to implement a decent logging solution for a new application. The application (like so many) had evolved from a proof of concept (P.O.C.) code base. Five weeks after the cut over from P.O.C. to code that was intended to ship, we had a plethora of Debug.WriteLine, Console.WriteLine and Trace.WriteLine calls sprinkled through the code base. Most of the *.WriteLine code was there to help debug
  1. code that implemented 3rd party code
  2. code that had no tests (booo!) and F5 testing seemed to be "good enough"
After a couple of preliminary builds were sent to the client for initial evaluation, several unhandled exceptions had crept through. To be fair to the team, most of these were related to some undocumented features of the 3rd party system, and a few were my fault where I was perhaps pushing WPF a bit too far for the client's machines. The result of these exceptions was a custom popup window in the WPF application stating that a fatal exception had occurred and the application would shut down. We would also email the exception as a quick way to notify the dev team of any errors.
At this point the team decided we had several real problems:
  1. We had various "debugging" systems (Console, debug and trace)
  2. We had hard coded our implementations of "logging". If logging a specific piece of functionality is a requirement we really should test that we are logging the correct thing!
  3. As the WPF guy I am interested in any WPF binding errors (in the VS output window), but the service/agent guys were flooding the console screen with their pub/sub debug messages. How could I filter out their messages, and how could they filter out mine?
  4. We were catching the unhandled exception at the dispatcher, not at the point it was thrown. This meant we had a huge stack trace (noise) pointing back to the problem, and we were also missing some useful data for debugging the exception, such as the inputs to the method that threw it.

Setting the requirements

From the problems we have above the team decided that our requirements for the logging system were:
  • Testable
  • Provide different levels(severity) of logging. Trace, Debug and Exception.
  • Filter on logging level
  • Log to various targets. Email, File and Console.
  • Provide different areas or categories to log as. e.g. Presentation, Agent, Services...
  • Filter on logging Category (so I don't see the agent debug messages and agent developers don't see my presentation guff)
The requirements for the implementation of the logging system are:
  • Log unhandled exceptions so we can analyse them when they occur on Live
  • Log the inputs to the method that threw the exception so we can identify if it is data related
  • Replace console/debug/trace calls with the new logger.

Journey to a solution

Logging System

The first thing I will tackle is testability. I will create my own interface called ILogger that will do my logging for me:
public interface ILogger
{
  void Write(string message);
}

Well that was easy! Next, provide different levels of logging. I am going to do that by replacing my Write method with level specific methods:
public interface ILogger
{
    void Trace(string message);
    void Debug(string message);
    void Error(string message);
}

Now to add categories. Here I will add a category parameter to each of the methods. I would advise against using enums for your categories, especially if your logger becomes a shared API: enums cannot be extended and really tie you down.

public interface ILogger
{
    void Trace(string category, string message);
    void Debug(string category, string message);
    void Error(string category, string message);
}

Hmmm. It now looks like we are making a mess for the poor developer who has to implement this interface. I think I may have been on the right track with the Write method in the first place. I am going to change it back to a Write method that takes 3 parameters, and offer extension methods to give users the ability to make calls like the ones above (a little bit more work for me but less work for everyone else). See here for more info on extending interfaces with extension methods.

public interface ILogger
{
    void Write(LogLevel level, string category, string message);
}
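The extension methods mentioned above might look something like this. The LogLevel enum is my guess at its shape (the post never shows it); the real code is in the linked sample:

```csharp
using System;

// Assumed shape of the level enum referenced by ILogger.Write.
public enum LogLevel { Trace, Debug, Error }

// Extension methods give callers the friendly level-specific overloads
// while keeping the interface down to a single Write method.
public static class LoggerExtensions
{
    public static void Trace(this ILogger logger, string category, string message)
    {
        logger.Write(LogLevel.Trace, category, message);
    }

    public static void Debug(this ILogger logger, string category, string message)
    {
        logger.Write(LogLevel.Debug, category, message);
    }

    public static void Error(this ILogger logger, string category, string message)
    {
        logger.Write(LogLevel.Error, category, message);
    }
}
```

Consumers now call `_logger.Debug("Agent", "message")` while implementers only ever write one method.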

Some may be thinking "Hang on. Strings are bad! Surely a category enum has to be better?" To answer this I would recommend using an internal class like this:

class Category
{
    public const string Agent = "Agent";
    public const string Presentation = "Presentation";
    public const string Service = "Service";
}

If you want to expose categories in your public API, create a public version of this class that consumers can use and even inherit from to add their own categories. If you are thinking "shouldn't they be public static readonly?", I will cover that later.
Checking off against our requirements list, we can cross some things off:

  • Testable (Interfaces make things testable)
  • Provide different levels(severity) of logging. Trace, Debug and Exception.
  • Filter on logging level
  • Log to various targets. Email, File and Console.
  • Provide different areas or categories to log as. eg Presentation, Agent, Services...
  • Filter on logging Category (so I don’t see the agent debug messages and agent developers don't see my presentation guff)
So that is a good start. Now we need to add filtering and targeting of various outputs. Luckily there are plenty of 3rd party logging tools out there that do all of this for us. As our project is using Enterprise Library already, we will just use its Logging Application Block. See the example at the end of the post for a complete example.
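For readers not using Enterprise Library, the level and category filtering can be sketched with a minimal hand-rolled implementation. This is a hypothetical stand-in (not the Logging Application Block), assuming the ILogger interface above and a LogLevel enum ordered Trace < Debug < Error:

```csharp
using System;
using System.Collections.Generic;

// A minimal ILogger that filters on level and category before handing
// the formatted message to a target (Console, file, email sender, ...).
public class FilteringLogger : ILogger
{
    private readonly LogLevel _minimumLevel;
    private readonly HashSet<string> _mutedCategories;
    private readonly Action<string> _target;

    public FilteringLogger(LogLevel minimumLevel,
                           IEnumerable<string> mutedCategories,
                           Action<string> target)
    {
        _minimumLevel = minimumLevel;
        _mutedCategories = new HashSet<string>(mutedCategories);
        _target = target;
    }

    public void Write(LogLevel level, string category, string message)
    {
        if (level < _minimumLevel) return;               // filter on severity
        if (_mutedCategories.Contains(category)) return; // filter on category
        _target(string.Format("[{0}] {1}: {2}", level, category, message));
    }
}
```

This is how the WPF guy mutes the "Agent" chatter: construct his logger with "Agent" in the muted categories, and the agent developers do the reverse.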
Great! Thanks to the 3rd party logging system we have ticked off all of our system requirements, now for our implementation requirements.

Implementation of Logging

Now that we have an interface to code against, let us now look at how we would use it.
Ed: This is completely back to front. You should normally look at how you would use it first and then create the implementation; TDD is a great methodology for this. We are approaching it back-to-front in this post because I think it is easier for the reader to consume.
So the main pain point we have is logging exceptions that occur in the agents (the classes that get data from services and map it to client side entities). This is due to poor 3rd party documentation, Live being slightly different to Dev and QA, and integration testing being harder to perform than unit testing.
Prior to our new logging system, some agent code might have looked like this:
public void AmendPortfolio(PortfolioAmmendment portfolio)
{
    Console.WriteLine("AmendPortfolio {0}", portfolio.Id);
    var data = portfolio.MapToServiceData();    //Mapper extension method.
    _service.AmendBasket(_session, data, true);    //Call to 3rd party system
    Console.WriteLine("AmendPortfolio complete");
}

Now if we swapped out the console calls with our logger and then put in some exception logging, it might look like this:
public void AmendPortfolio(PortfolioAmmendment portfolio)
{
    _logger.Debug(Category.Agent, "AmendPortfolio {0}", portfolio.Id);
    try
    {
        var data = portfolio.MapToServiceData();    //Mapper extension method.
        _service.AmendBasket(_session, data, true);    //Call to 3rd party system
    }
    catch(Exception ex)
    {
        _logger.Error(Category.Agent, ex.ToString());
        throw;
    }
    _logger.Debug(Category.Agent, "AmendPortfolio complete");
}

Oh dear. We now have more code that does logging than code that does real work. While we have satisfied our requirements, we have doubled our work load. Not good. Back to the drawing board.

AOP

Some will have heard of Aspect Orientated Programming (AOP). It seemed like 5 years ago it was going to change everything; well, it mainly just changed logging. AOP is a style of programming that allows code to be injected at a given interception point. In our case the interception points would be the start of our agent method, the end of the agent method, and when the method throws an exception (which is really just another way for the method to end). As far as I know there are 2 popular ways to achieve this:
  1. at run time using a tool like Castle Windsor or Microsoft Unity
  2. at compile time using a tool like PostSharp
I have had some experience with PostSharp, as I used it prior to Unity having the ability to add interceptors, so for our solution we went with PostSharp. I believe switching between any of these options would not be a huge amount of work.
First, a quick introduction on how I have previously done AOP logging with PostSharp. I would create an attribute that I would apply to classes or methods that I wanted logged. The attribute satisfies the requirements of PostSharp so that code is injected at compile time to do my logging. The code looks like this:
[Logged]
public void AmendPortfolio(PortfolioAmmendment portfolio)
{
    var data = portfolio.MapToServiceData();    //Mapper extension method.
    _service.AmendBasket(_session, data, true);    //Call to 3rd party system
}

which at compile time would alter the IL to represent something more like this:
public void AmendPortfolio(PortfolioAmmendment portfolio)
{
    Logger.Debug("AmendPortfolio({0})", portfolio.Id);
    try
    {
        var data = portfolio.MapToServiceData();    //Mapper extension method.
        _service.AmendBasket(_session, data, true);    //Call to 3rd party system
    }
    catch(Exception ex)
    {
        Logger.Error(ex.ToString());
        throw;
    }
    Logger.Debug("AmendPortfolio complete");
}
Well, that looks perfect doesn't it? Not really. We don't have a category specified, and we have a hard coded reference to the static Logger class from Enterprise Library Logging. It no longer points to our _logger member variable, which was of type ILogger, and that makes our testing harder to do. If testing your logging is not really something you care about (which is fine), then this AOP solution might be for you. If you do want to be more specific about logging, then we need to find a way of getting hold of the instance of ILogger. As PostSharp is a compile time AOP framework, it is a touch harder to integrate than if we used Unity or Windsor. The main problem is: how do we get a handle on the logger? A solution we came up with was to create an ILogged interface:
public interface ILogged
{
    ILogger Logger { get; }
}
By doing this we expose the logger so we can use it in the aspect/attribute. Now, if we look at the method we are trying to log in the greater context of the class it resides in, we can see what the implementation may look like:
[Logged]
public class PortfolioAgent : IPortfolioAgent, ILogged
{
    private readonly ILogger _logger;
    private readonly ISomeService _service;
    public PortfolioAgent(ILogger logger, ISomeService service)
    {
        _logger = logger;
        _service = service;
    }
    public void AmendPortfolio(PortfolioAmmendment portfolio)
    {
        var data = portfolio.MapToServiceData();    //Mapper extension method.
        _service.AmendBasket(_session, data, true);    //Call to 3rd party system
    }
}

That looks kinda cool to me. One thing my colleagues noted is that they would prefer property injection for the logger in this scenario. This is fine with me, as long as it is easy to use and we can still test it; the ILogged interface does not preclude property injection, it is just not my preference. Another thing to note is the lack of a category. The easy fix there is to add a string Category property to our LoggedAttribute.
[Logged(Category=Category.Agent)]
public class PortfolioAgent : IPortfolioAgent, ILogged
{
...
}


If you remember, earlier I mentioned public const vs public static readonly. This is the reason why I chose const fields: so that they could be used in the attribute.


I am pretty happy with this now. I think we tick off all of our requirements and have not added a lot of complexity to our code. The one last thing that bugs me is that the LoggedAttribute and the ILogged interface have to be used as a couple. If I use one without the other, I will get either no logging, a nasty runtime exception, or, if I code the aspect correctly, a compile time error (the most desirable). At first I coded the attribute to do the latter (compile time error), but then realised that all of my agents were in one project and all used the same category. To make life a little bit easier I moved the attribute to the AssemblyInfo and had the aspect apply itself automatically to any class that implemented ILogged. This may be a step too far towards black magic for some teams, so do what fits best.
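For teams wary of compile-time weaving, the same three interception points can be hand-rolled as a decorator around the agent. This is a sketch of the idea only (not the post's actual aspect), reusing the ILogger, ILogged-era types from above:

```csharp
using System;

// A hand-rolled decorator showing the three points the aspect injects:
// method entry, method exit and exception. It wraps the real agent.
public class LoggedPortfolioAgent : IPortfolioAgent
{
    private readonly IPortfolioAgent _inner;
    private readonly ILogger _logger;

    public LoggedPortfolioAgent(IPortfolioAgent inner, ILogger logger)
    {
        _inner = inner;
        _logger = logger;
    }

    public void AmendPortfolio(PortfolioAmmendment portfolio)
    {
        _logger.Debug(Category.Agent, string.Format("AmendPortfolio({0})", portfolio.Id));
        try
        {
            _inner.AmendPortfolio(portfolio);   // delegate to the undecorated agent
        }
        catch (Exception ex)
        {
            _logger.Error(Category.Agent, ex.ToString());
            throw;                              // log and rethrow, as before
        }
        _logger.Debug(Category.Agent, "AmendPortfolio complete");
    }
}
```

The trade-off is one decorator class per decorated interface, which is exactly the boilerplate the PostSharp aspect removes.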

Have a look at the sample code here. Both styles are shown in the example so take your pick of which suits you best.

Wednesday, September 16, 2009

Success, Motivation and Management

I just watched an interesting presentation from Dan Pink on The surprising science of motivation. In it he discusses the conventional ways to get productivity out of employees via a carrot and stick mentality. Watch the video first (18 min) so I don't take the wind from his sails.

What I found interesting, especially on reflection of my earlier post Projects – Measuring success and providing boundaries, was how he related the 20th century management style to prescriptive roles. Prescriptive roles are roles where you can give very clear guidance on how to perform a task and on the boundaries of the role. Boundaries normally define measurable tasks with reward/punishment (carrot/stick) attached. These can be anything from simple things such as:

  • If you work late the company will pay for pizza delivery
  • if the project comes in on time you get a bonus
  • if you meet your KPIs you get all of your bonus. Pessimistically viewed as: miss any of your KPIs and we will dock your pay.

However, the interesting thing about prescriptive roles is that in the 21st century the game has changed. Any task that can be completed without a level of creativity, abstract thought or reasoning can generally be done:

  • cheaper by outsourced labour
  • faster by automation (mechanical or computer)

This affects the software industry massively. Outsourcing burst on to the scene at the turn of the century and appeared to be the Holy Grail for accountants across western civilisation. It also scared the hell out of any "computer guy": how was he going to make the payments on his new sports car? Outsourcing was not all it was cracked up to be, with stories of low quality product and communication failures. It now seems to be making a small come back, and I think we will see this see-saw rock a little more before outsourcing becomes part of our day to day life. See The 4-Hour Workweek for some great ideas on outsourcing your life.

Dan Pink argues that the 20th century style of carrot/stick management worked well with prescriptive roles. But we are in the 21st century now, and I would like to think that anyone reading this is not performing a prescriptive role. I would even argue that our role is to eliminate or automate what we can, and normally the things that can be eliminated or automated are prescriptive processes. Roles that add real value to any company require creativity, problem solving, communication skills, building relationships etc. These things cannot (yet) be automated.

So, moving from the production line 20th century to the service orientated 21st century, we are seeing a shift from the role of management being one based around compliance (carrot/stick) to self managing, autonomous teams/employees. This is in line with what agile and lean concepts are trying to achieve. Creating a culture where

  1. autonomy
  2. mastery
  3. purpose

are values that are held dear creates an amazing shift in the positivity of the team. Instead of prescribing exactly what is to be done and creating a drone army (which could be replaced by outsourcing or automation), try setting clear expectations of the outcomes you want achieved and let the team go for it. This will give the team a sense of worth, as you are touching on Maslow's Hierarchy of Needs by giving them a channel for creativity and problem solving, but probably more importantly a sense of respect and belonging. Obviously it doesn't have to be a free-for-all, which would no doubt result in total failure; simple things like burn down charts and daily stand-ups can give 'management' visibility of progress.

So what can you do if you are not management?

I believe that people are attracted to like-minded people. If you value and exercise principles like mastery, purpose, independence and interdependence, then I think you will gravitate to companies where that is the culture. The easiest thing to do is to try for mastery. In the information age it is simple to learn new skills; simple is not the same as easy, however. Finding the information on a subject is not the same as mastering it, so you have to jump in feet first and make mistakes. Ideally this is not on mission critical projects; I do a lot of mini projects at home to learn a new skill. Some companies like Google are now offering time at work to do this stuff. If your company does not offer this time, then you may want to consider your priorities and see if you can make time for self development. In 30 minutes on the way to work you can read 10 pages of a book; that should get you through around one book per month. If you drive/cycle to work, then maybe audio books or podcasts are for you. This is only one part of the path to mastery. You then need to distil the information you are collecting. At the end of a chapter, jot down some notes in your own words on what you just learnt; the act of distilling the information will help you understand it better. To further improve your understanding, actually do something that requires your new skill. Lastly, to really see if you understand the skill, try teaching it to someone else. This really shows you where you are still weak.

Next, start automating the prescriptive parts of your role. For C# guys this is mainly done for us with tools like automated build servers and ReSharper. If you have not worn out your Tab key, then as a C# coder you probably don't use snippets/live templates enough. If you don't have a little keyboard shortcut for getting latest, building, running tests and checking in, then automate it! If part of your role can't be automated, try outsourcing it. It is amazing what people will do if you ask them nicely: "Excuse me Tony, would you mind taking this mail to the mail room for me?"

Once you are on the track to mastery by creating some sort of self development routine*, and you have automated or outsourced the prescriptive parts of your role, you can concentrate on delivering value. This is where purpose comes in. As you develop more skills, reduce drone work and start delivering more value, management may feel that you can have a level of autonomy; from their perspective, giving you autonomy now is really just reducing their work load. If you constantly deliver value, management will naturally move away from compliance management towards engagement.

S.M.A.R.T. goals

Projects – Measuring success and providing boundaries

*I try to read a minimum of one book per month. I also try to keep up with my blog subscriptions but constantly have hundreds of unread posts. As a guide, apparently Bill Gates reads 3 books a weekend!

Friday, August 14, 2009

How not to write an Interface in C# 3.0

While working with the IUnityContainer interface from Microsoft's Patterns and Practices team, I decided that it was well worth a post on how not to design interfaces. Recent discussions between Rhys and Colin (here as well) have been interesting, but I imagine most would agree that both arguments are really fighting over which polish to use on their code. If these are the big battles you have to face at work, then I am jealous.

Introducing IUnityContainer ....
42 members are defined on this interface. Fail.
With much restraint I won't go on about it and flame the guys who wrote it. Most people know of my disdain for the P&P team at Microsoft. So what tips do I give to make this usable again? Let's break the solution down into a few tips:

  • Extension methods
  • Cohesion
  • Intention revealing interfaces

How does each of these help us? Let's look at some code I was writing pre-extension methods. It was a validation interface that had 2 methods, Validate and Demand:

public interface IValidator<T>
{
  IEnumerable<ValidationErrors> Validate(T item);

  ///<exception cref="ValidationException">Thrown when the item is not valid</exception>
  void Demand(T item);
}

The problem with this interface is that all implementations would/could implement Demand the same way: make a call to Validate(T item) and, if any ValidationErrors came back, throw an exception with the validation failures. Time ticks by and I realise that now that I have extension methods in C# 3.0 (.NET 3.5), I only need the Validate method on my interface and can provide the Demand implementation as an extension method. The code now becomes:

public interface IValidation<T>
{
  /// <summary>
  /// Validates the specified item, returning a collection of validation errors.
  /// </summary>
  /// <param name="item">The item to validate.</param>
  /// <returns>Returns a <see cref="T:ArtemisWest.ValidationErrorCollection"/> that is empty for a valid item, 
  /// or contains the validation errors if the item is not in a valid state.</returns>
  /// <remarks>
  /// Implementations of <see cref="T:ArtemisWest.IValidation`1"/> should never return null from the validate method.
  /// If no validation errors occurred then return an empty collection.
  /// </remarks>
  IEnumerable<ValidationErrors> Validate(T item);
}
public static class ValidatorExtensions
{
  /// <summary>
  /// Raises a <see cref="T:ArtemisWest.ValidationException"/> if the <paramref name="item"/> is not valid.
  /// </summary>
  /// <typeparam name="T">The type that the instance of <see cref="T:IValidation`1"/> targets.</typeparam>
  /// <param name="validator">The validator.</param>
  /// <param name="item">The item to be validated.</param>
  /// <exception cref="T:ArtemisWest.ValidationException">
  /// Thrown when validating the <paramref name="item"/> returns any errors. Only the first 
  /// validation error is raised with the exception. Any validation errors that are marked as
  /// warnings are ignored.
  /// </exception>
  public static void Demand<T>(this IValidation<T> validator, T item)
  {
    foreach (var error in validator.Validate(item))
    {
      if (!error.IsWarning)
      {
        throw new ValidationException(error);
      }
    }
  }
}

The end result is that the API feels the same, as I have access to both methods, but the cost of implementing my interface is reduced to just its core concern.
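A quick usage sketch of the pattern. The Person type and the ValidationErrors constructor used here are made up for illustration; only IValidation<T> and Demand come from the code above:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical entity, for illustration only.
public class Person
{
    public string Name { get; set; }
}

// Implementers only provide Validate; Demand comes free via the extension method.
public class PersonValidator : IValidation<Person>
{
    public IEnumerable<ValidationErrors> Validate(Person item)
    {
        if (string.IsNullOrEmpty(item.Name))
            yield return new ValidationErrors("Name is required"); // assumed ctor
    }
}
```

Calling `new PersonValidator().Demand(person)` now throws a ValidationException for a nameless person, with no extra code on the implementer's side, while `Validate` is still there when the caller wants to inspect the errors.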
So, extension methods are one trick we have in the bag. Next: cohesion.

The recent discussion between Rhys and Colin is "how many members belong on an interface?" I think both will agree that the answer is not 42. Juval Lowy gave a great presentation at TechEd 2008 on interfaces, suggesting we should aim for 3-7 members per interface. This gives us a good level of cohesion and a low level of coupling. Let's look at some of the members on the IUnityContainer interface:

  • 8 overloads of RegisterInstance
  • 16 overloads of RegisterType
  • Add/Remove "Extensions" methods
  • 4 overloads of BuildUp
  • 2 overloads of ConfigureContainer
  • CreateChildContainer
  • 4 overloads of Resolve
  • 2 overloads of ResolveAll
  • A TearDown method
  • and a Parent property
Whew! Well, how can we tame this beast? When I look at this interface I see certain groups that look like they are related in usage:
  1. Register and Resolve functionality
  2. And and Remove extensions functionality
  3. Build up and teardown functionality
  4. Container hierarchy functionality (Parent and CreateChildContainer)
  5. Container configuration


Interestingly, on our current project we only use the Register/Resolve functionality.
These five groups have some level of cohesion, which to me makes them candidates for their own interfaces. The big giveaway is that I use Unity quite successfully but never use 4/5 of the functionality defined on this interface. So our 2nd tip would be to split these groups of functionality into their own interfaces.
But what do we name the new interfaces? This is our 3rd tip:

Intention revealing interfaces.

Looking at my list I would imagine some useful names could be:

  1. IContainer
  2. IExtensionContainer
  3. ILifecycleContainer
  4. INestedContainer
  5. IConfigurableContainer
To be honest, I have put little thought into these names. Normally I would put a LOT of effort into getting the naming right, but I don't work on the P&P team, so these changes will never be made; why waste my time? Edit: My laziness here really does take the wind out of the sails of this argument. Sorry.
OK, so how can we bring this all together? My proposal would be to have 6 interfaces
  1. IContainer
  2. IExtensionContainer
  3. ILifecycleContainer
  4. INestedContainer
  5. IConfigurableContainer
  6. IUnityContainer : IContainer, IExtensionContainer, ILifecycleContainer, INestedContainer, IConfigurableContainer


Next I would create some extension methods to deal with the silly amount of duplication the multiple overloads incur. Looking at the implementation in UnityContainerBase I would think that all of these methods could be candidates for extension methods

public abstract class UnityContainerBase : IUnityContainer, IDisposable
{
  //prior code removed for brevity
  public IUnityContainer RegisterInstance<TInterface>(TInterface instance)
  {
    return this.RegisterInstance(typeof(TInterface), null, instance, new ContainerControlledLifetimeManager());
  }
  public IUnityContainer RegisterInstance<TInterface>(string name, TInterface instance)
  {
    return this.RegisterInstance(typeof(TInterface), name, instance, new ContainerControlledLifetimeManager());
  }
  public IUnityContainer RegisterInstance<TInterface>(TInterface instance, LifetimeManager lifetimeManager)
  {
    return this.RegisterInstance(typeof(TInterface), null, instance, lifetimeManager);
  }
  public IUnityContainer RegisterInstance(Type t, object instance)
  {
    return this.RegisterInstance(t, null, instance, new ContainerControlledLifetimeManager());
  }
  public IUnityContainer RegisterInstance<TInterface>(string name, TInterface instance, LifetimeManager lifetimeManager)
  {
    return this.RegisterInstance(typeof(TInterface), name, instance, lifetimeManager);
  }
  public IUnityContainer RegisterInstance(Type t, object instance, LifetimeManager lifetimeManager)
  {
    return this.RegisterInstance(t, null, instance, lifetimeManager);
  }
  public IUnityContainer RegisterInstance(Type t, string name, object instance)
  {
    return this.RegisterInstance(t, name, instance, new ContainerControlledLifetimeManager());
  }
  //Remaining code removed for brevity
}

All of these methods just delegate to the one overload left as abstract:

public abstract IUnityContainer RegisterInstance(
    Type t, 
    string name, 
    object instance, 
    LifetimeManager lifetime);

The obvious thing to do here is to make all of these into extension methods in the same namespace as the interfaces:

public static class IContainerExtensions
{
  public static IContainer RegisterInstance<TInterface>(this IContainer container, TInterface instance)
  {
    return container.RegisterInstance(typeof(TInterface), null, instance, new ContainerControlledLifetimeManager());
  }
  public static IContainer RegisterInstance<TInterface>(this IContainer container, string name, TInterface instance)
  {
    return container.RegisterInstance(typeof(TInterface), name, instance, new ContainerControlledLifetimeManager());
  }
  public static IContainer RegisterInstance<TInterface>(this IContainer container, TInterface instance, LifetimeManager lifetimeManager)
  {
    return container.RegisterInstance(typeof(TInterface), null, instance, lifetimeManager);
  }
  public static IContainer RegisterInstance(this IContainer container, Type t, object instance)
  {
    return container.RegisterInstance(t, null, instance, new ContainerControlledLifetimeManager());
  }
  public static IContainer RegisterInstance<TInterface>(this IContainer container, string name, TInterface instance, LifetimeManager lifetimeManager)
  {
    return container.RegisterInstance(typeof(TInterface), name, instance, lifetimeManager);
  }
  public static IContainer RegisterInstance(this IContainer container, Type t, object instance, LifetimeManager lifetimeManager)
  {
    return container.RegisterInstance(t, null, instance, lifetimeManager);
  }
  public static IContainer RegisterInstance(this IContainer container, Type t, string name, object instance)
  {
    return container.RegisterInstance(t, name, instance, new ContainerControlledLifetimeManager());
  }
}

This now reduces our IContainer interface to just one method:

public interface IContainer
{
  IContainer RegisterInstance(Type t, string name, object instance, LifetimeManager lifetime);
}


One thing to note here is that we have broken the contract because we now return IContainer not IUnityContainer. We will come back to this problem later.
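Until then, it is worth noting one possible direction, sketched here purely as my own illustration (this is not part of Unity): a self-referencing generic parameter lets each method return the most derived interface, so the fluent return type survives specialisation.

```csharp
// Sketch only: the "curiously recurring" generic constraint means TSelf is
// always the implementing interface, preserving the fluent return type.
public interface IContainer<TSelf> where TSelf : IContainer<TSelf>
{
  TSelf RegisterInstance(Type t, string name, object instance, LifetimeManager lifetime);
}

// A Unity-specific interface would then close the generic over itself:
public interface IUnityContainer : IContainer<IUnityContainer> { }
```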
If we then follow suit on the other interfaces we have created, we will have 6 interfaces that look like this:

public interface IContainer
{
  IContainer RegisterType(Type from, Type to, string name, LifetimeManager lifetimeManager, params InjectionMember[] injectionMembers);
  IContainer RegisterInstance(Type t, string name, object instance, LifetimeManager lifetime);
  object Resolve(Type t, string name);
  IEnumerable<object> ResolveAll(Type t);
}
public interface IExtensionContainer
{
  IExtensionContainer AddExtension(UnityContainerExtension extension);
  IExtensionContainer RemoveAllExtensions();
}
public interface ILifecycleContainer
{
  object BuildUp(Type t, object existing, string name);
  void Teardown(object o);
}
public interface INestedContainer
{
  INestedContainer CreateChildContainer();
  INestedContainer Parent { get; }
}
public interface IConfigurableContainer
{
  object Configure(Type configurationInterface);
}
public interface IUnityContainer : IDisposable, IContainer, IExtensionContainer, ILifecycleContainer, INestedContainer, IConfigurableContainer
{}

So now we have some much more manageable interfaces to work with. However, we have broken the feature it had before of returning IUnityContainer from each method. You may ask: why would you return the instance when clearly you already have it? By doing so you can create a fluent interface. See my other post on Fluent interfaces and DSLs for more information.
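To make the value of that returned instance concrete, here is a hypothetical registration sequence (ConsoleLogger and SystemClock are made-up classes) that chains because each call returns the container:

```csharp
// Each registration returns the container, so the calls chain fluently
// instead of repeating the container variable on every line.
container
    .RegisterInstance(typeof(ILogger), null, new ConsoleLogger(), new ContainerControlledLifetimeManager())
    .RegisterInstance(typeof(IClock), null, new SystemClock(), new ContainerControlledLifetimeManager());
```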
Now that we have removed all the noise from the interfaces, we actually have a reasonable number of members to work with. This makes me think that maybe we can refactor back to a single interface. Let's have a look:

public interface IUnityContainer : IDisposable
{
  IUnityContainer RegisterType(Type from, Type to, string name, LifetimeManager lifetimeManager, params InjectionMember[] injectionMembers);
  IUnityContainer RegisterInstance(Type t, string name, object instance, LifetimeManager lifetime);
  object Resolve(Type t, string name);
  IEnumerable<object> ResolveAll(Type t);
  IUnityContainer AddExtension(UnityContainerExtension extension);
  IUnityContainer RemoveAllExtensions();
  object BuildUp(Type t, object existing, string name);
  void Teardown(object o);
  IUnityContainer Parent { get; }
  IUnityContainer CreateChildContainer();
  object Configure(Type configurationInterface);
}

Well, that is 11 members (12 counting Dispose), which is above my happy limit of 10 and well over my ideal limit of 7. However, I think this would be a vast improvement on the silly interface that currently exists with its 42 members.
Just for fun, here are the extension methods that would bring the interface back to feature parity.

public static class IUnityContainerExtensions
{
  public static IUnityContainer AddNewExtension<TExtension>(this IUnityContainer container) where TExtension : UnityContainerExtension, new()
  {
    return container.AddExtension(Activator.CreateInstance<TExtension>());
  }
  public static T BuildUp<T>(this IUnityContainer container, T existing)
  {
    return (T)container.BuildUp(typeof(T), existing, null);
  }
  public static object BuildUp(this IUnityContainer container, Type t, object existing)
  {
    return container.BuildUp(t, existing, null);
  }
  public static T BuildUp<T>(this IUnityContainer container, T existing, string name)
  {
    return (T)container.BuildUp(typeof(T), existing, name);
  }
  public static TConfigurator Configure<TConfigurator>(this IUnityContainer container) where TConfigurator : IUnityContainerExtensionConfigurator
  {
    return (TConfigurator)container.Configure(typeof(TConfigurator));
  }
  public static IUnityContainer RegisterInstance<TInterface>(this IUnityContainer container, TInterface instance)
  {
    return container.RegisterInstance(typeof(TInterface), null, instance, new ContainerControlledLifetimeManager());
  }
  public static IUnityContainer RegisterInstance<TInterface>(this IUnityContainer container, string name, TInterface instance)
  {
    return container.RegisterInstance(typeof(TInterface), name, instance, new ContainerControlledLifetimeManager());
  }
  public static IUnityContainer RegisterInstance<TInterface>(this IUnityContainer container, TInterface instance, LifetimeManager lifetimeManager)
  {
    return container.RegisterInstance(typeof(TInterface), null, instance, lifetimeManager);
  }
  public static IUnityContainer RegisterInstance(this IUnityContainer container, Type t, object instance)
  {
    return container.RegisterInstance(t, null, instance, new ContainerControlledLifetimeManager());
  }
  public static IUnityContainer RegisterInstance<TInterface>(this IUnityContainer container, string name, TInterface instance, LifetimeManager lifetimeManager)
  {
    return container.RegisterInstance(typeof(TInterface), name, instance, lifetimeManager);
  }
  public static IUnityContainer RegisterInstance(this IUnityContainer container, Type t, object instance, LifetimeManager lifetimeManager)
  {
    return container.RegisterInstance(t, null, instance, lifetimeManager);
  }
  public static IUnityContainer RegisterInstance(this IUnityContainer container, Type t, string name, object instance)
  {
    return container.RegisterInstance(t, name, instance, new ContainerControlledLifetimeManager());
  }
  public static IUnityContainer RegisterType<T>(this IUnityContainer container, params InjectionMember[] injectionMembers)
  {
    return container.RegisterType(typeof(T), null, null, null, injectionMembers);
  }
  public static IUnityContainer RegisterType<TFrom, TTo>(this IUnityContainer container, params InjectionMember[] injectionMembers) where TTo : TFrom
  {
    return container.RegisterType(typeof(TFrom), typeof(TTo), null, null, injectionMembers);
  }
  public static IUnityContainer RegisterType<T>(this IUnityContainer container, LifetimeManager lifetimeManager, params InjectionMember[] injectionMembers)
  {
    return container.RegisterType(typeof(T), null, null, lifetimeManager, injectionMembers);
  }
  public static IUnityContainer RegisterType<TFrom, TTo>(this IUnityContainer container, LifetimeManager lifetimeManager, params InjectionMember[] injectionMembers) where TTo : TFrom
  {
    return container.RegisterType(typeof(TFrom), typeof(TTo), null, lifetimeManager, injectionMembers);
  }
  public static IUnityContainer RegisterType<T>(this IUnityContainer container, string name, params InjectionMember[] injectionMembers)
  {
    return container.RegisterType(typeof(T), null, name, null, injectionMembers);
  }
  public static IUnityContainer RegisterType<TFrom, TTo>(this IUnityContainer container, string name, params InjectionMember[] injectionMembers) where TTo : TFrom
  {
    return container.RegisterType(typeof(TFrom), typeof(TTo), name, null, injectionMembers);
  }
  public static IUnityContainer RegisterType(this IUnityContainer container, Type t, params InjectionMember[] injectionMembers)
  {
    return container.RegisterType(t, null, null, null, injectionMembers);
  }
  public static IUnityContainer RegisterType<T>(this IUnityContainer container, string name, LifetimeManager lifetimeManager, params InjectionMember[] injectionMembers)
  {
    return container.RegisterType(typeof(T), null, name, lifetimeManager, injectionMembers);
  }
  public static IUnityContainer RegisterType<TFrom, TTo>(this IUnityContainer container, string name, LifetimeManager lifetimeManager, params InjectionMember[] injectionMembers) where TTo : TFrom
  {
    return container.RegisterType(typeof(TFrom), typeof(TTo), name, lifetimeManager, injectionMembers);
  }
  public static IUnityContainer RegisterType(this IUnityContainer container, Type t, LifetimeManager lifetimeManager, params InjectionMember[] injectionMembers)
  {
    return container.RegisterType(t, null, null, lifetimeManager, injectionMembers);
  }
  public static IUnityContainer RegisterType(this IUnityContainer container, Type t, string name, params InjectionMember[] injectionMembers)
  {
    return container.RegisterType(t, null, name, null, injectionMembers);
  }
  public static IUnityContainer RegisterType(this IUnityContainer container, Type from, Type to, params InjectionMember[] injectionMembers)
  {
    return container.RegisterType(from, to, null, null, injectionMembers);
  }
  public static IUnityContainer RegisterType(this IUnityContainer container, Type t, string name, LifetimeManager lifetimeManager, params InjectionMember[] injectionMembers)
  {
    return container.RegisterType(t, null, name, lifetimeManager, injectionMembers);
  }
  public static IUnityContainer RegisterType(this IUnityContainer container, Type from, Type to, LifetimeManager lifetimeManager, params InjectionMember[] injectionMembers)
  {
    return container.RegisterType(from, to, null, lifetimeManager, injectionMembers);
  }
  public static IUnityContainer RegisterType(this IUnityContainer container, Type from, Type to, string name, params InjectionMember[] injectionMembers)
  {
    return container.RegisterType(from, to, name, null, injectionMembers);
  }
  public static T Resolve<T>(this IUnityContainer container)
  {
    return (T)container.Resolve(typeof(T));
  }
  public static T Resolve<T>(this IUnityContainer container, string name)
  {
    return (T)container.Resolve(typeof(T), name);
  }
  public static object Resolve(this IUnityContainer container, Type t)
  {
    return container.Resolve(t, null);
  }
  public static IEnumerable<T> ResolveAll<T>(this IUnityContainer container)
  {
    //The Reflector output here is a compiler-generated iterator that is not
    //worth reproducing; a simple iterator over the non-generic overload will do.
    foreach (object item in container.ResolveAll(typeof(T)))
    {
      yield return (T)item;
    }
  }
}

Bloody hell. Imagine trying to implement that on every class that implemented the interface!

While I know this entire post is academic, as we can't change Unity, I hope it sows some seeds for other developers, so that when they design their next interface it won't have silly overloads that just make the consuming developer's job harder than it ought to be.

Sunday, July 26, 2009

Code inspections

Code Inspection (also known as Code Review or Peer Review) is a technique a team can implement to improve the quality of the software they deliver. It is initially a bit of a cultural shift for many teams, but done well it can produce benefits not just in software quality but also in team morale.

Goals

The goals for performing a Code Inspection are the following:

  1. To cross pollinate Domain knowledge
  2. To cross pollinate technical knowledge
  3. To maintain a consistent team process
  4. To identify any obvious errors

A code inspection should be an enjoyable experience where two developers are able to bond and cooperatively improve the quality of the software delivered. The exercise should not feel like a witch-hunt, or a boasting exercise where one developer flexes their ability over another. Initially the exercise may feel somewhat uncomfortable for the team; this is usually due to the many different styles and tolerances that each member of the team brings to the collective skill set. In these scenarios it is better to agree amongst the team the level of detail that is within the remit of the inspections.

Introducing a team to Code Inspections

It is probably best if a team eases into code inspections, for it is better to be doing some code inspections and forgiving style differences than to be too aggressive in initial inspections and cause tension that sees the exercise avoided. While this may be difficult to accept at first, remember that increasing the quality of today's code is less important than spreading knowledge throughout the team and maintaining team spirit. To know what to target in initial inspections, the team should hold a brief meeting and pick a handful of areas that everyone agrees need work. This way we get the biggest gain without creating too much friction.

Tools

The "what" to look for is less important than the process itself, but the process can be aided quite nicely by automated tools. In the .NET community the tools I would recommend are FxCop (or Visual Studio Code Analysis), Simian (which identifies duplicate code) and NCover (for test coverage).
To ease into code inspections I think it is best to set the bar quite low. For FxCop, just enable the following rules:
  • Design
  • Naming
  • Performance
  • Security
  • Usage
While some teams may have their own style that might conflict with some of the styles in FxCop, ultimately this is a community standard now. As developers move between jobs it is great to eliminate this kind of induction friction when all .NET developers can be "singing from the same hymn sheet".
Simian lets you set a tolerance for duplication; a threshold of 6 lines of duplicate code should be fine.
Assuming the team is doing TDD, or at least writing tests, the bar for test coverage could start at something like 80%. While I understand that chasing 100% test coverage can be frowned upon, it is nice to know that code was written for a reason and does what it was intended to do.

Setting expectations

Once the initial team meeting has been completed, a set of expectations should be drawn up. This will be the benchmark for code inspections. The benchmark should be set at a level where everyone in the team agrees they are happy for their code to be "rejected" if it does not meet the expectations. As you can imagine, this in itself will lower the bar quite considerably, as developers know there are circumstances beyond their control which may prohibit them from meeting the expectations they would like to meet: project deadlines, new technologies and processes can all affect a developer's ability to meet these goals. Remember, the goal of code inspection is not perfect code; it is to improve the team's ability to create better code, and to continually get better at doing so.
An example for an initial code inspection check list may look like this for a C# 3.0 project:
  1. Exceptions that are explicitly thrown from public or protected members must be documented with XML comments.
  2. FxCop Naming and Performance rules must be met.
  3. Classes should be testable.
While this is a very small list of things to consider, just setting these expectations, combined with the knowledge sharing the team would perform, would see a dramatic improvement in any team not already doing code inspections.
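To illustrate the first expectation, a hypothetical member (the Withdraw method and Balance property are invented for this example) that documents the exception it throws might look like this:

```csharp
/// <summary>Withdraws the specified amount from the account.</summary>
/// <exception cref="InvalidOperationException">
/// Thrown when <paramref name="amount"/> exceeds the current balance.
/// </exception>
public void Withdraw(decimal amount)
{
  if (amount > Balance)
    throw new InvalidOperationException("Insufficient funds.");
  Balance -= amount;
}
```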
Once the team is comfortable with the process and culture of code inspections, have another meeting to raise the bar a little. Again, refrain from being too eager; all team members should feel comfortable with the expectations being set.

Performing Code Inspections

Before we cover the actual steps of performing a code inspection, quickly remember the goals of a code inspection:
  1. To cross pollinate Domain knowledge
  2. To cross pollinate technical knowledge
  3. To maintain a consistent team process
  4. To identify any obvious errors

The inspection is not to create perfect code.
Steps to perform a code inspection (the first few steps are a courtesy to the reviewer):
  1. First perform a quick review yourself.
  2. Indicate to the person you would like to review your code that you will want some of their time in a few moments. It is important that this is not always the same few people; have a method where you cycle through the team (clockwise round the office, youngest to oldest, etc.). Generally you will want to do most of your reviews with your immediate team, but if you have a larger development team/department that can be involved in the inspection, be sure to include them too. *
  3. Perform any automated analysis (FxCop, Simian, Unit tests etc...)
  4. Open all the necessary documents ready to go, and consider the logical path to show the reviewer through your code.
  5. Some prefer to review code on paper. This is very good for removing the desire to solve any problems with the code; it also allows you to easily make notes against the code and possibly perform the review in a quiet place. However, it does create a mass of wasted paper and ink.
  6. When both parties are ready, have the author describe the intent of the code to the reviewer. This should start with a 10-second overview of what the set of code does.
  7. Show the reviewer the output of any automated analysis (FxCop, tests, coverage etc.). Obviously the results should meet the set expectations.
  8. If at the computer give the reviewer the mouse and keyboard. If using a print out have the print out in front of the reviewer.
  9. Describe what the code does, ideally from entry point to each of the leaves. Very good code will basically read like prose anyway so this would be very easy.
  10. The reviewer sets the pace of the review by scrolling or navigating to classes/methods as they understand the code.
  11. The reviewer prompts for "what-if" scenarios and bookmarks any concerns (Ctrl+k+k in Visual Studio). No attempt is made to "fix" any code by either party.
  12. As the reviewer bookmarks any code they should also take a note of why the bookmark was made (unclear code, not meeting agreed expectations, possible error etc.). This can be done on paper or, if available, on a Code Inspection prompt sheet.
  13. The author generally will identify most of their mistakes as they describe the code to the reviewer.
  14. If the code has been flagged in any way that fails to meet the set expectations -or- would result in probably faulty software, the Author requests that the Reviewer come back to verify their changes and approve the code.**
  15. Once the review(s) have come to a point where no probable faults exist in the code and the team expectations are met, the code review is complete.


*In my experience my project teams have generally been of sizes 3-8. In these scenarios I would try to have each member of the team review my code over the course of a week. Once I had engaged each team member I would next engage someone from another team. This allows for knowledge transfer across teams too (like finding out each team is using different logging tools!).

**By having the Author propose that they fix their own code, far less stress is placed on the Reviewer, who might otherwise feel as if they are judging the Author's code. Essentially the Reviewer is a sounding board and prompt for the Author.

Hints and Tips

  • Ensure that the team is clear that both the Reviewer and the Author are there to find any defects. It is a cooperative approach.
  • Focus on finding defects and failed expectations, not fixing them.
  • The biggest benefit for performing Code Inspections is that it creates a culture where the team all get better technically and also domain knowledge is not isolated to pockets of people.
  • Inspect all new and modified code
  • For best results, review often and review early. Not many developers write so much code that four reviews in a day would be necessary, but not many write so little that only one, or none, would be done in a day. An example of a set of code to review could be when an MVP screen is complete: review the Model, View, Presenter and tests. I imagine few developers produce more than four tested screens per day.
  • As you get better at code reviews they should become less formal and quicker to execute. Extreme Programming takes peer review to the next level by having two developers work together constantly, in a practice called Pair Programming.
  • Some teams keep track of their code reviews and can create statistics to analyse where current weaknesses are. If you see value in this then give it a go, but make sure it doesn't just create "busy work". I believe that if inspections are done correctly the team will know when there is an area of need and should raise it appropriately.
  • Code reviews should be done at the time the code is completed. New code should not be started until the current code has been inspected and approved. Keeping things fresh in your head is key to a good review. It also means that "unfinished"/un-inspected code doesn't just slip out into production.
  • Some source/version control systems allow you to attribute check-ins. If this is the case, then code that has passed inspection can be attributed, ideally with the reviewer's details. This allows the team to identify all code that has not been reviewed (i.e. the last check-in for a file was not attributed).

Thursday, July 9, 2009

.NET Parallel Framework - Videos

Way back in June 2008 Joe Albahari gave a presentation on the upcoming parallel programming features in .NET 4.0. I have finally published the recording of his session here. Just for reference the Parallel Framework encompasses PLINQ and is sometimes referred to as the TPL (Task Parallel Library) or System.Threading.dll

Sorry about the delay in the post.

Joe Albahari Parallel FX and PLINQ videos : 6 videos @ ~10min and 40-50MB each.

Saturday, June 6, 2009

Responsive WPF User Interfaces Part 7

Creating a responsive Image control

In part 6 we discovered problems even when we follow the guidelines from this series. In this case we found that sometimes it can be the controls themselves that cause the lagging interface; in part 6 it was the Image control that was to blame. I make the assumption that it was trying to perform its bitmap decoding on my precious dispatcher thread. However, it is not only Image controls that can get ugly: Charles Petzold shows some problems you may find when using chart controls (or any Items Control) in his Writing More Efficient Items Controls article.

The first thing I wanted to do was simply subclass Image; however, I was stumped as to how I would then create my control template for it. Also, I only wanted to accept URIs as my source, so that I can do the decoding from file to an ImageSource myself explicitly, and not on the UI thread via a TypeConverter. So I want a subclass of Control with a dependency property UriSource and then 3 read-only properties: ImageSource, IsLoading and HasLoadFailed. UriSource provides the hook to bind your file name to. ImageSource then provides the decoded ImageSource to be displayed. IsLoading and HasLoadFailed are there so that you can update the UI appropriately through the lifecycle of the image.
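A sketch of that public shape might look like the following; the property names come from the description above, but the registration plumbing is my assumption rather than the author's actual code:

```csharp
public class Thumbnail : Control
{
  public static readonly DependencyProperty UriSourceProperty =
      DependencyProperty.Register("UriSource", typeof(string), typeof(Thumbnail),
          new PropertyMetadata(null, OnUriSourceChanged));

  // Read-only dependency properties use a private key plus a public property.
  private static readonly DependencyPropertyKey ImageSourcePropertyKey =
      DependencyProperty.RegisterReadOnly("ImageSource", typeof(ImageSource),
          typeof(Thumbnail), new PropertyMetadata(null));
  public static readonly DependencyProperty ImageSourceProperty =
      ImageSourcePropertyKey.DependencyProperty;

  private static readonly DependencyPropertyKey IsLoadingPropertyKey =
      DependencyProperty.RegisterReadOnly("IsLoading", typeof(bool),
          typeof(Thumbnail), new PropertyMetadata(false));
  public static readonly DependencyProperty IsLoadingProperty =
      IsLoadingPropertyKey.DependencyProperty;

  // HasLoadFailed would follow the same read-only pattern; omitted for brevity.

  public string UriSource
  {
    get { return (string)GetValue(UriSourceProperty); }
    set { SetValue(UriSourceProperty, value); }
  }

  public ImageSource ImageSource
  {
    get { return (ImageSource)GetValue(ImageSourceProperty); }
    private set { SetValue(ImageSourcePropertyKey, value); }
  }

  public bool IsLoading
  {
    get { return (bool)GetValue(IsLoadingProperty); }
    private set { SetValue(IsLoadingPropertyKey, value); }
  }

  private static void OnUriSourceChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
  {
    // Kick off the background decode covered later in the post.
  }
}
```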

The Style and control template I came up with for this design:

<Style TargetType="{x:Type controls:Thumbnail}">
  <Setter Property="Template">
    <Setter.Value>
      <ControlTemplate TargetType="{x:Type controls:Thumbnail}">
        <Image x:Name="ImageThumbnail"
               Source="{TemplateBinding ImageSource}"
               HorizontalAlignment="{TemplateBinding HorizontalAlignment}"
               VerticalAlignment="{TemplateBinding VerticalAlignment}"
               MaxHeight="{TemplateBinding MaxHeight}"
               MaxWidth="{TemplateBinding MaxWidth}"
               Stretch="Uniform"
               StretchDirection="DownOnly" />
      </ControlTemplate>
    </Setter.Value>
  </Setter>
</Style>

Where I want to use the Thumbnail control I can have some XAML that hides and shows a loading indicator. This example here just shows the text "Loading Image..." but would probably have a nice animation or an indeterminate progress indicator.

<Border CornerRadius="4" BorderBrush="Silver" BorderThickness="1"
        Width="150"
        Height="150"
        Margin="10">
  <Grid>
    <local:Thumbnail x:Name="Thumbnail"
                     MaxWidth="148"
                     MaxHeight="148"
                     UriSource="{Binding}" 
                     HorizontalAlignment="Center" VerticalAlignment="Center"
                     ToolTip="{Binding}">
      <local:Thumbnail.Style>
        <Style TargetType="{x:Type local:Thumbnail}">
          <Setter Property="Visibility" Value="Visible"/>
          <Style.Triggers>
            <DataTrigger Binding="{Binding RelativeSource={RelativeSource Self}, Path=IsLoading}" Value="True">
              <Setter Property="Visibility" Value="Collapsed"/>
            </DataTrigger>
          </Style.Triggers>
        </Style>
      </local:Thumbnail.Style>
    </local:Thumbnail>

    <TextBlock Text="Loading image..."
               Foreground="Gray"
               HorizontalAlignment="Center"
               VerticalAlignment="Center">
      <TextBlock.Style>
          <Style TargetType="TextBlock">
              <Setter Property="Visibility" Value="Visible"/>
              <Style.Triggers>
                  <DataTrigger Binding="{Binding ElementName=Thumbnail, Path=IsLoading}" Value="False">
                      <Setter Property="Visibility" Value="Collapsed"/>
                  </DataTrigger>
              </Style.Triggers>    
          </Style>
      </TextBlock.Style>
    </TextBlock>
  </Grid>
</Border>

Well that is the easy bit, the public API. Trust me it gets more interesting.

My gut feeling was basically to take the code from Reflector and use it to decode URIs to ImageSource objects, but perform the action on a background thread. There are two problems here:

  1. Most of the fun code that happens when the URI is parsed and then decoded is internal. :-(
  2. You can't pass most subclasses of ImageSource across threads; if you create them on one thread they can't be accessed on another. This stops us blindly copying code from Reflector, as all that code only ever runs on the dispatcher thread and so never faces this problem.

We do have something to work with however: WriteableBitmap.

In some vain attempt at brevity (he says in his 7th post in the series!), I will try to skip over all of the brick walls I faced and jump straight to the solution I came up with. This was a great learning experience for me, and in the spirit of a learning experience the code is fairly rough (with TODO comments still intact). I will try to build this into a stable control, but in its current state it is very much demo-ware.

The first thing I wanted to achieve was to have the decoding, and if possible the reading from disk, happen off the UI thread. Let's start with the easy bit: reading the file into memory. I'm going to keep that simple and just grab the file as a byte array like this:

byte[] buffer = File.ReadAllBytes(UriSource);

Next I want to decode the byte array into some form of ImageSource. To do this I have used a combination of BitmapDecoder and WriteableBitmap. First I take the byte array and load it into a MemoryStream, and then pass the stream to the BitmapDecoder factory method Create. This returns an instance of one of its implementations (BitmapDecoder is abstract/MustInherit). From here I make a bold assumption that we only care about the first "Frame" of the image. I believe most formats don't support multiple frames, but formats like GIF do, which allows them to have animation features. Anyway, in my demo-ware code I just take the first frame.

using (Stream mem = new MemoryStream(buffer))
{
  BitmapDecoder imgDecoder = BitmapDecoder.Create(mem, BitmapCreateOptions.None, BitmapCacheOption.None);

  BitmapFrame frame = imgDecoder.Frames[0];
  double scale = GetTransormScale(maxWidth, maxHeight, frame.PixelWidth, frame.PixelHeight);

  BitmapSource thumbnail = ScaleBitmap(frame, scale);

  // copying into a WriteableBitmap disconnects the stream from the image completely ...
  var writable = new WriteableBitmap(thumbnail);
  writable.Freeze();
  return writable;
}

From here I figure out the ratio I want to scale it to; there doesn't seem much point in returning an 8MB image if we only want to see it at 300x300, does there? Next we request the image as what is almost our final product: a helper method scales the frame and returns it as a BitmapSource. We can't assign this BitmapSource straight to our ImageSource dependency property, as these types don't play nice over thread boundaries. So the last thing we do is take the BitmapSource we just generated, create a WriteableBitmap from it, and call its Freeze method. This puts the WriteableBitmap into an immutable state, which makes it thread safe. Whew!

Just for reference, here is the GetTransormScale method:

private static double GetTransormScale(double maxWidth, double maxHeight, double currentWidth, double currentHeight)
{
  double xRatio = maxWidth / currentWidth;
  double yRatio = maxHeight / currentHeight;
  // Take the smaller of the two ratios so the scaled image fits within
  // both bounds, and never scale an image up past its original size.
  double resizeRatio = (xRatio < yRatio) ? xRatio : yRatio;
  if (resizeRatio > 1)
    resizeRatio = 1;
  return resizeRatio;
}

and the ScaleBitmap method:

private static BitmapSource ScaleBitmap(BitmapSource source, double scale)
{
  // A scale within a whisker of 1 means no resize is needed.
  if (scale > 0.9999 && scale < 1.0001)
  {
    return source;
  }
  var thumbnail = new TransformedBitmap();
  thumbnail.BeginInit();
  thumbnail.Source = source;
  var transformGroup = new TransformGroup();
  transformGroup.Children.Add(new ScaleTransform(scale, scale));
  thumbnail.Transform = transformGroup;
  thumbnail.EndInit();
  return thumbnail;
}

So all things considered, once you get over the decoding stuff and wrestle with the various subclasses of ImageSource that play their part, it's not too bad. But like I said earlier, it gets more interesting. So far we have only looked at how to decode the image; we have yet to consider how to make the call to perform the decoding. My first instinct was to make the call on any change to the UriSource property. However, I may not have the MaxHeight and MaxWidth values at that point in time. This caused me much stress over when I should call the decode functionality. If no value is ever going to be set for MaxHeight or MaxWidth then I should just process the image, but if first the UriSource is set, then the MaxHeight, then the MaxWidth, I would end up making three asynchronous calls to render three differently sized images. This would be a disaster, as it would surely end in race conditions and most likely the wrong sized image being displayed. The other obvious problem is that we would be performing three times the work. Hmm.

My solution (and your mileage may vary, as I am green to concurrent programming models) was to create a stack of render requests for each instance of a Thumbnail. As a property was set, a request would be added to the stack and a request to start processing the stack would occur. Periodically, while processing an image, I would check whether the current work had been invalidated by any new requests on the stack; if so, it would terminate its work. The code processing the stack would simply pop the most recent request and clear out all the others (effectively ignoring stale requests). On returning from a render request it would loop back and try to pop anything new from the stack. This last part, while important, is very much an implementation detail and could vary dramatically from anything you may implement. I have just read over the code myself and it could do with some work. It is amazing what just three months (and reading Joe Duffy's Concurrent Programming on Windows) does for your appreciation of your own code.
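The shape of that idea, stripped of the implementation detail (all names here are mine, not the demo's), looks something like this:

```csharp
private readonly object _sync = new object();
private RenderRequest _latest;   // hypothetical request type (Uri + max size)
private bool _working;

private void QueueRender(RenderRequest request)
{
    lock (_sync)
    {
        _latest = request;       // a newer request invalidates anything older
        if (_working) return;    // the running worker will pick it up
        _working = true;
    }
    ThreadPool.QueueUserWorkItem(_ => ProcessRequests());
}

private void ProcessRequests()
{
    while (true)
    {
        RenderRequest current;
        lock (_sync)
        {
            current = _latest;
            _latest = null;
            if (current == null) { _working = false; return; }
        }
        // Render should periodically check whether _latest has become
        // non-null and bail out early, as its work is then invalidated.
        Render(current);
    }
}
```

Only the newest request survives; stale work is either skipped before it starts or abandoned part-way through.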

As always, the code is available for you to have a play with. Open it up, pick it to pieces, use what you like. Obviously I take no responsibility for the code if you choose to use it, as this is intended to be a learning exercise. Having said that, I do try to produce good code for my demos, so if you see something that is not good enough, let me know.

This has been a fun series and I hope you liked it and learnt as much as I did from it.

Back to series Table Of Contents

Working version of the code can be found here