Sunday, July 26, 2009

Code inspections

Code inspections (also called code reviews or peer reviews) are a technique a team can adopt to improve the quality of the software they deliver. They involve a bit of a cultural shift for many teams at first, but done well they can produce benefits not just in the software but also in team morale.

Goals

The goals for performing a Code Inspection are the following:

  1. To cross pollinate Domain knowledge
  2. To cross pollinate technical knowledge
  3. To maintain a consistent team process
  4. To identify any obvious errors

A code inspection should be an enjoyable experience where two developers bond and cooperatively improve the quality of the software delivered. The exercise should not feel like a witch-hunt, or a boasting exercise for one developer to flex their ability over another. Initially the exercise may feel somewhat uncomfortable for the team; this is usually due to the many different styles and tolerances each member brings to the collective skill set. In these scenarios it is better to agree amongst the team on the level of detail that is within the remit of the inspections.

Introducing a team to Code Inspections

It is probably best if a team eases into code inspections: it is better to be doing some inspections and forgiving style differences than to be too aggressive in the initial inspections and cause tension that sees the exercise avoided. While this may be difficult to accept at first, remember that increasing the quality of today's code is of less importance than spreading knowledge throughout the team and maintaining a healthy team spirit. To know what to target in initial inspections, hold a brief team meeting and pick a handful of areas the team agrees need work. This way you get the biggest gain without creating too much friction.

Tools

The "what" to look for is less important than the process itself, but the process can be aided quite nicely by automated tools. In the .NET community the tools I would recommend are FxCop (or Visual Studio Code Analysis), Simian (which identifies duplicate code) and NCover (for test coverage).
To ease into code inspections I think it is best to set the bar quite low. For FxCop, just enable the following rule groups:
  • Design
  • Naming
  • Performance
  • Security
  • Usage
While some teams may have their own style that conflicts with some of the FxCop rules, these rules are effectively a community standard now. As developers move between jobs it is great to eliminate this kind of induction friction, with all .NET developers "singing from the same hymn sheet".
Simian lets you set a tolerance for duplication; a threshold of 6 duplicate lines should be fine.
Assuming the team is doing TDD, or at least writing tests, the bar for test coverage could start at something like 80%. While I understand that chasing 100% test coverage can be frowned upon, it is nice to know that code was written for a reason and does what it was intended to do.
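As a rough sketch, these checks might be run from the command line before a review. The executable names, paths and switches below are assumptions that vary between tool versions and installs, so verify them against each tool's documentation:

```shell
# Run FxCop over an assembly and write an XML report
# (switch names may differ between FxCop versions).
FxCopCmd.exe /file:MyProject.dll /out:fxcop-report.xml

# Ask Simian to flag any block of 6 or more duplicated lines
# in the C# source tree (Simian also ships as a .NET executable).
java -jar simian.jar -threshold=6 "src/**/*.cs"

# Run the unit tests under NCover to produce a coverage report,
# then compare the result against the agreed 80% bar.
NCover.Console.exe nunit-console.exe MyProject.Tests.dll
```

Wiring these into a single script or the team build makes it trivial to show the reviewer the results during the inspection.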

Setting expectations

Once the initial team meeting has been completed, a set of expectations should be drawn up. This becomes the benchmark for code inspections. The benchmark should be set at a level where everyone in the team is happy for their code to be "rejected" if it does not meet the expectations. As you can imagine, this in itself will lower the bar quite considerably, as developers know there are circumstances beyond their control that may prevent them meeting the expectations they would like to meet: project deadlines, new technologies and processes can all affect a developer's ability to meet these goals. Remember, the goal of code inspection is not perfect code; it is to improve the team's ability to create better code, and to continually get better at creating it.
An initial code inspection checklist for a C# 3.0 project might look like this:
  1. Exceptions that are explicitly thrown from public or protected members must be documented with XML comments.
  2. FxCop Naming and Performance rules must be met.
  3. Classes should be testable.
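To illustrate the first and third items, a minimal sketch (the service, repository and member names here are hypothetical, not from any particular project):

```csharp
public class CustomerService
{
    private readonly ICustomerRepository repository;

    // Taking the repository as a constructor argument keeps the class
    // testable: a fake repository can be supplied in unit tests.
    public CustomerService(ICustomerRepository repository)
    {
        this.repository = repository;
    }

    /// <summary>Retrieves a customer by their unique identifier.</summary>
    /// <exception cref="ArgumentException">
    /// Thrown when <paramref name="customerId"/> is not positive.
    /// </exception>
    public Customer GetCustomer(int customerId)
    {
        if (customerId <= 0)
            throw new ArgumentException("customerId must be positive.", "customerId");
        return repository.FindById(customerId);
    }
}
```

The `<exception>` element is what FxCop-era tooling and IntelliSense surface to callers, which is why the checklist asks for it on public and protected members.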
While this is a very small list of things to consider, just setting these expectations, combined with the knowledge sharing the team performs, will bring a dramatic improvement to any team not already doing code inspections.
Once the team is comfortable with the process and culture of code inspections, have another meeting to raise the bar a little. Again, refrain from being too eager: all team members should feel comfortable with the expectations being set.

Performing Code Inspections

Before we cover the actual steps of performing a code inspection, quickly remember its goals:
  1. To cross pollinate Domain knowledge
  2. To cross pollinate technical knowledge
  3. To maintain a consistent team process
  4. To identify any obvious errors

The inspection is not to create perfect code.
Steps to perform a code inspection (the first few steps are a courtesy to the reviewer):
  1. First perform a quick review yourself.
  2. Indicate to the person you would like to review your code that you will want some of their time in a few moments. It is important not to always pick the same few people; have a method for cycling through the team (clockwise round the office, youngest to oldest, etc.). Generally you will do most of your reviews with your immediate team, but if you have a larger development team or department that can be involved in the inspection, be sure to include them too.*
  3. Perform any automated analysis (FxCop, Simian, Unit tests etc...)
  4. Open all the necessary documents ready to go, and consider the logical path to show the reviewer through your code.
  5. Some prefer to review code on paper. This is very good for removing the urge to solve any problems in the code on the spot, it allows you to easily make notes against the code, and the review can be done in a quiet place. However, done regularly it also creates a mass of wasted paper and ink.
  6. When both parties are ready, have the author describe the intent of the code to the reviewer. This should start with a ten-second overview of what the set of code does.
  7. Show the reviewer the output of any automated analysis (FxCop, tests, coverage, etc.). Obviously the results should meet the agreed expectations.
  8. If at the computer give the reviewer the mouse and keyboard. If using a print out have the print out in front of the reviewer.
  9. Describe what the code does, ideally from the entry point to each of the leaves. Very good code will basically read like prose anyway, so this should be easy.
  10. The reviewer sets the pace of the review by scrolling or navigating to classes/methods as they understand the code.
  11. The reviewer prompts for "what-if" scenarios and bookmarks any concerns (Ctrl+K, Ctrl+K in Visual Studio). No attempt is made to "fix" any code by either party.
  12. As the reviewer bookmarks any code, they should also note why the bookmark was made (unclear code, not meeting agreed expectations, possible error, etc.). This can be done on paper or, if available, on a code inspection prompt sheet.
  13. The author generally will identify most of their mistakes as they describe the code to the reviewer.
  14. If the code has been flagged in any way that fails to meet the set expectations, or that would probably result in faulty software, the author asks the reviewer to come back later to verify the changes and approve the code.**
  15. Once the review(s) reach a point where no probable faults exist in the code and the team expectations are met, the code review is complete.


*In my experience my project teams have generally been of three to eight people. In these scenarios I would try to have each member of the team review my code over the course of a week. Once I had engaged each team member, I would next engage someone from another team. This allows for knowledge transfer across teams too (like finding out each team is using different logging tools!).

**Having the author suggest that they fix their own code creates far less stress for the reviewer, who otherwise may feel as if they are judging the author's code. Essentially the reviewer is a sounding board and prompt for the author.

Hints and Tips

  • Ensure that the team is clear that both the Reviewer and the Author are there to find any defects. It is a cooperative approach.
  • Focus on finding defects and failed expectations, not fixing them.
  • The biggest benefit of performing code inspections is that they create a culture where the whole team improves technically and domain knowledge is not isolated in pockets of people.
  • Inspect all new and modified code
  • For best results, review often and review early. Not many developers write so much code that four reviews in a day would be necessary, but not many developers write so little code that only one review, or none, would be done in a day. An example of a set of code to review could be a completed MVP screen: review the Model, View, Presenter and tests. I imagine few developers produce more than four tested screens per day.
  • As you get better at code reviews, they should become less and less formal and quicker to execute. Extreme Programming takes peer review to the next level by having two developers work together constantly, a practice called Pair Programming.
  • Some teams keep track of their code reviews and can create statistics to analyse where current weaknesses are. If you see value in this then give it a go, but make sure it doesn't just create "busy work". I believe that if the inspections are done correctly the team will know where an area of need exists and should raise it appropriately.
  • Code reviews should be done as soon as the code is completed. New code should not be started until the current code has been inspected and approved. Keeping things fresh in your head is key to a good review. It also means that "unfinished", un-inspected code doesn't just slip out into production.
  • Some source/version control systems allow you to attribute check-ins. If so, code that has passed inspection can be checked in with an attribute, ideally containing the reviewer's details. This then allows the team to find all code that has not been reviewed (i.e. files whose last check-in was not attributed).
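As one hypothetical way to do this with Subversion, the reviewer could be recorded in a revision property at check-in time. The property name `review:reviewer`, the server URL and the revision number here are illustrative conventions, not anything built into Subversion:

```shell
# After the inspection, record who reviewed the change against the
# committed revision (editing revision properties must be enabled
# on the server via the pre-revprop-change hook).
svn propset review:reviewer "Jane Doe" --revprop -r 1234 http://svn.example.com/repo

# Later, check whether a given revision was reviewed; an empty result
# means that check-in was never attributed with a reviewer.
svn propget review:reviewer --revprop -r 1234 http://svn.example.com/repo
```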
