November 22, 2010

Risk Homing Metrics

Originally published 28 May 2009

Recently, I attended a talk by Neal Ford.  He discussed a pair of metrics you can combine to identify areas for refactoring: cyclomatic complexity and afferent coupling.  He used the ckjm tool to find classes that were both complex and used by lots of other classes.  His recommendation: start refactoring those.

I immediately thought of Crap4j, another tool that combines metrics to identify the riskiest areas of a code base to maintain.  Crap4j implements the CRAP metric, which combines cyclomatic complexity and test coverage at the method level.  If a method is both complex and not well tested, it's risky to change.
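
For reference, the formula Crap4j uses (as published by Alberto Savoia and Bob Martin, and as I recall it) can be sketched in a few lines of Java, with coverage expressed as a fraction:

    // The CRAP score as I recall it from Savoia and Martin's write-up:
    //   CRAP(m) = comp(m)^2 * (1 - cov(m))^3 + comp(m)
    // comp = cyclomatic complexity, cov = test coverage as a fraction (0.0 to 1.0).
    static double crap(int complexity, double coverage) {
        return complexity * complexity * Math.pow(1.0 - coverage, 3) + complexity;
    }

A fully covered method scores its bare complexity (crap(10, 1.0) == 10), while the same method with zero coverage scores 110, which is why complex, untested code floats to the top.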

This all led me to what I think is the ultimate set of metrics to combine to home in on the riskiest areas of a code base:
  1. Code coverage
  2. Cyclomatic complexity
  3. Code execution frequency in the real world
Complex code, executed very often, with low test coverage.

For practical purposes, I like sticking with the granularity of a method.  I can use tools like Cobertura to find the test coverage and JavaNCSS to find the cyclomatic complexity.  (Isn't cyclomatic complexity best applied to the method level anyway?)
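
To make that concrete, here's a rough sketch of pulling per-method numbers out of the two XML reports.  I'm assuming Cobertura's coverage.xml puts a line-rate attribute on each method element nested inside a class element, and that JavaNCSS's XML function report has function elements with name and ccn children; both formats are from memory, so verify them against what your versions actually emit:

    import java.io.File;
    import java.util.HashMap;
    import java.util.Map;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    import org.w3c.dom.NodeList;

    public class MethodMetrics {

        // Cobertura (assumed): <class name="..."><methods><method name="..." line-rate="0.75"/>
        static Map<String, Double> coverageByMethod(File coberturaXml) throws Exception {
            Map<String, Double> cov = new HashMap<String, Double>();
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder().parse(coberturaXml);
            NodeList classes = doc.getElementsByTagName("class");
            for (int i = 0; i < classes.getLength(); i++) {
                Element cls = (Element) classes.item(i);
                NodeList methods = cls.getElementsByTagName("method");
                for (int j = 0; j < methods.getLength(); j++) {
                    Element m = (Element) methods.item(j);
                    cov.put(cls.getAttribute("name") + "." + m.getAttribute("name"),
                            Double.parseDouble(m.getAttribute("line-rate")));
                }
            }
            return cov;
        }

        // JavaNCSS (assumed): <function><name>pkg.Class.method(...)</name><ccn>7</ccn></function>
        static Map<String, Integer> complexityByMethod(File javancssXml) throws Exception {
            Map<String, Integer> ccn = new HashMap<String, Integer>();
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder().parse(javancssXml);
            NodeList functions = doc.getElementsByTagName("function");
            for (int i = 0; i < functions.getLength(); i++) {
                Element f = (Element) functions.item(i);
                String name = f.getElementsByTagName("name").item(0).getTextContent();
                int c = Integer.parseInt(
                        f.getElementsByTagName("ccn").item(0).getTextContent());
                ccn.put(name, c);
            }
            return ccn;
        }
    }

The fiddly part is normalizing the method names so the two reports key on the same string: Cobertura reports bytecode-style names while JavaNCSS reports source signatures.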

That just leaves the which-methods-execute-the-most-in-production problem.  This is hard: I can measure the other two as part of a continuous build, but I can't identify the hot methods until the code reaches production and I measure real usage.  So the static and dynamic metrics will always be somewhat out of sync, even if I estimated usage from continuous functional and higher-level test runs.

So what can I do?  I want this to run as part of a continuous build, so I get feedback as soon as possible that a method is getting a little risky (or, with a legacy code base, already is).  For practicality, I'll fall back on afferent coupling.  Afferent coupling is typically measured at the package level; the finest granularity I'm aware of with current tools is the class level, with ckjm.  That's a good starting point for identifying highly used code.
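
ckjm writes plain text, one class per line: the class name followed by its metric values (WMC, DIT, NOC, CBO, RFC, LCOM, Ca, NPM, in the order I remember; check your version's docs).  A sketch of pulling out Ca:

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.Reader;
    import java.util.HashMap;
    import java.util.Map;

    public class CkjmParser {
        // Assumed output format: "class-name WMC DIT NOC CBO RFC LCOM Ca NPM",
        // which puts Ca (afferent coupling) at column index 7.
        static Map<String, Integer> afferentCouplingByClass(Reader ckjmOutput)
                throws IOException {
            Map<String, Integer> ca = new HashMap<String, Integer>();
            BufferedReader in = new BufferedReader(ckjmOutput);
            String line;
            while ((line = in.readLine()) != null) {
                String[] cols = line.trim().split("\\s+");
                if (cols.length < 9) continue;  // skip headers or malformed lines
                ca.put(cols[0], Integer.parseInt(cols[7]));
            }
            return ca;
        }
    }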

So here's my plan.  Use the CRAP metric to find risky methods, then factor in the afferent coupling of those methods' classes to produce a prioritized list of methods to clean up.  I'll see how this goes and consider factoring in method execution frequency from higher-level test runs later.
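
As a first cut, the combination could be as simple as multiplying each method's CRAP score by its class's afferent coupling and sorting descending.  The multiplication is an arbitrary weighting I'm choosing for illustration, not anything published:

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.Comparator;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class RiskReport {
        // Hypothetical weighting: a method's CRAP score times its class's Ca.
        static List<String> prioritize(Map<String, Double> crapByMethod,
                                       Map<String, Integer> caByClass) {
            final Map<String, Double> risk = new HashMap<String, Double>();
            for (Map.Entry<String, Double> e : crapByMethod.entrySet()) {
                String method = e.getKey();
                String cls = method.substring(0, method.lastIndexOf('.'));
                Integer ca = caByClass.get(cls);
                risk.put(method, e.getValue() * (ca == null ? 1 : ca));
            }
            List<String> methods = new ArrayList<String>(risk.keySet());
            Collections.sort(methods, new Comparator<String>() {
                public int compare(String a, String b) {
                    return Double.compare(risk.get(b), risk.get(a));  // descending
                }
            });
            return methods;
        }
    }

Defaulting a missing Ca to 1 keeps methods whose classes ckjm didn't see from disappearing from the list entirely.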
