Some notes about the “An Appropriate Use of Metrics” article


As with the ‘Twenty Top Fails in Executive Agile Leadership’ article, I propose here some extracts from a great article written by Patrick Kua: An Appropriate Use of Metrics.

I) What’s wrong with how we use metrics?

Organizations looking at metrics from a management by numbers perspective follow a process that looks like this:

  1. Management come up with a goal and work out a measure
  2. Management establish a target over a large period (3-6 months up to a year) for the people doing the work
  3. Management communicate only the target (in terms of the agreed metric)
  4. People doing the work do everything in their power to meet the target number

This process encourages overloading a metric. But overloading a single metric with multiple purposes causes many problems, particularly when dealing with knowledge work such as software. Metrics are simplifications of much more complex attributes. Simplifying complexity comes at the cost of losing sight of the real end goal, and ends in a suboptimal result.

Let’s look at an example:

A test manager, let’s call her Mary, holds a weekly meeting with the development lead, Dan. "Where are we at with our bug counts?" she asked at their most recent one. Dan answered, "We cleared our three priority one bugs, fixed four priority two bugs and cleared out a record twelve priority three bugs. A pretty good week, right?"

Looking at the development lead, slightly shaking her head, Mary responded, "Unfortunately our customer reported five priority one bugs, six priority two bugs and fifteen priority three bugs. You’ll need to work harder next week." Exasperated and feeling overwhelmed at missing his target, Dan left the meeting thinking about asking his team to work yet another weekend.

=> The question posed by Mary neglects the broader goal, and she fails to ask the crucial question that would guide Dan and his team towards fixing the underlying reason the bugs exist. Without resolving this root cause, Dan and his team are destined to fix bugs for life.
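To make the problem concrete, here is a minimal sketch in Python, using the numbers from the story above, that contrasts the "bugs fixed" count Mary tracks with the net change in open bugs, which is what the end goal actually cares about:

```python
# Weekly bug flow from the story: bugs fixed by the team vs. bugs reported
# by the customer, per priority (P1, P2, P3).
fixed = {"P1": 3, "P2": 4, "P3": 12}
reported = {"P1": 5, "P2": 6, "P3": 15}

total_fixed = sum(fixed.values())
net_change = {p: reported[p] - fixed[p] for p in fixed}

print(f"Bugs fixed this week (the target Mary tracks): {total_fixed}")
print(f"Net change in open bugs (what the goal cares about): {net_change}")
# The team fixed 19 bugs, yet the open-bug count still grew by 2 P1, 2 P2
# and 3 P3 bugs. Working harder on the same metric cannot close that gap;
# only addressing the root cause of the bugs can.
```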

Dan is experiencing single loop learning [1]. Single loop learning is the repeated attempt at the same problem, with no variation of method and without ever questioning the goal. The inappropriate use of metrics leads Dan away from the end goal of delivering useful software and improving overall software quality.

II) Be careful what you measure

Organizations love metrics because they make setting targets easier and discourage people from questioning the goal behind the target. This lulls managers into a false sense of organizational efficiency.

Organizations must be wary of this actively destructive focus that leads people to neglect other important factors.

Management and product management often ask the question:

“How soon will that feature be complete?”

Teams often choose to interpret this as when coding finishes, succumbing to the idea that testing and the path to production are trivial and inconsequential parts of the software process.

Project management reinforces this perception by asking the question:

“How many stories did we finish coding this week?”

instead of the better question:

“How many stories are we happy to release to end users?” or better yet, “How many stories did we release to end users?”

An even better question is, “How much value have our users found from our recent releases?”

Let’s look at the consequences of focusing overly on this suboptimal goal alone:

Malcolm, a marketing representative, took such a keen interest in what the developers built for him that he dropped by the team as often as he could. He often talked to Dan, the developer, asking when his features would be complete. Dan, not wanting to disappoint Malcolm, worked hard to finish whatever Malcolm asked for, knowing Malcolm wouldn't be long in returning to ask about progress. He'd often think to himself, "This feature must be really important." Tim, the team's newest tester, often needed to approach a developer, like Dan, to understand how to trigger the newly developed features.

Tim approaches Dan one day, "Hi Dan! I really need your help to understand how to test this feature you completed last week." Dan, under pressure to deliver, snaps, "Can't you do anything by yourself? I need to get this feature complete so Malcolm gets off my back." Shocked at Dan's response, Tim returns to his desk and waits. He thinks to himself, "I can't get anything done until Dan helps me out."

Each week this happens, and over time the stack of stories waiting to be tested grows and grows. Eventually Malcolm calls a meeting with the team, concerned that he has yet to see in production the feature he asked for two months ago. Surprised, Dan says he completed it over a month ago. Tim bashfully responds, "I couldn't test that story because I needed some help from Dan and he's been so busy with other work. I didn't want to interrupt him."

What can we learn from this story?

  • What Malcolm really wants is to be able to use the feature in production
  • But Tim didn’t have the knowledge necessary to complete his work
  • The end result was a vicious cycle of work building up in testing, never getting released, with Malcolm puzzled why he hadn’t received the feature he’d asked for
  • Kanban software development encourages explicit Work in Progress (WIP) limits, which force people to help each other out when bottlenecks appear (as the sketch after this list illustrates). These WIP limits work to overcome the undesirable behaviors that emerge when people are measured by the wrong metric of their individual productivity instead of overall value delivered.
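As a rough illustration of how an explicit WIP limit changes behavior, here is a minimal sketch of a kanban board that refuses to accept new work into a column once its limit is reached. The column names and limits are illustrative assumptions, not taken from the original article:

```python
# Minimal sketch of explicit WIP limits on a kanban board.
class KanbanBoard:
    def __init__(self, limits):
        self.limits = limits                              # e.g. {"testing": 3}
        self.columns = {name: [] for name in limits}

    def pull(self, column, story):
        """Pull a story into a column only if its WIP limit allows it."""
        if len(self.columns[column]) >= self.limits[column]:
            # The limit is the signal: instead of starting new work,
            # Dan should help Tim clear the testing bottleneck.
            raise RuntimeError(
                f"WIP limit reached in '{column}' -- help finish existing work"
            )
        self.columns[column].append(story)


board = KanbanBoard({"in_progress": 2, "testing": 3})
board.pull("testing", "story-1")
board.pull("testing", "story-2")
board.pull("testing", "story-3")
# board.pull("testing", "story-4")  # would raise: the team must swarm on testing
```

The point of the sketch is that the limit makes the bottleneck visible and stops individually "productive" work from piling up unreleased.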

The main point is to measure the end-to-end result instead of simply a small part of the process, referring to the principle called ‘Optimize the Whole’.

III) Guidelines for a more appropriate use of metrics

  1. Explicitly link metrics to goals
  2. Favor tracking trends over absolute numbers
  3. Use shorter tracking periods
  4. Change metrics when they stop driving change

1. Explicitly link metrics to goals

In the traditional style:

  • Management decides what the best measure for a particular goal is
  • Management then sets a target in terms of that measure
  • Management then articulates only this target to the people doing the work, often in its numerical representation.

The lines between the measure chosen to monitor progress towards the goal and the actual goal itself blur. Over time, the reason behind the measure is lost and people focus on meeting the target even if that metric is no longer relevant.

Example of metrics in a software development context:

Methods must be less than 15 lines.
You must not have more than 4 parameters to a method.
Method cyclomatic complexity must not exceed 20.

=> With an appropriate use of metrics, every single measure should clearly be linked to its original purpose. The current mechanism for tracking and monitoring must be decoupled from its goal and that goal made explicit to help people better understand the metric’s intent.

The same example with additional info:

We would like our code to be less complex and easier to change.
Therefore we should aim to write short methods (less than 15 lines) with a low cyclomatic complexity (less than 20 is good).
We should also aim to have a small handful of parameters (up to four) so that methods remain as focused as possible.

=> Explicitly linking the metrics to the goal allows people to better challenge their relevance, to find other ways of satisfying the need, and to understand the intent behind the numbers.
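As an illustration of how such goal-linked rules might be monitored automatically, here is a minimal sketch using Python's standard ast module. The thresholds mirror the example above; the complexity figure is a rough approximation based on counting branching nodes rather than a full cyclomatic-complexity implementation, and the module name being inspected is hypothetical:

```python
import ast

# Thresholds taken from the example above.
MAX_LINES, MAX_PARAMS, MAX_COMPLEXITY = 15, 4, 20

# Branching constructs counted as decision points (a rough cyclomatic proxy).
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp, ast.IfExp)

def check_function(func):
    """Return warnings that link each metric back to its goal: code that is easy to change."""
    warnings = []
    length = (func.end_lineno or func.lineno) - func.lineno + 1
    if length > MAX_LINES:
        warnings.append(f"{func.name}: {length} lines -- long methods are harder to change")
    if len(func.args.args) > MAX_PARAMS:
        warnings.append(f"{func.name}: {len(func.args.args)} parameters -- keep methods focused")
    complexity = 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(func))
    if complexity > MAX_COMPLEXITY:
        warnings.append(f"{func.name}: complexity ~{complexity} -- simplify the control flow")
    return warnings

with open("example_module.py") as f:          # hypothetical module to inspect
    tree = ast.parse(f.read())

for node in ast.walk(tree):
    if isinstance(node, ast.FunctionDef):
        for warning in check_function(node):
            print(warning)
```

Note how each warning message carries the goal ("easier to change", "focused methods") alongside the number, so the metric never stands alone.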

The nature of software development means most work is knowledge work, and is therefore hard to observe. It is easy to monitor activity (how much time people sit at their computers), yet it is hard to observe the value they produce (useful software that meets a real need).

=> A shift towards a more appropriate use of metrics means management cannot come up with measures in isolation. Instead management is responsible for ensuring the end goal is always kept in sight, working with the people with the most knowledge of the system to come up with measures that make the most sense to monitor for progress.

2. Favor tracking trends over absolute numbers

Looking at trends provides more interesting information than whether or not a target is met. The difficult work, which management must do together with people who have the skills to complete it, is looking at trends to see whether they are moving in the desired direction and at a fast enough rate. Trends provide leading indicators into the performance that emerges from organizational complexity. It is clearly pointless to focus on the gap to a number when the trend moves further and further away from the desired state.

Focusing on trends is important because it provides feedback based on real data on any change implemented and creates more options for organizations to react. For instance, if the team is trending away from a desired state, they can ask themselves what is causing them to move away from their goal and what can they do about it.

Trends help focus people’s efforts on making movement in the right direction rather than being paralyzed between a gap that looks impossible to resolve.

=> An appropriate use of metrics finds trends much more useful than absolute numbers. Arbitrary targets don’t have much meaning without the right trend, and better questions emerge when thinking about what affects a trend and what else can be done to influence it, rather than pointing at the gap between an arbitrary number and reality.
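To show what "watch the trend, not the gap" can look like in practice, here is a minimal sketch with made-up weekly figures (not from the article) that reports the direction of a metric rather than only its distance from an arbitrary target:

```python
# Minimal sketch: compare a trend against an absolute target.
weekly_open_bugs = [42, 40, 37, 35, 31, 28]   # open bugs at the end of each week (illustrative)
target = 10                                   # an arbitrary absolute target

gap = weekly_open_bugs[-1] - target
# A simple trend: average week-on-week change over the tracking window.
deltas = [b - a for a, b in zip(weekly_open_bugs, weekly_open_bugs[1:])]
trend = sum(deltas) / len(deltas)

print(f"Gap to target: {gap} bugs above target")   # looks discouraging on its own
print(f"Trend: {trend:+.1f} bugs per week")        # shows we are moving the right way
# The gap alone (18 bugs over target) invites blame; the trend (-2.8 bugs/week)
# shows the recent change is working and invites the better question:
# what else can we do to keep, or improve, that rate?
```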

3. Use shorter tracking periods

A consequence of revisiting metrics after long periods is that the failure to meet management’s arbitrary target becomes more and more unacceptable. I’ve heard managers say things like:

“You had a whole year to meet your target and you missed it.”

The risk and cost of failure increases the longer the tracking period is.

=> Agile methods prefer shorter periods for review because any performance gap is less costly. Organizations benefit from shorter tracking periods as they create more opportunities for re-planning to maximize value.

I worked with a team that released software into production every two weeks. The business liked regular releases because they could use the software almost immediately. On using the software deployed after the latest release, the business discovered they had enough features to do almost everything they needed for a new marketing initiative, even though it was only a fraction of what they had originally asked for.

Instead of the development team writing features that would probably never be used, the business picked a small subset of the leftover stories and started work on the next initiative.

=> An appropriate use of metrics tracks progress in smaller cycles because it gives much more information about where a project may end up further in the future. Tracking smaller periods helps identify trends and the pause gives organizations a more informed position to influence the environment and the rate/direction of a trend.

Tracking smaller periods also enables more collaboration because it provides more opportunity for management to be involved. Rather than simply evaluating people at the end of a larger period, tracking smaller periods provides more data about what is actually happening that influences the trends.

4. Change metrics when they stop driving change

The first guideline to an appropriate use of metrics separates the real goal from the measure selected to monitor progress towards that goal. The real goal must always be made explicit.

Guidelines #2 and #3, monitoring trends and doing so over shorter periods, are about helping organizations realize their goal faster. This isn’t achieved through the single-loop learning described earlier. What organizations require is the double-loop learning Argyris writes about. An appropriate use of metrics drives people to question the goal and, based on collecting real data, to implement change to get there.

Here’s what double loop learning looks like:

Frustrated by fixing bugs every week, Dan the developer considers why he is constantly fixing bugs. Over the last three weeks, Malcolm has reported many issues about things not working as he expected. Dan steps back to think about what is really going on, less concerned about the bug count he is always asked about and more about why the bugs exist to begin with.

When Dan picks up a story, he often has lots of questions for Malcolm about how it should work. Dan knows Malcolm has other marketing activities keeping him busy and understands Malcolm cannot sit with him to answer his questions. Dan is under enormous pressure to deliver something, so he makes several assumptions to ensure he can deliver something instead of nothing.

Looking at the bugs, Dan realizes that many of the bugs reported are based on those small assumptions he keeps making. The pressures to deliver something mean that Dan never builds the right thing the first time around.

When Dan explains this to Malcolm, they agree to sit down at the start of each new story to make sure all of Dan’s questions are answered before he starts coding. They try this the next week and the overall number of bugs reported that week decreases.

=> Double loop learning requires more data about what is actually going on. Shorter periods create more data points, making it easier to see any trends.

Changing the system that people work in often has a much greater impact than focusing on the individual’s efforts to work harder or faster. In our story, Dan could have spent more time each week trying to fix bugs, but by adjusting the flow of information and the working relationship between Malcolm and Dan, they changed the system to be much more effective.

Conducting post-mortems at the end of the project offers no chance to actually apply these learnings to the project itself. Agile Retrospectives differ in their intent by seeking change while a project is in flight, where actions have more impact than they would at the end.

When an organization reaches its goals, it’s time to retire the metrics used to achieve them. Organizations need to drop metrics that are no longer relevant, instead of holding on to all the metrics they are used to collecting. With an appropriate use of metrics, understanding which metrics to retire is easy, because those metrics have been explicitly linked to the goal, and the constant monitoring of trends over shorter periods encourages a continuous review of the state of the end goal.

Conclusion

With the appropriate use of metrics, organizations link each measure back to a well-articulated goal that everyone understands. The measure chosen to monitor progress must be decoupled from the goal, and challenging each metric’s relevance welcomed as time passes.

Organizations using metrics more appropriately understand the value in watching the trends, monitoring in smaller periods in order to understand individual, management and organizational influences.

————————————–

[1]: Chris Argyris & Donald A. Schön describe the concepts of single-loop and double-loop learning in their book Organizational Learning: A Theory of Action Perspective.
