The Quest for Mostly Meaningless Metrics

You’d have to be pretty cynical to suggest that attempting to measure success or failure in an enterprise is a useless endeavor. I’m not nearly that cynical. I do, however, think we measure the wrong things, especially in areas where the metrics aren’t cut and dried. This is true for the more airy interpretations of Enterprise Architecture, as well as for many other nebulously defined parts of the enterprise (portals come to mind).

Despite the fact that the things we measure are often not related to the impact of our efforts on the business, this hasn’t stopped the relentless quest for metrics to feed legions of ravenous dashboard apps and Excel spreadsheets. In fact, the metrics seem to have slowly become an end unto themselves. The raison d’être of at least three people I know personally is to gather and collate metrics and generate dashboards for executive consumption. Financial decisions that impact hundreds or thousands of people rest on colorful, glitzy interpretations of (often rather dubious) measurements.

I’ll give you one example. A former client of mine maintained that the way to measure the success or failure of their development teams was to count the number of projects that used an in-house SDLC. These numbers were duly collected, rolled up into pie charts, and presented in PowerPoint each month. The PMO claimed success, progress, and improvement whenever the numbers increased. This demonstrated, after all, that by centralizing and promoting a common lifecycle methodology, they were reducing variance and increasing quality.
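To see how little that number actually says, here is a minimal sketch (in Python, with entirely hypothetical project data and field names) of what the monthly pie chart boils down to. Notice that nothing in it touches cost, quality, or revenue; it only counts compliance.

    # Hypothetical illustration: the "projects-using-approved-SDLC" metric
    # reduces to a ratio of counts. None of this data is real.
    projects = [
        {"name": "Billing Rewrite", "uses_approved_sdlc": True},
        {"name": "Customer Portal", "uses_approved_sdlc": True},
        {"name": "Data Warehouse",  "uses_approved_sdlc": False},
        {"name": "Mobile App",      "uses_approved_sdlc": True},
    ]

    adopters = sum(1 for p in projects if p["uses_approved_sdlc"])
    adoption_rate = adopters / len(projects)

    # This line is the entire substance behind the monthly pie chart:
    print(f"SDLC adoption: {adopters}/{len(projects)} projects ({adoption_rate:.0%})")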

Unfortunately, as my team pointed out, there were no metrics defining how much money was being saved by reducing variance and increasing quality. In other words, the metric of “projects-using-approved-SDLC” became an end unto itself, disconnected from any real dollar benefit or other value being realized by the company. The more relevant questions that this company soon began to ask were: How much was variance reduced by using the SDLC? How much did that save per year? Where was quality improving? How did that promote profitability? Was it all about money saved, costs avoided, or revenue increased? Once these questions were asked and measurements taken to answer them, it became apparent that the overhead of running their custom-developed SDLC (including training, enforcement, checkpoints, and governance) was greater than the measured dollar benefit of the program.
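The arithmetic behind that conclusion is worth spelling out. A back-of-the-envelope comparison, with every dollar figure invented purely for illustration, might look like this:

    # Hypothetical cost/benefit comparison for the SDLC program.
    # All figures are invented for illustration only.
    overhead = {
        "training":    150_000,
        "enforcement":  90_000,
        "checkpoints":  60_000,
        "governance":  120_000,
    }

    # Measured annual benefit attributed to reduced variance and higher quality
    measured_benefit = 280_000

    total_overhead = sum(overhead.values())
    net_value = measured_benefit - total_overhead

    print(f"Program overhead: ${total_overhead:,}")    # prints $420,000
    print(f"Measured benefit: ${measured_benefit:,}")  # prints $280,000
    print(f"Net value:        ${net_value:,}")         # prints $-140,000: the program costs more than it saves

Until the company asked that second set of questions, only the adoption ratio ever made it onto a dashboard; the net-value line simply didn’t exist.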

Why are we measuring? Are we measuring to improve our process or for the sake of meeting some dashboard reporting deadline? Are the things we’re measuring leading indicators for dollar-denominated value delivered to the customer? If not, are we measuring the right things?

Interpretations of the data, on the other hand… well, that’s a different topic altogether.