Methods Are Not Metrics
I spent the past couple of days in evaluator meetings where the primary topic was the development of a set of common metrics. To be honest, I was surprised at the number of evaluators who weren’t sure how to write a metric. Below, I outline three mistakes I saw being made repeatedly. And, since every metric needs a backstory, I’ll set the stage with a fictional example.
For the purposes of this illustration, I’ll de-identify the immense government organization that funds the consortium I work for (since the fiscal cliff may do that anyway) and re-identify it as a group of after-school care programs with individual management but a common funder. Let’s say there are 25 of these programs spread throughout a region. Each program is a different size, serves a different population, and offers a mix of sub-programs; some are the same across the centers and some are different.
For the last five years, the funder has required the programs to submit counts of attendance (output), graduation rates of program participants (outcome), and a short report that generally describes what they’ve been up to (interesting over coffee). Aside from these yearly updates, each program is responsible for measuring its own progress.
Now, however, the funder is asking for more. They would like to see stronger evidence of progress, linked across all of the programs. In addition, the programs themselves have realized the power, both statistical and political, of a common dataset. There is a demand for common metrics: metrics that are measurable, transferable, and (surprise) actionable.
Using the above scenario as an example, I’ve listed the three most common mistakes made when composing metrics.
Operationalize your definitions
Example: participant success
Issue: “participant” could mean students, parents, or teachers; “success” could mean completing the program, graduating from high school, improving behavior…
Improved metric: # of student participants who graduate from high school in four years
Comment: see the denominator issue discussed below; what will you be comparing this number to?
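To show what an operationalized definition buys you, here is a minimal sketch of how the improved metric could be computed; the participant records, field names, and dates below are invented for illustration:

```python
from datetime import date

# Hypothetical participant records; the roster, field names, and dates
# are invented for illustration.
participants = [
    {"role": "student", "enrolled": date(2008, 9, 1), "graduated": date(2012, 6, 15)},
    {"role": "student", "enrolled": date(2008, 9, 1), "graduated": None},  # did not graduate
    {"role": "parent",  "enrolled": date(2008, 9, 1), "graduated": None},  # not a student
]

FOUR_YEARS_IN_DAYS = 4 * 365 + 1  # rough allowance for one leap day

# Operationalized metric: # of student participants who graduate
# from high school within four years of enrolling.
students = [p for p in participants if p["role"] == "student"]
on_time_grads = [
    p for p in students
    if p["graduated"] is not None
    and (p["graduated"] - p["enrolled"]).days <= FOUR_YEARS_IN_DAYS
]

print(f"{len(on_time_grads)} of {len(students)} student participants graduated within four years")
```

The code is trivial, but writing it forces every ambiguity in “participant success” to be resolved: who counts as a participant, what counts as success, and over what window.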
Methods are not metrics
Example: a focus group
Issue: a focus group is a method, not a metric
Improved metric: parents’ satisfaction with the program’s curricular offerings, as measured through focus groups
Comment: Technically, the method does not need to be part of the metric at all, but it adds a detail that shows how measurable the metric will be.
What is your denominator?
Example: number of participants
Issue: a comparison is needed; a raw count of participants tells funders little about a program’s efficiency or effectiveness.
Improved metrics: number of participants per dollar spent on faculty, number of participants compared to last year’s total
Comment: metrics need to be based on an organization’s goals – when thinking of an appropriate denominator, consider the desired outcomes and impact of the program.
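To make the denominator point concrete, here is a short sketch, with invented numbers for a single hypothetical program, showing how the same raw count tells different stories against different denominators:

```python
# Invented numbers for one hypothetical program.
participants_this_year = 180
participants_last_year = 150
faculty_spending = 45_000.00  # dollars spent on faculty this year

# The same raw count, read against two different denominators:
per_dollar = participants_this_year / faculty_spending
year_over_year = participants_this_year / participants_last_year

print(f"raw count: {participants_this_year} participants")
print(f"per dollar of faculty spending: {per_dollar:.4f} participants/$")
print(f"change vs. last year: {year_over_year - 1:+.0%}")
```

Either denominator turns a bare count into a comparison a funder can act on; which one you choose should follow from the program’s goals.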
These are not the only issues that interfere with writing an actionable metric. Although this post gives some tips on what to avoid, it does not touch on validity or reliability, and it does not tell you what a metric should be. Two suggestions for getting started: Google the “SMART” criteria, or read Michael J. Mauboussin’s excellent article in the October 2012 issue of Harvard Business Review, “The True Measures of Success,” which encourages organizations to make their metrics both persistent and predictive.