
Seeds for Thought: Misleading Numbers

Earlier this year, I wrote a post on simply counting as an easy way to start evaluating a program or initiative. Although counting can provide some good insights, numbers can easily mislead depending on how they're collected or when they're viewed in isolation from the broader context, as this week's two seeds for thought (both from the Harvard Business Review blog) demonstrate.

First up, Peter Kriss provided the example of a hotel chain that revamped its guest feedback website to make it easier to access on mobile devices. Contrary to expectations, overall satisfaction with the hotel took a nosedive in response, which on the surface didn't make much sense: why would a better survey make people dislike their stay? The answer was that making the survey more accessible brought in more responses from less-engaged guests, and since their impressions tended toward the midpoint (i.e. neither great nor poor), adding these data points dragged the average down. The lesson here? Whenever you change how participants interact with or give feedback to your program or organization, be prepared for unexpected results!
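To make the arithmetic concrete, here's a minimal sketch with invented scores on a hypothetical 1-5 satisfaction scale (the article doesn't give actual numbers). It shows how adding midpoint responses from newly reached guests lowers the overall average even though no individual guest's experience got any worse:

```python
# Hypothetical scores on an assumed 1-5 satisfaction scale.
engaged_scores = [5, 5, 4, 5, 4]   # guests motivated enough to find the old survey
new_scores = [3, 3, 3, 2, 3, 3]    # less-engaged guests reached by the mobile survey

before = sum(engaged_scores) / len(engaged_scores)
all_scores = engaged_scores + new_scores
after = sum(all_scores) / len(all_scores)

print(f"Average before: {before:.2f}")  # 4.60
print(f"Average after:  {after:.2f}")   # 3.64 -- the "nosedive", purely from
                                        # a change in who responds, not in quality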

A time-honoured method of assessing impact involves taking a baseline measurement of participants before an intervention ("pre"), followed by a similar (if not identical) measurement afterwards ("post"): if the scores change, you can claim with some degree of confidence that the program had an effect. Now I could probably write several posts about the pitfalls of pre-post measurement, but Ron Shaich's article on growth illuminated one that I probably would have missed. In the cutthroat restaurant industry, Shaich discovered, you can't assume zero growth as a baseline: because of the strong competition, a restaurant that does nothing will naturally lose customers.

Adapting this example to the non-profit world, imagine a service or program that aims to improve participants' lives in some way (e.g. physical health, food security, access to housing) with a corresponding pre-post measurement. If the program is working in a community or with a population facing multiple challenges, the "norm" might be decline rather than stability: in the absence of the service, things may well get worse rather than stay the same. The good news in this scenario is that a lack of pre-post change might not be a failure after all; the flip side is that program planners may need to set their sights higher to demonstrate a positive change.
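A toy illustration with invented numbers (not a real evaluation design) shows how the same pre-post result reads very differently once you account for a declining baseline:

```python
# All values are hypothetical, for illustration only.
pre_score = 50        # participants' average score before the program
post_score = 50       # average score after -- "no change" on its face
expected_decline = 5  # assumed drop that would have occurred without the program

naive_effect = post_score - pre_score          # 0: looks like failure
counterfactual = pre_score - expected_decline  # 45: where participants would
                                               # likely be without the program
adjusted_effect = post_score - counterfactual  # 5: holding steady was a win

print(f"Naive pre-post change: {naive_effect}")
print(f"Change vs. declining baseline: {adjusted_effect}")
```

In practice, that expected decline would come from a comparison group or trend data rather than an assumption, but the point stands: the right baseline isn't always zero.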

The general takeaway from these examples is that you shouldn't blindly trust the numbers or read too much into what they could mean: instead, take some time to look at other factors that could explain what you're seeing. Got examples of your own where the numbers were confusing or misleading? Share them below!