How do you kill a nonprofit? According to Mark Hager and Elizabeth Searing over at Nonprofit Quarterly, there are at least ten pitfalls to avoid, including accumulation of debt, trashing your reputation, and the perennial favourite of mission drift. Given my work in evaluation, the one that really interests me is saved for last, labelled as “Think that ‘good’ is good enough”. There are plenty of good causes out there: if you don’t measure and demonstrate your organization’s impact, the authors argue, donors will support other causes that can show the difference they’re making with their dollars. (There are other good reasons for evaluating, such as program improvement and development, but let’s roll with this justification for now.)
Most people in the non-profit/for-impact sphere understand the importance of showing impact, but how do we go about doing so? In a nutshell, there’s no one way. Let me repeat that – there is no silver bullet, no one statistic that will make donors and funders sit up and shower us with legitimacy, favour, and funds.
Don’t believe me? Let’s take an example from the tech world, home to plenty of numbers and statistics. Ev Williams, CEO of blog platform Medium, noted that commonly used metrics for social media networks, such as number of active users, unique visitors, pages viewed, or time spent on a site, are all imperfect. Even though Medium uses time spent as its prime statistic, Williams notes that time “[is] not actually measuring value. It’s measuring cost as a proxy for value.” Any single measurement can be misused and provide the illusion of success. Quoting Jonah Peretti, Williams further asserts that there is no “God metric”.
If a sector as replete with data as social media does not have a God metric, how can we expect our non-profit world, filled with messiness and uncertainty, to find one?
Donors and funders, unfortunately, have grabbed onto the idea of “efficiency”, defined simply as the proportion of funds spent directly on programs compared to other costs such as overhead and fundraising. Certainly, a very wasteful organization won’t make a large impact; conversely, one that puts all its funds towards programs and none towards the infrastructure that any organization needs (a roof over its head, staff salaries, planning, and IT, for starters) will quickly run out of steam. Others have written much more eloquently about the “overhead myth”: to begin, check out this article by Dan Pallotta from almost five years ago!
Internally, many nonprofits grab onto basic outcome measurements: number of participants (new and returning), number of sessions held, percentage of participants who complete the program. These statistics are a great first step to make sure that the fundamentals of any change effort are in place, but they’re just that: a first step. How do we know that a program is making an impact and that people aren’t just showing up for the free donuts? As an example, a low number of repeat participants could demonstrate many things: lack of engagement with the program, a highly transient population, or an extremely effective initiative where one visit is enough to make a lasting difference. We can’t look at that one number and make a meaningful assessment.
So what’s the solution?
Back to Williams:
If what you care about — or are trying to report on — is impact on the world, it all gets very slippery. You’re not measuring a rectangle, you’re measuring a multi-dimensional space. You have to accept that things are very imperfectly measured and just try to learn as much as you can from multiple metrics and anecdotes.
All of us – organizations, staff, donors, and funders – need to work on our comfort with messiness. We need new ways to conceive of measuring impact, such as developmental evaluation, which shifts the conversation from “prove” to “improve”. We need to improve our collaborative efforts through approaches like Collective Impact, which explicitly recognizes the importance of sharing data between organizations to assess impact. We need to move away from searching for a God metric and instead identify multiple sources of information (numbers and stories alike) that can provide insight on the difference we’re making.
“Good” is indeed not good enough for our field, but we also have to realize that there’s no one clear path, no one clear measurement of “better”. Once we acknowledge this truth, we can start learning and working towards improving our impact.