“I don’t care about the details, just tell me what impact you had!” The impact of interventions that deal with complex social issues often can’t be boiled down to a simple yay/nay vote, especially when examining longer-term outcomes: unfortunately, explaining that fact to stakeholders and funders without seeming evasive can be difficult.
Chris Lysy over at Fresh Spectrum has penned five humorous illustrations on this difficult topic, which I’m keeping in my back pocket for the next time I have a discussion with anyone about attributing impact. I find the idea of a “logic model repair shop” (#3) hilarious (and I’ve seen models that look that complex!), and the image of someone asking a whole community to stop the good work they’re doing to avoid muddying his or her impact assessment raises a good point: nothing we do happens in isolation. That being said, I think my favourite is the quote from John Mayne: “We need to accept the fact that what we are doing is measuring with the aim of reducing the uncertainty about the contribution made, not proving the contribution made.”
What’s your favourite out of the five?
In response to my post last week on open-ended questionnaires, Sheila Robinson over at Evaluspheric Perceptions explored some of the risks in interpreting this type of data. Without a systematic approach to analyzing qualitative data, we can fall prey to confirmation bias, which, as described in her post, “causes us to remember or focus on that with which we agree, or that which matches our internalized frameworks, understandings, or hypotheses”. Another risk is that we pay too much attention to extreme viewpoints, whether positive or negative, because they are more likely to be remembered. Check out Sheila’s post for more thoughts!
One question I want to address quickly: what should you do if you’ve collected data from an open-ended survey and want to avoid these pitfalls, but don’t know where to begin? As with evaluation in general, one of the simplest starting points is counting. Read through all the responses and keep a running tally of how often certain ideas come up. You may already have some categories in mind, which will help in sorting but could leave you open to confirmation bias: take care that you’re not trying to fit a square-shaped response into your round category! If you come across strong or extreme comments, make sure you view them in relation to general trends (having complementary numerical data helps here!) to determine how representative those positions are: that’s not to say you should ignore a point raised by a small number of people, but as in the example raised by Sheila in her post, you don’t need to rush into sweeping changes to something that’s working for the vast majority of respondents.
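If you like to work in a spreadsheet or a script, the tally-and-categorize approach above can be sketched in a few lines of Python. This is only an illustrative sketch: the responses, the category names, and the keywords that signal each category are all hypothetical, and in real analysis you would refine the categories as you read rather than fixing them up front (that’s exactly where confirmation bias can sneak in).

```python
from collections import Counter

# Hypothetical open-ended survey responses.
responses = [
    "The workshops were great, but I'd like more hands-on time.",
    "More hands-on practice would help.",
    "Great facilitators! The venue was hard to reach, though.",
    "Loved the workshops. Venue parking was a problem.",
    "Hands-on sessions were the highlight for me.",
]

# Hypothetical categories and the keywords that signal them.
# Re-check each response against its category as you go --
# don't force a square response into a round category!
categories = {
    "wants more hands-on time": ["hands-on"],
    "venue/logistics concerns": ["venue", "parking"],
    "positive overall": ["great", "loved", "highlight"],
}

# Keep a running tally of how often each idea comes up.
tally = Counter()
for response in responses:
    text = response.lower()
    for category, keywords in categories.items():
        if any(kw in text for kw in keywords):
            tally[category] += 1

# Report counts relative to the total, so an extreme comment
# can be seen in proportion to the overall trend.
for category, count in tally.most_common():
    print(f"{category}: {count} of {len(responses)} responses")
```

Reporting each count as a fraction of all responses is the key design choice here: it keeps a single strongly worded comment from looking bigger than it is.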
If there’s interest, I can share an extended example from my first experience with qualitative analysis – food for a future post!
During the span of a week, I come across lots of interesting stories, resources, and sites online that may be of interest to those in the non-profit sector. In line with my approach of connecting people with resources and sharing information, I’m thinking about starting a weekly feature to highlight some of those links – consider this the pilot edition!
This week, I’m highlighting a trio of posts from the Harvard Business Review’s Blog Network, a site I recently started following. Although the focus is primarily on for-profit organizations, I’ve already seen content on social enterprises, philanthropy, and international development, as well as resources and trends that would be equally applicable on the non-profit side.
All three articles below relate to managing and using data, particularly “Big Data”. The term recognizes that collectively we are producing and storing exponentially greater amounts of data in recent years than at any other point in human history – the first article cites research that 90% of data currently in existence was created in the past two years! This explosion in information can help grow our understanding of practically every facet of life, but there are challenges in analyzing and interpreting these giant data sources as well as limits to how much we can learn from them.
- Jeff Bladt and Bob Filbin’s article title says it all – A Data Scientist’s Real Job: Storytelling. It’s similar to a truism I learned from a great professor during my undergraduate education, that all research projects have to tell a story: we start at some point of knowledge, we run an experiment or collect some information, and we learn something as a result. Tables of numbers and statistical tests are essential tools, but by themselves they do not advance our knowledge. As Bladt and Filbin put it, “Data gives you the what, but humans know the why”.
- Presenting data in an accurate, easily comprehensible visual form has become a field in its own right. If you’re not sure where to start in sharing information, Nancy Duarte gives a simple suggestion: When Presenting Your Data, Get to the Point Fast. Check her post for some good tips on how to help your audience focus on the key numbers (hint: tables of numbers and pie charts are not in the cards!).
- Finally, Kate Crawford explores The Hidden Biases in Big Data. Even databases with millions of records may not cover the full spectrum of a phenomenon: Crawford gives the example of the 20 million tweets generated during Hurricane Sandy, the majority of which came from tech-connected Manhattan rather than from harder-hit neighbourhoods. Her prescription? “Take a page from social scientists”: pay attention to where the data comes from, examine your cognitive biases in interpreting the data, and utilize a diverse range of methods, including qualitative approaches like interviews, to complement the quantitative findings.
If you have any thoughts or additional links to share on this topic, I’d love to see them! You can use the comments field below or find me on Twitter. Also, any feedback or suggestions on this approach of weekly annotated links would be greatly appreciated.