Volunteers are an important piece of the puzzle for any non-profit organization. Whether they’re contributing to programs and special events, helping out with fundraising and outreach, or providing guidance and leadership as members of the board, good volunteers are indispensable. As these individuals are giving their time and effort without compensation (at least of the financial kind), organizations are increasingly recognizing that they can’t take these superstars for granted.
Along those lines, this week’s entry in the Seeds for Thought category is a case study from the Stanford Social Innovation Review on volunteer retention at Girl Scouts of Northern California (GSNorCal). Like many other nonprofits focused on youth development, GSNorCal relies heavily on volunteers and as a result already uses many best practices in orientation, training, and recognition; however, broader changes within and outside of the organization have made it difficult to keep volunteers returning. In response, the organization hired a consultant, TCC Group, to “mine its data and pinpoint ways to keep volunteers engaged”. Through a survey of 1,371 current and past volunteers and follow-up focus groups, TCC Group identified factors that predicted volunteer retention and suggested improvements to GSNorCal’s practices.
This example demonstrates the value of using multiple sources of information: in this case, quantitative data from a large survey, qualitative insights from small groups of volunteers, and general principles from scholarly research on the topic. If you don’t have the resources that GSNorCal does (or even if you do) and want to learn more about your volunteers, what can you do?
- Start by counting. How many volunteers do you currently have, how long have they been volunteering, how many new volunteers have come on board recently, and how many have left? How many hours are they contributing? Are there differences in these numbers based on demographic factors or the tasks they’re doing for your organization?
- Use some simple questionnaires with both current and former volunteers. I could spend a full post or three on what a volunteer questionnaire could look like, but at the very least it should include questions about overall satisfaction, support from the organization (or lack thereof), what keeps them volunteering, and what makes them leave. Just remember to use a mix of question types and watch out for potentially misleading numbers.
- Take a participatory approach. Include volunteers in the discussion, both long-time contributors and those who are new or in a temporary position, such as through a World Cafe: as a bonus, this approach can help improve retention by demonstrating to volunteers that their opinion is valued by the organization. Another idea – have a staff member step into the shoes of a volunteer for a shift to get a firsthand perspective!
- Partner with organizations that can provide a broader view. Many cities have a volunteer centre (either standalone or part of a larger organization like the United Way) or a professional association of volunteer administrators such as PAVRO liaisons in Ontario that can link you with resources on volunteering and keep you in the loop about new developments in the field. Volunteerism is also becoming increasingly recognized as a topic of scholarly research, so look into partnerships with universities: programs related to community development, organizational studies, public policy, and even business are good starting points.
- A bit of self-interest here: consultants can help! If resources are tight, use consulting expertise for specific tasks that may be impractical to do in-house, such as analyzing complex statistical data or acting as a neutral party to collect feedback (current and even former volunteers may be hesitant to provide criticism directly to staff). Volunteer management, especially as it relates to research and evaluation, is one of Strong Roots’ strengths, so drop us a line if you want to have a chat about how to learn more about your volunteers!
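To make the “start by counting” step above a bit more concrete, here’s a minimal sketch in Python of the kinds of basic figures you might compute. All the names, dates, and hours below are invented for illustration; a real organization would pull these from its volunteer database or spreadsheet:

```python
from datetime import date

# Hypothetical volunteer records: (name, start date, end date or None if active, hours this year)
volunteers = [
    ("Alex",   date(2019, 3, 1),  None,             120),
    ("Bailey", date(2022, 6, 15), None,              40),
    ("Casey",  date(2021, 1, 10), date(2023, 2, 1), 300),
    ("Devin",  date(2023, 4, 5),  None,              15),
]

today = date(2023, 9, 1)
current = [v for v in volunteers if v[2] is None]
departed = [v for v in volunteers if v[2] is not None]

print("Current volunteers:", len(current))
print("Departed volunteers:", len(departed))
print("Total hours contributed:", sum(hours for _, _, _, hours in volunteers))

# Average tenure of current volunteers, in years
avg_tenure = sum((today - start).days for _, start, _, _ in current) / len(current) / 365.25
print(f"Average tenure of current volunteers: {avg_tenure:.1f} years")
```

Even a toy tally like this starts to answer the questions above, and breaking the same sums out by role or demographic group is just a matter of filtering the list differently.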
Question: What are some strategies that you have seen successfully used to engage volunteers and improve retention?
Earlier this year, I wrote a post on simply counting as an easy way to start evaluating a program or initiative. Although this approach can provide some good insights, numbers can easily mislead depending on how they were collected or when viewed in isolation from the broader context, as this week’s two seeds for thought (both from the Harvard Business Review blog) demonstrate.
First up, Peter Kriss provided the example of a hotel chain that revamped its guest feedback website to make it easier to access on mobile devices. Contrary to expectations, overall satisfaction with the hotel took a nosedive in response, which on the surface didn’t make much sense: why would a better survey make people dislike their stay? The answer was that improving access to the survey led to more responses from less-engaged guests, and since their impressions tended toward the midpoint (i.e. neither great nor poor), the addition of these data points lowered the average. The lesson here? Whenever you change how participants interact with or give feedback to your program or organization, be prepared for unexpected results!
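The arithmetic behind that drop is easy to reproduce. Here’s a toy example (the ratings are invented, not taken from Kriss’s article): suppose the original respondents were mostly enthusiasts rating their stay around 9 out of 10, and the easier survey pulls in a second group clustered around the midpoint:

```python
engaged = [9, 9, 8, 9, 10]          # original respondents: mostly enthusiasts
new_respondents = [5, 6, 5, 4, 5]   # easier survey adds middling impressions

before = sum(engaged) / len(engaged)
combined = engaged + new_respondents
after = sum(combined) / len(combined)

print(f"Average before: {before:.1f}")  # 9.0
print(f"Average after:  {after:.1f}")   # 7.0
```

Nobody’s actual experience got worse between the two measurements; the pool of respondents simply changed, and the average followed.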
A time-honoured method of assessing impact involves taking a baseline measurement of participants before an intervention (“pre”) followed by a similar (if not identical) measurement afterwards (“post”): if there’s a difference in scores, you could claim with some degree of certainty that the program made a difference. Now I could probably write several posts about the pitfalls of pre-post measurement, but Ron Shaich’s article on growth illuminated one that I probably would have missed. In the cutthroat restaurant industry, Shaich discovered, you can’t assume zero growth as a baseline: because of the strong competition, a restaurant that does nothing will naturally lose customers.
Adapting this example to the non-profit world, imagine a service or program that aims to improve participants’ lives in some way (e.g. physical health, food security, access to housing) with a corresponding pre-post measurement. If the program is working in a community or with a population facing multiple challenges, the “norm” might be decline rather than stability: in the absence of the service, things may well get worse rather than stay the same. The good news in this scenario is that a lack of pre-post change might not be a complete failure, but program planners may need to set their sights higher to create a positive change.
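A quick numeric sketch of that scenario (all figures here are hypothetical, including the assumed counterfactual decline, which in practice would have to come from a comparison group or other external evidence):

```python
pre = 60.0                    # participants' baseline score before the program
post = 60.0                   # the same score afterwards: "no change"
counterfactual_change = -8.0  # assumed decline without the program (hypothetical)

naive_effect = post - pre
adjusted_effect = (post - pre) - counterfactual_change

print(f"Naive pre-post effect:            {naive_effect:+.1f}")   # +0.0
print(f"Effect vs. a declining baseline:  {adjusted_effect:+.1f}")  # +8.0
```

The same flat pre-post result reads as “no impact” against an assumed-stable baseline, but as a meaningful gain once the decline that would have happened anyway is taken into account.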
The general takeaway from these examples is that you shouldn’t blindly trust the numbers or read too much into what they could mean: instead, take some time to look at other factors that could explain what you’re seeing. Got examples of your own when the numbers were confusing or misleading? Share them below!
During the span of a week, I come across lots of interesting stories, resources, and sites online that may be of interest to those in the non-profit sector. In line with my approach of connecting people with resources and sharing information, I’m thinking about starting a weekly feature to highlight some of those links – consider this the pilot edition!
This week, I’m highlighting a trio of posts from the Harvard Business Review’s Blog Network, a site I recently started following. Although the focus is primarily on for-profit organizations, I’ve already seen content on social enterprises, philanthropy, and international development, as well as resources and trends that would be equally applicable on the non-profit side.
All three articles below relate to managing and using data, particularly “Big Data”. The term recognizes that collectively we are producing and storing exponentially greater amounts of data in recent years than at any other point in human history – the first article cites research that 90% of data currently in existence was created in the past two years! This explosion in information can help grow our understanding of practically every facet of life, but there are challenges in analyzing and interpreting these giant data sources, as well as limits to how much we can learn from them.
- Jeff Bladt and Bob Filbin’s article title says it all – A Data Scientist’s Real Job: Storytelling. It’s similar to a truism I learned from a great professor during my undergraduate education, that all research projects have to tell a story: we start at some point of knowledge, we run an experiment or collect some information, and we learn something as a result. Tables of numbers and statistical tests are essential tools, but by themselves they do not advance our knowledge. As Bladt and Filbin put it, “Data gives you the what, but humans know the why”.
- Presenting data in an accurate, easily-comprehensible visual form has become a field in its own right. If you’re not sure where to start in sharing information, Nancy Duarte gives a simple suggestion: When Presenting Your Data, Get to the Point Fast. Check out her post for some good tips on how to help your audience focus on the key numbers (hint: tables of numbers and pie charts are not in the cards!).
- Finally, Kate Crawford explores The Hidden Biases in Big Data. Even databases with millions of records may not cover the full spectrum of a phenomenon: Crawford gives the example of the 20 million tweets generated during Hurricane Sandy, the majority of which came from tech-connected Manhattan rather than from harder-hit neighbourhoods. Her prescription? “Take a page from social scientists”: pay attention to where the data comes from, examine your cognitive biases in interpreting the data, and use a diverse range of methods, including qualitative approaches like interviews, to complement the quantitative findings.
If you have any thoughts or additional links to share on this topic, I’d love to see them! You can use the comments field below or find me on Twitter. Also, any feedback or suggestions on this approach of weekly annotated links would be greatly appreciated.