Evaluation Planning and Grant Applications

There are tons of great podcasts out there focusing on the non-profit world and evaluation, enough that I could do a full post on that topic sometime. Perusing the back episodes of the Chronicle of Philanthropy’s “Making Change” podcast, I came across a great interview with evaluation mastermind Michael Quinn Patton. He spoke at length about developmental evaluation and the growing movement for accountability, but one story he shared near the beginning (around the 6:30 mark) caught my attention. Patton related a common experience in which a non-profit calls him up to say that they need an evaluation and were recommended to him. When he asks what kind of evaluation they need, a confused silence is followed by “A kind?”. He then explains to the befuddled caller that just as there are different kinds of restaurants and computers and cars, there are different kinds of evaluation. The non-profit representative explains in turn that their three-year foundation grant is coming to an end and they just noticed that they are supposed to do an evaluation as part of their agreement with the funder, at which point Patton replies, “I don’t do that kind of evaluation”.

Funders are increasingly asking applying organizations to evaluate their programs and projects: for example, the Ontario Trillium Foundation, a major funder in that province, has a dedicated page on creating an evaluation plan. Thankfully, OTF points out that evaluation planning should be incorporated in the program design process and take place before the project starts: as Patton’s story implies, leaving the evaluation to the last minute is not an ideal situation for either the evaluator or the organization.

With the approach of fall signalling the beginning of a new grant application season, now’s a great time to think about how evaluation fits into the process. Every funding body and applying organization brings unique considerations, but there are three general principles to consider when you reach the “Evaluation” component of a grant.

1. Be clear about what the funder is looking for

This point is crucial, not just for the evaluation piece but the grant as a whole. If you don’t meet the requirements or understand what they’re asking for, all that time planning and writing will go for naught. Fortunately, almost all funding calls will list a contact person who can answer your questions and oftentimes provide some informal feedback on your application (or at the very least let you know if your idea is completely out in left field). Seek clarification on any points of uncertainty – ignorance is not bliss!

2. Know what you’re getting into

From the safety of the planning process, it can be easy to create an elaborate plan – sure, we can survey all of our participants four times during the program and run six focus groups! Although you should ensure that you are meeting the minimum requirements of the grant (following point 1 and clarifying with the funder directly if necessary), it’s better to underpromise and overdeliver than the other way around. Make sure you have the capacity to follow through on what you’re promising in terms of staff time and skills to plan the evaluation and collect and analyze data, as well as access to the necessary tools such as online survey platforms or statistical software. Check whether you can use a portion of the grant funding to pay for tools and additional staff time or to hire an external consultant to conduct the evaluation.

3. Make sure the evaluation meets your needs too!

It’s all too easy to see the evaluation component as yet another hoop to jump through to get access to a pool of funds, but a well-planned evaluation offers many benefits to your organization. What do you want to learn about your program, your participants, and your community? If the funder simply checks that you completed the evaluation and dumps it in a file drawer with no impact on future funding, was the process still useful for you? An evaluation can contribute to the ongoing planning and development of your program, as well as demonstrate effectiveness and impact to your staff, participants, community partners, and potential future funders. In fact, ideally the evaluation planning should happen independently of any grant: when you get to that part of the application, it should just be a case of tweaking the pre-existing plan to meet any requirements of the specific grant.

Any other advice that I’m missing here? Share in the comments below or on Twitter!

If you represent a non-profit organization in the Saskatoon area that’s looking for help with evaluation planning, drop me a line – the initial conversation is (and will always be) free of charge!

Seeds for Thought: Volunteer Retention

Volunteers are an important piece of the puzzle for any non-profit organization. Whether they’re contributing to programs and special events, helping out with fundraising and outreach, or providing guidance and leadership as members of the board, good volunteers are indispensable. As these individuals are giving their time and effort without compensation (at least of the financial kind), organizations are increasingly recognizing that they can’t take these superstars for granted.

Along those lines, this week’s entry in the Seeds for Thought category is a case study from the Stanford Social Innovation Review on volunteer retention for Girl Scouts of Northern California (GSNorCal). Like many other nonprofits focused on youth development, GSNorCal relies heavily on volunteers and as a result already uses many best practices in orientation, training, and recognition: however, broader changes within and outside of the organization have made it difficult to keep volunteers returning. In response, the organization hired a consultant, TCC Group, to “mine its data and pinpoint ways to keep volunteers engaged”. Through a survey of 1,371 current and past volunteers and follow-up focus groups, TCC Group identified factors that predicted volunteer retention and suggested improvements to GSNorCal’s practices.

This example demonstrates the value of using multiple sources of information, in this case quantitative data from a large survey, qualitative insights from small groups of volunteers, and general principles from scholarly research on the topic. If you don’t have the resources that GSNorCal does (or even if you do) and want to learn more about your volunteers, what can you do?

  • Start by counting. How many volunteers do you currently have, how long have they been volunteering, how many new volunteers have come on board recently, and how many have left? How many hours are they contributing? Are there differences in these numbers based on demographic factors or the tasks they’re doing for your organization? (A minimal counting sketch follows this list.)
  • Use some simple questionnaires with both current and former volunteers. I could spend a full post or three on what a volunteer questionnaire could look like, but at the very least it should include questions about overall satisfaction, support from the organization (or lack thereof), what keeps them volunteering, and what makes them leave. Just remember to use a mix of question types and watch out for potentially misleading numbers.
  • Take a participatory approach. Include volunteers in the discussion, both long-time contributors and those who are new or in a temporary position, such as through a World Cafe: as a bonus, this approach can help improve retention by demonstrating to volunteers that their opinion is valued by the organization. Another idea – have a staff member step into the shoes of a volunteer for a shift to get a firsthand perspective!
  • Partner with organizations that can provide a broader view. Many cities have a volunteer centre (either standalone or part of a larger organization like the United Way) or a professional association of volunteer administrators such as PAVRO liaisons in Ontario that can link you with resources on volunteering and keep you in the loop about new developments in the field. Volunteerism is also becoming increasingly recognized as a topic of scholarly research, so look into partnerships with universities: programs related to community development, organizational studies, public policy, and even business are good starting points.
  • A bit of self-interest here: consultants can help! If resources are tight, use consulting expertise for specific tasks that may be impractical to do in-house, such as analyzing complex statistical data or acting as a neutral party to collect feedback (current and even former volunteers may be hesitant to provide criticism directly to staff). Volunteer management, especially as it relates to research and evaluation, is one of Strong Roots’ strengths, so drop us a line if you want to have a chat about how to learn more about your volunteers!
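
To make the counting step above a bit more concrete, here’s a minimal sketch in Python of what it could look like. It assumes a hypothetical volunteers.csv export with start_date and end_date columns (end_date left blank for active volunteers) – swap in whatever your own records actually contain.

```python
# Minimal counting sketch for volunteer records. Assumes a hypothetical
# volunteers.csv with columns: name, role, start_date, end_date
# (dates as YYYY-MM-DD; end_date left blank for volunteers who are still active).
import csv
from datetime import datetime, date

REPORT_DATE = date(2014, 9, 1)  # illustrative reporting date

active, departed, active_tenure_years = 0, 0, []

with open("volunteers.csv", newline="") as f:
    for row in csv.DictReader(f):
        start = datetime.strptime(row["start_date"], "%Y-%m-%d").date()
        has_left = bool(row["end_date"].strip())
        if has_left:
            departed += 1
        else:
            active += 1
            active_tenure_years.append((REPORT_DATE - start).days / 365.25)

total = active + departed
print(f"Active volunteers: {active}")
print(f"Volunteers who have left: {departed}")
if total:
    print(f"Share still volunteering: {active / total:.0%}")
if active_tenure_years:
    average = sum(active_tenure_years) / len(active_tenure_years)
    print(f"Average tenure of active volunteers: {average:.1f} years")
```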

Question: What are some strategies that you have seen successfully used to engage volunteers and improve retention?

Seeds for Thought: Five and Change

This week’s seed features Chi Yan Lam, a friend and colleague who is completing his PhD in Education at Queen’s University. We share an interest in developmental and collaborative approaches to evaluation, though as you can see from his about page, he comes at it more from the academic and theoretical side.

In support of writing his dissertation, Chi recently relaunched his personal site as a process journal to “chronicle and archive [his] emerging thinking and serendipitous discoveries around evaluation and design”. A recent post brings up the idea of the Stanford $5 challenge, where students in the Technology Ventures program are asked to use $5 and two hours of time to make a profit. The most successful teams didn’t end up using the money: that resource all too often turned out to be a trap, too little to turn into anything without taking a huge risk like buying a lottery ticket or hitting the slot machines.

This example really resonates with my experiences in the nonprofit field. The first question that’s usually raised after generating a new idea for a program or service is where the money and resources will come from: in response, many organizations will “gamble” staff time and resources on preparing a grant application. If the gamble doesn’t pay off, the idea is dead in the water, morale drops, and staff are discouraged from coming up with innovative solutions in the future.

Instead of focusing immediately on what we need for success, oftentimes we need to take a step back as Chi suggests and first determine the need for a program (or to borrow from the business world, whether the “market” is there), and then whether our theory of change (the steps from here to there) matches our plan of action. These two steps can help identify faulty assumptions or leaps of logic in your plan, but more importantly, they force you to question if there is a better path to success. For example, is it possible for the program to take advantage of existing in-house resources such as a spare room and some dedicated volunteers, or draw on connections with community partners such as a university community service-learning project? A successful program will at some point need dedicated resources, just as a successful business venture will need capital to go to scale: however, if an idea can show some initial successes on $5 and two hours of time, it’s an easier argument to make that investing more time and money will be worthwhile.

(A quick shameless self-promotion here – my approach to supporting project development takes a similar approach, working with organizations to better understand the need and context, clarify how the program will work, and identify potential resources. If you’re at this stage of a program design and not sure how to proceed, drop me a line!)

Question: Think about a cause or issue you’re passionate about – what would you do to start creating change with $5 and two hours?

Seeds for Thought: Misleading Numbers

Earlier this year, I wrote a post on simply counting as an easy way to start evaluating a program or initiative. Although this approach can provide some good insights, numbers can easily mislead based on the manner of collection or when viewed in isolation from the broader context, as this week’s two seeds for thought (both from the Harvard Business Review blog) demonstrate.

First up, Peter Kriss provided the example of a hotel chain that revamped their guest feedback website to make it easier to access on mobile devices. Contrary to expectations, overall satisfaction with the hotel took a nosedive, which on the surface didn’t make much sense: why would a better survey make people dislike their stay? The answer was that improving access to the survey led to more responses from less-engaged guests, and since their impressions tended toward the midpoint (i.e. neither great nor poor), the addition of these data points led to a lower average. The lesson here? Whenever you change how participants interact with or give feedback to your program or organization, be prepared for unexpected results!
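
To see the arithmetic at work, here’s a toy illustration (the numbers are invented, not from Kriss’s article): the average drops simply because the respondent pool changes, even though no individual guest’s experience got any worse.

```python
# Made-up numbers to illustrate the mechanism: adding mid-range responses
# lowers the average even though no individual guest's experience changed.
engaged_guests = [9, 10, 8, 9, 10]            # the kind of guests who responded before
newly_reached = [5, 6, 5, 7, 6, 5, 6, 5]      # less-engaged guests the mobile-friendly survey now reaches

before = sum(engaged_guests) / len(engaged_guests)
combined = engaged_guests + newly_reached
after = sum(combined) / len(combined)

print(f"Average satisfaction before: {before:.1f}")  # 9.2
print(f"Average satisfaction after:  {after:.1f}")   # 7.0 -- a "drop" driven entirely by who responds
```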

A time-honoured method of assessing impact involves taking a baseline measurement of participants before an intervention (“pre”) followed by a similar (if not identical) measurement afterwards (“post”): if there’s a difference in scores, you could claim with some degree of certainty that the program made a difference. Now I could probably write several posts about the pitfalls of pre-post measurement, but Ron Shaich’s article on growth illuminated one that I probably would have missed. In the cutthroat restaurant industry, Shaich discovered, you can’t assume zero growth as a baseline: because of the strong competition, a restaurant that does nothing will naturally lose customers.

Adapting this example to the non-profit world, imagine a service or program that aims to improve participants’ lives in some way (e.g. physical health, food security, access to housing) with a corresponding pre-post measurement. If the program is working in a community or with a population facing multiple challenges, the “norm” might be decline rather than stability: in the absence of the service, things may well get worse rather than stay the same. The good news in this scenario is that a lack of pre-post change might therefore not be a complete failure, but program planners may need to set their sights higher to create a positive change.
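
As a back-of-the-envelope sketch (all numbers invented), comparing the post score against the expected decline rather than against a flat baseline tells a very different story:

```python
# Invented numbers: a flat pre-post result judged against a declining baseline.
pre_score = 62               # hypothetical average outcome score at intake
post_score = 62              # same average a year later -- "no change"
expected_drop = 5            # hypothetical decline seen in a comparison group or past cohorts

naive_change = post_score - pre_score                          # 0 -> looks like no impact
change_vs_expected = post_score - (pre_score - expected_drop)  # +5 -> holding steady is a gain

print(f"Naive pre-post change: {naive_change}")
print(f"Change relative to the expected decline: {change_vs_expected}")
```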

The general takeaway from these examples is that you shouldn’t blindly trust the numbers or read too much into what they could mean: instead, take some time to look at other factors that could explain what you’re seeing. Got examples of your own when the numbers were confusing or misleading? Share them below!

Summertime Evaluations

Summertime and evalin’ is easy
Surveys are fillin’, and response rates are high
Your dataset’s rich and your graphs are good lookin’
So hush little funder, don’t you cry

(With apologies to the Gershwins and Ella Fitzgerald!)

Despite the song, summertime evaluation has its own challenges. The nicer weather often signals a hiatus to regular programming and an increase in special events such as community BBQs and multi-day festivals, requiring a different approach to engaging participants for their feedback. We also slow down a bit in the summer and limit tasks that seem too heavy – who wants to fill out a long survey when you could be outside having fun?

With that in mind, some thoughts on how to collect useful information when the weather’s nice:

  • Start with the simple metrics, like attendance, ticket sales, or amount of food consumed. They’re easy for stakeholders to understand, but just remember that they can be greatly influenced by factors outside your control (especially if your event is rained out): also, they won’t provide much insight if you’re looking for evidence of a greater impact.
  • Hit the pavement! Set up some volunteers with pencils and clipboards and get them talking with participants. Keep the questions to a minimum (3-4 max) so you’re not taking people away from the event for too long, and consider providing a little reward such as a sticker or coupon for providing their two cents. (A quick sketch for tallying these short responses follows this list.)
  • Alternatively, set up a stationary spot for attendees to come by and participate. This method provides the option for longer surveys or more innovative data collection methods such as dot-voting. The main downside is that you need something to encourage people to come to you: if it’s a hot day a shaded tent and a cup of water may be a strong enough draw, but in any case take a minute to figure out what will appeal to people at your event.
  • Go online! Consider including in your evaluation plan social media statistics such as the number of visitors to the event website, likes on Facebook, and usage of the event hashtag on Twitter. Online conversations through these channels can also provide insights into what’s working and what needs to be changed. Promoting an online survey through social media and at the event itself can help collect data, as long as you remember that participants using these tools may not fully represent everyone who attended the event.
  • Debrief with your team of event organizers, volunteers, staff, and other key partners, using an approach such as the After Action Review. Don’t wait too long to hold it, and remember that your team’s perspectives may not match those of event participants.
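
For the clipboard surveys above, even a quick tally goes a long way. Here’s a minimal sketch that assumes the responses get typed up into a hypothetical event_survey.csv, one row per respondent – the column names stand in for your own three or four questions.

```python
# Tally short intercept-survey responses. Assumes a hypothetical
# event_survey.csv with one column per question, e.g.
# heard_about ("How did you hear about the event?") and
# would_return ("Would you come back next year?").
import csv
from collections import Counter

heard_about = Counter()
would_return = Counter()

with open("event_survey.csv", newline="") as f:
    for row in csv.DictReader(f):
        heard_about[row["heard_about"].strip().lower()] += 1
        would_return[row["would_return"].strip().lower()] += 1

print("How attendees heard about the event:", heard_about.most_common())
print("Would come back next year:", would_return.most_common())
```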

Determining which method or methods to use will depend on a number of factors, including the scale of the event and the resources you have available. The main consideration, though, should be the purpose of the evaluation – what do you want to learn from the process, and what does success look like? If you just want to demonstrate that your event is popular, collecting attendance numbers (with perhaps a quick demographics survey) would be sufficient. In contrast, if you’re hoping to see more of an impact such as increased community awareness of your organization or a change in attitudes or behaviour, more time and effort will need to be spent engaging participants.

Got any tips for evaluating in the summer? Share them below!

Seeds for Thought: Negative Results

Whether you are the evaluator of a program or someone associated with the initiative being evaluated (the evaluatee?), it’s probably safe to say that everyone hopes for good results: proof that all the planning, effort, and resources that went into the program made a difference. Sadly, that doesn’t always happen, leaving the evaluator to figure out how to present the information accurately and constructively.

Susan Lilley recently compiled a ten-point list (PDF) on this topic, based on discussion from the American Evaluation Association’s listserv (hat tip to the Better Evaluation blog, which provides some more context on the discussion and some commentary on the points). To my eye, all the points are great – in particular, #4 (“Build in time for course correction”) and #8 (“Present results in terms of lessons learned”) provide a great rationale for a developmental evaluation approach that understands from the get-go that some components of any given project will need to be tweaked or changed wholesale in response to changing circumstances. What I really appreciate about this list, however, is the very first point – “Use a participatory approach from the start”. Engaging stakeholders and working as a partner with clients are more than “feel-good” tactics: they help create a sense of ownership of the results and build that crucial relationship that allows for sharing both good and bad news, as well as having a frank discussion about what the results mean for future work.

What tip do you think is most crucial for sharing negative results? If you have been on the giving or receiving end of bad evaluation news, what helped turn the episode into something constructive? Share below!

Seeds for Thought: Scale

Today’s Seed for Thought comes from the Stanford Social Innovation Review’s blog, a site that covers (as the name suggests) social innovation and related concepts like philanthropy, social entrepreneurship, and nonprofit organizational development. In the latter category, an article today provided a five-question checklist for nonprofits to assess their readiness to scale and increase their impact. The second question asked whether your program model has been tested: according to a survey of American nonprofits, “only 39 percent of nonprofits that are scaling or intending to scale have evaluated the impact of their work”. To me, that’s a surprising result – in my mind, before growing a program or initiative you should take some time to make sure it’s actually achieving the results that you think it is!

Although I’m glad that evaluation is included in the list, I think there’s a danger that evaluation and research get relegated to a one-time “check it off the list” task. In scaling a program to new communities or populations, organizations are bound to run into unexpected challenges. Elements and approaches that were beneficial in the initial program may be less useful or even detrimental in new situations. One example from my own experience was with an educational support program that had its roots in a dense urban core and was being scaled to other smaller cities. The new site I was involved with was very different from the original program site in terms of geography, history, and demographics: for example, the original site was very ethnically diverse, while the families in the catchment area for the new site were primarily white and had lived in Canada for multiple generations. As a result, our new site did not have to do much work around English as a Second/Additional Language, but we did face unique challenges of our own, such as parent and family engagement. Collecting and analyzing information about our neighbourhood, from both government sources like the Census and on-the-ground knowledge from teachers, service providers, and community members, helped us to understand the context and respond appropriately.

Funds, resources, and organizational practices are important elements to consider when scaling up: at the same time, nonprofits need the capacity to recognize the changes that come with growth and adapt accordingly. One tool that can be helpful in this case is developmental evaluation, which, as Michael Quinn Patton’s handbook on the subject recognizes, can help organizations identify effective principles from earlier work and determine when it’s better to adapt to local conditions rather than adhere to acontextual “best practices”. By integrating relevant and timely data collection and sense-making into the process, developmental evaluation can help nonprofits learn more about the new situations they are entering, avoid potential pitfalls, and successfully scale.

What else would you add to the checklist?

Word Counts

In response to my post last week on open-ended questionnaires, Sheila Robinson over at Evaluspheric Perceptions explored some of the risks in interpreting this type of data. Without a systematic approach to analyzing qualitative data, we can fall prey to confirmation bias, which as described in her post, “causes us to remember or focus on that with which we agree, or that which matches our internalized frameworks, understandings, or hypotheses”. Another risk is that we pay too much attention to extreme viewpoints, whether positive or negative, because they are more likely to be remembered. Check out Sheila’s post for more thoughts!

One question I want to address quickly: what do you do if you have already collected some data from an open-ended survey and want to avoid these pitfalls, but don’t know where to begin? As with evaluation in general, one of the simplest starting points is counting. Read through all the responses and keep a running tally of how often certain ideas come up. You may already have some ideas in mind for how to categorize responses, which will help in sorting but could leave you open to confirmation bias: take care that you’re not trying to fit a square-shaped response into your round category! If you come across strong or extreme comments, make sure you view them in relation to general trends (having complementary numerical data helps here!) to determine how representative that position is. That’s not to say you should ignore a point raised by a small number of people, but as in the example raised by Sheila in her post, you don’t need to rush to make sweeping changes to something that’s working for the vast majority of respondents.
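
If it helps to see that counting approach spelled out, here’s a rough sketch. The categories and keywords are purely illustrative, and keyword matching is no substitute for actually reading every response, but it shows how a running tally keeps any one comment in proportion to the overall pattern.

```python
# Rough tally of working categories across open-ended responses.
# The categories and keywords are illustrative only; read the responses
# first and let the categories come from the data where you can.
from collections import Counter

categories = {
    "scheduling": ["time", "schedule", "evening", "weekend"],
    "staff": ["staff", "volunteer", "instructor"],
    "food": ["food", "snack", "meal"],
}

responses = [
    "The evening sessions work much better for our family",
    "More snacks for the kids, please!",
    "The staff were wonderful and so patient with everyone",
    # ...in practice, load these from your survey export
]

tally = Counter()
for response in responses:
    text = response.lower()
    for category, keywords in categories.items():
        if any(keyword in text for keyword in keywords):
            tally[category] += 1

print(tally.most_common())
```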

If there’s interest, I can share an extended example from my first experience with qualitative analysis – food for a future post!

Time to Count

As the one and only person working for Strong Roots Consulting, there are many business elements I have to deal with as part of the trade. There are various regulatory and legal requirements to fulfill, finances to manage, and – a personal “favourite” on the necessary evil list – time tracking. My general preference is to create a proposal with a set project fee instead of charging by the hour: however, I still need to determine how much a project should cost. A simple starting point is to estimate the number of hours that I would need to complete the work and multiply that number by a per-hour rate. Time tracking then becomes a data collection method to help me assess the accuracy of my initial estimate – or in other words, the first step in an evaluation.
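
In case it’s useful, here’s the back-of-the-envelope version of that estimate-and-check loop – every number below is made up for illustration.

```python
# Made-up numbers: set a project fee from an hours estimate, then use
# tracked time to check how good the estimate was.
hourly_rate = 75.0          # assumed per-hour rate
estimated_hours = 40.0      # hours estimated at proposal time
project_fee = hourly_rate * estimated_hours

tracked_hours = [3.5, 6.0, 4.25, 8.0, 5.5, 7.0, 9.75]   # logged as the work happens
actual_hours = sum(tracked_hours)

print(f"Project fee quoted: ${project_fee:,.2f}")
print(f"Estimated hours: {estimated_hours}, actual hours: {actual_hours}")
print(f"Effective hourly rate: ${project_fee / actual_hours:,.2f}")
```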

For many people, conducting an evaluation seems like a complex undertaking. Where do you start? Do you need to create a logic model first? What should you measure? What data collection methods should you use? Quantitative, qualitative, or mixed methods? How do you analyze and present the data you collected? A search for “evaluation” books on Amazon turned up 128,682 results, while a Google search returned “about 357,000,000 results”, so not much help from those sources (or rather, too much help).

One piece of advice I heard recently (and for the life of me I can’t remember where) is that one of the easiest first steps to take in evaluation is counting. It makes a lot of sense: we learn counting at an early age, after all, and it’s pretty easy to come up with questions that can be answered with a number. How many clients are we serving? How many referrals are we making? How much staff time was dedicated to a certain project? How many people indicated through a client survey that they were happy with our services? I bet if you took a minute right now you could come up with similar questions for your professional or personal life (how many hours of TV do I watch a day?) that can be easily answered by tallying up numbers.
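
As one example of how simple those counts can be to produce, here’s a sketch that assumes a hypothetical service_log.csv with one row per client visit:

```python
# Count visits by month and by service type. Assumes a hypothetical
# service_log.csv with columns: visit_date (YYYY-MM-DD), service_type.
import csv
from collections import Counter

visits_by_month = Counter()
visits_by_service = Counter()

with open("service_log.csv", newline="") as f:
    for row in csv.DictReader(f):
        visits_by_month[row["visit_date"][:7]] += 1    # "2014-06" from "2014-06-17"
        visits_by_service[row["service_type"]] += 1

for month in sorted(visits_by_month):
    print(month, visits_by_month[month])
print("Visits by service type:", visits_by_service.most_common())
```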

Infographics and Evaluation

Just over a week ago, I started taking a free online course on Infographics and Data Visualization, taught by journalist Alberto Cairo and hosted by the University of Texas at Austin’s Knight Centre for Journalism in the Americas. Although journalism is not one of Strong Roots’ core activities, I’m looking forward to learning more about how to visually present data findings – after all, what use is research and evaluation if the data is locked up behind jargon and massive tables of numbers? Ensuring that the research methods are participatory and accessible to everyone whose voice needs to be heard is only the start: the findings should likewise be understandable and relevant to all key stakeholders.