Seeds for Thought: Five and Change

This week’s seed features Chi Yan Lam, a friend and colleague who is completing his PhD in Education at Queen’s University. We share an interest in developmental and collaborative approaches to evaluation, though as you can see from his about page, he comes at it more from the academic and theoretical side.

In support of writing his dissertation, Chi recently relaunched his personal site as a process journal to “chronicle and archive [his] emerging thinking and serendipitous discoveries around evaluation and design”. A recent post brings up the Stanford $5 challenge, where students in the Technology Ventures program are asked to use $5 and two hours of time to make a profit. The most successful students didn’t end up using the money at all: that resource all too often turned out to be a trap, too little to turn into anything without taking a huge risk like buying a lottery ticket or hitting the slot machines.

This example really resonates with my experiences in the nonprofit field. The first question that’s usually raised after someone generates a new idea for a program or service is where the money and resources will come from: in response, many organizations will “gamble” staff time and resources on preparing a grant application. If the gamble doesn’t pay off, the idea is dead in the water, morale drops, and staff are discouraged from coming up with innovative solutions in the future.

Instead of focusing immediately on what we need for success, oftentimes we need to take a step back, as Chi suggests, and first determine the need for the program (or, to borrow from the business world, whether the “market” is there), and then whether our theory of change (the steps from here to there) matches our plan of action. These two steps can help identify faulty assumptions or leaps of logic in your plan, but more importantly, they force you to question whether there is a better path to success. For example, is it possible for the program to take advantage of existing in-house resources, such as a spare room and some dedicated volunteers, or to draw on connections with community partners, such as a university community service-learning project? A successful program will at some point need dedicated resources, just as a successful business venture will need capital to go to scale: however, if an idea can show some initial successes on $5 and two hours of time, it becomes much easier to argue that investing more time and money will be worthwhile.

(A quick shameless self-promotion here – my work supporting project development takes a similar approach, working with organizations to better understand the need and context, clarify how the program will work, and identify potential resources. If you’re at this stage of a program design and not sure how to proceed, drop me a line!)

Question: Think about a cause or issue you’re passionate about – what would you do to start creating change with $5 and two hours?

Seeds for Thought: Negative Results

Whether you are the evaluator of a program or someone associated with the initiative being evaluated (the evaluatee?), it’s probably safe to say that everyone hopes for good results: proof that all the planning, effort, and resources that went into the program made a difference. Sadly, that doesn’t always happen, leaving the evaluator to figure out how to present the information accurately and constructively.

Susan Lilley recently compiled a ten-point list (PDF) on this topic, based on a discussion from the American Evaluation Association’s listserv (hat tip to the Better Evaluation blog, which provides some more context on the discussion and some commentary on the points). To my eye, all the points are great – in particular, #4 (“Build in time for course correction”) and #8 (“Present results in terms of lessons learned”) provide a great rationale for a developmental evaluation approach that understands from the get-go that some components of any given project will need to be tweaked or changed wholesale in response to changing circumstances. What I really appreciate about this list, however, is the very first point – “Use a participatory approach from the start”. Engaging stakeholders and working as a partner with clients are more than “feel-good” tactics: they help create a sense of ownership of the results and build the crucial relationship that allows for sharing both good and bad news, as well as having a frank discussion about what the results mean for future work.

What tip do you think is most crucial for sharing negative results? If you have been on the giving or receiving end of bad evaluation news, what helped turn the episode into something constructive? Share below!

Seeds for Thought: Scale

Today’s Seed for Thought comes from the Stanford Social Innovation Review’s blog, a site that covers (as the name suggests) social innovation and related concepts like philanthropy, social entrepreneurship, and nonprofit organizational development. In the latter category, an article posted today provided a five-question checklist for nonprofits to assess their readiness to scale and increase their impact. The second question asked whether your program model has been tested: according to a survey of American nonprofits, “only 39 percent of nonprofits that are scaling or intending to scale have evaluated the impact of their work”. To me, that’s a surprising result – in my mind, before growing a program or initiative you should take some time to make sure it’s actually achieving the results that you think it is!

Although I’m glad that evaluation is included in the list, I think there’s a danger that evaluation and research are relegated to a one-time “check it off the list” task. In scaling a program to new communities or populations, organizations are bound to run into unexpected challenges. Elements and approaches that were beneficial in the initial program may be less useful or even detrimental in new situations. One example from my own experience was with an educational support program that had its roots in a dense urban core and was being scaled to other, smaller cities. The new site I was involved with was very different from the original program site in terms of geography, history, and demographics: for example, the original site was very ethnically diverse, while the families in the catchment area for the new site were primarily white and had lived in Canada for multiple generations. As a result, our new site did not have to do much work around English as a Second/Additional Language, but we did face unique challenges around parent and family engagement. Collecting and analyzing information about our neighbourhood – both from government sources like the Census and from the on-the-ground knowledge of teachers, service providers, and community members – helped us understand the context and respond appropriately.

Funds, resources, and organizational practices are important elements to consider when scaling up: at the same time, nonprofits need the capacity to recognize the changes that come with growth and adapt accordingly. One tool that can be helpful here is developmental evaluation, which, as Michael Quinn Patton’s handbook on the subject recognizes, can help organizations identify effective principles from earlier work and determine when it’s better to adapt to local conditions rather than adhere to acontextual “best practices”. By integrating relevant and timely data collection and sense-making into the process, developmental evaluation can help nonprofits learn more about the new situations they are entering, avoid potential pitfalls, and successfully scale.

What else would you add to the checklist?

AEA Conference – Day 2

Another great day, including meeting a group of awesome people around the topic of community development and cities (I’m an urban nerd at heart!). There was lots of talk today about how to introduce and implement developmental evaluation practices, which can be difficult since the whole point of the field is to eschew a “one size fits all” approach: the emphasis, instead, is on critical inquiry, ongoing attention to relationships, and making sense of what the data means, rather than a narrow reliance on specific models or methods.

One insight that came to me today relates to capacity building. My overarching aim for Strong Roots is to help non-profit organizations build the capacity to make a difference in the world (it even says so on the front page of this site!). That approach can easily lead to a focus on accessing concrete resources, with money and volunteer time being obvious examples, but it’s just as important for the organization to have the capacity to adapt to rapidly changing and complex circumstances. Possessing the skills and knowledge to capture information about program participants, the external context, and internal functioning is crucial, as is the ability to make sense of that data and decide how to act on it. A developmental evaluator can help collect and analyze data along the way and, more importantly, act as that “critical friend” who points out the unstated assumptions and values at play and helps lead discussions on the potential impact of decisions. Workshop presenter Michael Quinn Patton referenced a quote from General Robert E. Lee – “I am often surprised, but I am never taken by surprise” – and if I can help an organization learn to navigate all the unanticipated consequences and outcomes that are inherent in working with people and social systems so that they are never caught unprepared, I would say that my aim of building capacity has been met!

Tonight I’m meeting a friend from high school whom I haven’t seen in years, and tomorrow (starting bright and early!) is a one-day workshop on participatory methods in evaluation, followed by the start of the conference proper. Until then!

AEA Conference – Day 1

It’s the end of day 1 at the American Evaluation Association conference, and so far it’s been a great experience! In addition to the content of the workshop itself (more on that below), I had the opportunity to chat with people from academia, research institutions, government, and non-profits doing work in an array of fields. The breadth of experience that just a handful of people around a table represented was amazing, as were the friendliness and sense of connection – something I haven’t felt at conferences in other disciplines or fields. I definitely look forward to connecting with more attendees during the coming days!

The workshop with Michael Quinn Patton on developmental evaluation has provided many insights – over 1,500 words in my notes file from today, including asides to myself on ideas to share through the blog or that colleagues may find especially relevant to their situations. One that I want to share right now revolves around objectives and outcomes. Evaluation has traditionally followed a linear approach, with specific and measurable outcomes defined before a program starts and then used to determine whether the program succeeded (if you’ve seen or created a logic model, you know what I’m talking about!). However, innovators tackling complex issues may not be able to articulate in advance what “success” is, even though they will know it when they see it. The job of the developmental evaluator is not to force innovators to prematurely define success, but to help them work through the questions and decision points, articulate the reasoning behind the approaches they take, and generally tell the story of the successes (and failures!) of the initiative. In today’s business world, no venture would rigidly follow a five-year plan (or even a one-year plan), as circumstances change too rapidly and require the ability to adapt: yet most evaluation plans assume that the outcomes we choose today will still be important once the program has run its course. Patton cited the quote “No battle plan survives first contact with the enemy”, and that’s equally true of a program or intervention that works with the complexity of people and communities.

That’s an extremely brief summary of one insight: there’s a lot more I could share from today, but I’m going to head out soon for dinner with a colleague of mine from Kingston. More to come tomorrow!

Getting to Maybe – Stand Still

The second chapter of Getting to Maybe, somewhat confusingly sharing the title of the book itself, focuses on how social innovators get started. For many, there is a sense of calling or of having reached a personal tipping point. There’s a realization that some aspect of the current reality is not merely a problem, but that it is inherently unfair, wrong, unjust; a lack of action is no longer an option. Fortunately, by speaking out and taking small steps to change the situation, these change agents often find that they are not alone. Not only do they find allies, but the system itself, which previously appeared fixed and unyielding, suddenly seems ripe for change. The dichotomy between the heroic individual who single-handedly changes everything and historical inevitability breaks down: “[social innovators’] responses both epitomized and provoked a new pattern of interactions”.
