Seeds for Thought: Negative Results

Whether you are the evaluator of a program or someone associated with the initiative being evaluated (the evaluatee?), it’s probably safe to say that everyone hopes for good results: proof that all the planning, effort, and resources that went into the program made a difference. Sadly, that doesn’t always happen, leaving the evaluator to figure out how to present the information accurately and constructively.

Susan Lilley recently compiled a ten-point list (PDF) on this topic, based on a discussion from the American Evaluation Association’s listserv (hat tip to the Better Evaluation blog, which provides more context on the discussion and some commentary on the points). To my eye, all the points are great – in particular, #4 (“Build in time for course correction”) and #8 (“Present results in terms of lessons learned”) provide a strong rationale for a developmental evaluation approach that understands from the get-go that some components of any given project will need to be tweaked, or changed wholesale, in response to changing circumstances. What I appreciate most about this list, however, is the very first point: “Use a participatory approach from the start”. Engaging stakeholders and working as a partner with clients are more than “feel-good” tactics: they help create a sense of ownership of the results and build the crucial relationship that allows for sharing both good and bad news, as well as for frank discussion of what the results mean for future work.

What tip do you think is most crucial for sharing negative results? If you have been on the giving or receiving end of bad evaluation news, what helped turn the episode into something constructive? Share below!

AEA Conference – Day 4

I’ll admit, today my energy and enthusiasm at the conference were a bit lower than on the other days. There are likely a couple of reasons for that, ranging from the somewhat earlier start times today and yesterday compared to the first two days, to the accumulated effects of being away from home. However, I think the main reason lies in the shift from the pre-conference workshops to the main conference itself. The workshops were longer affairs (minimum half a day, maximum two days); the conference sessions, in contrast, run from 45 to 90 minutes and are usually subdivided into separate topics and speakers who may have as little as 15 minutes each. Exposure to a wide range of presenters and ideas is perhaps the hallmark of any conference, but information overload is a real danger. Likewise, getting to know other attendees beyond a cursory “Where are you from and what do you do?” is much easier in a workshop, where you interact with them repeatedly over a day or two. Finally, the transition from pre-conference workshops to the conference proper entailed a change of venue from a hotel’s meeting rooms to a rather large convention centre, which perfectly replicates the feeling of time- and placelessness of a large airport, disorienting effects included. If I weren’t flying back to Saskatoon tonight, I would probably have paced myself a bit more over the past few days, but in any case I’m glad the workshops came first.

That being said, I’m thankful to have attended several useful sessions today, most of them on the theme of participatory evaluation and, more generally, moving data collection and presentation beyond the standard surveys and graphs. In fact, one session led by a father-and-daughter team (he’s a graphic designer, she’s an evaluation researcher) raised considerations about how to present information in a way that clearly communicates your ideas (no Comic Sans font, please!) without resorting to bar charts or line graphs. Another session introduced the World Cafe model of small-group discussion, which is structured to provide a safe space to generate and share ideas, and the final session I attended highlighted different “adhesive formats” for data collection – think dots, stickers, and labels instead of checkboxes and fill-in-the-blank questionnaires.

Although today marks my last day at the conference, I have many more ideas and resources to share than I have been able to record in the last few posts. I probably won’t maintain the daily writing schedule once I get back, but there will be at least one or two more posts covering additional topics from this experience. I also plan to post, under the Resources section, brief summaries of the participatory methods I learned about, along with links to websites with more information.

All in all, I had a great time at the conference, and while next year’s event is a bit further afield in Washington (DC), I would seriously consider going again!

AEA Conference – Day 3

Today’s workshop at the American Evaluation Association conference was on participatory evaluation, which covers a range of methods and approaches for use with populations that could be categorized as “vulnerable”: those living in poverty, not speaking or understanding the dominant language, identifying as LGBTQ, Aboriginal / First Nations, or a racialized minority, living in an area recently affected by a natural or man-made disaster, having a disability or illness… The workshop provided examples of over 40 such identities, and we in the audience were able to contribute many more to the list! A participatory approach moves beyond methods to incorporate an awareness of the evaluator’s attitudes and behaviours and an emphasis on sharing: rather than coming in as “the expert”, the evaluator focuses on facilitating discussion, respecting the experiences and knowledge of the participants, and forming an authentic partnership. My intention is always to enter relationships (be they professional or personal) in a way that honours our diverse experiences and perspectives, but it was a good reminder that there needs to be an explicit recognition of where we come from and how we can best work together.

Throughout the day, I noticed ways in which developmental evaluation and participatory approaches overlap or complement each other. In both cases, there is a recognition of, and sensitivity to, lived realities: what I know as “the evaluator”, what the standardized approach or model describes and prescribes, does not necessarily match what is on the ground. One of the presenters today talked about previous approaches to international aid research, which involved an external consultant coming into the context, “mining” information, and going back home to analyze it with no consideration of how the participants would make sense of it. I suggested that a fitting metaphor for participatory (and developmental!) evaluation would be “farming” or “cultivating” the data – an apt comparison, as farmers need to be constantly aware of local conditions and work together to ensure the success of the harvest and the long-term sustainability of the land. As a bonus, that analogy fits well with my Strong Roots brand!

The end of this workshop marked the end of the professional development series for this conference – the rest of the conference consists of smaller-scale seminars, presentations, and roundtables in a more traditionally academic style. I’m only staying for one more day, and I look forward to seeing what more I can learn in that time!

AEA Conference – Day 2

Another great day, including meeting a group of awesome people around the topic of community development and cities (I’m an urban nerd at heart!). There was lots of talk today about how to introduce and implement developmental evaluation practices, which can be difficult, as the whole point of the field is to eschew the “one size fits all” approach: the emphasis, instead, is on critical inquiry and ongoing attention to relationships and what the data mean, rather than a narrow approach based on specific models or methods.

One insight that came to me today relates to capacity building. My overarching aim for Strong Roots is to help non-profit organizations build the capacity to make a difference in the world (it even says so on the front page of this site!). That approach can easily lead to a focus on accessing concrete resources, money and volunteer time being obvious examples, but it’s just as important for an organization to have the capacity to adapt to rapidly changing and complex circumstances. Possessing the skills and knowledge to capture information about program participants, the external context, and internal functioning is crucial, as is the ability to make sense of that data and decide how to act on it. A developmental evaluator can help collect and analyze data along the way and, more importantly, act as the “critical friend” who points out the unstated assumptions and values at play and helps lead discussions on the potential impact of decisions. Workshop presenter Michael Quinn Patton referenced a quote from General Robert E. Lee – “I am often surprised, but I am never taken by surprise” – and if I can help an organization learn to navigate the unanticipated consequences and outcomes that are inherent in working with people and social systems, so that it is never caught unprepared, I would say that my aim of building capacity has been met!

Tonight I’m meeting a friend from high school whom I haven’t seen in years, and tomorrow (starting bright and early!) is a one-day workshop on participatory methods in evaluation, followed by the start of the conference proper. Until then!

AEA Conference – Day 1

It’s the end of day 1 at the American Evaluation Association conference, and so far it’s been a great experience! In addition to the content of the workshop itself (more on that below), I had the opportunity to chat with people from academia, research institutions, government, and non-profits doing work in an array of fields. The breadth of experience that just a handful of people around a table represented was amazing, as were the friendliness and sense of connection that I haven’t felt at conferences in other disciplines or fields. I definitely look forward to connecting with more attendees over the coming days!

The workshop with Michael Quinn Patton on developmental evaluation has provided many insights – over 1,500 words in my notes file from today, including asides to myself on ideas to share through the blog or that colleagues may find especially relevant to their situations. One that I want to share right now revolves around objectives and outcomes. Evaluation has traditionally taken a linear approach, with specific and measurable outcomes defined before a program starts and then used to determine whether the program succeeded (if you’ve seen or created a logic model, you know what I’m talking about!). However, innovators tackling complex issues may not be able to articulate what “success” is, but they will know it when they see it. The job of the developmental evaluator is not to prematurely force innovators to pick what their success is, but to help them work through the questions and decision points, articulate the reasoning behind the approaches they take, and generally tell the story of the successes (and failures!) of the initiative. In today’s business world, no venture would rigidly follow a five-year plan (or even a one-year plan), as circumstances change too rapidly and require the ability to adapt; yet most evaluation plans assume that the outcomes we choose today will still be important once the program has run its course. Patton cited the quote “No battle plan survives first contact with the enemy”, and that’s equally true of a program or intervention that works with the complexity of people and communities.

That’s an extremely brief summary of one insight: there’s a lot more I could share from today, but I’m going to head out soon for dinner with a colleague of mine from Kingston. More to come tomorrow!

En Route

Right now, I’m waiting to board my flight to Minneapolis for the American Evaluation Association’s annual conference. I’ve been to conferences before, but this is the first time I’m attending one in the capacity of a self-employed consultant, rather than as a student or the representative of an organization. I don’t know how this different role will affect my perspective on the event, especially one that by its nature is more oriented towards independent practitioners than the average academic affair. I’m already focusing much more on professional development workshops (3 days) than on presentations and seminars (1 day and a bit), suggesting that my interest leans more towards the immediate and pragmatic “how” than the abstract and contemplative “why” of evaluation.

On that note, you may have noticed that my website does not feature evaluation prominently, nor do I describe myself as an “evaluator”. As with my previous discussion about calling myself a “consultant”, identifying with the field of evaluation carries certain connotations and assumptions, especially in a climate where money is tight and funders are increasingly asking recipients to identify program outcomes and demonstrate that their initiatives have met certain goals. Ideally, evaluation should provide useful feedback that helps programs grow and evolve in response to changing circumstances, but to non-profit organizations, it can seem more like a standardized test administered by someone who has little (if any) knowledge of the local context and yet has the power to grant life or death to a program.

Thankfully, there are people in the field who prefer the former approach to evaluation, and I’ll be attending two workshops that fit within this theme. The first is on developmental evaluation, an approach that encourages evaluators to work hand in hand with the front-line staff who develop and deliver programs in areas of social complexity. Rather than pronouncing judgment at the end of an arbitrary trial period, developmental evaluators provide ongoing feedback and help program teams integrate new information about participants and the context so that the program can adapt to changing circumstances. The second workshop is on participatory approaches to evaluation: instead of conducting research “on” participants, particularly those who are traditionally marginalized and left out of the research process (e.g. racialized minorities, new immigrants, those with low literacy skills), the focus is on working “with” and “for” these individuals to ensure their voices are heard and their lived experiences are incorporated into program development and evaluation.

Whether these two workshops and the general theme of the conference itself (“Evaluation in Complex Ecologies: Relationships, Responsibilities, Relevance”) will lead me to identify more fully with this field is something to be determined over the next few days. As mentioned in my previous post, I hope to blog daily and look forward to sharing my insights from this experience!

Two Quick Announcements

Two quick announcements. First, the site has undergone a minor reorganization to make it easier to navigate, with Strong Roots’ areas of focus now listed under Activities and a new Resources section added. The latter is currently home to a growing list of local grant opportunities, building on my last post on the Funder’s Forum (as a result, there won’t be a part two to that post). I’ll keep adding to that list as I become aware of more opportunities.

Second, I’m heading off Sunday to Minneapolis for the better part of the week to attend the American Evaluation Association conference. In particular, I’m looking forward to two workshops: one on developmental evaluation and the other on using participatory methods in evaluation that do a better job of including people who are usually marginalized. I plan to post here regularly during the trip about the experience and any ideas I think will be useful for readers of this blog – hopefully every day, assuming a reliable internet connection. Keep an eye on this space!