Right now, I’m waiting to board my flight to Minneapolis for the American Evaluation Association’s annual conference. I’ve been to other conferences before, but this is the first time I’m attending one in the capacity of a self-employed consultant rather than as a student or the representative of an organization. I don’t know how this different role will affect my perspective on the event, especially one that by its nature is more oriented towards independent practitioners than the average academic affair. I’m already focusing much more on professional development workshops (3 days) than on presentations and seminars (1 day and a bit), which suggests that my interest leans towards the immediate and pragmatic “how” over the abstract and contemplative “why” of evaluation.
On that note, you may have noticed that my website does not mention evaluation prominently, nor do I describe myself as an “evaluator”. As with my previous discussion about calling myself a “consultant”, identifying with the field of evaluation carries certain connotations and assumptions, especially in a climate where money is tight and funders are increasingly asking recipients to identify program outcomes and demonstrate that their initiatives have met certain goals. Ideally, evaluation should provide useful feedback that helps programs grow and evolve in response to changing circumstances, but to non-profit organizations, it can seem more like a standardized test administered by someone who has little (if any) knowledge of the local context and yet has the power to grant life or death to a program.
Thankfully, there are people in the field who prefer the former approach to evaluation, and I’ll be attending two workshops that fit within this theme. The first is on developmental evaluation, an approach that encourages evaluators to work hand-in-hand with the front-line staff who develop and deliver programs in areas of social complexity. Rather than pronounce judgment at the end of an arbitrary trial period, developmental evaluators provide ongoing feedback and help program teams integrate new information about participants and the context so that the program can adapt to changing circumstances. The second workshop is on participatory approaches to evaluation: instead of conducting research “on” participants, particularly those who are traditionally marginalized and left out of the research process (e.g. racialized minorities, new immigrants, those with low literacy skills), the focus is on working “with” and “for” these individuals to ensure their voices are heard and their lived experiences incorporated into program development and evaluation.
Whether these two workshops and the general theme of the conference itself (“Evaluation in Complex Ecologies: Relationships, Responsibilities, Relevance”) will lead me to identify more fully with this field is something to be determined over the next few days. As mentioned in my previous post, I hope to blog daily and look forward to sharing my insights from this experience!