Are evaluators a bunch of “nay-sayers” throwing cold water on the public-expenditure party or are they the A-listers without whom the party is, well, just not complete?

In recent years, the attitudes toward evaluation and evaluators among the party throwers – those who plan and design (policymakers, elected officials) and those who spend (program designers, implementers) – have changed considerably.  There is now a growing list of countries that are putting evaluators, and evaluation, on their list of invited guests.

In Latin America, Mexico and Chile both have established traditions of using evaluation. India and South Africa have also made considerable strides in that direction. The role of agencies such as Mexico’s CONEVAL and India’s recently established Independent Evaluation Organization is not just to confirm whether or not public programs achieved their objectives. It is also to make significant contributions to generating relevant and practical knowledge about meeting challenging public sector and, more broadly, development goals.

So, how does the evaluation profession need to continue to evolve to become the sought-after guest who takes the party to interesting and engaging levels?

The Center for Learning on Evaluation and Results (CLEAR) in Latin America has found that the evaluation profession needs to understand the development and contextual challenges intimately and tailor evaluation approaches to particular situations and audiences so that it can better influence designers and implementers.

Questions and strategies

For example, given the variation in the development of M&E approaches and institutions in Latin America, evaluators often have to take a more proactive role in defining the evaluation questions and identifying the strategic considerations of a given project. This has been the case in the evaluation of conditional cash transfers in the region. While many countries have adopted and evaluated these programs, evaluation questions, strategies and policy implications have varied greatly.

  • In the case of Mexico’s Progresa-Oportunidades program, for example, a randomized controlled trial was instrumental in securing a high stakes political decision to shift from general subsidies to targeted transfers to develop human capital.
     
  • By contrast, in Brazil’s Bolsa Familia, the evaluation strategy has played a secondary role in political decision-making. It has focused instead on service-delivery and implementation improvements.

For evaluators, the challenge is one of developing a clear understanding of the socio-political and development contexts. Evaluators must manage trade-offs between developing a perfect methodological approach and the timeliness and pertinence of the evaluation results, with a focus on practical utilization.

Bringing new voices to the party

The approaches to evaluation also need to be expanded to include a variety of research methodologies that can respond to current and future information constraints and accountability demands. In the absence of strong information and monitoring systems, evaluators often have to tailor empirical strategies and articulate evaluation questions accordingly. At the same time, evaluators need to creatively develop a wide range of empirical approximations and comparisons in order to capture the type of information and nuance that is demanded by stakeholders and citizens.

In some cases, this may take the form of qualitative strategies to capture perceptions of program participants across different cultural settings (e.g. indigenous populations, immigrants) or developing stronger implementation analysis and process evaluations before even thinking about tackling impact evaluation questions.

In the last decade, the evaluation profession in Latin America has been pushed to innovate and widen methodological approaches to respond to real-life evaluation settings and demands. Evaluation is more highly valued among policymakers and citizens when it is responsive to these contexts.

The demand for CLEAR’s work shows that we still lack sufficient knowledge of how to effectively connect the evaluation profession with policymaking and program improvement. How can we integrate evaluation into organizational learning so that the connections between evaluations and program and policy decisions are seamless and used for furthering development goals? There are national, institutional, and sectoral considerations that shape the environment for M&E in ways that are not easily mapped.

Does that mean evaluators must become sector specialists and researchers? Yes and no. An individual evaluator may not have all of the desired qualities, but a particular evaluation, and the profession as a whole, must draw on a broad base of expertise.

The party becomes more interesting when the guests actually bring new perspectives while being able to speak the language of others around them. This is beginning to happen now in many places, but greater understanding and work are needed. Institutions such as the well-established CONEVAL and the newly created IEO are at the frontlines of these challenges and will blaze the way for the profession.