Consulting on the “Big 5” Evaluation Criteria - What got us here?
The Rethinking Evaluation blog series, a butterfly effect, and a global consultation
Last year, we published our Rethinking Evaluation blog series, which generated a lot of reader interest. Over the past 18 months, people have asked me why I wrote the series. In particular, did I have in mind that it would evolve into the global consultation process recently commissioned by the DAC Network on Development Evaluation?
To be honest, the reactions to my blogs were a surprise. When developing them, we did not expect they would attract such wide readership and discussion—online and among evaluators at conferences and meetings. It was rewarding to see such uptake.
Influencing Evaluation Practice. The “Big 5” have been instrumental in bringing evaluation practitioners (at least in the development community) together, giving us a common understanding of what matters and a shared language to identify with. This is a strength that needs to be preserved, especially as the profession and its practice evolve, but it also requires that the criteria stay up to date and reflect our shared experiences. Otherwise, practices will evolve in disparate ways, leading to fragmentation rather than unification of a global evaluation community.
Changing Context. As we gained experience in using the criteria, there were a few points along my career path when I wondered how we might improve them. This is even more true today. While the world has always been complex, increased inter-connectivity means that multiple effects are transmitted faster through a complex system. Ripple effects like the famous “butterfly effect” expand faster, wider, and deeper when systems are closely interlinked rather than isolated from one another. This means that results chains are also more complex and require commensurate ways to evaluate them.
Incentives. I realized that, as evaluators, we have under-estimated the influence we have. The focus of our evaluations—often determined by evaluation criteria—draws the attention of decision-makers and program implementers to certain things. Advances in behavioral economics have helped us understand how incentives work and can be managed. For us, this means thinking through how evaluation criteria incentivize behaviors and making deliberate choices to ensure the criteria help (rather than hinder) development outcomes. Likewise, evaluation criteria incentivize the practices and methods of evaluators.
My point of departure in reviewing the criteria was not to start all over again. We did not want to lose the advances we had made.
Instead, it was a matter of reflecting on the three areas outlined above: how the criteria influence evaluation practice, how the context has changed, and what incentives they create. The discussions in which I have participated have reflected these same three categories.
The blogs deliberately did not propose solutions but rather opened the door for consultation. The OECD/DAC has provided the platform for such dialogue in development evaluation in the past and has agreed to continue doing so. The many international evaluation conferences and networks can serve to stimulate an exchange of views. In addition, we will run an online consultation to reach those who are not able to participate in person, so that they can make their voices heard.
This global process will no doubt be challenging. But it is bound to make evaluation practice stronger and may contribute to developing it into a profession with shared standards that go beyond evaluation criteria.
Did you know that OECD/DAC is running a global consultation with the goal of updating the DAC evaluation criteria? Click here to learn how you can contribute to the process. Or go directly to the consultation page.