On My Mind: "When Will We Ever Learn?"
Is it surprising that evaluations keep surfacing many of the same issues?
By: Caroline Heider
Some of our regular readers will be familiar with the OECD's Development Assistance Committee (DAC) – a group that brings together some of the largest providers of international development assistance. For several years, IEG has been part of an evaluation network under the DAC that meets regularly to exchange information and experiences and to foster cooperation among DAC member organizations.
At the last DAC EvalNet meeting, we discussed a synthesis of evaluations in refugee contexts, which the secretariat had prepared.
I was thrilled for two reasons: first, a synthesis of evaluations helps us identify common lessons and patterns that point to deeper, systemic issues that need resolving; and, second, in this case the synthesis findings had been presented to a policy-making body to inform its deliberations on new policy guidance for responses to refugee crises. This guidance will now be rolled out, vetted, and adapted at the operational level. What a great example of ensuring that evaluation evidence is deliberately integrated into forward-looking policies, hopefully influencing future decisions and practices.
However, what was frustrating was how many of the findings, lessons, and recommendations repeated earlier ones. Having been in evaluation for more than 30 years, I find the "broken record" disconcerting. The reaction we often get is that we are not saying anything new; my response frequently has to be that it is the mistakes being unnecessarily repeated that make evaluators keep flagging them. The messages will "go away" once learning has taken place.
But that does not absolve evaluators from making an effort to stimulate learning. At IEG, we have a whole range of activities to try to close the learning loop:
Will all of this be sufficient for us to eventually learn and change behaviors?
As shown in our evaluations of Learning in World Bank Operations and of the World Bank Group's self-evaluation systems, more will be needed to create an environment that is safe for dealing with mistakes, that promotes learning, and that embraces adaptive management and change.
Comments
Thanks for sharing this post. Learning should be stimulated at great institutions such as the World Bank, where theories can be developed, but what is difficult is the practicability of learning. What involvement should the field have in learning?
Thank you very much for sharing your thoughts on this. I strongly believe that evaluators (not only for World Bank projects, but even for smaller country-based evaluations) keep repeating recommendations, and in fact keep repeating evaluation designs as well. The comment that "...we don't see anything new" is only a half-truth. Many evaluation designs are focused on the DAC framework of efficiency, effectiveness, relevance, impact, and sustainability, and with that we frame our recommendations within such a framework. As a result, we often repeat the same ideas on how to manage projects or how to deal with project issues in the covered communities. That is not a problem in itself.

The problem with learning from lessons and experiences (even if they are similar or the same ideas) lies in how we integrate such ideas into the re-entry plans at the community level. The participation of the people in the covered communities in the entire evaluation process has been given less importance, because evaluators have a propensity to formulate re-entry plans for themselves or for donors to undertake. The people in the covered communities are often listeners rather than doers. I have witnessed several feedback and backstopping sessions and meetings, but they are intended for the managers, supervisors, or funding institutions to act on, rather than the covered communities. This molds "dependent mentalities" rather than empowerment.

Perhaps the emphasis should be on the real participation of the ultimate beneficiaries of the project, no matter how long it takes evaluators to realize this process. This is not a solution either, but it is a point of reflection for looking into how our designs put learning and change into the hands of the people we work with. Again, thanks for sharing.