Some of our regular readers will be familiar with the OECD’s Development Assistance Committee (DAC) – a group that brings together some of the largest providers of international development assistance. For several years, IEG has been part of an evaluation network under the DAC that meets regularly to exchange information and experience, and to foster cooperation among DAC member organizations.

At the last DAC EvalNet meeting, we discussed a synthesis of evaluations in refugee contexts, which the secretariat had prepared.

I was thrilled for two reasons: first, synthesizing evaluations helps us identify common lessons and patterns that point to deeper, systemic issues that need resolving; and, second, in this case the synthesis findings had been presented to a policy-making body to inform its deliberations on new policy guidance for responses to refugee crises. This guidance will now be rolled out, vetted, and adapted at the operational level. What a great example of making sure evaluation evidence is deliberately integrated into forward-looking policies, where it can hopefully influence future decisions and practices.

However, what was frustrating was how many of the findings, lessons, and recommendations repeated earlier ones. Having been in evaluation for more than 30 years, I find this "broken record" disconcerting. The reaction we often get is that we are not saying anything new; my response frequently has to be that it is the mistakes being needlessly repeated that require evaluators to keep flagging them. The messages will "go away" once learning has taken place.

But that does not absolve evaluators from making an effort to stimulate learning. At IEG, we have a whole range of activities aimed at closing the learning loop:

  • Similar to the example above, IEG's evaluation of the World Bank's procurement policy was timed to inform, and did influence, the new policy;
     
  • the internal review processes of the World Bank Group, and the subsequent discussion at the Board or its committees, ensure that we draw on our evaluation evidence to inform the formulation of policies and strategies;
     
  • we are experimenting with deep dives into our recommendations and Management Action Records for our major evaluations to understand what we have recommended collectively across a number of evaluations, what has been done, and where the frontiers lie for greater learning and change;
     
  • for country strategies, IEG has contributed to the WBG training courses with modules on designing results frameworks and on selectivity. In addition, we are piloting information packages that are provided to new country managers/directors to help them get up to speed on past experience in their country of assignment;
     
  • for projects, where we see great opportunities to improve quality at entry/design, IEG also teaches in mandatory World Bank and IFC training courses and provides training on preparing completion reports at exit. In addition, a series of learning engagements focuses on enhancing the understanding of outcomes and associated indicators. Together with the Knowledge Directors of the WB and the IFC, we have incorporated relevant IEG work into the information packages task team leaders receive when they start a new project;
     
  • IEG's Knowledge, Learning, and Communications Department is constantly exploring ways to make evaluation knowledge accessible, through a revamped website, social media, and information packaged in easily digestible formats.

Will all of this be enough for us to eventually learn and change behaviors?

As shown in our evaluations of Learning in World Bank Operations and of the World Bank Group's self-evaluation systems, more will be needed to create an environment that is safe for dealing with mistakes, that promotes learning, and that embraces adaptive management and change.

Blog image inspired by The adaptive management cycle for the Tasmanian Wilderness World Heritage Area by Alan Diduck.

Comments

Submitted by Michée SAGARA on Wed, 01/24/2018 - 12:49

Thanks for sharing this post. Learning should be stimulated at great institutions such as the World Bank, where theories can be developed, but what is difficult is putting learning into practice. What involvement should the field have in learning?

Submitted by Buenaventura M… on Thu, 01/25/2018 - 19:43

Thank you very much for sharing your thoughts on this. I strongly believe that evaluators (not only of World Bank projects, but even of smaller country-based evaluations) keep repeating recommendations, and in fact keep repeating evaluation designs as well. The comment that "...we don't see anything new" is only a half-truth. Many evaluation designs are focused on the DAC framework of efficiency, effectiveness, relevance, impact, and sustainability, and we frame our recommendations within that framework. As a result, we often repeat the same ideas on how to manage projects or how to deal with project issues in the covered communities. That in itself is not a problem.

The problem with learning from lessons and experiences (even similar or identical ones) lies in how we integrate those ideas into re-entry plans at the community level. The participation of the people in the covered communities in the entire evaluation process has been given too little importance, because evaluators have a propensity to formulate re-entry plans for themselves or for donors to undertake. The people in the covered communities are often listeners rather than doers. I have witnessed several feedback and backstopping sessions/meetings, but they were intended for managers, supervisors, or funding institutions to act on, rather than for the covered communities. This molds "dependent mentalities" rather than empowerment.

Perhaps the emphasis should be on the real participation of the ultimate beneficiaries of the project, no matter how long it takes evaluators to realize this process. This is not a solution either, but it is a point of reflection for looking into how we design evaluations to put learning and change into the hands of the people we work with. Again, thanks for sharing.