Having been in evaluation for more than 30 years, I find the 'broken record' disconcerting. The reaction we often get is that we are not saying anything new; my response frequently has to be that it is the mistakes being unnecessarily repeated that oblige evaluators to keep flagging them. The messages will 'go away' once learning has taken place.
The World Bank began doing self-evaluations of completed loan projects 40 years ago because President McNamara wanted to know the results of the Bank's investments. In a previous blog, co-authored with Caroline Heider, we described why little organizational learning flows from these systems.
Why is there not more organizational learning from self-evaluation? We can list numerous proximate reasons: self-evaluations are done too late, their lessons are of the wrong type, the processes of assigning and validating ratings distract from real learning, and they are sometimes based on weak evidence. But we submit that the ultimate cause is that learning has taken a backseat to accountability.
In June this year, IEG won an award for having the best mentoring program in the World Bank Group.
I am so proud of this recognition because it validates the progress we have made since the inception of IEG's mentoring program in 2014.
Hosted by the Independent Evaluation Group and the World Bank Group’s Vice Presidency of Learning, Leadership and Innovation
Can the World Bank get better at systematically leveraging the best evidence, from data and past experience, when shaping new development projects and programs?
IEG LIVE: [How] Does the World Bank Group Learn...from its Operations? Lessons from the Fifth Discipline
In some instances, commissioning one standalone evaluation is all that's needed. But increasingly, organizations across the spectrum are following the long-standing practice of the multilateral development banks: they institutionalize evaluation functions. As institutions embark on establishing evaluation functions, they need to ask themselves what success looks like: what difference does the evaluation function make, and how can they get the most value for the money spent on institutionalizing it?
It wouldn't be the first or second time that we take this route. In the 1990s we saw extensive and heated discussions about whether quantitative methods trumped qualitative ones, or the other way around. That phase was followed by a decade of debates over whether randomized controlled trials were the only "true" evaluation of results, a claim countered by evaluators committed to other evaluation methods that are more participatory and capture qualitative evidence.
This paper highlights the conundrum the World Bank faces in how it uses learning and evidence to drive decision-making. It provides evidence for the existence of the conundrum, explores the reasons for its existence, points to specific instances where it has been overcome, and details possible directions for a wide-scale solution.