After more than 30 years in evaluation, the 'broken record' effect is disconcerting. The reaction we often get is that we are not saying anything new; my response frequently has to be that it is the mistakes being unnecessarily repeated that force evaluators to keep flagging them. The messages will 'go away' once learning has taken place.
Why is there not more organizational learning from self-evaluation? We can list numerous proximate reasons: self-evaluations are done too late, their lessons are of the wrong type, the processes of assigning and validating ratings distract from real learning, and they sometimes rest on weak evidence. But we submit that the ultimate cause is that learning has taken a back seat to accountability.
Hosted by the Independent Evaluation Group and the World Bank Group’s Vice Presidency of Learning, Leadership and Innovation
Can the World Bank get better at systematically leveraging the best evidence – from data and past experience – when shaping new development projects and programs?
It would not be the first or second time that we have taken this route. In the 1990s we saw extensive and heated discussions about whether quantitative methods trumped qualitative ones, or the other way round. That phase was followed by a decade of debate over whether randomized controlled trials were the only "true" evaluation of results, a claim countered by evaluators committed to other methods that are more participatory and capture qualitative evidence.
In 1996, the World Bank announced that it was becoming a knowledge bank to ensure high-quality operations and to enhance the capacity of its clients. The Bank Group’s steps to becoming a premier knowledge hub continue today.