Learning from Evaluation: How Can We Stay at the Top of the Game?
Why is there not more organizational learning from self-evaluation in the World Bank Group?
By: Caroline Heider and Rasmus Heltberg
Top chess players spend more time analyzing their completed games than actually playing. They spend hours dissecting every weak move, mistake, and blunder they have made to figure out why it happened. They ask themselves tough questions: Did I miss an opportunity to launch an attack? Did I press my advantage too hard? Did I underestimate my opponent? Did I succumb to psychological pressure? Do I have a blind spot? They do this not because it is fun or easy (it is hard work and requires a lot of discipline) but because it is the only way to grow as a chess player and learn how to avoid similar mistakes in future games.
Managing for results has been official dogma in the WBG for the last 15-20 years, and systems are in place to track results from all projects and country strategies and display them in the corporate scorecards and the President’s delivery indicators. These systems draw on hundreds of self-evaluation reports written by World Bank Group staff and validated by IEG. The reports measure the results of our investments, assess how well we performed, and formulate lessons intended to help us learn. Writing these reports costs the WBG millions of dollars annually.
The design and operation of the systems adhere to relevant good practice standards, coverage is comprehensive, and many evaluation experts consider the Bank Group’s systems as good as or better than those in comparable organizations. The systems produce corporate results measures that are easy to report externally and to compare across time, contexts, and sectors.
In theory, this is the equivalent of the chess player analyzing past games for clues to the causes of weak moves. Yet in reality, organizational learning from these systems is disappointing, as we document in a new IEG evaluation, Behind the Mirror.
Sure, individual authors of self-evaluation reports often learn something from visiting the project and writing up their analysis (it would be strange otherwise), but little knowledge flows beyond the authors themselves. Business units rarely analyze completed self-evaluations or mine them for lessons. The WBG conducts a great deal of research and hosts seminars every day, yet almost none of this work draws on data from mandatory self-evaluations. Lessons rarely turn into revised policies, guidelines, or procedures. And IEG's evaluations point out the same weak spots and missed opportunities, year after year (see Alison Evans' blog here).
Why is there not more organizational learning from self-evaluation? We can list numerous proximate reasons: self-evaluations are done too late, their lessons are of the wrong type, the processes of assigning and validating ratings distract from real learning, and the underlying evidence is sometimes weak. But we submit that the ultimate cause is that learning has taken a backseat to accountability.
The systems’ focus on accountability and corporate reporting (generating ratings that can be aggregated in scorecards and the like) drives the shape, scope, timing, and content of reporting and limits the usefulness of the exercise for learning. If the self-evaluation systems had been set up primarily to serve learning, they would be more solution-oriented (how can we do better?), more selective (which projects offer the greatest learning opportunities?), more programmatic (are there synergies across activities and countries?), better attuned to unintended positive and negative consequences, and done sooner (the median time from approval to review of the Implementation Completion Report for Bank investment projects is nine years).
Lessons contained in self-evaluations rarely touch on internal organizational issues such as flaws in deliberative processes that led to approval of weak projects.
Parts of the system not focused on corporate reporting, such as impact evaluations and other voluntary self-evaluations, tend to be more valued by staff and managers as tools that can help increase effectiveness. Impact evaluations are not mandatory; they are undertaken selectively, backed by the necessary investments in monitoring, and generally seen as technically credible. This shows that when conditions are right, the World Bank Group has strong demand for evaluative information and the ability to supply it.
Operational units could tap into this intellectual energy more systematically.
The Bank and IFC already conduct various retrospectives aimed at learning. These could be scaled up to cover all WBG activities (investments, knowledge work, partnerships, and so on) in a given sector and country over a period of time, yielding a broader perspective on results and on whether different WBG engagements pull in the same direction.