Guest blogger Dr. Alison Evans comments on IEG's RAP 2014 and notes a similarity to previous development effectiveness reports produced during her tenure with the Bank.

You know that strange and sometimes unsettling feeling of déjà vu? Recently I experienced that sensation while preparing for a panel discussion on IEG's 2014 Results and Performance report (RAP 2014). On reading the report, I was struck by how familiar the storyline felt.

As a World Bank staff member in the late 1990s, I worked for three years in Operations Evaluation (before it became IEG). During that time, I co-led the 1997 Annual Review of Development Effectiveness (ARDE) – the report that preceded the RAP. In that report, we focused, as the RAP does, on the implications of an increasingly demanding development agenda and on progress with internal reforms to reposition the Bank and improve development effectiveness.

The familiar ring to the overarching narrative of RAP 2014 turned into déjà vu when I read and compared the following statements:

 "The analysis carried out on the determinants of project success points unequivocally to the importance of quality at entry - identification and appraisal - in explaining project and portfolio performance. The quality of Bank supervision is another key determinant. Continuing efforts to improve quality at entry, to better assess and manage risk, and to improve the quality of project supervision are critical, as are continuing efforts to improve the quality and monitorability of the CAS" (ARDE 97, p54)

"€œFor both the Bank and IFC, poor work quality was driven mainly by inadequate quality at entry, underscoring the importance of getting things right from the outset. For the Bank, applying past lessons at entry, effective risk mitigation, sound monitoring and evaluation (M&E) design, and appropriate objectives and results frameworks are powerful attributes of well-designed projects” (RAP 2014 ix)"

Despite important differences between the two reports, it is striking that 16 years later the headline analysis is so similar. Why? There are a number of possibilities:

One is that the goalposts for delivering development assistance have shifted: "delivery" is far more complex and risky now than it was in 1997. If so, the headlines may look the same, but the target has moved.

Another possibility is that the Bank is not coming to grips with the behaviors and incentives that drive better performance. Internal reforms have repeatedly addressed the Bank's business model. Is the consistency in the analysis a sign that, deep down, incentives haven't fundamentally changed? In ARDE 97, we pointed to the need for improvements in learning, in approaches to risk management, and in performance monitoring and measurement; virtually the same themes as RAP 2014.

A third possibility is that the metrics informing assessments of quality at entry (QAE) and quality of supervision (QOS) no longer capture the most important dimensions of Bank performance. Is there too much focus on the performance of individual projects rather than on performance across a bundle of complementary investments? Has the drive for performance measurement obscured the importance of trial and error? Are we assuming (as per the WDR 2015) that we know more about what 'good performance' looks like than we actually do?

Some or all of these possibilities may be off the mark, but the questions and issues raised by the audience at IEG's event suggest they are not so far off. There is a clear stream of concern about the Bank as a learning organization:

"Isn't "learning from your predecessor" or previous TTLs a no brainer and yet we don't do it systematically across the WBG. What is the barrier?" 

"Why is learning from ICRs seen as an optional 'extra'?"€

Some audience members also expressed concern over how the Bank can work in a more adaptive way when the pressures to lend are as acute as ever.

"How can we incentivize teams to collect evidence and adapt continuously during implementation when the push for lending and disbursement has only grown?"

These concerns marry well with the findings of IEG's evaluation 'How the Bank Learns' (2014). That report highlights the need for further progress in 'learning from lending and feeding learning back into lending' and in developing the culture necessary to experiment and learn rapidly from experience – whether that experience refers to what has worked or what hasn't.

The Bank is not alone in facing this challenge (note two recent reviews by the UK's Independent Commission for Aid Impact (ICAI), "How DFID Learns" and "DFID's Approach to Delivering Impact"). Scholars and practitioners are increasingly emphasizing the need for development actors to give more space to adaptive and iterative ways of working (for example, "Escaping Capability Traps through Problem-Driven Iterative Adaptation" (2012); "Complexity, Adaptation, and Results," Barder (2012); Aid on the Edge of Chaos, Ramalingam (2013)). The WDR 2015 calls for a more R&D-centered approach that gives priority to adequate diagnosis and experimental implementation and to challenging in-built biases.

What these voices have in common is the need to rethink inherited systems of decision making in favor of more systemic approaches that use performance measurement and management, not as a 'stick,' but as a tool for feedback and course-correction. This requires trust and flexibility. As IEG prepares for its next RAP report in 2015, will it find evidence of a Bank seeking to make the most of its agility rather than its heft?