Evaluation Scope and Evidence Base

The report covers self-evaluation of World Bank operations (investments, policy-based support, knowledge and advisory services, impact evaluations, trust funds, and partnerships); International Finance Corporation (IFC) investment and advisory services; country programs; and, very selectively, Multilateral Investment Guarantee Agency (MIGA) guarantees.

The evaluation relies on diverse data sources and methodological approaches geared toward assessing complex systems. Data collection and analysis aimed to generate perspectives on the architecture and history of the systems, review their specific constituent parts, and analyze behaviors, motivations, and incentives.

The team conducted semi-structured interviews with 110 Bank Group managers and staff and 14 interviews with staff in partner agencies. Focus group discussions and game-enabled workshops provided additional data for the evaluation. Background studies, including quantitative and content analyses of project performance data, a review of the academic and evaluation literature, and institutional benchmarking, formed the backbone of the analysis.

Findings

The self-evaluation systems mesh well with the independent evaluation systems to which they provide information, and they have been emulated and adapted by other development agencies.

However, the self-evaluation systems focus primarily on results reporting and accountability needs and do not provide the information necessary to help the Bank Group transform into a “Solutions Bank” or generate the learning needed to enhance performance. Information generated through the systems is not regularly mined for knowledge and learning except by the Independent Evaluation Group (IEG), and its use for project and portfolio performance management can be improved.

The systems produce corporate results measures, but they also need to deliver value to staff and line management and to the primary beneficiaries of the “Solutions Bank”: client governments, implementing agencies, firms, beneficiaries, and citizens. Most staff do not view the self-evaluation systems as a source of timely, credible, and comprehensive information.

Incentives created inside and outside the systems, including through ratings and validation processes, are not conducive to high-quality self-evaluation. Staff engage with the systems with a compliance mindset, under which candor and thoughtful analysis of the drivers of results and failures suffer.

This evaluation identifies three broad causes of misaligned incentives:

  • Excessive focus on ratings
  • Attention to volume that overshadows attention to results
  • Low perceived value of the knowledge created

Recommendations

IEG offers five recommendations designed to address the causes of misaligned incentives identified by the evaluation:

  1. Reform the Implementation Completion and Results Report (ICR) system and its validation to make it more compatible with innovation and course corrections
  2. Help staff understand that project objectives pertaining to innovating, piloting, and testing are feasible and that projects with such objectives are rated appropriately, provided the project development objective and indicators are set in the right way
  3. Strengthen rewards and leadership signals at all levels of the organization to reinforce the importance of self-evaluation
  4. Formulate a more systematic approach to improving M&E quality
  5. Expand voluntary evaluations that respond to the learning needs of management and teams

(See chapter 5, Conclusions and Recommendations.)
