Meta-Evaluation of IEG Evaluations

Executive Summary

Since 2005, the Independent Evaluation Group (IEG) has been subject to independent external reviews. To support the next review, a meta-evaluation of IEG programmatic and corporate process evaluations was conducted in 2020–21 by independent experts. The purpose of the meta-evaluation was to (i) provide inputs on the quality and credibility of IEG’s evaluations for IEG’s upcoming independent external review and (ii) provide IEG’s leadership team an external perspective and suggestions on how to improve the quality and credibility of evaluations.

The assessment focused on the credibility of evaluations (excluding utility and independence). More specifically, it focused on aspects of credibility that could be gleaned from the reports and Approach Papers. The analysis was conducted in three phases. The first phase (inventory stage) mapped the rationale, scope, use of (innovative) methods, and several research design attributes of all 28 IEG evaluations published from fiscal year (FY)15 to FY19. In the second phase (assessment stage), an assessment framework was developed and applied to a stratified random sample of eight evaluations. The in-depth review assessed evaluations according to their scope and focus, reliability, validity (including construct, internal, external, and data analysis validity), and consistency. Finally, the analysis was supplemented with interviews with IEG team leaders and evaluation officers to obtain contextual information on the design and implementation of evaluations within IEG.

The meta-evaluation arrived at the following six major conclusions and associated suggestions for improvement. First, information presented on scope, rationale, and goals in the evaluation reports and Approach Papers was elaborate, relevant, and thorough. At the same time, the scope of some IEG evaluations tended to be overambitious and diluted. The meta-evaluation offers two suggestions for improvement in this area: (i) The use of portfolio analysis as a standard operational procedure should be reconsidered. (ii) Evaluators should refrain from formulating “bags of questions,” instead devoting more time to refining the focus of evaluations.

Second, IEG evaluations adequately defined concepts (though they did not always operationalize them). More recent evaluations systematically incorporated evidence from the literature and made adequate use of theories of change. However, the function of the theory of change was not always clearly articulated; its relationship to the empirical parts of the evaluative analysis could have been strengthened. The meta-evaluation offers three suggestions in this area: (i) Evaluations should more explicitly articulate the role theories of change play in data collection and analysis, assessing their relationship to relevant empirical work. (ii) Evaluations could be more precise about the content of their theories of change. (iii) Greater attention to operationalizing concepts into variables and measurement instruments could improve construct validity.

Third, clarity in evaluation design has improved in IEG evaluations over the past five years. The use of tools such as the evaluation design matrix is widespread. However, sometimes the evaluation design matrix presents only a list of “evaluative instruments.” Several evaluations still do not show sufficient clarity on how different methods help answer specific evaluation questions and how evidence from different sources is triangulated and used to substantiate evaluation findings. Two suggestions are provided for this area: (i) More attention should be paid to distinguishing between data collection and data analysis methods, fully articulating the ways in which the two complement each other. (ii) Guidance on best practices in the practical implementation of principles of triangulation and synthesis in evaluation should be developed.

Fourth, while there are good examples of evaluations with high internal, external, and data analysis validity of findings, ongoing challenges merit further attention. The meta-evaluation proposes three suggestions for improvement in this area: (i) Improvements in the use of theories of change (as suggested above) would also strengthen internal validity. (ii) A dedicated section on the diagnosis and treatment of internal and external validity issues could help mitigate some of the challenges posed by the complexity of evaluands. (iii) Guidance (as suggested above) on how to triangulate evidence within and across sources of evidence would be helpful.

Fifth, IEG evaluation reports fared quite well with respect to consistency among rationale, scope, questions, methods, findings, and recommendations. There was generally a strong fit among the use of methods, data sources, and evaluation questions. One suggestion is provided for this area: To further strengthen analytical rigor, IEG evaluations should consider developing a more systematic approach to assessing how contextual (macro and meso) characteristics may or may not influence the behavior of beneficiaries of World Bank Group–supported interventions.

Finally, during FY15–19, IEG evaluations demonstrated a broadening of the range of methods used to respond to evaluation questions. While innovation in methods used for data collection and analysis should be applauded, such innovation should not become an end in itself. The meta-evaluation provides the following suggestion for improvement in this area: IEG could benefit from a more strategic view of methodological innovation in evaluation. Given the recent challenges posed by the coronavirus (COVID-19) pandemic, digital tools and approaches will undoubtedly grow in relevance in the work of the Bank Group generally and IEG specifically. IEG should therefore be ready to learn from recent experiences in innovation (especially in the field of data science) and make informed decisions to adapt its practices where needed.