Twenty-five years ago, evaluators were an exotic breed, and donors trusted that we were doing the right thing. Simply having evidence and an evaluative assessment of project performance and results was highly valued. In fact, demand was so great that it spurred an incredible expansion in development evaluation.

This growth has had obvious upsides: the profession has expanded, is innovating and finding new ways to pull together evidence about what works and why, and is slowly maturing to the point where we can talk about professionalization and, eventually, accreditation.

It has also created heightened expectations around the quality of evaluation and raised questions about who is evaluating the evaluators. Can we simply be trusted, or should we be required to take our own medicine?

The simple answer is yes: if anything, we should hold ourselves to stricter standards than we apply to those whose interventions we evaluate.

Room for Improvement?

At IEG, we have a multilayered quality assurance system that involves internal reviews, external peer reviewers, and, at times, expert panels. We question whether an evaluation warrants inclusion in our work program, just as we check quality at entry and at completion of our evaluations through thorough review processes. We also expose our work to client feedback.

Yet all of that is not enough, because none of it is as independent as we want it to be.

To fill this gap, we welcome the decision by the Committee on Development Effectiveness, a committee of the World Bank Group's Executive Board, to commission an external independent review of IEG. The last such review was undertaken in 2004.

The main objective of this latest review is to provide suggestions and recommendations to the Board of Executive Directors in order to continue to enhance IEG’s impact and further strengthen its role as an independent evaluator of the Bank Group’s work. The review is expected to clearly identify IEG’s main strengths and areas where improvement may be necessary. The process is scheduled to conclude in September 2014.

Measuring Results

In the future, external independent reviews of IEG will be helped by our forthcoming results framework, which sets out clear objectives and metrics for measuring the outcomes of our work, so that others will be able to judge whether we have contributed to improving the performance and results of the World Bank Group. (See my recent blog, When Rating Performance, Start with Yourself.)

In addition, we engaged a group of renowned evaluation experts - Tom Bernes, Patricia Rogers, Ivory Yong-Protzel, and Franck Wiebe - to undertake a meta-evaluation of three of our evaluations. To ensure the independence of this panel, we separated accountability for the evaluations, which lies with the departments, from accountability for the meta-evaluation.

The meta-evaluation panel also developed an assessment tool to ensure a systematic analysis of our products and to give us feedback on the independence, quality, credibility, and influence of our evaluations. This is a pilot, which will allow us to build certain checks into our regular quality assurance process and to decide how often we should commission such meta-evaluations in the future.

Striving for excellence in evaluation is essential for evaluations to be influential. It requires continuous learning and improvement, in which independent assessments of our work play an important role.

Comments

Submitted by Ms S Wijesinha on Wed, 05/07/2014 - 07:33

Can I, who comment regularly, evaluate myself? I would love to get an appraisal from the World Bank IEG.

Submitted by Janet Mancini … on Tue, 05/13/2014 - 01:39

It was 18 years ago, in 1996, that I evaluated the OED (now IEG). At that time, the emphasis was on looking at the OED's evaluation practices and processes through the eyes of country stakeholders, rather than primarily through financial or technical audits. Thus, in addition to conducting a dozen or so focus group discussions (FGDs) at Bank headquarters with TTLs, Sector Managers, Country Directors, etc., OED asked me to spend some time in Colombia, Indonesia, and Zimbabwe conducting FGDs with national planning departments, governments (line and core ministries), Bank resident mission staff, NGOs, etc. Interestingly, respondents called for more qualitative approaches to evaluation--what Deepa Narayan later called "voices of the poor"--and although I was not tasked with doing the kind of on-the-ground interviews Narayan did in 2000, the stakeholders I interviewed were clear that the beneficiaries at all levels must be heard, first-hand. I am impressed by the evaluation strategy this time around, look forward to the findings, and applaud IEG's commitment to openness and risk-taking. I do hope that stakeholder voices will be included in this round, as well.

Submitted by Caroline Heider on Wed, 05/14/2014 - 03:56

In reply to Janet Mancini …

Janet, many thanks for your contribution about the study you undertook in 1996. Getting the voices of those who should benefit from interventions -- and those who do not -- into our evaluations is very much on our minds. We have used social media, radio, and SMS outreach to tap into communities on a much larger scale, done primary fieldwork to meet communities, and are exploring ways to use technology to get broader and deeper input from project-affected communities. In addition, the World Bank Group's commitment to generating client feedback will be another source we can tap into to validate feedback and deepen our analysis. All of this is to say: I agree with you that it is the triangulation of different views of stakeholders at different levels that makes evaluations richer and more useful.
