Like development practitioners, evaluators frequently ask themselves: are we making a difference, and if we are, in what way?

In my more than 25 years of practice I’ve found that these questions can be addressed by embedding influence throughout the process: from choosing what to evaluate, to how the evaluation is designed and undertaken, to outreach and follow-up.

Making strategic choices

What to evaluate, when, and why? More often than not the response is that it’s what the donor wants. Fair enough: all donors require accountability, as they themselves must report back to their taxpayers.

My own sense is that as evaluators we have to bring together two dimensions. The first is “coverage,” which allows us to give a verdict on the health of a portfolio or an institution. The power of these evaluations, built up from evaluations of a representative sample of individual interventions, derives from the trend analysis, patterns, and overall assessment that aggregation makes possible. Our Results and Performance Report is a powerful example of this type of work. The strategic choice is to keep the individual evaluations at a level where aggregation remains possible.

The other dimension is strategic choice: finding those game-changing subjects where an evaluation can contribute to solving a larger problem. These are sometimes demanded by stakeholders, but many times these evaluations touch on critical issues that at least some stakeholders would rather leave unexamined. The notion of “readiness for evaluation findings” is often thought to be essential if an evaluation is to stick. That may be true, but our efforts might also be better spent on a harder topic.

The difficulty is in identifying those topics or issues. We did that in my previous job and the evaluations were finished just as the institution was taking a hard look at the changes in strategic direction it had taken two years earlier. This is more complicated in an institution such as the World Bank Group, given the many dimensions it addresses with its services. In IEG, we are using our results framework to make those strategic choices.

How evaluations get done

The design and process of an evaluation also contribute to its influence. The evaluation team itself must be credible, as must the process, the interactions with stakeholders, and the data collection and analysis. An evaluation is unlikely to be influential if it’s not credible.

Methods and evaluation questions are also integral parts of creating credibility and trust in the evaluation. Transparency around what questions are being researched, what methods and data will be used, and which stakeholders will be consulted will influence the quality of the evaluation and receptivity of stakeholders to its messages.

An influential evaluation also requires engagement with stakeholders. Independence should not be confused with isolation, which can result in a poorer understanding of the intervention being evaluated.

Outreach and follow-through

Finally, there is the importance of outreach and follow-through. Because IEG is part of the World Bank Group, many look to us for our findings, whether because of the central role the Group’s institutions play in the development debate or because of the span of issues we have covered.

We are also fortunate to have the resources to fund activities that range from internal launches aimed at Bank Group staff, to events at the Bank’s Annual and Spring Meetings, to participation in international, regional, and national conferences. We develop learning products and integrate lessons from our evaluations into training courses.

But it’s not just about quantity. Making a difference involves pairing the right product with the right engagement process. Experts on a particular issue are more likely to engage with the knowledge gained from an IEG evaluation, while fellow evaluators may be more concerned with methods and approaches. Likewise, getting attention at a senior level is important for policy buy-in, but engagement at an operational level is essential to making a difference on the ground.

Institutional factors

The organizational culture of an institution also plays an important role. Our ongoing evaluation of Learning and Results in World Bank Group Lending will shed light on institutional factors that enhance or hinder learning. As we did with the first phase of this evaluation, we’ll be sharing our knowledge through various channels, so please stay tuned.

Are these experiences proof that evaluation is making a difference? We track where and how our evaluations are referenced and used, follow their implementation through the Management Action Record, get feedback through our client survey and have done follow-up studies on select evaluations. Still, we wanted more evidence of influence. 

We have now included indicators in our results framework that will allow us to plan more deliberately for the influence we want to exercise and to measure in the future what we have achieved.

Comments

Submitted by Marco Lorenzoni on Tue, 12/02/2014 - 23:13

Thanks for this article, Caroline, which I found very useful. I think that, above all, what makes the difference is Institutional Commitment: is the institution that requires the evaluation really and genuinely interested in its results, and willing to adapt its plans, if needed, following the results of the evaluation? This would definitely affect the ‘making of strategic choices’ and ‘outreach and follow-through’ and, given equal quality of the evaluators’ work, can determine how influential the evaluation will be. Not an easy objective to achieve: Institutional Commitment involves both the institution(s) that requests the evaluation and local partner institutions, which sometimes have different and conflicting agendas. I think that the work you have done with ‘Learning and Results in World Bank Operations: How the Bank Learns’ is truly commendable and all donors should make the same effort: we have all seen too many evaluations that are launched as a file-closing exercise, with no Institutional Commitment and no (or very little) follow-up... All the best

Submitted by Caroline Heider on Wed, 12/03/2014 - 06:41

Marco, many thanks for your observations; nice to hear from you again. You are entirely right that institutional commitment is essential and makes it easier for evaluation to exercise influence. However, I would argue that we need to exercise our function in ways that are influential -- as described above -- even when the institutional context is adverse or indifferent to evaluation. This way we can influence the institutional attitude towards evaluation and create a more conducive environment by demonstrating how evaluation helps institutions succeed in achieving their goals.

Submitted by Dennis Bours on Wed, 12/03/2014 - 05:08

Dear Caroline, Great post! I realize that two topics not directly mentioned are in a way implied in the text. The first would be evaluation utilization, and how to improve it, as part of the strategic choices you make at the beginning. And second, knowledge management; I feel that the follow-through and institutional factors relate to that. Best, Dennis @Dennisbours

Submitted by Caroline Heider on Thu, 12/04/2014 - 01:13

In reply to Dennis Bours

Dennis, thanks for the nice feedback. And yes, on both counts: building the idea of utilization throughout the evaluation, from the moment the subject is selected, through to managing knowledge, including tailoring and targeting "pieces of information" from larger evaluations to specific audiences. One such example, which we will post shortly, is our work on partnerships. We took a large number of evaluations and aggregated findings that apply to partnerships in general. Each individual evaluation speaks to an audience interested in the partnership under review, while the summary piece speaks to anyone designing or managing partnerships, or thinking about setting up good M&E systems. Another example is when we took a very rich evaluation on Fragile and Conflict States and extracted lessons on specific subjects. Work in progress, but a good way to optimize the use of evaluations and manage the knowledge they contain.

Submitted by Marco Lorenzoni on Wed, 12/03/2014 - 22:33

Caroline, I cannot agree more. This is part of our ethical duty as evaluators!

Submitted by Jennifer Bisgard on Sat, 12/06/2014 - 04:53

Hi Caroline, nice post! The prestige of the lead evaluator is often a critical, yet seldom mentioned, factor. When Michael Bamberger or Michael Quinn Patton leads an evaluation, the amount of credibility and attention gained is serious. Put an unknown evaluator in the same position, making the same points, using the same methodology, and the evaluation often has little or no effect.

Submitted by Caroline Heider on Mon, 12/08/2014 - 20:29

In reply to Jennifer Bisgard

Good point, Jennifer. The credibility of the evaluation team is really important for the credibility, and then the use, of the evaluation. In part it comes from name recognition in the evaluation world or, as is often the case at the World Bank Group, in the sector, where having professional credentials is essential. Where there is no immediate name recognition, the lead evaluator needs to establish that rapport very quickly with a combination of technical, evaluation, and interpersonal skills.

Submitted by Deepika Chawla on Thu, 12/11/2014 - 00:32

Dear Caroline, this is a great post and some of the comments are very observant of the "culture" of international development. Regardless of whether an evaluation is conducted at the institutional or the program level, in addition to the name recognition of the team lead, the principles of rigorous and sound evaluation must always be center stage. This is even more important today given the trend toward treating impact evaluation as the gold standard of all evaluations. If the culture of the institution is such that it can accept both positive and negative feedback, the lessons learned from an evaluation will be well utilized in future programming, so we are not reinventing the wheel every time. However, in a lot of cases with donor-required evaluations, there is very little use of the findings, either in terms of program improvement or the design of new programs, turning them into just one more report that sits on a bookshelf.

Submitted by Caroline Heider on Mon, 01/12/2015 - 23:22

In reply to Deepika Chawla

Deepika, well said. Only two points: the discussion has turned to recognizing that today different evaluation methods and tools are needed to understand success and failure, and above all why things are working or not. Impact evaluations are critical for certain interventions and questions, while other methods -- qualitative, participatory -- are key in other situations. Together they can provide a powerful set of insights that help people learn from experience.

Submitted by Raoul Blindenbacher on Fri, 12/12/2014 - 04:17

Dear Caroline, Great topic! Can you elaborate a little on what you mean by institutional factors? Which institutions do you mean exactly? The Bank? Other funding organizations? Partner governments? Civil society organizations? Etc. Best, Raoul Blindenbacher

Submitted by Caroline Heider on Thu, 12/11/2014 - 22:00

In reply to Raoul Blindenbacher

Raoul, the institutional factors that we found in our first-phase evaluation to matter a lot to learning are incentives, including leadership signals, and time. As mentioned, we are right now working on the second phase to unpack the incentives and other factors in greater detail. Stay tuned, there will be a follow-up. For evaluation more generally, institutional factors include the same -- incentives to learn and change, processes that help or hinder such learning -- and the attitude towards critical reflection. Each institution where I have worked so far has had a distinct organizational culture that drives how it engages with evaluation.

Submitted by Md. Nazrul Islam on Thu, 12/11/2014 - 21:07

It is really an excellent article for evaluators. I have gained many insights from this article. Which method is more effective for decision making: qualitative or quantitative?

Submitted by Caroline Heider on Thu, 12/11/2014 - 22:07

In reply to Md. Nazrul Islam

Many thanks, Nazrul, for the positive feedback. To your question: it will depend on what the decisions are about. In some cases, quantitative data and randomized control trials will give you the assurance you need to make decisions; the pharmaceutical industry relies on these trials when testing new medicines. In other cases, you will need qualitative data and information to understand the roots of certain behaviors or patterns before a decision can be made about a policy change. My preferred option is to bring both methods together, as the combination of different data sources, methods, and stakeholder perspectives can form a rich, informative basis for evaluation that can then inform decision-making.

Submitted by Ms S Wijesinha on Thu, 02/19/2015 - 00:38

Implementation of the decisions that evaluators' evaluations arrive at, made towards the betterment of all, could be successful. Shashika Wijesinha S

Submitted by Caroline Heider on Wed, 02/18/2015 - 20:37

In reply to Ms S Wijesinha

Shashika, we know at least one part of the equation: whether recommendations were adopted and follow-up actions took place. Our latest data shows that we rated 35 percent of the recommendations as substantially adopted or better in the first year of follow-up, improving to 83 percent by the fourth year. In addition, we have specific examples where our recommendations have made a difference. The next big question: how did that affect the results of the WBG for the people affected by its interventions?

Submitted by Ms S Wijesinha on Mon, 02/23/2015 - 20:49

Good question, Madam. Overall, I feel the intervention and implementation are useful and productive. Shashika Wijesinha S., Sri Lanka

Submitted by Koen on Fri, 07/24/2015 - 04:29

Wow! Great to find a post knocking my socks off!

Submitted by Blindenbacher on Thu, 01/22/2015 - 01:15

Dear Caroline, I fully agree that there exist different organizational cultures to be aware of. Yet it is important to take into account that organizations in political environments are shaped by very unique features and face very particular forms of learning barriers that do not exist in other, non-governmental organizations. Take a look at Barbara Tuchman’s Pulitzer Prize–winning book “The March of Folly,” where she describes why governments did not learn, nota bene “From Troy to Vietnam.” All these characteristics and obstacles have to be considered when dealing with learning in governmental and multi-organizational settings. Currently there is a broad debate going on around an emerging theory called Governmental Learning, which you may take into account in the second-phase evaluation. Best, Raoul
