Early November, I was in Berlin at the conference Evidence on a Silver Platter: Evaluation Results for Policy Making in Development Cooperation, where I chaired a session with users of evaluation: policy makers and the people who advise them. This session was an important departure from the many evaluation conferences where we speak to each other rather than to those to whom we present evaluation evidence.

We heard from Lant Pritchett, who took a skeptical position on evidence and how much (or rather how little) it tells us; Howard White, who suggested that evidence is valuable but not sufficiently used; and Antonie de Kemp, whose presentation showed how available evidence was overtaken by events.

The questions to my panelists were clear: are we evaluators really serving evidence on a silver platter? And, from their perspective, what could we do better?

Top Three on the Wish List

Some responses were not surprising. The pleas that continue to top the list from evaluation users to us as an evaluation community are:

  • Timeliness, meaning both delivering evidence right when it is needed and shortening feedback loops so that learning can happen sooner;
  • Jargon, meaning our standard evaluation terminology is hard for non-evaluators to decipher. This comment invariably reminds me of doctors using Latin to explain things to patients, who end up more confused than helped; and
  • Conciseness: busy decision-makers need to get to the point fast and clearly, rather than wading through lengthy reports.

These issues have persisted for a long time. That is surprising, since we should be able to make headway on things that are largely under our control. Or else we need some "soul-searching" evaluation to find out what hinders us from doing so.

Simple Solutions

In response to questions from the audience, the panelists came up with some suggestions for us to explore.

  • Thinking strategically and ahead of time. When agreeing on policies and programs, there should also be agreement on the key decision points when evidence will be needed. Too often this is not clear from the beginning; instead, when the time comes, evaluators are under pressure to produce quick evidence. One of the panelists referred to having these conversations as a "pre-mortem."
  • Dialogue for deeper understanding. Whether it is focus (the evaluation questions we address), language, or ways of communicating, the panelists felt that deepening the dialogue between them and evaluators would help clarify needs and expectations.

In both cases, it takes moving closer together: thinking and learning together about what policy-makers and practitioners need from evaluation, and what evaluators need from them and can deliver. Importantly, this suggestion was not made to compromise the independent, critical voice that evaluators bring to the table, but to increase the effectiveness of evaluation.

Evaluation Failure or Political Reality?

This was another question I asked them: have we failed if evaluation recommendations are not taken up? Maybe we are too harsh on ourselves in expecting that each and every one of our evaluations and their recommendations will be taken up by policy-makers. My panelists were very clear: political imperatives or other factors drive political choices. When recommendations are not taken up, it does not mean evaluation has failed, but that it could not counter other, stronger drivers. Or sometimes it simply takes longer, and a shift in context or politics, for an evaluation to bear fruit.

Once again, not surprising when you come to think of it.

But as we raise the bar on evaluation effectiveness, we need to be mindful of the incentives we create. When we track how many of our recommendations have been implemented, what is the best benchmark? If an implementation rate of 100% is expected, evaluators might choose to make easy recommendations rather than address challenging, fundamental issues. But would a lower bar let us evaluators off the hook, even as we hold others to account for influencing client countries through World Bank Group services? In many ways, measuring the effectiveness of evaluation is akin to assessing analytical and advisory services, where a simple count of laws adopted tells only a small part of the story.

I came away from the conference with a sense that the discussion needed "to be continued" in the spirit of the dialogue we had started. As the panelists recommended, we have to ensure evaluation can play its role in enhancing development effectiveness.

Comments

Submitted by Glenys Jones on Mon, 11/16/2015 - 22:37

Another way to support timeliness is to establish an ongoing online monitoring and reporting system that progressively builds an information resource of evaluation reports which are regularly updated over time. So at any time, interested parties can see what evidence is currently available for significant and selected projects/programs and/or other monitored performance indicators. For example, the first edition of the evaluation report can be published online at the planning stage of the project/program when the desired outcomes/targets have been clearly articulated and the performance indicators to be monitored have been agreed. Thereafter the report is updated periodically or as and when new evidence becomes available. For more information, see my post When Outcomes Matter - The Adaptive Management Cycle at https://www.linkedin.com/pulse/when-outcomes-matter-adaptive-management-cycle-glenys-jones?trk=mp-author-card

Submitted by Caroline Heider on Mon, 11/23/2015 - 07:14

In reply to Glenys Jones

Glenys, this is a great suggestion, and the information accumulated this way would be of great help to the independent evaluations that IEG undertakes. It would, however, not be our job to compile that information but rather that of the program managers. As we review the WBG's self-evaluation system, we will probably come back to issues around this topic.

Submitted by Gail Vallance … on Mon, 12/07/2015 - 00:57

Hi Caroline, I enjoy your posts. This one resonates particularly with me because I recently presented a workshop for members of the Saskatchewan Chapter of the Canadian Evaluation Society on After Data Analysis: Using a Policy Lens to Develop Conclusions and Recommendations. While I agree with the points made in your article, evaluators also need to understand the policy question (as Eleanor Chelimsky has said so often). However, a lot is lost in translation as we move into the sphere of our evaluation expertise. In the workshop we used a case study to look for results that were both robust from a research perspective and significant in terms of policy. We used a Policy Checklist to measure a set of potential recommendations against a set of policy-focused criteria. At that point the penny dropped, and many understood why past evaluations have not had the impact they expected. Thank you for reframing this important topic. Gail Vallance Barrington, PhD, CE

Submitted by Caroline Heider on Wed, 12/16/2015 - 22:49

In reply to Gail Vallance …

Gail, thanks for your feedback and more so for your contribution! Sounds very exciting and useful.
