Evidence on a Silver Platter?
Making evaluation results relevant and valuable to our clients
In early November, I was in Berlin at the conference Evidence on a Silver Platter: Evaluation Results for Policy Making in Development Cooperation, where I chaired a session with users of evaluation: policy makers and the people who advise them. This session was an important departure from the many evaluation conferences where we speak to each other rather than to those to whom we present evaluation evidence.
We heard from Lant Pritchett, who took a skeptical position on evidence and how much (or rather how little) it tells us; Howard White, who suggested that evidence is valuable but not sufficiently used; and Antonie de Kemp, whose presentation showed how available evidence was overtaken by events.
The questions to my panelists were clear: are we evaluators really serving evidence on a silver platter? From their perspective, what could we do better?
Some responses were not surprising. The pleas that continue to top evaluation users' list for us as an evaluation community are:
These issues have persisted for a long time. Surprisingly so, since we should be able to make headway on matters that are largely under our control. Failing that, some "soul-searching" evaluation is in order to find out what hinders us from doing so.
In response to questions from the audience, the panelists came up with some suggestions for us to explore.
In both cases, it takes moving closer together: thinking and learning jointly about what policy makers and practitioners need from evaluation, and what evaluators need from them and can deliver. Importantly, this suggestion was not meant to compromise the independent, critical voice that evaluators bring to the table, but to increase the effectiveness of evaluation.
This was another question I asked them: have we failed if evaluation recommendations are not taken up? Maybe we are too harsh on ourselves in expecting that each and every one of our evaluations and their recommendations will be taken up by policy makers. My panelists were very clear: political imperatives or other factors drive political choices. When recommendations are not taken up, it does not mean evaluation has failed, but that it could not counter other, stronger drivers. Or, sometimes, it simply takes a shift in context or politics, and hence more time, for an evaluation to bear fruit.
Once again, not surprising when you come to think of it.
But as we raise the bar on evaluation effectiveness, we need to be mindful of the incentives we create. For instance, when we track how many of our recommendations have been implemented, what is the right benchmark? If an implementation rate of 100% is expected, evaluators might choose to make easy recommendations rather than address challenging, fundamental issues. Yet would a lower bar let us evaluators off the hook, even as we hold others to account for influencing client countries through World Bank Group services? In many ways, measuring the effectiveness of evaluation is akin to assessing analytical and advisory services, where a simple count of laws adopted tells only a small part of the story.
I came away from the conference with a sense that the discussion needs "to be continued" in the spirit of the dialogue we had started. As the panelists recommended, we have to ensure that evaluation can play its role in enhancing development effectiveness.