Value for Money - Getting it Right During the Evaluation Process
Why focus on value for money? Isn't this part of evaluation practice well established by now?
Value can be generated or lost, and costs can be incurred or saved, throughout the lifecycle of an evaluation: starting with what to evaluate and when; how to evaluate; and with whom and how to share evaluation results, insights, and knowledge.
A few weeks ago, as part of this series on getting value for money from evaluation, I wrote about the importance of making strategic choices on what to evaluate and when. Today, I want to focus on opportunities to increase value-for-money during the evaluation process.
A number of readers might ask: why focus on this? Isn't this part of evaluation practice well established by now? Yes, it is, but the choices made in designing and conducting evaluations can, and often do, influence the overall cost and value of an evaluation.
Ultimately, the value of an evaluation lies in its being used. For that to happen, the evaluation and its underlying design and process have to be credible, timely, and useful.
Credibility is necessary! Why else would anyone take note of and act on findings or recommendations? A large part of credibility derives from evaluators' professional standing: they are known for their expertise, and their word counts. But evaluation is more than an expert opinion.
Making the right choices about scope and methods also plays an important part. For instance, if sampling methods introduce biases, or if methods rely on too few sources of information for meaningful triangulation or assessment of results, the validity of findings and the credibility of the evaluation suffer. Does that mean that sampling 100% of a program, or asking every conceivable question in a survey, is the right answer to credibility? Certainly not: wastefulness in an evaluation undermines credibility just as much, especially when the evaluation critiques the efficiency of the very program it is assessing.
Credibility is also gained when stakeholders understand how the evaluation is conducted. Transparency around the processes, methods, and yardsticks used to form an evaluative judgment reassures stakeholders and gives them greater opportunity to share information or question analyses. Engagement during the evaluation process, if constructive, might also lead to early learning.
Timeliness is essential for an evaluation to be used. When a decision needs to be taken, for example, to renew a program, make changes to it, or stop it altogether, evaluation results need to arrive well ahead of that decision. Nothing is more disappointing, or costly, than missing such an opportunity to influence decisions. This was the point of my previous blog: making strategic choices about what to evaluate, and when, should take into account key milestones in decision processes, so that an evaluation can start early enough. A further difficulty arises when evaluators face trade-offs between scope and timeliness, especially in larger, complex programs. There are no simple recipes for that situation; the choices have to be weighed at the outset of an evaluation and managed throughout.
Usefulness. Michael Quinn Patton is the grand master of utilization-focused evaluation, and he has provided insightful discussions, guides, and checklists to keep evaluations focused on use. I couldn't agree more on the need to identify specific primary users. But I would add that, at least in my experience in multilateral organizations, we very often do not have a single prime user: we need to satisfy different demands with the same evaluation. And even if our evaluations are focused primarily on the needs of the Executive Board, to whom IEG reports, we generate important insights for management and operational staff as well, and they need to take up lessons and recommendations in their actions.
So What? The Importance of Recommendations. A critical part of the value-for-money of evaluation derives from its recommendations. Recommendations define the issues that the evaluation prioritizes and suggests will make the biggest difference to the program's performance and results if addressed. They derive from evidence, findings, and conclusions, and often focus on correcting observed shortcomings; but, especially when assessing large, complex programs, many avenues for improvement or for scaling up success may be possible. Recommendations are sometimes tested for whether they are actionable, but less often against the question: what difference will they make? Once implemented, what value added will they generate? To foster ownership of recommendations, it is important for evaluators to engage with program managers and seek an answer to that "so what?" question. The greater the value added from implementing a recommendation, the higher the likelihood of ownership and follow-up action.