Evaluators must make strategic choices at every stage of the evaluation cycle. This matters because value can be generated or lost, and costs incurred or saved, throughout the lifecycle of an evaluation. While some of the factors that affect value creation lie outside the control of evaluation, plenty can be anticipated and managed by evaluators.

There are three distinct questions that evaluators can ask to create value rather than destroy it:

  • What to evaluate and when;
  • How to evaluate; and
  • With whom and how to share evaluation results, insights, and knowledge.

In this part of our value for money (VfM) of evaluation series, I want to unpack each of these questions, starting today with the importance of making strategic choices on what to evaluate and when.

What to Evaluate and When. The choice of what gets evaluated is particularly important for VfM: value and cost (money) are driven by what gets evaluated and how. This is even more so when looking at an institution's portfolio of evaluations. Choices need to balance two sets of considerations:

  • Coverage of the institution's work, to generate evidence and evaluate the health of the institution as a whole; with
  • Strategic issues, which, once deeper evidence is generated, can be the biggest change agents for institutional results and performance.

This balance can be achieved by the right combination of business lines that focus on different units of account (from projects through country strategies to policies and corporate issues) with different types of evaluations (validation of self-evaluation, independent evaluation, and synthesis products that build on existing evaluation evidence).

Mirroring the Institutional Portfolio. To evaluate the health of the portfolio overall, one needs adequate evaluation coverage of an institution's work. To achieve this, it is important to understand its entire portfolio of activities. More often than not, it takes several variables in combination to describe the portfolio: size, geographical location, sector, and type of instrument, among others. In other words, characteristics that need to be taken into account to ensure representativeness of the overall portfolio. For instance, if an institution works predominantly in Africa but the evaluation portfolio is focused on Asia, the results would not accurately represent the health of the institution's portfolio as a whole. The challenge lies in finding the right balance between evaluating enough to ensure appropriate coverage, but not so much that costs become disproportionate. In some cases a 100% sample makes no sense; in others it is needed to provide in-depth analyses of important aspects of the portfolio.
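
To make the idea of proportional coverage concrete, here is a minimal sketch in Python of how an evaluation sample could be stratified so that it mirrors the institutional portfolio. The field names, portfolio composition, and 20% sampling fraction are all hypothetical; in practice, selection would also weigh strategic considerations, not just statistics.

```python
import random
from collections import defaultdict

# Hypothetical portfolio records: in practice these would come from the
# institution's project database (field names and counts are illustrative only).
portfolio = [
    {"id": f"P{i:03d}", "region": region, "sector": sector}
    for i, (region, sector) in enumerate(
        [("Africa", "Agriculture")] * 40
        + [("Africa", "Energy")] * 25
        + [("Asia", "Agriculture")] * 20
        + [("Asia", "Transport")] * 15
    )
]

def stratified_sample(records, keys, fraction, min_per_stratum=1, seed=42):
    """Draw a proportional sample from each stratum so the evaluation
    sample mirrors the composition of the overall portfolio."""
    strata = defaultdict(list)
    for rec in records:
        strata[tuple(rec[k] for k in keys)].append(rec)

    rng = random.Random(seed)
    sample = []
    for members in strata.values():
        n = max(min_per_stratum, round(len(members) * fraction))
        sample.extend(rng.sample(members, min(n, len(members))))
    return sample

# Evaluate roughly 20% of projects, keeping region/sector proportions intact.
selected = stratified_sample(portfolio, keys=("region", "sector"), fraction=0.2)
print(f"{len(selected)} of {len(portfolio)} projects selected for evaluation")
```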

Making Strategic Choices. For other evaluations, it is important to understand which of them can make a strategic difference. This is particularly so for large, costly evaluations. Their value derives from (and their cost can be justified by) their ability to generate evaluative evidence and insights that can stimulate strategic or systemic changes. These evaluations might tackle controversial issues or focus on aspects of ongoing change processes where just-in-time evaluation evidence can help inform decision-makers' choices. But there is no simple rule for when to do one or the other, and no single factor that tells you what is strategic.

Ask Your Stakeholders. Stakeholders who want to use evaluation evidence need to receive that information at the time when they are debating and deciding what to do. It is therefore important to understand:

  • The issues they are tackling. In some cases, stakeholder readiness to deal with an issue is crucial to realizing the value that evaluation evidence brings to the table, but only if they need evaluation evidence to understand the direction of change. In other cases, it might be more important to evaluate controversial issues that (some) stakeholders would rather not see addressed; and
  • Decision-making processes, especially the milestones when evaluation evidence is needed. It takes thinking ahead, by at least as much time as it takes to complete an evaluation, to pick up strategic issues in a timely way and have an evaluation ready just in time for decision-makers' use (see the timing sketch after this list).
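
The timing point boils down to simple backward planning from the decision milestone. A minimal sketch follows; the milestone date, evaluation duration, and dissemination buffer are assumptions for illustration, not actual figures.

```python
from datetime import date, timedelta

decision_milestone = date(2017, 6, 30)      # assumed date of the decision point
evaluation_duration = timedelta(weeks=52)   # assumed time from design to delivery
dissemination_buffer = timedelta(weeks=8)   # assumed time for findings to be absorbed

# Work backwards: the evaluation must start early enough to be ready and digested.
latest_start = decision_milestone - evaluation_duration - dissemination_buffer
print(f"To inform the decision on {decision_milestone}, "
      f"start the evaluation no later than {latest_start}.")
```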

Understand the Past with a View to the Future. When an institution is making a strategic shift, an evaluation of past policies and programs will help understand how far the institution has come, and how big and where the gaps are. There are sensitivities that need to be managed well when making strategic choices of this nature. For one, the vision for the future has to be settled, to ensure the evaluation applies the right lens for the future. Otherwise, it will not generate its potential value, but waste resources. In addition, it is important to manage well the concern that the past will be evaluated against a new yardstick. In such cases, the risk of damaging relationships is high, a cost that might outlive the specific evaluation and can outweigh the value of evaluation evidence that helps prioritize areas for improvement.

Tackling Complex Issues with a Set of Evaluations. Some issues are fundamental to moving an institution to the next level, but they are too big and complex to tackle at once. Instead, a structured program of strategically selected evaluations can help shine light on the issue from different directions. This is exactly what we are trying to achieve with the Strategic Engagement Areas, which were selected as fundamental for achieving the Bank Group's twin goals, but too big for a single evaluation.

Comments

Submitted by Abdul Qadir on Tue, 04/26/2016 - 21:46

Valuable paper; needs further elaboration, please.

Submitted by Caroline Heider on Sun, 05/01/2016 - 23:49

In reply to by Abdul Qadir

Thanks, Abdul. Anything in particular you would like elaborated?

Submitted by Marco Lorenzoni on Tue, 04/26/2016 - 21:04

Great article, Caroline, thanks for it. I think your article summarizes, clearly and concisely, some factors that have been discussed for a while in the different evaluation communities, and this is much needed. What I most appreciated is your last (and partly innovative) recommendation about ‘Tackling Complex Issues with a Set of Evaluations’, which is often neglected by agencies commissioning evaluations. As an evaluation practitioner, I see a growing number of Terms of Reference that are unrealistic in terms of coverage, particularly when matched with resources that are more and more frequently insufficient for the purpose. So, I very much welcome your call to different agencies to tackle complex issues with a set of evaluations (the advantages are numerous in terms of focus, specialisation of the evaluators, etc.) and to resist the temptation to cover the entire world with a single, multi-focussed and… under-budgeted assignment. All the best.

Submitted by Caroline Heider on Sun, 05/01/2016 - 23:50

In reply to by Marco Lorenzoni

Many thanks, Marco. Yes, this is a great way to get depth (in the individual evaluations) and breadth (through a set of interrelated evaluations) with a series of work.

Submitted by Kimbi Wango on Tue, 04/26/2016 - 23:22

Great article. Thanks for triggering this thought-provoking issue. Looking forward to the other articles on how and with whom...

Submitted by Caroline Heider on Sun, 05/01/2016 - 23:51

In reply to by Kimbi Wango

Thanks, Kimbi, for your feedback.

Submitted by Paul Kojo Asare on Thu, 04/28/2016 - 00:39

I really like the materials.

Submitted by Caroline Heider on Sun, 05/01/2016 - 23:52

In reply to by Paul Kojo Asare

Thank you Paul!

Submitted by Lennise Baptiste on Mon, 05/02/2016 - 00:19

Excellent article. Stakeholder readiness to receive information is a key issue, and the usefulness of the evaluation process to provide information about beneficiaries, the processes of the implementing staff and organisation, and the operating context is still not fully understood or accepted. As I read, I remembered the responses of two different project managers to the matrix I had developed to illustrate the links between stakeholders, data collection and analysis strategies, and the evaluation questions in the TOR. In both cases, they wanted to know why the range of stakeholders had to be engaged, because as far as they were concerned my work was not "rocket science". The process of explaining the what, how and with whom was very painful on their end and mine; the opportunity to learn from the evaluation was lost on them, because to them the evaluation was a necessary evil to be performed for the funders.

Submitted by Caroline Heider on Mon, 05/02/2016 - 01:06

In reply to by Lennise Baptiste

Lennise, the good, encouraging thing is to read about your practice of doing the stakeholder analysis and engaging with the people, even if they did not see the immediate value of that work. Hopefully it gave them food for thought on stakeholder engagement, which is important in all stages of a project cycle -- from inception through design and implementation, to evaluation and feedback.

Submitted by Cosmas Mworia on Mon, 05/02/2016 - 01:25

The article gives practical insights to evaluation specialists and stakeholders at large, great work! I'm doing a course in Sustainable Agriculture and Rural Development in Dublin, but basically I will be working in East Africa (Tanzania), particularly on Rural Development issues, including Agriculture and Rural Enterprise. Can I get contributions about monitoring and evaluation for effective program planning? What is the evidence, and how does an evaluator access such evidence? Thanks

Submitted by Loy Rego on Tue, 05/03/2016 - 02:49

Very useful, concisely written strategic direction. Thanks. It provides guidance for an evaluation of a network that I am currently doing. Look forward to the other two in the series.
