In previous blogs, we looked at the various ways in which we can create or increase the value (for money) throughout the evaluation life-cycle. Now it is time to look at what comes after!

What happens after an evaluation is completed can greatly affect its value. Yet this phase of the cycle is often undervalued and inadequately resourced. For the longest time, evaluators thought their job was done once the evaluation report was completed, sent to decision-makers, and discussed by the relevant authorities. Gradually, the realization set in: recommendations do not implement themselves.

In this blog, I look at some of the follow-up processes and measures that can help ensure that evaluations result in meaningful action. Institutions often do not budget time or money for this work, but it is essential for generating ownership, learning, action, and change from an evaluation.

Early Engagement: Credibility and Ownership. My previous blog focused on a couple of things that help increase value (and have cost implications) during an evaluation. Among them was building credibility through transparent evaluation metrics and processes. This kind of engagement during the evaluation process is critical to building ownership of its findings. Surprising stakeholders with findings that are critical and point to problems does not help get evaluation findings heard, insights understood, and recommendations taken seriously. In engaging with decision-makers at the World Bank Group, IEG is, for example, experimenting with participatory processes for developing evaluation recommendations jointly with evaluees. These engagements allow evaluators and program managers to discuss findings and identify how best to translate them into recommendations. The opportunity: greater ownership as the basis for greater implementation of agreed actions. The challenge: keeping the focus on recommendations that are hard to accomplish but necessary to achieve step-change improvements.

Sharing Evaluation Knowledge: With the Right People, At the Right Time, In the Right Language. The answer to better resourcing the outreach phase of an evaluation does not lie in simply spending a lot of money. Outreach activities can be expensive, and wasteful if they add to the cost of an evaluation without commensurately increasing its value. Instead, evaluations will increase their chance of influence when they:

  • Differentiate their audiences and recognize those audiences' diverse information needs and learning styles;
  • Package messages, products, and channels to meet these needs, making evaluation evidence and lessons easily accessible; and
  • Understand, and tap into, the incentives for behavior change.

Diverse Audiences, Diverse Information Needs. Life is much easier if you have a single audience, because the entire evaluation can be targeted to their questions, level of technical detail, and so on. But this is often not the case. At the World Bank Group, for example, IEG's audiences range from members of executive boards to senior and middle management, operational staff, and counterparts. Each has a stake in the evaluation, and each an interest in learning something about the program under evaluation. For instance, policy makers in board rooms, ministries, and senior management want to know whether, strategically, a policy, strategy, or program is the right thing to do. Does it contribute to the overall goals of the institution; does it generate returns – financially and in development outcomes – commensurate with the resources invested? At the operational level, the interest is greater in the nuts and bolts of what has worked, how, and why. And affected communities want feedback – especially if they were consulted during the evaluation – on the issues they raised and what will be done about them.

Making Messages Accessible. This description of different audiences and their needs already makes clear that there can be no "one size fits all". At the community level, conveying messages requires different means than, for instance, the board room of a multilateral: translation into local languages, a focus on the details that matter to the community, and delivery through channels people can actually access. The board room, by contrast, wants short, focused, strategic messages, while operational staff want more detail – both the evidence behind the evaluation's points and the particulars they can learn from. And reaching wider audiences through different channels, including social media, requires capturing and conveying messages in yet other ways. This is why IEG has invested in developing a range of prototypes, from infographics to videos.

Incentivizing Behavior Change. Ultimately, evaluation recommendations are about changing what we do and how we do it. Traditionally, evaluators have thought of this step as tracking progress in the implementation of recommendations. The World Bank Group's management action record is an example of a sophisticated system that tracks recommendations over a four-year period. It is an important step in taking stock of what has changed since the evaluation was completed. In the future, we will raise the discussion to reflect on the systemic, institutional changes that result from evaluations conducted under each of our Strategic Engagement Areas. This is one step toward recognizing that incentivizing behavior change requires a different kind of engagement than (solely) compliance checking.

Value-for-Money. In most organizations where I have worked, it would have been impossible to finance all of these outreach activities. At IEG, we are fortunate to be resourced in ways that let us invest in making our evaluation work more accessible to users, and hence facilitate learning and influence change. These investments enhance the value of the work done, but they do require making strategic choices about which investments in outreach will generate the highest returns.