In previous blogs, we looked at the various ways in which we can create or increase the value (for money) throughout the evaluation life-cycle. Now it is time to look at what comes after!

What happens after an evaluation is completed can significantly affect its value. Yet this is a phase in the cycle that is often undervalued and inadequately resourced. For the longest time, evaluators thought their job was done when the evaluation report was completed, sent to decision-makers, and discussed by the relevant authorities. Gradually, the realization set in: recommendations do not implement themselves.

In this blog, I look at some of the follow-up processes and measures that can help ensure that evaluations result in meaningful action. Institutions often do not budget time or money for this work, yet it is essential for securing ownership, learning, action, and change from an evaluation.

Early Engagement: Credibility and Ownership. My previous blog focused on a couple of things that help increase value (and have cost implications) during an evaluation. Among them was building credibility through transparent evaluation metrics and processes. This kind of engagement during the evaluation process is critical to building ownership of its findings. Surprising stakeholders with findings that are critical and point to problems does not help get evaluation findings heard, insights understood, and recommendations taken seriously. In engaging with decision-makers at the World Bank Group, IEG is, for example, experimenting with participatory processes for developing evaluation recommendations jointly with evaluees. Our engagements allow evaluators and program managers to discuss findings and identify how best to translate them into recommendations. The opportunity: greater ownership as the basis for greater implementation of agreed actions. The challenge: keeping the focus on recommendations that are hard to accomplish but necessary to achieve step-change improvements.

Sharing Evaluation Knowledge: With the Right People, At the Right Time, In the Right Language. The answer to better resourcing the outreach phase of an evaluation does not lie in simply spending more money. Outreach activities can be expensive, and wastefully so if they add to the cost without commensurately increasing the value of an evaluation. Instead, evaluations will increase their chance of influence when they:

  • Differentiate among diverse audiences and recognize their distinct information needs and learning styles;
  • Package messages, products, and channels to meet these needs, making evaluation evidence and lessons easily accessible; and
  • Understand, and appeal to, the incentives for behavior change.

Diverse Audiences, Diverse Information Needs. Life is much easier if you have a single audience, because the entire evaluation can be targeted to their questions, level of technical detail, and so on. But this is often not the case. At the World Bank Group, for example, IEG’s audiences range from members of executive boards to senior and mid-level management, operational staff, and counterparts. Each has a stake in the evaluation, and each an interest in learning something about the program under evaluation. For instance, policy makers in board rooms, ministries, and senior management want to know whether, strategically, a policy, strategy, or program is the right thing to do. Does it contribute to the overall goals of the institution; does it generate returns – financially and in development outcomes – commensurate with the resources invested? At the operational level, the interest lies more in the nuts and bolts of what has worked, how, and why. And affected communities want feedback on the issues they raised and what will be done about them, especially if they were consulted during the evaluation.

Making Messages Accessible. This description of different audiences and their needs already conveys that there can be no “one size fits all”. At the community level, conveying messages requires different means – including translation into local languages, a focus on the details that matter to the community, and delivery through channels accessible to people – than, for instance, the board room of a multilateral institution. For the latter, the interest is in short, focused, strategic messages, while operational staff want more detail: both the evidence that supports the points made in the evaluation and the specifics they can learn from. And for wider audiences and different means of communication – including social media – messages need to be captured and conveyed in different ways. This is why IEG has invested in developing a range of prototypes, from infographics to videos.

Incentivizing Behavior Change. Ultimately, evaluation recommendations are about changing what we do and how we do it. Traditionally, evaluators have thought of this step as tracking progress in the implementation of recommendations. The World Bank Group’s management action record is an example of a sophisticated system that tracks recommendations over a four-year period. It is an important step in taking stock of what has changed since the evaluation was completed. In the future, we will raise the discussion to reflect on the systemic, institutional changes that result from evaluations conducted under each of our Strategic Engagement Areas. This is one step toward recognizing that incentivizing behavior change requires a different kind of engagement than (solely) compliance checking.

Value-for-Money. In most organizations where I have worked, it would have been impossible to finance all of these outreach activities. At IEG, we are fortunate to be resourced in ways that allow us to invest in making our evaluation work more accessible to users, and hence to facilitate learning and influence change. These investments enhance the value of the work done. It does require making strategic choices about which investments in outreach will generate the highest returns.

Comments

Submitted by Mike Smith on Wed, 06/22/2016 - 04:33

If you want to see a system that incorporates all your points and is based on 15 years of research into learning transfer, go to www.ltsglobal.com and look at TransferLogix. It establishes whether learning has impact and adds value, and engages all the stakeholders in the learning cycle. Also see the Learning Transfer Systems Inventory, the only validated tool in learning transfer (used in 22 countries with over 15,000 participants).

Submitted by LaxmI Limbu on Thu, 06/23/2016 - 14:25

I am a learner of M&E-related information, with a focus on the use of MRM, MIS databases, and strengthening M&E systems. I would like to ask: if a programme/project has been operating for 18 or 19 years, for how many years should monitoring be done?

Laxmi, monitoring should be an ongoing activity that helps managers manage program implementation. There are a number of things that need to be monitored, from financial data (expenditures, income) to procurement and milestones in program implementation. What is important for evaluation is that results are monitored as well. Some are more immediate outputs, like the number of people/children vaccinated or wells built; others are outcomes, like changes in the incidence of illnesses. Given the long life of the program, these should be monitorable, and both self-evaluation and independent evaluation should help in understanding whether outcomes are being achieved and, if not, what needs to be done to achieve them. The “highest level” of results are impacts, e.g. changes in behaviors that are sustained, improved well-being, decreased poverty levels. These are more difficult to measure and do not change as rapidly as other results. Hence, evaluation does not need to be done on a daily basis, but it should still be undertaken as frequently as makes sense given the speed at which impacts could be observed.

Submitted by Susanne Bauer on Sun, 06/26/2016 - 01:06

Great tips on early engagement, knowledge sharing, including diverse audiences, and increasing access to findings at the community level: I can only confirm your excellent summary, with behavioral changes having impact over time. Thank you for this good step forward in enhancing the value of development work, which is done gradually and often with varying results, from our perspective as external evaluators.

Great response to a beginner. Monitoring is an ongoing process, as explained in her comments. Evaluation can take place at different times: for example, at mid-term, to ensure that project activities are being implemented as planned, to identify any constraints and effects of external influences, and to take appropriate interventions to rectify the situation. Ex-post evaluations assess relevance (whether the project is designed to address the identified issues and whether planned activities are geared towards achieving the objectives), efficiency (use of resources), effectiveness (achieving the outputs and results), impact (what long-lasting difference it has made, and what changes have been brought about in the behaviour and attitudes of the beneficiaries as well as the decision makers), and finally sustainability, to ascertain whether the project has built beneficiary and institutional capacity to continue the project without outside support, to own it, and to mobilise resources. So it is done at different times, and one can conduct impact evaluations after a number of years to see the difference the interventions have made in changing people's lives, such as improved living standards, higher achievement rates among students, improved health status, etc. Most important is to learn, to share the findings and recommendations, and to establish a mechanism to implement the recommendations!

Submitted by Scott Bayley on Wed, 07/06/2016 - 14:07

Readers may also be interested in the 2015 report "Evaluations that make a difference: stories from around the world," which contains a number of lessons based on a series of international case studies: https://evaluationstories.wordpress.com/evaluation-story-publications/ In addition, my own recent blog post on types of evaluation use and formulating high-quality recommendations offers some guidance: http://www.ecdg.net/2016/07/04/evaluation-use-and-recommendations/

Submitted by protsa omondi on Fri, 07/29/2016 - 16:47

The discussion of information needs and use based on stakeholder profiles is very useful. Keep it up and give us more information on this. Thanks.
