
The international community has adopted the ambitious Sustainable Development Goals (SDGs); achieving these goals will require far more financing than official development assistance can provide. Financing for development is therefore now appropriately understood to include the mobilization of private financing and domestic tax revenue. Bringing these financing sources to bear on efforts to achieve the SDGs will require good policy-making to ensure value for money. In turn, independent evaluation can and should be a tool for informing policy, as one of the core pillars of the evidence-based policy-making agenda.

At this year’s Asian Evaluation Week (AEW), held last week in Hangzhou, China, the role of evaluation in policy-making was a central topic of discussion. The event was hosted by the Independent Evaluation Department of the Asian Development Bank, the Ministry of Finance of the People’s Republic of China, the Asia-Pacific Finance and Development Institute, and the Zhejiang Provincial Department of Finance. Over 200 senior evaluators and policy-makers gathered to share their experiences and explore ways to further strengthen the link between evaluation and policy. Over the four days of the conference, speakers shared examples of where evaluation is influencing policies, including in public expenditure management in China.

As evaluators, we expect our evaluations to be used and to affect policy formulation and implementation. Indeed, the examples we heard at AEW illustrated that this is happening. But are we there yet? My sense is not quite.

So how can we get there?

Drawing on the rich discussions we had over the course of AEW, I would like to suggest three broad opportunities for influencing policy-making through evaluation. First, evaluators must make strategic choices. Second, they have to deliver quality evaluations. Third, they must engage more systematically with the demand side of evaluation.

1. Make Strategic Choices

To influence policy, evaluators must be forward-looking: exploring intervention options and means for achieving greater development impact rather than merely assessing the failure or success of a given project or program. Evaluation must be topical and well-timed. As one AEW participant put it, in this fast-changing policy environment, to be relevant, evaluators have to be two steps ahead and provide readily available knowledge upstream in the policy-formulation process. Moreover, much of current evaluation practice takes place at the project and program level. For evaluation to become more influential, it should become more strongly institutionalized at the strategy and policy level.

While it is natural to expect our evaluations to influence policy decisions, we as evaluators must recognize that many other factors affect them. As Carol H. Weiss (1999) cogently explains, evaluative evidence is one of several possible sources of Information that, together with Interests, Ideologies, and Institutions (the four I’s), interact to shape public policy. This recognition calls for us to be even more strategic in deciding what to evaluate and when, if we want to effectively influence policy-making.

Evaluators must also beware of the confirmation bias of policy-makers and not conflate its effect with evaluative influence. In searching for arguments to support a predetermined policy choice, policy-makers may invoke an evaluative finding. In such instances, evaluators cannot take credit for having influenced the policy choice, though they would not object to their existing findings being used to back good policy choices not directly inspired by their evaluation. A more serious situation arises when, unbeknownst to the evaluator, well-intentioned findings are misinterpreted to back a policy choice that is contrary to the spirit of those findings.

Finally, we evaluators have to recognize that evaluation should not seek to influence policy at all times and in all places. For example, as discussed in a couple of breakout sessions during AEW, in decentralized settings and at the subnational level, evaluation may be more beneficial to a local-level audience if it focuses on the effectiveness of project and program implementation rather than seeking to influence policy-making.

2. Focus on Quality and Methods

For an evaluation to inform policy, it has to pass the litmus test of quality and credibility. As evaluators, we need to develop models to better understand, analyze, and interpret the complexity involved in policy interventions and their context. This calls for strengthening our methods to capture interactions in complex programs (in ex-post evaluations) and to explore synergies between several policy interventions or, at a minimum, assess the performance of several policy options (in ex-ante policy experiments). In a complex world, evaluators should also avoid the fallacy of linear thinking and simplistic assumptions about causality. This calls for theories of change that include a statement of the assumptions and circumstances under which the intervention might work or fail to work.
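To make the idea of an ex-ante policy experiment concrete, here is a minimal sketch that compares two hypothetical policy options under uncertainty using a simple Monte Carlo simulation. The option names, base effects, and context multiplier ranges are all illustrative assumptions, not findings from any evaluation.

```python
import random

random.seed(42)  # fix the seed so the illustration is reproducible

def simulate_option(base_effect, context_range, n_runs=10_000):
    """Average simulated outcome of a policy option under uncertainty.

    base_effect: assumed average effect of the option in a neutral context.
    context_range: (low, high) bounds on a multiplier expressing how much
        contextual factors could dampen or amplify that effect.
    """
    total = 0.0
    for _ in range(n_runs):
        total += base_effect * random.uniform(*context_range)
    return total / n_runs

# Two hypothetical options: B promises a larger effect but is more
# sensitive to context. All numbers are illustrative assumptions.
option_a = simulate_option(base_effect=1.0, context_range=(0.5, 1.5))
option_b = simulate_option(base_effect=1.2, context_range=(0.2, 1.4))

print(f"Expected effect, option A: {option_a:.2f}")
print(f"Expected effect, option B: {option_b:.2f}")
```

Making the context multiplier explicit is one way of writing down the assumptions and circumstances under which an intervention might work or fail to work, as called for above.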

To have greater influence on policy-making, evaluators may consider making greater use of systematic reviews of evaluations of individual policies to tease out lessons that cut across several similar policy interventions. This was a clear lesson from an AEW breakout session where we discussed “What makes a good policy”.
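As a hedged illustration of how a systematic review can pool evidence, the sketch below runs a fixed-effect meta-analysis, weighting each study's effect estimate by the inverse of its variance so that more precise evaluations count for more. The study names, effect sizes, and standard errors are hypothetical.

```python
import math

# Hypothetical effect estimates and standard errors from evaluations
# of similar policy interventions; all numbers are illustrative.
studies = [
    {"name": "Evaluation A", "effect": 0.30, "se": 0.10},
    {"name": "Evaluation B", "effect": 0.15, "se": 0.08},
    {"name": "Evaluation C", "effect": 0.45, "se": 0.20},
]

# Fixed-effect pooling: weight each study by the inverse of its
# variance, so more precise evaluations count for more.
weights = [1 / s["se"] ** 2 for s in studies]
pooled = sum(w * s["effect"] for w, s in zip(weights, studies)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"Pooled effect: {pooled:.2f} (standard error {pooled_se:.2f})")
```

When country contexts differ substantially, as they often do, a random-effects model that allows the true effect to vary across settings would be the more defensible choice.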

Evaluators’ ability to assess unintended consequences and help policy-makers take them into account in policy formulation and implementation was also discussed. For example, in a session on Gender Equality, participants noted that gender-focused policies can produce sub-optimal outcomes if the cultural context is not well understood, and that some infrastructure policies have had unexpected detrimental impacts on women’s sense of security.

3. Strengthen Outreach and Engagement with the Demand Side

It helps if, at the apex level, policy-makers make a conscious decision to seek and use evaluative knowledge to inform policy-making, as was shown in a couple of sessions at AEW. However, evaluators must be aware that when it comes to evaluation, the economist J.B. Say’s law that supply creates its own demand does not always hold, especially as the demand side tends to focus on the accountability purpose of evaluation (which is often misconstrued as punitive).

Policy influence is more likely to happen when policy-makers are incentivized to learn: when they see evaluations as sources of insight, not merely as tools to hold them accountable. This means that evaluators have to become more effective at engaging with policy-makers and working synergistically with them throughout the evaluation cycle if they are to bring the science of evaluation to bear on policy-making. But there are limits: evaluation must safeguard its independence, for example.

Should policy-makers be accountable for seeking evaluative evidence when designing and implementing new policies? Some participants discussed the role of elected officials in fostering evaluation use. This is a step in the right direction. In multilateral institutions, the Boards could enhance their oversight role as well.  

Finally, technology can play an important role in facilitating more effective outreach and stakeholder engagement. For example, technology can help evaluators and policy-makers quickly synthesize insights from a much larger body of evaluative evidence than was possible with the rudimentary tools previously available. It can also facilitate knowledge sharing through wider and faster distribution: lessons learned in one country can more easily be shared with policy-makers in another.
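As one illustration of the kind of tooling this implies, the sketch below uses TF-IDF term weighting (via the scikit-learn library, assuming it is available) to surface recurring themes across a toy corpus of evaluation report snippets; in practice one would run this over full reports drawn from an evaluation repository.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Stand-in snippets for a corpus of evaluation reports; in practice
# these would be full documents drawn from an evaluation repository.
reports = [
    "Irrigation project improved yields but maintenance funding lapsed.",
    "Road rehabilitation raised market access; maintenance was underfunded.",
    "School construction met targets, yet teacher funding fell short.",
]

# Weight terms by TF-IDF and aggregate across the corpus to surface
# recurring themes (e.g., maintenance and funding in this toy example).
vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(reports)
terms = vectorizer.get_feature_names_out()
scores = tfidf.sum(axis=0).A1  # total weight of each term across reports

for term, score in sorted(zip(terms, scores), key=lambda p: -p[1])[:5]:
    print(f"{term}: {score:.2f}")
```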

A lot needs to happen if independent evaluation is to reach its full potential of influencing policy-making. But the direction of travel seems right, and there are reasons to be cautiously optimistic.

References

Weiss, C.H. (1999). The Interface between Evaluation and Public Policy. Evaluation, 5(4), 468-486.

Comments

Submitted by Abdourahmane BA on Fri, 10/06/2017 - 05:21


Evaluation is part of the set of tools and approaches development organizations and institutions use to learn from ongoing and past development actions in order to improve ways of advancing current and future development objectives toward greater welfare and freedom for people. It is not a stand-alone approach; it is integrated into a set of development performance management strategies called the Monitoring and Evaluation (M&E) System.

The role of evaluations and other M&E System activities is to improve development organizations’ and institutions’ capability to advance development objectives, including the SDGs, through improved Results-Based Management (RBM), Knowledge and Information Management (KIM), and Evidence-Based Decision-Making (EBDM).

To ultimately advance development objectives, organizations and institutions in charge of development actions should, through improved RBM, KIM, and EBDM, improve their capabilities to design better policies and programs for future actions and to take better decisions (based on evidence; System 2, not System 1, in Kahneman’s terms) at the operational, tactical, and strategic levels.

We look forward to more relevant events in the future that look at the broad picture, not only at one approach of the M&E System, along lines that improve development organizations’ decision-making and measured risk-taking processes to advance development objectives.

Submitted by Bob Williams on Fri, 10/06/2017 - 05:21


Interesting conversation, but hardly new. Clearly a great discussion; however, people have been saying this about evaluation and policy-making for decades. The great work done by the Evidence-Based Policy Making research network in the UK during the Tony Blair years, almost twenty years ago, nailed very well what the issues were and what helped or hindered their application.

One other thing that concerns me greatly is that evaluation is now being asked to do things, and expresses aspirations for doing things, that are, frankly, impossible. Sometimes it's because the intellectual traditions on which evaluators and policy-makers draw will never legitimise the necessary approaches (I've had that happen to me). Sometimes it's because, ontologically, it just cannot be done. And more often than not, we are not given the time or the resources: we are asked to do deep research investigations on shallow evaluation budgets. If evaluation is ever to be considered a profession, it has to have the courage and ability to say 'no, we cannot do this'.

Submitted by Daouda TOURE on Tue, 10/31/2017 - 08:49


Very good summary; it reads as if I had been at the meeting. Very useful for people like me who conduct evaluation at the project level and can now try to use the results to improve policy at the strategic level.
For me, it is still not quite clear: are you saying that strategic evaluation is linked to policy-making, while project-level evaluation is for the subnational level?
