Evaluability assessments help determine “the extent to which an activity or project can be evaluated in a reliable and credible fashion” (OECD-DAC: 2010: 21). In doing so, they inform stakeholders about the potential feasibility, scope, approach, and value for money of an evaluation.


As the World Bank Group is currently developing a new overarching evaluation framework, fundamental questions about evaluation are being discussed. When do we evaluate? How do we go about the evaluation process? How can we make effective use of evaluations for accountability and learning?

Responding to these and other fundamental questions touches upon the core issue of evaluability. First introduced by Wholey (1979) and further developed over time by others, evaluability assessments (EA) have been mostly carried out at project and activity levels. They tend to cover the following issues:

  • Clarity of the intervention and its objectives: Is there a logical and clear theory of change that articulates how and under what conditions intervention activities influence particular processes of change?
  • Availability of data: Which data are available for assessing the merit and worth of the intervention (e.g. data generated by the intervention, external data sets, policy and academic literature)?
  • Stakeholder interest and intended use: To what extent is there a clear interest (and capacity) among stakeholders to use the evaluation’s findings and recommendations in strategic decision-making, program improvement, learning about what works, etc.?

Additional dimensions that are often addressed in evaluability assessments are clarification of the scope of a potential evaluation and options for methodological design.

At the project level, evaluability assessments can be particularly useful for non-evaluators (e.g. operational staff, decision makers, donors) for reasons other than evaluation in a narrow sense. There is significant potential for evaluability assessments to be used at the project design stage or during implementation, as they involve a systematic assessment of the quality and logic of the project’s theory of change, its link with the project’s monitoring framework, and the identification of potential evidence gaps (see for example Davies, 2013; Trevisan and Walser, 2015).

While it has mostly been applied at the project level, evaluability assessment can and should also be used at higher levels of intervention, such as a program, strategy, or thematic area of work. It is at these levels that reflecting on the strategic allocation of scarce evaluation resources is particularly important for identifying where evaluation offers the highest value for money. At the same time, one should ask whether the processes, tools, and (monitoring) data within the operational system are adequate to help the organization respond to strategic questions of interest in a particular area of work at the global, regional, or country level. In both cases, evaluability considerations come into play.

Contrary to what one might think, assessing the merit and worth of a strategy, high-level program, or thematic area of work is not simply a matter of summing up assessments of underlying projects and activities (e.g. from self-evaluations). I briefly highlight three important challenges in the evaluability of higher-level interventions. The first two have been discussed by Davies and Payne (2015):

  • Very complex, yet incomplete theories of change. Programs and thematic areas of work often encompass multiple intervention activities at multiple levels (e.g. within-country, country, global) across countries or sites, involving multiple stakeholder groups. Any attempt to capture the wide diversity of causal pathways between the array of intervention activities in different contexts and a range of outcomes is daunting, to say the least. As a result, credible causal analysis at this level (without further decisions on deconstruction and selectivity) is very difficult, if not impossible.
  • Diverse, incomplete and unknown data. Even in organizations with highly developed monitoring and self-evaluation systems such as the World Bank Group, there are significant limitations in the scope, depth and quality of available data. Finding out what relevant existing data are available inside and outside the organizational system involves time and resources.
  • Questions of strategic interest and corresponding information requirements. Monitoring and evaluation exercises at project and activity levels tend to focus on the traditional OECD-DAC criteria (see the blog posts by Caroline Heider). At higher levels of intervention, strategic questions around coordination, policy coherence, institutional positioning, and comparative advantage (versus other institutions active in a policy field) become all the more important. Addressing these questions requires planning and budgeting for the use of adequate methodological (systems) approaches and corresponding data collection activities.

Over the last ten years or so, there has been renewed interest in the international development community (e.g. DFID, ILO, IADB, WBG) in conducting evaluability assessments, mostly at the project level but also for higher-level programs. Evaluability assessments (if used strategically and not as a requirement) can be an effective medicine for treating the metaphorical disease called ‘evaluitis’ (Frey, 2006) and for thwarting the ‘ritualization’ of evaluation processes in organizational systems. They can help us ask fundamental questions about the strategic allocation of scarce evaluation resources and strengthen our internal monitoring processes to provide timely and relevant evidence to decision makers and other stakeholder groups.

References

Davies, R. (2013) Planning evaluability assessments: a synthesis of the literature with recommendations. Working Paper, 40. London: DFID.

Davies, R. and L. Payne (2015) Evaluability assessments: reflections on a review of the literature. Evaluation, 21(2), 216-231.

Frey, B. (2006) Evaluitis – eine neue Krankheit, Working Paper, 18. Zurich: University of Zurich, Center for Research in Economics, Management and the Arts.

OECD-DAC (2010) Glossary of key terms in evaluation and results-based management. Paris: OECD-DAC.

Trevisan, M.S. and T.M. Walser (2015) Evaluability assessment: improving evaluation quality and use. Thousand Oaks: Sage Publications.

Wholey, J.S. (1979) Evaluation: promise and performance. Washington, D.C.: Urban Institute.


Comments

Submitted by Susan Stout on Thu, 06/15/2017 - 15:12

Hi Jos,
Great post -- glad to see this topic getting more attention. I've always believed that thinking about 'evaluability' (and how to improve it during the design period) is key to ensuring that evaluations actually contribute to learning -- and also an antidote to my two favorite diseases in our field: 'resultsophobia' -- the fear of finding failure or 'less than perfect' results, even though disappointing results are, at least in part, inevitable; and 'indicatoritis' -- the tendency to focus "M and E" plans on debates about long lists of indicators rather than on 'who is going to use what information to make what decision'.
Thanks,

Submitted by Jos Vaessen on Fri, 06/16/2017 - 14:42

In reply to Susan Stout

Thanks Susan for your interest and comment. Quite right. And the different ‘itises’ and ‘phobias’ are probably strongly correlated in practice. A culture of making informed choices about evaluation (less ‘evaluitis’) goes hand in hand with a willingness to learn from failure and success (less ‘resultsophobia’) as well as the efficient and effective use of different types of evaluative evidence for learning, accountability and program improvement purposes (less ‘indicatoritis’).

Yes indeed, as I've long said -- we should interpret M and E as 'motivation and empowerment'. Most everyone working to deliver development results -- in countries as well as among agencies -- wants to be making a difference. They can be motivated by evidence of success and empowered to change things when evidence suggests failure -- but only if we recognize and value their own perceptions and ensure that they have the authority and flexibility to respond to feedback.

Submitted by rick davies on Sat, 06/17/2017 - 14:03

Hi Jos

Re the third of the three dimensions of evaluability (theory, data and stakeholders), I think there are two facets of the stakeholders that need to be considered. They could be thought of as “demand” and “supply”. Demand is the nature of stakeholders’ interests in the evaluation: not only what evaluation questions are of interest to whom, but what risks and opportunities are of concern to whom. On the supply side, there is the question of who will be available or accessible to an evaluation team, when, where, and under what circumstances. [The latter was the "practicality" dimension in the 2013 DFID Working Paper]

The other point that may be of interest is how evaluability judgements relate to choices about what evaluation methods to use. Choices about methods can be seen as sitting in the middle of a triangle formed by the above three issues. One way or another, the choices made about methods need to be consistent with what is known about the program theory, with what is known about the data that are or could become available, and with what is known about the stakeholders (both demand and supply aspects).

regards, rick

Submitted by Jos Vaessen on Mon, 06/19/2017 - 16:25

In reply to rick davies

Dear Rick, thanks for your thoughtful comments. Regarding your second point, I fully agree that choices regarding methodological design of an evaluation should flow from these (and other) considerations. The problem is that at higher levels of intervention (e.g. global thematic areas, strategies, programs) the evaluability assessment (as a stand-alone exercise or an approach paper) could quickly become rather complicated and costly. This raises questions about the optimal level of (financial and analytical) resources to be devoted to it.
