  • Intervention design must consider how trade-offs were weighed and how they influenced decisions
  • Intervention design must outline how to track and quickly respond to changing contexts
  • Intervention design must explore effects on consumption levels, in line with SDG12


Evaluators in international development typically assess the design quality of an intervention. The reason is simple: the intervention design provides the yardstick for determining success and failure. Achievements are measured by comparing what was planned with what has actually been achieved. In addition, we find that “quality at entry” (in other words, the quality of intervention design) is a good predictor of the intervention’s outcomes.

An assessment of the design quality of an intervention asks whether the “intentions were right”. Questions we evaluators use to determine this include whether:

  1. Objectives were realistic,
  2. There is an internal logic or coherence along the results chain, ensuring that inputs and outputs can actually lead to the expected higher-level results, and
  3. Relevant measurable indicators were embedded in design and monitoring systems.

But, will this be enough as we prepare for the future?

Previous blogs in this Rethinking Evaluation series discussed issues that need to be reflected upon both in intervention design and in its evaluation.

For example, in an earlier blog on effectiveness, we argued that a different approach to managing complexity might lead to a different way of defining objectives: one where it matters to understand the web of interrelated factors, to identify entry points that will amplify possible impacts, and to be cognizant of what have until now been unintended consequences. Under these circumstances, is it enough to ask whether objectives are realistic?

Or take the Relevance of Relevance blog, where I argue that the simple question of whether something is relevant in a complex network raises additional questions about approaches that have depended on a rather linear interpretation of reality. Logical models and results chains, even when used in all sincerity, have far more often than not been simplified and linear. Question 2 above is clearly aligned with that question of internal logic.

There are additional challenges that intervention design needs to take into account in light of the Sustainable Development Goals (SDGs).

For instance, tensions can and will arise, especially between goals that require tough trade-offs; a challenge that is embedded in the SDGs, which aim for a better life for all, but in environmentally sustainable ways. In an ideal scenario, the tensions between goals will stimulate innovation and lead to better solutions: say, lowering the cost of alternative energy sources so that we can meet goals of equal access to electricity for all without further depleting the earth’s natural resources. But these ideals might be hard to come by. Evaluating whether and how trade-offs were weighed, and how they influenced the ultimate intervention design, will add a much better understanding of whether the right decisions were made.

And then there is the need to evaluate whether:

  • Diverse perspectives were taken into account to identify central levers of change and build a focus on them into the intervention design. In addition, understanding whose perspectives were taken into account is important for understanding ownership and how stakeholder groups will be affected.
  • Features were included in the intervention design to track and respond in a timely way to changing contexts, as discussed in Agility and Responsiveness, whether to manage complex (or complicated) political economies or to operate in dynamic institutional contexts.
  • Interventions have intended or unintended, direct or indirect effects on consumption levels and patterns, as suggested in SDG12 and discussed in the efficiency blog. Introducing measures to evaluate this dimension now will create greater awareness of, and incentives for, changes in project design and implementation.

These factors (and more) need to be reflected in design quality and its assessment, whether through internal quality assurance processes or through evaluation. If we do not start now, the necessary evidence will not be generated in time to learn from experience and make course corrections as they are needed.

Read other #Whatworks posts in this series, Rethinking Evaluation:

Have we had enough of R/E/E/I/S?,  Is Relevance Still Relevant?, Agility and Responsiveness are Key to Success, Efficiency, Efficiency, Efficiency, and What is Wrong with Development Effectiveness?


Submitted by Charles Y. Ahe… on Fri, 04/21/2017 - 19:09


This is an interesting new realm in conducting evaluation. This approach is an in-depth assessment of the basic principles underlying evaluation practice. Clearly it's a serious rethinking of whether logic models and results chains are achieved without compromising the real values of what is intended to be delivered.
A very good opportunity to be more introspective in our inquiry during evaluations.
Great thoughts.
