
Long-time evaluators in the development field will remember the difficult conversations we had, not too long ago, about measuring impact in a reliable way. The reason for the heated debates is simple: positive impact is what development interventions are meant to produce, and negative impact is what they are supposed to avoid—and proving it one way or another is paramount.

Impact is defined as “the positive and negative changes produced by a development intervention, directly or indirectly, intended or unintended. This involves the main impacts and effects resulting from the activity on the local social, economic, environmental and other development indicators. The examination should be concerned with both intended and unintended results and must also include the positive and negative impact of external factors, such as changes in terms of trade and financial conditions.” (OECD/DAC key terms for evaluation)

Methods for impact evaluation have grown over the past decade. A whole industry has sprung up, with many a student leaving university with great aspirations to undertake impact evaluations of a certain kind. But, as many systematic reviews and a 2012 IEG evaluation of the impact evaluations undertaken by the World Bank Group show us: in spite of considerable resources spent, the quality of too many of these studies is not high, the results are deemed inconclusive and limited to rather narrow phenomena, and fundamental gaps on strategic issues remain. More often than not, these studies conclude that more studies are needed.

Confronting this reality, as well as evidence about the weaknesses in project design – poorly defined objectives, confusion between outputs, outcomes, and impacts, and ineffective M&E Systems – together with insights into complexity theory, gives me pause to think!

Let’s assume development practitioners take the opportunity that complexity theory and enhanced modeling capacities provide – something that, I believe, will have to happen. Let’s also assume that such a change will result in getting to a better understanding of development challenges, pathways to their solutions, and interventions that are designed in different ways (as argued in my earlier blog What’s Wrong with Development Effectiveness?) – ways that recognize the systemic effects interventions can have on a more complex network of interrelated development processes.

It is hard to imagine how a logical framework or a traditional M&E system would capture impacts as defined in the DAC evaluation criteria, let alone the cost of doing so.

Instead, we evaluators need to seize the opportunity to rethink our practice. Evaluation methods and questions can continue to incentivize changes in development practice. We can do this by:

  • Showcasing how complexity models can be used in evaluation and, hence, applied to design;
  • Asking evaluation questions that move beyond linear results chains into areas of unintended, direct and indirect effects that interventions may have; and
  • Strengthening methods to capture synergies between interventions, and taking a systemic perspective of sets of development interventions.

Some thinking has gone into what complexity means for evaluation practice. One excellent reference, for example, is Dealing With Complexity in Development Evaluation, authored by Michael Bamberger, Jos Vaessen and Estelle Raimondo. But a lot will need to be done to translate these ideas into evaluation practice.

Watch the video: Are Impact Evaluations Useful?

Read other #Whatworks posts in this series, Rethinking Evaluation:

Have we had enough of R/E/E/I/S?, Is Relevance Still Relevant?, Agility and Responsiveness are Key to Success, Efficiency, Efficiency, Efficiency, What is Wrong with Development Effectiveness?, and Assessing Design Quality.

Comments

Submitted by Adam McCarty on Sun, 05/14/2017 - 23:16

I would add two other themes: 1) Use of big data through new technologies. This is connected to, but different from, "enhanced modeling capacities". Vast databases can be built using video or mobile phones. Consider the analysis marketing people do of customers in supermarkets using videos. Could we not do something similar to understand schools and health clinics? 2) Promote post-evaluations: end-of-project evaluations are only impact hypotheses, as they can only guess at sustainability ("for how long will this water pump last?").
ALSO: The focus on impact evaluations is distracting from ("crowding out") more pure research into causation analysis. For example, in Lao PDR dozens of projects over 20+ years have failed to reduce the child dropout rate from primary schools. Why not stop and have a team study this obviously complex problem for six months, and in doing so design experiments to rigorously test explanations? Rather than push through a large loan, which tests a few variant "seems a good idea" models via household surveys.

Submitted by Peter Eerens on Fri, 05/19/2017 - 19:12

Indeed. Dealing with Complexity in Development Evaluation is an excellent resource for reflecting on the theme of this series, "Rethinking Evaluation". A good introduction for the beginner, and an excellent companion work for the experienced evaluator.
But we still need to take more risks to adjust our methods to the reality of a non-linear world. The more we perfect the evaluation, the more we separate the permanent interrogation that should accompany the actor from what she/he is actually undertaking. It is not necessarily true that impact is only at the end of a long temporal sequence. In my experience, as a public health practitioner, impact can be sudden, simultaneous, dramatic, at nanoscale, and too fluid to capture. Pure aesthetics. An experience rarely tackled in evaluation and better captured by poets, artists, or silent communion.
Nature has no purpose and is full of impact. Man is full of purpose and often fails to impact. We still have a lot to learn.
Peter Eerens
Living Health Systems
