Rethinking Evaluation - Impact: The Reason to Exist
Complexity theory and enhanced modeling capacities provide opportunities to rethink evaluation methods.
By: Caroline Heider

- Evaluation questions should showcase how complexity models can be applied to project design.
- Evaluation should move beyond linear results chains into areas of unintended and indirect effects.
- Evaluation methods should capture synergies between interventions.
- Evaluation should take a systemic perspective on sets of development interventions.
Long-time evaluators in the development field will remember the difficult conversations we had, not too long ago, about measuring impact in a reliable way. The reason for the heated debates is simple: positive impact is what development interventions are meant to produce, and negative impact is what they are supposed to avoid; proving it one way or the other is paramount.
Impact is defined as “the positive and negative changes produced by a development intervention, directly or indirectly, intended or unintended. This involves the main impacts and effects resulting from the activity on the local social, economic, environmental and other development indicators. The examination should be concerned with both intended and unintended results and must also include the positive and negative impact of external factors, such as changes in terms of trade and financial conditions.” (OECD/DAC key terms for evaluation)
Methods for impact evaluation have proliferated over the past decade. A whole industry has sprung up, with many a student leaving university with great aspirations to undertake impact evaluations of a particular kind. But, as many systematic reviews and a 2012 IEG evaluation of the impact evaluations undertaken by the World Bank Group show, in spite of the considerable resources spent, the quality of too many of these studies is not high, their results are often inconclusive and limited to rather narrow phenomena, and fundamental gaps remain on strategic issues. More often than not, these studies conclude that more studies are needed.
Confronting this reality, together with evidence about weaknesses in project design – poorly defined objectives; confusion between outputs, outcomes, and impacts; ineffective M&E systems – and insights from complexity theory, gives me pause!
Let’s assume development practitioners take the opportunity that complexity theory and enhanced modeling capacities provide – something that, I believe, will have to happen. Let’s also assume that such a change leads to a better understanding of development challenges and pathways to their solutions, and to interventions designed in different ways (as argued in my earlier blog What’s Wrong with Development Effectiveness?) – ways that recognize the systemic effects interventions can have on a more complex network of interrelated development processes.
It is hard to imagine how a logical framework or a traditional M&E system would capture impacts as defined in the DAC evaluation criteria, let alone the cost of doing so.
Instead, we evaluators need to seize the opportunity to rethink our practice. Evaluation methods and questions can continue to incentivize changes in development practice. This could be by:

- Formulating evaluation questions that showcase how complexity models can be applied to project design.
- Testing methods that move beyond linear results chains into areas of unintended and indirect effects.
- Developing methods that capture synergies between interventions.
- Taking a systemic perspective on sets of development interventions.
Some thinking has gone into what complexity means for evaluation practice. One excellent reference, for example, is Dealing With Complexity in Development Evaluation, authored by Michael Bamberger, Jos Vaessen and Estelle Raimondo. But a lot will need to be done to translate these ideas into evaluation practice.
Watch the video: Are Impact Evaluations Useful?
Read other posts in this series:

- Have we had enough of R/E/E/I/S?
- Is Relevance Still Relevant?
- Agility and Responsiveness are Key to Success
- Efficiency, Efficiency, Efficiency
- What is Wrong with Development Effectiveness?
- Assessing Design Quality