Dealing with attribution in an increasingly interconnected and policy-saturated world
Strict attribution is often not sufficient to understand how interventions work.
By: Jos Vaessen
About a decade ago, following the seminal paper “When will we ever learn” published by the Center for Global Development (CGD, 2006), debates and funding for impact evaluation in international development received a new impetus. New initiatives such as 3ie were established and the number of impact evaluations increased significantly.
Most of these impact evaluations have focused on the net effect (in terms of a specific outcome) attributable to an intervention, controlling for other factors (using design-based and/or statistical controls). The experimental and quasi-experimental designs that underpin most of these impact evaluations help us to isolate and pinpoint the difference that an intervention has (or has not) made.
This is not the only causal question of interest to us. In 2012, another seminal publication, commissioned by DFID (Stern et al., 2012), argued for a broader analytical perspective in impact evaluation. The report presented a series of different (related) causal questions (including the question on net effect) and proposed a range of methodological options appropriate for each of them.
One causal question of interest that differs slightly from the net effect question is the following: what are the main contributory causes of changes in outcome variable y, and what has been the role of intervention x? This causal question explicitly draws attention to a (comprehensive) range of causal factors and the need to capture these in some way. In addition, it emphasizes causal explanation. While the (quasi-)experimental designs that underpin net effect analyses often rely on some type of explanatory model of the outcome variables of interest (most good studies do), the main difference is one of perspective and emphasis. The two types of questions complement each other, and both have merit from an accountability and organizational learning perspective.
To illustrate the difference, let me return to an example I used in a previous blog post: payments for environmental services (PES). Suppose we want to evaluate the impact of PES in a country like Costa Rica. The causal question could be: what is the net effect of PES on avoided deforestation in private forest lands? One could conceive of some type of counterfactual design to analyze this question empirically. A different question, focusing on contributory causes, could be: given the range of different policy interventions and other explanatory factors, what has been the role of PES in avoiding deforestation in private forest lands? In other words, in what ways and to what extent do policy instruments such as national legislation on land use and its enforcement, (perceived security of) property rights to land, environmental education programs, awareness campaigns and PES influence the attitudes and actions of land users regarding the protection of forested areas on their land? Moreover, in what ways and to what extent do underlying factors such as individual values and beliefs, peer behavior, education levels, income levels, (perceived) opportunity costs of land, and so on, affect these causal relations?
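To make the net effect question a little more concrete, here is a minimal sketch in Python using simulated, entirely hypothetical parcel-level data. It is not drawn from any actual PES evaluation; the variable names, coefficients and the selection mechanism are assumptions chosen only to show how a naive comparison of enrolled and non-enrolled parcels can differ from a regression-adjusted estimate of the net effect.

```python
# Minimal sketch (hypothetical data): estimating the "net effect" of PES on
# deforestation at the parcel level with a regression that controls for
# observable confounders. Names and coefficients are illustrative assumptions,
# not results from any actual PES evaluation.
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Hypothetical covariates: opportunity cost of land and tenure security.
opp_cost = rng.normal(0.0, 1.0, n)
tenure = rng.normal(0.0, 1.0, n)

# Enrollment in PES is assumed more likely where opportunity costs are low
# (self-selection), which biases a naive comparison.
pes = (rng.normal(0.0, 1.0, n) - 0.5 * opp_cost > 0).astype(float)

# Hypothetical outcome: deforestation pressure; the "true" PES effect is -0.4.
deforestation = (1.0 + 0.6 * opp_cost - 0.3 * tenure
                 - 0.4 * pes + rng.normal(0.0, 1.0, n))

# Naive difference in means confounds the PES effect with selection.
naive = deforestation[pes == 1].mean() - deforestation[pes == 0].mean()

# OLS with statistical controls recovers something closer to the net effect.
X = np.column_stack([np.ones(n), pes, opp_cost, tenure])
beta, *_ = np.linalg.lstsq(X, deforestation, rcond=None)

print(f"naive difference: {naive:.2f}, adjusted PES coefficient: {beta[1]:.2f}")
```

The point of the sketch is simply that the net effect question reduces the problem to one coefficient, however carefully estimated; it says little about the wider configuration of causes that the contributory-cause question asks about.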
Acknowledging that causal factors are interconnected in complex ways and that the behaviors of individuals, communities and institutions are influenced by multiple policy interventions calls for appropriate methodological solutions. A particularly promising field of work is complexity science. Caroline Heider already referred to some of the promising work in this field in her recent blog post. Systems mapping is a good starting point. It is an umbrella term for a range of methods that can help us develop a visual representation of a system. In contrast to conventional theories of change, which tend to rely on the principle of successionist causation, system maps include multiple feedback loops and are (implicitly or explicitly) aligned with principles of complexity science such as non-linearity, emergence and uncertainty in processes of change (Befani et al., 2015).
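As a rough illustration of how a system map differs from a linear results chain, the sketch below (in Python, with entirely hypothetical factors and links loosely based on the PES example) represents a system map as a directed graph and uses a simple depth-first search to surface feedback loops. Real system maps are of course far richer; this only shows the kind of structure that distinguishes them from a successionist chain.

```python
# Minimal sketch (hypothetical factors and links): a system map represented as
# a directed graph. Unlike a linear results chain, it can contain feedback
# loops, which a simple depth-first search can surface.
system_map = {
    "PES payments":               ["forest conservation"],
    "land-use legislation":       ["forest conservation"],
    "awareness campaigns":        ["individual values"],
    "individual values":          ["forest conservation", "peer behaviour"],
    "peer behaviour":             ["individual values"],       # feedback loop
    "forest conservation":        ["perceived opportunity cost"],
    "perceived opportunity cost": ["PES payments"],             # feedback loop
}

def find_cycles(graph):
    """Return unique simple cycles found by depth-first search (illustrative only)."""
    seen, cycles, path = set(), [], []

    def dfs(node):
        if node in path:
            cycle = path[path.index(node):]
            key = frozenset(cycle)
            if key not in seen:
                seen.add(key)
                cycles.append(cycle + [node])
            return
        path.append(node)
        for nxt in graph.get(node, []):
            dfs(nxt)
        path.pop()

    for start in graph:
        dfs(start)
    return cycles

for cycle in find_cycles(system_map):
    print(" -> ".join(cycle))
```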
A system map constitutes a good basis for using simulation techniques such as system dynamics. However, evaluators and planners often do not have the resources and data at their disposal for quantitative modelling of the system (e.g. system dynamics, or structural equation modelling as used in economics). In such cases (and in general), heuristic frameworks such as critical systems heuristics (Williams and Hummelbrunner, 2011) or Pawson's VICTORE framework can be quite helpful (Pawson, 2013). A system map also constitutes a good basis for applying "conventional" techniques. In principle, any type of reduced-form model in statistics would benefit from a system map as an underlying explanatory model. Moreover, for causal analysis in and across small-n settings (e.g. a group of countries in a region), a range of case-based methods (Byrne and Ragin, 2009) is available to evaluators and planners. Qualitative comparative analysis, for example, is a method that, if underpinned by a reliable explanatory model (visualized in a system map), can be very helpful in developing insights into the contributory causes of a particular change across a number of countries, communities or institutions.
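As an illustration of the kind of cross-case comparison mentioned above, here is a minimal, hypothetical sketch of a crisp-set QCA-style truth table in Python. The cases, conditions and outcomes are invented; an actual QCA would also involve calibration, Boolean minimization, and consistency and coverage measures, typically with dedicated software.

```python
# Minimal sketch (hypothetical cases): building a crisp-set QCA-style truth
# table for a small-n comparison. All values are invented for illustration.
from collections import defaultdict

# Each case: presence (1) / absence (0) of conditions and of the outcome.
cases = {
    "Country A": {"PES": 1, "enforcement": 1, "tenure": 1, "avoided_deforestation": 1},
    "Country B": {"PES": 1, "enforcement": 0, "tenure": 1, "avoided_deforestation": 1},
    "Country C": {"PES": 1, "enforcement": 0, "tenure": 0, "avoided_deforestation": 0},
    "Country D": {"PES": 0, "enforcement": 1, "tenure": 1, "avoided_deforestation": 1},
    "Country E": {"PES": 0, "enforcement": 0, "tenure": 0, "avoided_deforestation": 0},
}

conditions = ["PES", "enforcement", "tenure"]

# Group cases by configuration of conditions to build the truth table rows.
rows = defaultdict(list)
for name, values in cases.items():
    config = tuple(values[c] for c in conditions)
    rows[config].append((name, values["avoided_deforestation"]))

print("(PES, enforcement, tenure) -> outcome (cases)")
for config, members in sorted(rows.items(), reverse=True):
    outcomes = {out for _, out in members}
    outcome = outcomes.pop() if len(outcomes) == 1 else "contradictory"
    names = ", ".join(n for n, _ in members)
    print(f"{config} -> {outcome} ({names})")
```

Even in this toy form, the table suggests how configurations of conditions (rather than a single isolated cause) can be compared across cases to reason about contributory causes.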
We live in an increasingly interconnected and policy-saturated world. Strict attribution is not the only question we are interested in. We need to understand how interventions work, under what circumstances and for whom. The “why” and “how” questions are at least as important as the “what” question.
Befani, B., B. Ramalingam and E. Stern (2015). Introduction: Towards systemic approaches to evaluation and impact. IDS Bulletin, 46(1), 1-6.
Byrne, D. and C. Ragin (2009). The Sage handbook of case-based methods. Thousand Oaks: Sage.
CGD (2006). When will we ever learn? Improving lives through impact evaluation. Evaluation Gap Working Group. Washington, D.C.: Center for Global Development.
Pawson, R. (2013). The science of evaluation: a realist manifesto. London: Sage.
Stern, E., N. Stame, J. Mayne, K. Forss, R. Davies and B. Befani (2012). Broadening the range of designs and methods for impact evaluation. London: Department for International Development.
Williams, B. and R. Hummelbrunner (2011). Systems concepts in action: a practitioner’s toolkit. Stanford: Stanford University Press.
How complicated does the (Intervention) Model have to be? (by Jos Vaessen)
What is (good) program theory in international development? (by Jos Vaessen)
Using ‘Theories of Change’ in international development (by Jos Vaessen)
Institutionalizing Evaluation: What is the Theory of Change? (by Caroline Heider)
Influencing Change through Evaluation: What is the Theory of Change? (by Caroline Heider)