We live in an increasingly interconnected and policy-saturated world. Strict attribution is not the only question we are interested in. We need to understand how interventions work, under what circumstances and for whom. The “why” and “how” questions are at least as important as the “what” question.

About a decade ago, the seminal report "When will we ever learn?" (CGD, 2006), published by the Center for Global Development, gave new impetus to debates on, and funding for, impact evaluation in international development. New initiatives such as 3ie were established, and the number of impact evaluations increased significantly.

Most of these impact evaluations have focused on the net effect (in terms of a specific outcome) attributable to an intervention, controlling for other factors through design-based and/or statistical controls. The experimental and quasi-experimental designs that underpin most of these impact evaluations help us isolate and pinpoint the difference that an intervention has (or has not) made.
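
To make the notion of a net effect concrete, here is a minimal sketch in Python using simulated (entirely hypothetical) data from a randomized design: the estimate is simply the difference in mean outcomes between treatment and comparison groups. All names and parameter values are illustrative assumptions, not taken from any actual evaluation.

```python
# Minimal sketch of a net-effect estimate under random assignment.
# All data are simulated and all numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(42)
n = 1_000

treated = rng.integers(0, 2, size=n)      # randomized assignment (0/1)
baseline = rng.normal(50, 10, size=n)     # other factors affecting the outcome
outcome = baseline + 5.0 * treated + rng.normal(0, 5, size=n)  # true effect = 5

# With randomization, the difference in means is an unbiased
# estimate of the average net effect of the intervention.
net_effect = outcome[treated == 1].mean() - outcome[treated == 0].mean()
print(f"Estimated net effect: {net_effect:.2f}")
```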

This is not the only causal question of interest to us. In 2012, another seminal publication, commissioned by DFID (Stern et al., 2012), argued for a broader analytical perspective in impact evaluation. The report presented a series of different (related) causal questions (including the question on net effect) and proposed a range of methodological options appropriate for each of these questions.

One causal question of interest that differs slightly from the net effect question is the following: what are the main contributory causes of changes in outcome variable y, and what has been the role of intervention x? This causal question explicitly draws attention to a (comprehensive) range of causal factors and to the need to capture these in some way. In addition, it emphasizes causal explanation. While the (quasi-)experimental designs that underpin net effect analyses often rely on some type of explanatory model of the outcome variables of interest (most good studies do), the main difference is one of perspective and emphasis. The two types of questions complement each other, and both have merit from an accountability and organizational learning perspective.

To illustrate the difference, let me return to an example that I used in a previous blog post: payments for environmental services (PES). Suppose we want to evaluate the impact of PES in a country like Costa Rica. The causal question could be: what is the net effect of PES on avoided deforestation in private forest lands? One could conceive of some type of counterfactual design to analyze this question empirically. A different question, focusing on contributory causes, could be: given the range of different policy interventions and other explanatory factors, what has been the role of PES in avoiding deforestation in private forest lands? In other words, in what ways and to what extent do policy instruments such as national legislation on land use and its enforcement, (perceived security of) property rights to land, environmental education programs, awareness campaigns, and PES influence the attitudes and actions of land users regarding the protection of forested areas on their land? Moreover, in what ways and to what extent do underlying factors such as individual values and beliefs, peer behavior, education levels, income levels, and (perceived) opportunity costs of land affect these causal relations?

Acknowledging that causal factors are interconnected in complex ways, and that the behaviors of individuals, communities, and institutions are influenced by multiple policy interventions, calls for appropriate methodological solutions. A particularly promising field of work is complexity science. Caroline Heider already referred to some of the promising work in this field in her recent blog post. Systems mapping is a good starting point. It is an umbrella term for a range of methods that can help us develop a visual representation of a system. In contrast to conventional theories of change, which tend to rely on the principle of successionist causation, system maps include multiple feedback loops and are (implicitly or explicitly) aligned with principles in complexity science such as non-linearity, emergence, and uncertainty in processes of change (Befani et al., 2015).
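
To illustrate what a system map looks like in machine-readable form, below is a minimal sketch that encodes a fragment of the PES example as a directed graph; the nodes, edges, and feedback loops are illustrative assumptions on my part, not a validated model. Directed cycles in the graph correspond to the feedback loops that distinguish a system map from a linear theory of change.

```python
# Minimal sketch of a system map as a directed graph (illustrative only).
# Requires the networkx package.
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([
    ("PES payments", "landholder income"),
    ("landholder income", "forest conservation"),
    ("forest conservation", "ecosystem services"),
    ("ecosystem services", "PES payments"),        # feedback loop
    ("land-use legislation", "forest conservation"),
    ("opportunity cost of land", "forest conservation"),
    ("peer behavior", "forest conservation"),
    ("forest conservation", "peer behavior"),      # feedback loop
])

# Feedback loops are simply the directed cycles in the graph.
for cycle in nx.simple_cycles(G):
    print(" -> ".join(cycle))
```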

A system map constitutes a good basis for using simulation techniques such as system dynamics. However, evaluators and planners often do not have the resources and data at their disposal for quantitative modelling of the system (e.g. system dynamics, or structural equation modelling as used in economics). In such cases (and in general), heuristic frameworks such as critical systems heuristics (Williams and Hummelbrunner, 2011) or Pawson’s VICTORE framework can be quite helpful (Pawson, 2013). A system map also constitutes a good basis for applying “conventional” techniques. In principle, any type of reduced-form model in statistics would benefit from a system map as an underlying explanatory model. Moreover, for causal analysis in and across small-n settings (e.g. a group of countries in a region), a range of case-based methods (Byrne and Ragin, 2009) is available to evaluators and planners. Qualitative comparative analysis, for example, is a method that, if underpinned by a reliable explanatory model (visualized in a system map), can be very helpful in developing insights into the contributory causes of a particular change across a number of countries, communities, or institutions.
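
For readers curious about what the simplest possible system dynamics model might look like, here is a stripped-down sketch with one stock (forest area) and a clearing outflow that PES enrolment dampens. Every parameter value is hypothetical and chosen purely to show the mechanics of simulating a system map; a real model would be calibrated to data.

```python
# Minimal system-dynamics sketch: one stock (forest area) and one
# outflow (clearing), dampened by PES enrolment. All parameter
# values are hypothetical.
forest = 1000.0              # stock: forested area (ha)
pes_coverage = 0.0           # share of eligible land enrolled in PES
base_clearing_rate = 0.03    # annual deforestation rate without PES
enrolment_rate = 0.10        # annual growth in PES enrolment

for year in range(1, 21):
    # Enrolment grows logistically towards full coverage.
    pes_coverage += enrolment_rate * (1.0 - pes_coverage)
    # Clearing only affects land not covered by PES.
    clearing = base_clearing_rate * (1.0 - pes_coverage) * forest
    forest -= clearing
    if year % 5 == 0:
        print(f"year {year:2d}: forest = {forest:7.1f} ha, "
              f"PES coverage = {pes_coverage:.2f}")
```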

References

Befani, B., B. Ramalingam and E. Stern (2015). Introduction: Towards systemic approaches to evaluation and impact. IDS Bulletin, 46(1), 1-6.

Byrne, D. and C. Ragin (2009). The Sage handbook of case-based methods. Thousand Oaks: Sage.

CGD (2006). When will we ever learn? Improving lives through impact evaluation. Evaluation Gap Working Group. Washington, D.C.: Center for Global Development.

Pawson, R. (2013). The science of evaluation: a realist manifesto. London: Sage.

Stern, E., N. Stame, J. Mayne, K. Forss, R. Davies and B. Befani (2012). Broadening the range of designs and methods for impact evaluation. London: Department for International Development.

Williams, B. and R. Hummelbrunner (2011). Systems concepts in action: a practitioner’s toolkit. Stanford: Stanford University Press.

Comments

Submitted by Ashwini Sathnur on Thu, 01/26/2017 - 03:32

The impact evaluations for a particular intervention would be captured and measured in data visualizations and system maps. There are also several new technologies that could be used to compute the results of an intervention quickly, once data have been collected from countries or regions across the globe.

One such technology is artificial neural networks, which could be described as an artificial model of the human brain. Neural networks would enable calculations, analysis, and inference from the collected data, and could also support decisions on the financial management behind payments for environmental services. In-depth mathematical computations and data visualizations could be performed using neural network tools.

An example related to the PES intervention is a research article titled "UN Cognitive Edge". The PES intervention could also be related to the measurement of "Conservation of Biological Diversity", which could be measured non-linearly against the migration of persons from urban to rural areas.

The detailed concept of "Conservation of Biological Diversity", which utilizes neural network technology, is outlined below:

Understanding the migration of persons from urban to rural areas can thereby increase the conservation of biological diversity, i.e. growing natural parks and reducing deforestation, which is, ideally, the aim of the PES intervention. This understanding is based on the theory of neural networks, cognitive science, and brain research. Statistics on transportation and tourism for road, railways, and aviation are vital to understanding the migration of people from one country to another. The transportation data are collected and processed, and then analyzed using the neural networks concept. Data mining is performed on the processed and analyzed data to create graphical representations of the migration statistics, with the aim of understanding migration-related problems and providing solutions for them.

Submitted by Alejandro Uriza on Thu, 01/26/2017 - 09:33

Hi Jos, I am very grateful for your approach. A question: when you say that "evaluators and planners often do not have the resources and data at their disposal for quantitative modelling of the system", what does this refer to? How programs or projects are designed? Or the planning and implementation phases? Where does one have to intervene so that evaluations answer the "why", "how", and "what" questions more efficiently?

Hi Alejandro, thanks for your comment. These questions apply to both ex ante and ex post evaluative analysis. Systems mapping (and modelling) can be very helpful in understanding the dynamics of a phenomenon (e.g. local economic growth in a particular sector) and the potential role of one or more interventions to influence this phenomenon. This is helpful at the planning stage to inform intervention design and implementation but also at the (ex post) evaluation stage.

Submitted by Michel Laurendeau on Tue, 01/31/2017 - 14:13

Congratulations on an article focusing on an issue that the evaluation community has avoided for too long. Evaluations have generally aimed at confirming the 'official' program theory against rival theories of intervention. Evaluation studies have relied on multiple sources and/or RCTs to increase the validity of conclusions based on the analysis of incomplete models (and data sets). This consistently biases (i.e. overestimates) their estimates of the causal contributions of program interventions, because of the absence of external/contextual causal factors that are often at the origin of the program, that have a (usually limiting) influence on the observed results, and that also help explain them. Although RCTs have been able to measure net effects by randomly spreading (and thus controlling for) the influence of external/contextual factors, they have not allowed results to be transposed and/or generalized to normal field situations, where the 'unintended' influence of these external/contextual factors varies significantly.

Program theories of intervention must be broadened into more comprehensive theories of change that include all relevant external/contextual causal factors. Unfortunately, planners and evaluators then face important capacity issues whenever they try to ensure that these factors are reliably measured and included in their analyses. Economists have been able to resolve this issue with the help and support of governments: they have developed complex monitoring systems to measure and manage the impact of fiscal and monetary programs/policies. The technology and methodology have been there for some time. The evaluation community should perhaps start pushing for equivalent systems and capabilities to monitor and manage the impact of public programs that are oriented more towards social development, health, and environmental issues.

Submitted by Martin Klein a… on Tue, 02/21/2017 - 04:18

Dear Jos, thank you for your great series of posts! We read them with enthusiasm and hope you don’t mind that we included them on our Theory of Change portal: www.theoryofchange.nl.

Adopting a learning approach to societal value creation is a necessity, but it can also be challenging; we recognise much of what you write about from our own practice.

We would love to hear more about your thoughts on the processes and tools in place to support such a learning approach (you have already made several suggestions), and how the World Bank implements them. A learning approach of course requires much more than one-time planning and evaluation exercises; it calls for a continuous process. Perhaps an idea for a future article?

For example, processes and tools around: co-creation with stakeholders, reflective monitoring, capacity building around critical thinking, continuous adaptation of a ToC, structured communication about the rationale behind an intervention, a culture in place for accountability to what you have learned, access to existing scientific theories and validation, etc.

We are especially interested in your thoughts on this topic because we are building online tooling to support such an approach. We see the ToC as the common thread uniting all aspects of the strategic management of societal value creation. We also take the view that a lot of know-how exists in the development sector from which social enterprises outside the sector stand to benefit. Submitted by Martin Klein and Jan Brouwers

Dear Martin and Jan, thank you for your comment. Your point about promoting the idea of ‘learning organizations’ is quite pertinent and a popular topic of discussion these days. As you rightly acknowledge, it goes beyond methodology and touches upon such aspects as processes, incentives, and hierarchies (to mention just a few) in an organization. Some of the complexity around learning in the WBG has been captured by IEG evaluations: http://ieg.worldbankgroup.org/evaluations/learning-results-wb-operation… . Your point on the role of (tacit and explicit) theories in learning processes is certainly worth exploring further. We should talk about this at some point.
