Process Tracing Method in Program Evaluation
Conclusion
The robustness of process-tracing findings depends on how well theory and empirical observables align, and it can be assessed against three criteria: (i) a disaggregated, fine-tuned pToC that captures the key episodes and mechanisms explaining the links between the intervention studied and the outcome of interest; (ii) evidence that is unique to the pToC (that is, love-to-find evidence that cannot be accounted for by alternative explanations) for each of its key episodes; and (iii) trustworthy sources of information and broad access to the empirical record. Conversely, causal inference will be weak if a pToC is too abstract or simplistic, if the evidence found to corroborate the pToC could equally validate an alternative theory, or if the sources used to evaluate the pToC are biased.
If these three conditions are met, however, process tracing can bolster evaluators’ ability to provide strong evidence of causal links between interventions and outcomes, while also unveiling explanations of how and why a particular intervention triggered a specific process of change that led to the outcome of interest. The scaffolding used to find evidence in support of the pToC also provides a transparent way of presenting and assessing the strength of the evidence gathered and of triangulating across sources. This transparency is a strength of the approach compared with other case- or theory-based methodologies. By compelling evaluators to focus on causal explanations and the links between actors, their actions, and induced behavioral changes, process tracing also makes it much easier for evaluators to derive practical lessons and ideas on how intervention activities can be changed to improve outcomes. Process tracing’s comparative advantage over other (impact) evaluation approaches lies in its ability to assess interventions that do not lend themselves to quantification or experimentation, such as research, advocacy, and knowledge and data work, as well as policy dialogue and budget support.
Like other approaches, process tracing is not a silver bullet for solving all evaluation questions or studying all interventions. It also has some limitations to keep in mind when deciding whether to incorporate it in an evaluation design. First, it is not suited to answering questions that require estimating the magnitude of a treatment effect, such as how much of an impact a particular intervention had, on average, on an outcome of interest. Second, although process-tracing principles can be intuitively incorporated into any evaluation design, fully applying the approach requires considerable time and resources. Evaluators must iterate between evidence and theory, become familiar with assessing the probative value of different types of evidence, and learn how to construct a pToC at a level of abstraction that is fit for the purpose at hand and how to leverage existing literature to theorize well. Third, on its own, process tracing has weak external validity and needs to be paired with a cross-case design to build in generalizability. Ideally, evaluators will trace two or more cases empirically and compare their workings at the processual level, enabling them to conclude whether an intervention worked in analogous or different ways across cases, and to probe more systematically how contextual conditions shaped what happened in the cases studied.
When presenting the findings of a process-tracing evaluation to stakeholders, as when using the method itself, evaluators should give their pToC a central role. A simple visualization of key episodes in terms of actors, actions, and links is a good heuristic tool to help stakeholders understand how the intervention actually worked (or did not) and why. Presentations should also clearly flag the strength of evidence underlying the findings (that is, the degree of internal validity based on the strength of evidence found). The benefit of using Bayesian language to summarize the strength of evidence behind a pToC is that it clearly flags for readers how much confidence they can reasonably have in the conclusions (see box 2.1 and table 2.1).
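The Bayesian logic behind these confidence statements can be made concrete with a small illustrative sketch (the probabilities below are hypothetical, not drawn from the report): confidence in a pToC episode is updated by how likely the observed evidence is under the pToC relative to how likely it is under alternative explanations.

```python
# Illustrative sketch of Bayesian updating for process-tracing evidence.
# All probability values here are hypothetical, chosen only to show the logic.

def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Probability of hypothesis H after observing evidence E (Bayes' rule).

    prior            -- confidence in H before seeing E
    p_e_given_h      -- probability of finding E if H (the pToC) is true
    p_e_given_not_h  -- probability of finding E under alternative explanations
    """
    numerator = p_e_given_h * prior
    denominator = numerator + p_e_given_not_h * (1 - prior)
    return numerator / denominator

# "Love-to-find" (unique) evidence: plausible under the pToC, very
# unlikely under alternatives -> confidence rises sharply.
print(round(posterior(prior=0.5, p_e_given_h=0.8, p_e_given_not_h=0.05), 2))

# Non-unique evidence: equally likely either way -> confidence is unchanged.
print(round(posterior(prior=0.5, p_e_given_h=0.8, p_e_given_not_h=0.8), 2))
```

The contrast between the two calls mirrors criterion (ii) above: evidence moves the needle only to the extent that alternative explanations cannot account for it.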
Finally, a report on a process-tracing evaluation should clearly flag both the contextual conditions within which the evaluation’s pToC can be expected to function and whether there is any evidence from other cases that the pToC works in similar ways elsewhere. Without this information, readers cannot tell whether, and where, the findings can be applied to other cases or provide relevant lessons for them. That being said, evaluation findings should be written to meet the needs of the intended users, and more often than not, this means interpreting the findings and their implications and writing them in plain language. Methodological appendixes that clearly and transparently lay out the process-tracing approach, the pToC developed, and the evidence found in support of it can be useful in that regard.