
Conclusion

This paper set out to dispel two myths related to the selection of methods for meaningful inference in evaluation, moving away from the idea that any single approach represents a gold standard for the quality of research and evidence generation. As noted earlier, evaluators have started to experiment with mixed-method approaches to causal analysis, including theory-based methods that use case studies and other qualitative inputs as their primary empirical material. The use of contribution analysis, process tracing, and QCA, among other approaches, is increasingly supported in the literature, helping to refute the first myth: the misconception that causal claims can be derived only from quantitative or “large-n” methodologies in a counterfactual framework. As shown in this paper, case-based and theory-based methods can generate robust causal inferences and fill important knowledge gaps in evaluations. To refute the second myth, the analysis in this paper showed that findings from case-based work can be generalized to other contexts, thereby generating practical insights relevant to complex interventions and the conditions that influence their relative success.

As evaluation practitioners continue to experiment with a range of approaches and methods for answering a variety of causal questions about increasingly complex interventions, the need has grown, for commissioners and evaluators alike, for guidance on which approach and method to select for a particular intervention and on how to ensure that the chosen approach is carried out rigorously. Widner, Woolcock, and Ortega Nieto (2022) offer useful principles for deciding whether and when case-based approaches can yield robust causal inferences and for determining how far those inferences can be extended beyond the cases that produced them. These principles resonate with the experience of the team that carried out the present study, which combined within- and cross-case analysis in the evaluation of a selection of projects in the World Bank’s carbon finance portfolio.

First, as Widner and colleagues highlight, “quantitative analysis of large numbers of discrete cases is more effective at estimating the strength of the relationship between causes and outcomes that can both be measured quantitatively” (Widner et al. 2022, 4). In that sense, if the primary causal question is, How much of an effect (on average) has a particular intervention had on a specific measurable outcome?, then case-based approaches of the kind described earlier in this paper will not provide a valid, useful answer. But case-based approaches have comparative advantages of their own when it comes to providing causal explanations and identifying the role contextual or implementation conditions play in the success or failure of interventions. Notably, case-based approaches help (i) identify causal mechanisms, opening the black box of processes connecting causes and outcomes; (ii) elicit how processes of change unfold; (iii) explain the circumstances under which causal mechanisms are or are not triggered; and (iv) provide what Woolcock (2013, 2022) calls key facts for determining whether a particular intervention could work in other cases. As Woolcock (2013, 95) asserts, “the higher the complexity, the more salient (even necessary) inputs from analytic case studies become as contributors to the decision-making process” regarding whether particular interventions could be effectively scaled up or replicated in other contexts.

That said, the promise of case-based approaches can be fulfilled only if certain conditions are met (Johnson and Rasulova 2017). First, evaluators should ensure the defensibility of the causal inferences they draw from the cases they study, with precisely specified causal theories, diligent consideration of alternative explanations, and assessment of the trustworthiness and probative value of the evidence brought to bear to support causal inference in the cases examined (Beach and Pedersen 2019; Cartwright 2022; Mahoney 2000).
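
One way to make the notion of probative value concrete is the Bayesian framing used in parts of the process-tracing literature (see, e.g., Beach and Pedersen 2019): a piece of evidence is probative to the extent that it is much more likely to be observed if the hypothesized mechanism operated than if a rival explanation did. The minimal sketch below, whose numbers are purely illustrative and not drawn from the study, shows how a single “smoking gun” observation would update confidence in a causal hypothesis.

```python
# Illustrative Bayesian updating for the probative value of one piece of
# evidence in process tracing. All probabilities here are hypothetical.

def posterior(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Bayes' rule: P(H | E) from a prior and the two likelihoods of E."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1.0 - prior))

# A "smoking gun" style test: the evidence is unlikely to be found unless
# the hypothesized mechanism actually operated.
prior = 0.50            # initial confidence in the causal hypothesis
p_e_given_h = 0.60      # chance of observing E if the mechanism operated
p_e_given_not_h = 0.05  # chance of observing E under rival explanations

print(round(posterior(prior, p_e_given_h, p_e_given_not_h), 2))  # -> 0.92
```

In practice such likelihoods are elicited judgments rather than measurements; the value of the exercise lies in forcing the plausibility of the evidence under rival explanations to be stated explicitly.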

Second, evaluators should carefully delimit the boundaries within which their generalizations apply. In most instances, the degree of generalization will be modest (Rihoux and Ragin 2009), limited to the class of cases that share the conditions determined to be necessary or sufficient to trigger the causal mechanisms identified (a simple illustration of how such a sufficiency claim can be checked follows the list below). In conducting case-based evaluations, the following five principles, inspired by and adapted from Widner, Woolcock, and Ortega Nieto (2022), are of particular importance:

  1. Articulating a plausible causal theory that is informed by a thorough review of the literature and practitioners’ experience, is specific enough to be tested, and proposes plausible explanations for the outcomes of interest, along with relevant alternatives to those explanations.
  2. Selecting cases for study according to clear and transparent criteria that are pragmatic but do not yield too much to convenience. Researchers should try to include in their studies both cases with positive outcomes and those with negative outcomes.
  3. Articulating clear hypotheses about a handful of contributory factors that will be the object of close scrutiny across the cases reviewed while leaving space for inductive inquiry and the possibility of stumbling on important additional factors to consider.
  4. Providing evidence that has been carefully weighed, often triangulated across sources, and is considered trustworthy. Researchers should ensure that the evidence they provide is as unique as possible to the causal explanation they propose and should be transparent about alternative explanations that they cannot rule out.
  5. Being open about caveats and limitations and as transparent about the process as possible so that others can check or debate the conclusions reached.
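
As a minimal illustration of the sufficiency claims mentioned above, the sketch below computes the standard fuzzy-set QCA consistency measure (Rihoux and Ragin 2009), the sum of min(x_i, y_i) divided by the sum of x_i, for a hypothesized sufficient condition. The condition, outcome, and membership scores are invented for the example and are not drawn from the study.

```python
# Illustrative fuzzy-set QCA consistency check for the claim
# "condition X is sufficient for outcome Y". All scores are invented.

def sufficiency_consistency(x: list[float], y: list[float]) -> float:
    """Consistency of X <= Y: sum(min(x_i, y_i)) / sum(x_i)."""
    return sum(min(xi, yi) for xi, yi in zip(x, y)) / sum(x)

# Hypothetical membership scores for five cases in the condition
# "strong local implementation capacity" (x) and in the outcome
# "project achieved its emission-reduction targets" (y).
x = [0.9, 0.8, 0.7, 0.3, 0.1]
y = [1.0, 0.9, 0.6, 0.4, 0.2]

print(round(sufficiency_consistency(x, y), 2))  # near 1.0 supports sufficiency; here -> 0.96
```

The symmetric measure for necessity divides instead by the sum of the outcome scores; both are descriptive screens that complement, rather than replace, the within-case evidence the principles above call for.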

Even the most thorough case-based design has limitations, however. Testing a theory against a small number of chosen cases is inevitably a perilous exercise, especially when the number of cases available for study is limited and the number of causal factors that might explain the outcomes is large. Scenarios can quickly arise in which the complexity of the phenomena overwhelms the number of observations. Careful reviews of the existing literature can often help narrow the causal field, but not always. Sometimes exploratory process tracing should be undertaken first: careful tracing within single cases can reveal links among activities, actors, the ways they behave and influence others, and ultimately the outcomes of interest. The information these links provide on implementation challenges can also help generate hypotheses about the conditions that must hold for change to take place.

There are also practical challenges to carrying out case-based work that should not be underestimated. Time is often the scarcest resource and may preclude evaluation teams from going deep enough in their analysis to yield useful conclusions. Organizational politics can also be hard to navigate, especially regarding case selection, access to key informants, and what information can or cannot be used as evidence (Aston 2022).