
Evaluation of International Development Interventions

Appendix B | Developing Program Theories

Program theories,1 also referred to as logic models or theories of change (among other closely related terms), are widely used in evaluation. A program theory can be broadly defined as a visual and narrative description of the main program inputs, activities, outputs, and desired outcomes. A central aspect of a program theory is the specification of how these are connected, that is, how the program activities and outputs are assumed to generate the desired outcomes. Program theories are now commonly required by development agencies as part of project planning.

There are many different ways of developing program theories as part of an evaluation. They may be developed prospectively, in the early phase of program design, or retrospectively, after the program has entered the implementation phase. In some cases, the evaluator develops the program theory on the basis of program documents, interviews with program staff, or some combination of these. In other cases, program theory development is a collaborative effort between the evaluator and program staff, perhaps also including other stakeholders. These collaborative efforts can be structured around one or more workshops. Finally, program theories may be informed by existing research, relevant social scientific theories, and past evaluations.

There are several reasons for the widespread use of program theories in evaluation. First and foremost, program theories give the evaluation team and program staff a shared understanding of how the program is intended to bring about change. This shared understanding of how the program is intended to function is important because (among other things) it may improve collaboration, foster agreement on evaluation findings, or reduce tensions. Second, program theories are often tested empirically in the course of evaluations and thereby focus the design of the data collection process. Third, a well-developed and well-tested program theory is an essential part of the lessons learned through an evaluation because it facilitates a deeper understanding of how and why the program worked or failed to work. This type of information is essential to inform future planning and program design.

Using program theories in evaluation has clear benefits but also presents a number of challenges. One common challenge is poor conceptual framing: how program components are causally connected, both among themselves and with the desired outcomes, is often insufficiently detailed, referenced, or specified, and the causal links are either unconvincing or omitted altogether.

Another common issue stems from the typical disconnect between the program theory and the data collection process: although the former should drive the latter, in practice parts of the theory are often untestable, and confidence in their veracity can be neither strengthened nor weakened by rigorous procedures. A related problem concerns construct validity: the development of measurements and indicators is often poorly linked to the program theory.

Finally, program theories are prone to confirmation bias: the discussion of influencing factors and alternative explanations is often cursory or omitted altogether. As a result, many program theories are overly abstract or simplistic and fail to support an in-depth or defensible examination and understanding of how the program works and brings about change under given circumstances.

Despite these challenges, in situations where the emphasis is on understanding and explaining why and how a program brings about change, it is essentially impossible to avoid dealing with program theories. We therefore propose a checklist of minimum requirements that program theories should fulfill. To realize their potential and add value to the evaluation, program theories should ideally do the following (a schematic, purely illustrative sketch of these elements appears after the list):

  1. Identify all the program activities, outputs, and intermediate outcomes that are essential to understand the causal logic of how a program works and brings about change;
  2. Explain in sufficient detail how and why these parts are connected;
  3. Specify the external influencing factors (contextual conditions, other programs, and other processes and activities) that could affect program implementation, delivery, and outcomes;
  4. Clearly distinguish (and potentially choose) between theory of action (focused on causal linkages between implementation and delivery) and theory of impact (focused on causal linkages between delivery and outcomes), which allows for a distinction between implementation failure and theory failure;2 and
  5. To the extent possible, formulate alternative explanations (rival hypotheses) that might have produced changes in the program outcomes.
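
To make these minimum requirements concrete, the sketch below shows one possible way of recording them in a structured form. It is purely illustrative and rests on several assumptions: the example program (a hypothetical cash transfer), its activities and outcomes, and all names used (ProgramTheory, CausalLink, and so on) are invented for this appendix rather than drawn from any real evaluation or tool.

    # Illustrative sketch only (Python): a minimal structured representation of a
    # program theory covering the five checklist elements above. All names and the
    # example content are hypothetical.
    from dataclasses import dataclass, field

    @dataclass
    class CausalLink:
        cause: str       # an activity, output, or intermediate outcome
        effect: str      # the element it is assumed to produce
        rationale: str   # element 2: how and why the link is expected to hold
        part_of: str     # element 4: "theory of action" or "theory of impact"

    @dataclass
    class ProgramTheory:
        activities: list[str] = field(default_factory=list)            # element 1
        outputs: list[str] = field(default_factory=list)
        intermediate_outcomes: list[str] = field(default_factory=list)
        outcomes: list[str] = field(default_factory=list)
        links: list[CausalLink] = field(default_factory=list)          # elements 2 and 4
        influencing_factors: list[str] = field(default_factory=list)   # element 3
        rival_explanations: list[str] = field(default_factory=list)    # element 5

    # A hypothetical cash transfer program, encoded for illustration only.
    theory = ProgramTheory(
        activities=["register eligible households", "disburse monthly transfers"],
        outputs=["transfers received by households"],
        intermediate_outcomes=["increased food expenditure"],
        outcomes=["improved child nutrition"],
        links=[
            CausalLink("disburse monthly transfers", "transfers received by households",
                       "the payment system reaches registered households", "theory of action"),
            CausalLink("transfers received by households", "increased food expenditure",
                       "households spend part of the additional income on food", "theory of impact"),
            CausalLink("increased food expenditure", "improved child nutrition",
                       "more and better food improves children's diets", "theory of impact"),
        ],
        influencing_factors=["local food prices", "parallel nutrition programs"],
        rival_explanations=["general economic growth raised household incomes"],
    )

Even in this toy form, writing out the rationale for each link and naming influencing factors and rival explanations makes gaps in the causal logic easier to spot.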

For program theory to be fruitfully used and integrated into the evaluation, it should inform the design of data collection and data analysis. In particular, the evaluator should do the following (a brief illustrative check follows the list):

  1. Ensure that data collection covers the most salient program activities, outputs, and outcomes (as detailed in the program theory) and pay attention to both intended and unintended outcomes (positive and negative);
  2. Ensure that data collection covers the most salient alternative explanations and influencing factors;
  3. Examine how the collected data support or bring into question specific aspects of the program theory; and
  4. Refine and modify the program theory as informed by the data.
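
As a rough illustration of the first two points, the snippet below checks whether a planned set of data sources covers every element of a hypothetical program theory, including influencing factors and rival explanations. The element names and data sources are invented for illustration; the point is only that coverage gaps become visible once both the theory and the data collection plan are written down explicitly.

    # Illustrative sketch only (Python): flag program theory elements that a planned
    # data collection would not cover. All element names and sources are hypothetical.
    theory_elements = {
        "output": ["transfers received by households"],
        "intermediate outcome": ["increased food expenditure"],
        "outcome": ["improved child nutrition"],
        "influencing factor": ["local food prices", "parallel nutrition programs"],
        "rival explanation": ["general economic growth raised household incomes"],
    }

    # Hypothetical data collection plan: each source is mapped to the elements it informs.
    data_sources = {
        "administrative records": ["transfers received by households"],
        "household survey": ["increased food expenditure", "improved child nutrition"],
        "market price monitoring": ["local food prices"],
    }

    covered = {element for elements in data_sources.values() for element in elements}
    for category, elements in theory_elements.items():
        for element in elements:
            if element not in covered:
                print(f"No planned data source for {category}: {element}")
    # Prints the uncovered influencing factor and the uncovered rival explanation.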

Together, these guidelines should facilitate a more productive use of program theories in evaluation.

Readings

Astbury, B., and F. L. Leeuw. 2010. “Unpacking Black Boxes: Mechanisms and Theory Building in Evaluation.” American Journal of Evaluation 31 (3): 363–81.

Bickman, Leonard, ed. 1987. “Using Program Theory in Evaluation.” Special issue, New Directions for Program Evaluation 1987 (33).

Brousselle, Astrid, and François Champagne. 2011. “Program Theory Evaluation: Logic Analysis.” Evaluation and Program Planning 34 (1): 69–78.

Funnell, S. C., and P. J. Rogers. 2011. Purposeful Program Theory: Effective Use of Theories of Change and Logic Models. San Francisco: Jossey-Bass.

Leeuw, F. L. 2003. “Reconstructing Program Theories: Methods Available and Problems to Be Solved.” American Journal of Evaluation 24 (1): 5–20.

Leroy, Jef L., Marie Ruel, and Ellen Verhofstadt. 2009. “The Impact of Conditional Cash Transfer Programmes on Child Nutrition: A Review of Evidence Using a Programme Theory Framework.” Journal of Development Effectiveness 1 (2): 103–29.

Petrosino, Anthony, Patricia J. Rogers, Tracy A. Huebner, and Timothy A. Hacsi, eds. 2000. “Program Theory in Evaluation: Challenges and Opportunities.” Special issue, New Directions for Evaluation 2000 (87).

Rogers, P. J. 2000. “Program Theory: Not Whether Programs Work but How They Work.” In Evaluation Models, edited by D. L. Stufflebeam, G. F. Madaus, and T. Kellaghan, 209–32. Evaluation in Education and Human Services vol. 49. Dordrecht: Springer.

W. K. Kellogg Foundation. 2006. Logic Model Development Guide. Battle Creek, MI: W. K. Kellogg Foundation.

Weiss, C. H. 1997. “Theory-Based Evaluation: Past, Present, and Future.” New Directions for Evaluation 1997 (76): 41–55.

  1. The term program is used in a generic sense to refer to any type of policy intervention (activity, project, program, policy, and so on). One could use the term intervention theory instead of the better-known term program theory.
  2. Failure of outcomes to emerge can be due either to implementation failure (the program outputs were not delivered) or to theory failure (the program outputs were delivered but did not make a difference; that is, they may not have been the right solution to the problem in the given circumstances), or to a combination of both.