In 2000, Paul Krugman published an article titled “How complicated does the model have to be?”, in which he argued that simple models in macroeconomics, such as the Hicksian IS-LM model, are not necessarily outdated but continue to be relevant for policy and explanatory purposes. One of his main points is that even though simple models are incomplete and inadequate for some questions, more elaborate models do not necessarily generate more accurate predictions or explanations of reality.

Krugman’s argument also applies to program theory, an abstraction of a policy intervention and its assumed causal linkages with processes of change in society. How complicated should the program theory be? While there is no definitive, comprehensive answer to this question, below I discuss a number of principles that can inform this choice.

The purpose of the program theory in planning, implementation and evaluation. A common function of program theory is to capture how an intervention is expected to work and influence processes of change and outcome variables of interest. In all stages of an intervention, the (explicit) program theory essentially constitutes a sense-making framework that can strongly influence how stakeholders perceive and learn from an intervention and its context.

In intervention planning and implementation, the program theory ideally informs design, monitoring (including the selection and definition of indicators of interest) and reporting. In evaluation, a program theory can constitute a framework for evaluation design, with the different causal steps in the theory informing a mixed methods approach to data collection and analysis. For example, a common purpose in evaluation is to test whether the theories of action (see Argyris and Schön, 1974) of intervention stakeholders (e.g. donors, implementing agencies, staff), which in this context refer to stakeholders’ assumptions about implementation processes and processes of change relating to an intervention, hold in reality. This fundamental function of program theory does not necessarily require a very detailed program theory. It does, however, require some digging into what the theories of action are, as they are rarely fully articulated.

In order to look more closely into particular issues (such as causality), and taking into account the nature of the intervention and its context, further considerations become important in framing the program theory.

Adjudicating between rival theories. It is quite common that donors, implementing agencies and other stakeholders largely interpret the intervention reality through the lens of some official manifestation of the program theory (e.g. the logframe of a project, or a program document that stipulates the main lines of action, the intended outcomes and key underlying assumptions). As a result, empirical data collection tends to be biased in the sense of being heavily shaped by the components of that theory. A single (official) program theory also increases the risk of stakeholders inadvertently favoring any empirical evidence that confirms it. In order to avoid such ‘confirmation bias’ it is good practice to develop rival theories to be tested against the empirical evidence. For example, Carvalho and White (2004) present an insightful case for adjudicating between rival theories in an evaluation of a social funds program. Before rival theories are confronted with empirical evidence, they should ideally be as robust as possible (see my previous blog on theory specification).
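
To make this practice concrete, here is a minimal toy sketch in Python; all theory and evidence content is invented for illustration, and this is not Carvalho and White’s actual method. The point is the mechanics: each rival theory is framed as a set of testable propositions, and every observation is scored against both theories, so that disconfirming evidence is counted rather than quietly discarded.

```python
# Toy sketch of adjudicating between rival theories.
# Each theory is a set of testable propositions; each piece of evidence
# supports or contradicts one proposition. All content below is hypothetical.

rival_theories = {
    "official program theory": {
        "funds reach the poorest districts",
        "community demand drives project selection",
    },
    "rival theory": {
        "funds follow political alignment",
        "local elites capture project selection",
    },
}

# (proposition, supported_by_fieldwork) pairs -- invented findings.
evidence = [
    ("funds reach the poorest districts", False),
    ("community demand drives project selection", True),
    ("funds follow political alignment", True),
    ("local elites capture project selection", False),
]

# Score BOTH theories against the same pool of evidence, tallying
# disconfirming observations alongside confirming ones.
for theory, propositions in rival_theories.items():
    confirmed = sum(1 for prop, ok in evidence if prop in propositions and ok)
    disconfirmed = sum(1 for prop, ok in evidence if prop in propositions and not ok)
    print(f"{theory}: {confirmed} confirmed, {disconfirmed} disconfirmed")
```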

Looking at (and across) different levels of analysis. There are different levels of program theory reconstruction and testing that merit attention at the planning, implementation and evaluation stages: the causal (behavioral) mechanism, the intervention activity, and the level of the (country) program, strategy, portfolio, etc. The latter more often than not comprises a myriad of intervention activities targeting multiple institutions and groups of citizens in different settings (for a useful typology of intervention activities see for example Bemelmans-Videc, Rist and Vedung, 1998). For example, a national education program can include activities to support new legislation, policy formulation, management of the school system, interventions in selected schools, and so on. In turn, a single intervention activity can trigger multiple behavioral mechanisms leading to different outcomes (see for example Astbury and Leeuw, 2010). For instance, providing microcredit to women can influence their attitude towards self-employment, their self-esteem, the balance of decision-making power between men and women in the household, and so on.
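
To make the nesting of these levels concrete, the sketch below models them as a hypothetical Python data structure; the class names and the microcredit example values are illustrative assumptions, not an established framework. A program bundles activities, and each activity can trigger several mechanisms with their own intended outcomes.

```python
from dataclasses import dataclass, field

@dataclass
class Mechanism:
    """A behavioral mechanism that an activity may trigger."""
    description: str
    intended_outcomes: list = field(default_factory=list)

@dataclass
class Activity:
    """A single intervention activity; one activity can trigger several mechanisms."""
    name: str
    mechanisms: list = field(default_factory=list)

@dataclass
class Program:
    """A (country) program bundling many activities across institutions and settings."""
    name: str
    activities: list = field(default_factory=list)

# Illustrative content only, echoing the microcredit example above.
microcredit = Activity(
    name="microcredit for women",
    mechanisms=[
        Mechanism("changed attitude towards self-employment", ["new businesses started"]),
        Mechanism("increased self-esteem", ["greater participation in public life"]),
        Mechanism("shift in household decision-making power", ["more balanced spending decisions"]),
    ],
)
program = Program(name="national livelihoods program", activities=[microcredit])

# Walking down the levels of analysis: program -> activity -> mechanism -> outcomes.
for activity in program.activities:
    for mechanism in activity.mechanisms:
        print(f"{activity.name} -> {mechanism.description} -> {mechanism.intended_outcomes}")
```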

In the planning, implementation and evaluation stages, informed choices need to be made on how to develop useful theories of the entire program as well as selected components that merit special attention. The idea of unpacking interventions into components (often activities) for in-depth analysis in combination with a more holistic analytical perspective (e.g. questions around coherence and coordination) is discussed in Bamberger, Vaessen and Raimondo (2015). This approach emphasizes the need for ‘seeing the trees without losing track of the forest’.

Having kick-started a reflection on framing program theories, we are far from reaching the end of our exploration. In my next blog I will discuss different dimensions of complexity (inspired by recent and not so recent work in complexity science and systems thinking). When dealing with complexity in intervention design and evaluation, Krugman’s views on the comparative merits of simple versus more elaborate abstractions of reality remain highly relevant. In an increasingly interconnected and policy-saturated world these are issues that merit our full attention.

References

  • Argyris, C. and D. Schön (1974). Theory in practice: Increasing professional effectiveness. San Francisco: Jossey-Bass.
  • Astbury, B. and F.L. Leeuw (2010). Unpacking black boxes: Mechanisms and theory building in evaluation. American Journal of Evaluation, 31(3), 363-381.
  • Bamberger, M., J. Vaessen and E. Raimondo (eds.) (2015). Dealing with complexity in development evaluation: A practical approach. Thousand Oaks: Sage.
  • Bemelmans-Videc, M.L., R.C. Rist and E.O. Vedung (eds.) (1998). Carrots, sticks and sermons: Policy instruments and their evaluation. New Brunswick: Transaction Publishers.
  • Carvalho, S. and H. White (2004). Theory-based evaluation: The case of social funds. American Journal of Evaluation, 25(2), 141-160.
  • Krugman, P. (2000). How complicated does the model have to be? Oxford Review of Economic Policy, 16(4), 33-42.