
In 2000, Paul Krugman published an article titled “How complicated does the model have to be?”, in which he argued that simple models in macroeconomics, such as the Hicksian IS-LM model, are not necessarily outdated but continue to be relevant for policy and explanatory purposes. One of his main points is that even though simple models are incomplete and inadequate for some questions, more elaborate models do not necessarily generate more accurate predictions or explanations of reality.
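For readers unfamiliar with the model, the IS-LM system can be written in a standard textbook form, sketched below. This rendering is illustrative only (notation and functional forms vary across texts) and is not taken from Krugman’s article.

```latex
% Standard closed-economy IS-LM relations (illustrative; notation varies by textbook)
\begin{aligned}
\text{IS (goods-market equilibrium):}\quad & Y = C(Y - T) + I(r) + G \\
\text{LM (money-market equilibrium):}\quad & \frac{M}{P} = L(r, Y)
\end{aligned}
```

Two relations in two unknowns (output Y and the interest rate r) are enough to reason about fiscal and monetary policy, which is precisely the kind of simplicity Krugman defends.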

Krugman’s argument also applies to program theory, an abstraction of a policy intervention and its assumed causal linkages with processes of change in society. How complicated should a program theory be? While there is no definitive, comprehensive answer to this question, below I discuss a number of principles that can inform the choice.

The purpose of the program theory in planning, implementation and evaluation. A common function of program theory is to capture how an intervention is expected to work and influence processes of change and outcome variables of interest. In all stages of an intervention the (explicit) program theory essentially constitutes a sense-making framework that can potentially have a strong influence on how stakeholders perceive and learn from an intervention and its context.

In intervention planning and implementation, the program theory ideally informs design, monitoring (including the selection and definition of indicators of interest) and reporting. In evaluation, a program theory can constitute a framework for evaluation design, with the different causal steps in the theory informing a mixed methods approach to data collection and analysis. For example, a common purpose in evaluation is to test whether the theories of action (see Argyris and Schön, 1974) of intervention stakeholders (e.g. donors, implementing agencies, staff) – in this context, stakeholders’ assumptions regarding implementation processes and processes of change relating to an intervention – hold in reality. This fundamental function does not necessarily require a very detailed program theory. It does, however, require some digging into what the theories of action are, as they are often not fully articulated.

In order to look more closely into particular issues (such as causality) and taking into account the nature of the intervention and its context, other considerations are important in the framing of the program theory.

Adjudicating between rival theories. It is quite common for donors, implementing agencies and other stakeholders to interpret the intervention reality largely through the lens of some official manifestation of the program theory (e.g. a project logframe, or a program document that stipulates the main lines of action, intended outcomes and key underlying assumptions). As a result, empirical data collection tends to be biased in the sense of being heavily shaped by the components of that theory. A single (official) program theory also increases the risk that stakeholders inadvertently favor empirical evidence that confirms it. To avoid such ‘confirmation bias’, it is good practice to develop rival theories to be tested against empirical evidence. For example, Carvalho and White (2004) make an insightful case for adjudicating between rival theories in an evaluation of a social funds program. Before rival theories are confronted with empirical evidence, they should ideally be as robust as possible (see my previous blog on theory specification).

Looking at (and across) different levels of analysis. There are different levels of program theory reconstruction and testing that merit attention at the planning, implementation and evaluation stages: the causal (behavioral) mechanism, the intervention activity, and the level of the (country) program, strategy, portfolio, and so on. The latter more often than not comprises a myriad of intervention activities targeting multiple institutions and groups of citizens in different settings (for a useful typology of intervention activities see for example Bemelmans-Videc, Rist and Vedung, 1998). For example, a national education program can include activities to support new legislation, policy formulation, management of the school system, interventions in selected schools, and so on. In turn, a single intervention activity can trigger multiple behavioral mechanisms leading to different outcomes (see for example Astbury and Leeuw, 2010). For instance, providing microcredit to women can influence their attitude towards self-employment, their self-esteem, the balance of decision-making power between men and women in the household, and so on.
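To make these levels concrete, below is a minimal, purely illustrative sketch (a hypothetical Python structure used for exposition only, not a notation from the works cited) that nests the examples above: a program comprising several activities, with a single activity potentially triggering several behavioral mechanisms.

```python
# Hypothetical, purely illustrative nesting of the three levels discussed above.
# The labels come from the examples in the text; the structure itself is not a standard notation.
program_theory = {
    "program": "national education program",
    "activities": [
        {"activity": "support to new legislation", "mechanisms": ["..."]},
        {"activity": "support to policy formulation", "mechanisms": ["..."]},
        {"activity": "support to management of the school system", "mechanisms": ["..."]},
        {"activity": "interventions in selected schools", "mechanisms": ["..."]},
    ],
}

# A single activity can trigger multiple behavioral mechanisms (cf. Astbury and Leeuw, 2010):
microcredit_activity = {
    "activity": "microcredit provided to women",
    "mechanisms": [
        "changed attitudes towards self-employment",
        "increased self-esteem",
        "shift in decision-making power between men and women in the household",
    ],
}
```

Analysis can then zoom in on a single activity and its mechanisms, or look across activities at the program level – the trees-and-forest choice discussed in the next paragraph.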

In the planning, implementation and evaluation stages, informed choices need to be made on how to develop useful theories of the entire program as well as selected components that merit special attention. The idea of unpacking interventions into components (often activities) for in-depth analysis in combination with a more holistic analytical perspective (e.g. questions around coherence and coordination) is discussed in Bamberger, Vaessen and Raimondo (2015). This approach emphasizes the need for ‘seeing the trees without losing track of the forest’.

Having kick-started a reflection on framing program theories, we are far from reaching the end of our exploration. In my next blog I will discuss different dimensions of complexity (inspired by recent and not so recent work in complexity science and systems thinking). When dealing with complexity in intervention design and evaluation, Krugman’s views on the comparative merits of simple versus more elaborate abstractions of reality remain highly relevant. In an increasingly interconnected and policy-saturated world these are issues that merit our full attention.


References

  • Argyris, C. and D. Schön (1974). Theory in practice: Increasing professional effectiveness. San Francisco: Jossey-Bass.
  • Astbury, B. and F.L. Leeuw (2010). Unpacking black boxes: Mechanisms and theory building in evaluation. American Journal of Evaluation, 31(3), 363-381.
  • Bamberger, M., J. Vaessen and E. Raimondo (eds.) (2015). Dealing with complexity in development evaluation: A practical approach. Thousand Oaks: Sage.
  • Bemelmans-Videc, M.L., R.C. Rist and E.O. Vedung (eds.) (1998). Carrots, sticks and sermons: Policy instruments and their evaluation. New Brunswick: Transaction Publishers.
  • Carvalho, S. and H. White (2004). Theory-based evaluation: The case of social funds. American Journal of Evaluation, 25(2), 141-160.
  • Krugman, P. (2000). How complicated does the model have to be? Oxford Review of Economic Policy, 16(4), 33-42.

Comments


Why can't all blogs offer a simple pdf or textual download? Is this because they are considered merely ephemeral? I don't think so. Or is it because the authors believe social media will do the job? If so, they are mistaken. If it's worth writing about, it's worth reflecting on (usually!). So make it easier for us.


Dear Jos,

First, hello again! And thanks for a stimulating post.

You raise an interesting question about how complicated a theory-of-change (ToC) model needs to be. As the famous statistician George Box observed: "All models are wrong, but some are useful." Some pictures may be worth a thousand words, and can be very useful. But others that require a thousand words to try to explain may be of questionable value. Einstein is reported to have said: "Everything should be made as simple as possible, but no simpler."

Enough of aphorisms. Some ToC models may indeed appear rather convoluted, and perhaps more so than is really necessary. But these at least may be useful in illustrating that there may be a lot going on that needs to be taken into consideration in attempting to understand how a program is expected "to work" – and in its evaluation. I've reviewed dozens of ToC/intervention/results models, if not more, from many different agencies. Perhaps the most common weakness is assuming linearity, and that the impact of an intervention can be isolated from context, from interaction with other factors, and from the efforts of partners as well as others.

Such overly simplistic models can have significant negative implications, both for the planning and implementation of the intervention itself and for its evaluation. For example, they can lead program planners to fail to take into account how their own efforts need to interact with those of others, leading, at best, to missed opportunities. They can also lead to overly simplistic approaches to evaluation (often self-proclaimed as "rigorous") that assume a direct cause-and-effect relationship, ignoring the influence of context and other factors, which are treated as "noise".

Another question that frequently comes up (or, rather, should come up, but is too often glossed over) is whose model is presented and whose assumptions are reflected in it. Too often, it is an evaluator who develops the model. But does this really reflect the assumptions of stakeholders about how they feel the program does, or should, "work"? Is there always consensus on this? As Carol Weiss has suggested, it sometimes may be appropriate for evaluation to adjudicate across rival theories, to test out which sets of assumptions are supported by evidence. This is a point you have flagged, so thank you for this.

Again, many thanks for a stimulating post.

Burt

Dear Burt, thanks for your comments. You raise a number of pertinent points. Indeed, simplicity may be a virtue but paying insufficient attention to an intervention’s embeddedness in the broader economic, political, cultural (etc.) context can foster misleading perceptions about the real relevance and (potential) effectiveness of an intervention. In the sequel to this post I will discuss the risks of ‘intervention-centric’ thinking and how to address these. Thanks again for your interest in this post.

Best,
Jos.


Why are all the references from Jos and Caroline? Are there not other thinkers and writers on program theory? I can think of Patricia Rogers, who has written an awesome book on program theory.

Many thanks for your comment. The works cited in this blog post are listed in the reference section above: Argyris and Schön (1974), Astbury and Leeuw (2010), Bamberger, Vaessen and Raimondo (2015), Bemelmans-Videc, Rist and Vedung (1998), Carvalho and White (2004), and Krugman (2000).

There are also links to related blog posts by Jos Vaessen and Caroline Heider, for those who wish to see what else IEG has published on this subject.


I always become uncomfortable when I see discussions about 'confirmation bias' and the need to use rival theories for public program interventions. Although economists do have rival theories to explain the functioning of the global economy, they use much simpler approaches to providing evidence on the impact of more specific government interventions. Econometric analyses rely on the development of models that include all independent variables (representing underlying factors) that have a significant explanatory relationship with the dependent variable. Interpretation bias arises when models try to explain observed program results despite the absence of important explanatory variables. The issue should not be to prove that the 'official' program theory is better than rival ones, but that the program theory of the intervention is complete and that 'evidence' is based on reliable measures taking into account all explanatory variables, including those related to the program context.
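To make the point about omitted explanatory variables concrete, here is a minimal, hypothetical simulation (an editorial illustration, assuming Python with numpy and statsmodels, and entirely made-up numbers): when a contextual variable that drives both program participation and outcomes is left out of the model, the estimated 'program effect' absorbs part of the context's influence.

```python
# Hypothetical illustration of omitted-variable (interpretation) bias.
# All numbers are made up; this is not based on any real program data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 5_000

context = rng.normal(size=n)                                       # e.g. local economic conditions
participation = (context + rng.normal(size=n) > 0).astype(float)   # context also drives program uptake
true_effect = 0.5
outcome = true_effect * participation + 1.0 * context + rng.normal(size=n)

# Model 1: omits the contextual variable -> biased estimate of the program effect
naive = sm.OLS(outcome, sm.add_constant(participation)).fit()

# Model 2: includes the contextual variable -> estimate close to the true effect
full_exog = sm.add_constant(np.column_stack([participation, context]))
adjusted = sm.OLS(outcome, full_exog).fit()

print("true effect:             ", true_effect)
print("omitting context:        ", round(naive.params[1], 2))     # noticeably above 0.5
print("controlling for context: ", round(adjusted.params[1], 2))  # close to 0.5
```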
