
‘There is nothing so practical as a good theory.’ This phrase, purportedly coined by Kurt Lewin in the 1940s and later taken up inter alia by Carol Weiss and Ray Pawson, resonates with most evaluators and development practitioners today. The term theory of change is probably the most popular expression used in the international development community. Personally, I prefer the term program theory, as it is less contentious. It is fair to say that in recent years countless publications, meetings and trainings have been devoted to discussing program theory in its various guises and forms. My present contribution to this ‘cacophony’ of voices is not primarily about why program theories are important and how they should be used (although I do come to these points fairly quickly), but first and foremost about what happens in practice and how this differs from what could be done to make program theory more effective in supporting accountability and learning. In this first blog of a series, I will present the framework for discussing these issues.

Program theory refers to a structured set of assumptions regarding how an intervention works (or is expected to work) and how it influences (or is expected to influence) processes of change. Most of the methodological and conceptual work regarding program theory has been developed in the realm of (ex post) evaluation. At the same time the ‘theory of change’ lingo has deeply penetrated the world of intervention design and planning, merging with existing traditions around logical frameworks and other types of results frameworks.

For the sake of argument, I will discuss the use of program theory in relation to both intervention design and evaluation. Notwithstanding the differences between these two processes, the two are closely linked. In an ideal world, evaluators build on the espoused theories of development practitioners (and related stakeholders), which are articulated during the intervention design phase and are informed by past experience and existing knowledge about what works and under what circumstances. The circle of knowledge accumulation is complete when evaluations feed into the knowledge repositories that inform intervention design. Unfortunately and unsurprisingly, intervention realities are often quite different from this ideal: intervention design tends to be insufficiently informed by existing knowledge repositories, program theories are insufficiently articulated, and evaluators have to do a lot of digging to reconstruct the causal logic underpinning interventions. There are many reasons why this is the case. Practical constraints of time, data, financial resources, staff incentives and available expertise are important explanatory factors here.

Despite these constraints, program theory has made considerable headway in the field of international development. Yet, while many development interventions as well as their evaluations nowadays boast the use of an articulated ‘theory of change’, in reality many practitioners and evaluators tend not to be overly concerned with developing a good theory (or theories). At the risk of oversimplification I highlight four ‘symptoms’ of sub-optimal use of program theory in intervention design and evaluation in international development:

  • Symptom 1: A lack of consensus on what program theory is about. This symptom refers to issues such as a lack of clarity on the use of terminology, the principles of theory specification, and the sources of theory.
  • Symptom 2: A lack of clarity on why we (should) use program theory. There are well-known intended uses of theory that most practitioners and evaluators would agree on, but which are paradoxically often not (fully) realized in practice. At the same time there are a number of potential uses of theory in evaluation that can be very insightful in our quest to develop a better understanding of our interventions, yet tend to be unknown or disregarded. The concept of rival theories, and of adjudication between theories, is an important principle which calls into question the entrenched idea of a single ‘theory of change’. This is also closely related to the third symptom.
  • Symptom 3: A lack of clarity on the level of abstraction of a program theory. Trying to capture and understand large-scale interventions (and even smaller projects), which may comprise multiple stakeholder groups, levels of intervention, sites, activities and pathways of change, in a single program theory can be rather daunting. To improve our understanding of an intervention we should consider reconstructing and testing theories at different levels of abstraction and focus. A related issue concerns the challenges arising from using intervention-centric theories.
  • Symptom 4: A lack of learning about new developments in theory-based assessment of policy interventions. Applications of program theory are not keeping up with new ideas, concepts and methods (both theoretical and applied) that are being developed in this area of work. This constitutes another important reason why program theory is not meeting its potential of effectively supporting evidence-based and systematic intervention design and evaluation. There have been a number of notable developments in the field of program theory that merit the attention of development practitioners and/or evaluators.

In subsequent blogs, I will focus on each symptom separately, laying out the arguments and providing illustrations and references to (recent) work on program theory where needed.

I end with a disclaimer and a note of clarification. My diagnosis, informed by my own evaluation experience in bilateral and multilateral development organizations as well as valuable insights from some of my peers, as imperfect as it is, is not intended to present any kind of final or summative judgement on the application of program theory in the field of international development. On the contrary, I hope that some of the ‘symptoms’ identified above will inspire readers to go back to the rich theoretical and empirical literature on program theory and to apply some of the useful insights found therein.




Submitted by Bill Ward on Thu, 09/15/2016 - 10:48


Indeed – good theory makes good practice. Part of what ails Bank practice is an economic theory of public intervention that needs updating to reflect intervening developments in economic theory and the potential applications to good practice.

The above-referenced theory derives from Richard Musgrave’s 1959 opus, The Theory of Public Finance, and related work arising from Paul Samuelson’s 1954 attempt at a “Pure Theory of Public Expenditure” – Samuelson’s response to Kenneth Arrow and Gerard Debreu’s economic model presented a few months earlier that had only three sectors (households, real, and financial) and needed no public sector (interventions) because the markets were ‘perfect’. Samuelson defined public goods and argued they would not be provided by private actors, thus necessitating a public sector and the resources to run it. Charles Tiebout (1956) then related externalities to public goods; and Francis Bator’s “The Anatomy of Market Failure” in 1958 merged these and related concepts into an outcome-based definition of market failure (failure to achieve THE potential maximum welfare outcome). This still-accepted, outcome-based definition of market failure poses problems in developing and applying an economic theory of public intervention, as it combines with Richard Lipsey and Kelvin Lancaster’s 1956 “General Theory of Second Best” to make acceptable an open-ended potential list of interventions to improve social welfare. Thus, anything goes in designing projects – so long as it does not reduce social welfare.
The Bank traditionally provided intellectual leadership to other international development organizations in dealing with this incomplete economic theory of public intervention. Since the 1970s, the Bank has used a fail-safe, comparative statics method (cost-benefit analysis, aka CBA) to assure that the open-ended list of theories filling the above void does not result in reducing society’s economic welfare as measured by the welfare economics behind the Arrow-Debreu model – the cornerstone of modern economics theory and practice.

Intervening developments in economics now make possible a more complete economic theory of public intervention (that is, one addressing inputs and processes as well as outcomes related to fulfilling Arrow-Debreu conditions) as regards some aspects of market failure. These include some external diseconomies arising from high transaction costs that we now know can be reduced by information, institutions and other ‘infrastructure’ interventions (for which I and a former Bank colleague are charting the related program logic for a few Bank projects in China). These and other developments in economic analysis (insufficient space here to discuss them) have not yet resulted in an updated theory of public goods, market failures and public intervention that is complete (as defined above) for all market failures. But we are discovering specific applications where all the elements for a complete economic theory of intervention now are present, to which we should be able to apply from project inception to evaluation the basic principles addressed in your blog.

Great having you at IEG. I look forward to your sequels.

Submitted by Jos Vaessen on Fri, 09/16/2016 - 06:57


Dear Bill, thanks for your very thoughtful comments. I appreciate your fundamental perspective on program theory, starting out from an overall theory for public intervention as underpinned by development economics. This is an important debate which has gradually become more eclectic as (inter alia) economic theory has broadened its scope, as evidenced by for example interesting developments in institutional and behavioral economics. I understand that you have been working on more specific program theories related to Bank projects. In what follows I will be referring mostly to this more specific level as well as even more detailed theories on behavioral mechanisms (potentially) induced by (public) policy instruments. Thanks again for your interest in this topic.


Submitted by lylian on Wed, 10/26/2016 - 20:48


Jos, I believe that one of the problems is that evaluation is still a reactive process, and this generates a lack of many things.


Submitted by Florence Mulumba on Mon, 12/05/2016 - 14:40


Hello Jos,
Thank you for clearly illustrating the theory of change. I studied a module on impact evaluation but I didn't understand it until now! It's a great job done.

Submitted by Candice Morkel on Mon, 01/09/2017 - 05:19


This is a really useful blog post, Jos. I'm especially interested in your detailed follow-up on symptoms 2 and 4, as I have seen these play out in the public sector in a very significant way. In my work in this sector over more than 10 years, where the fundamental struggle in conducting reliable evaluations was the absence of good programme theory (or any purposely designed programme theory), a lot of our work as M&E specialists was to go back to the drawing board to assist the development planners. Getting all the stakeholders (political and technical) to agree on what the intervention should "look like" often revolved specifically around these two symptoms you outline here, and I will be following this blog with great interest for some insight into how these issues are best addressed.

Dear Candice, thank you for your interest in this blog series. Indeed, arriving at a shared understanding among key stakeholders on how a program is intended to work and influence/induce processes of change constitutes one of the perennial challenges in public policy. A shared (explicit) program theory can be a powerful tool for planning, implementation, monitoring, communication, etc. At the same time, the purpose of program theory goes beyond this (both in planning and evaluation). For example, one could articulate rival theories as a basis for gathering and interpreting data (eventually adjudicating between the theories) or use nested theories (e.g. activities within projects within programs) for in-depth causal analysis and to support a narrative on micro-meso linkages. I have discussed some of the (potential) purposes and applications of program theory in blogposts following this one and will continue to do so in my next post(s). Thanks again for your comment.
