The Independent Evaluation Group (IEG) evaluates the activities of the World Bank Group (WBG) to find “what works, what doesn't, and why” in pursuit of ending poverty and boosting shared prosperity.

But as anyone who has worked in development can tell you, “what works” is rarely as simple as identifying “best practice” and replicating it widely.  Indeed, I would question the utility of the concept of “best practice” in challenging economic environments that are not themselves “first best”, which include most developing economies. I recall an Executive Director at the IMF once saying that “first-best policy in a second-best world is bad policy”. By definition, developing economies exhibit market failures and underdeveloped institutions. Yet development partners too often promote the pursuit of “what works” and “best practice” in their efforts to support development effectiveness without adequately defining the circumstances in which an intervention or policy works well—and when it doesn’t. IEG is no exception in this regard.

The challenge may be even greater in circumstances where policies are affected by more than the stage of market and institutional development. This challenge is well illustrated in a 2009 essay on School Improvement and the Reduction in Poverty[1] by Professor Richard Elmore of the Harvard Graduate School of Education. 

In an interesting digression on policy making and policy research, Professor Elmore makes an important assertion—“the more developed the economy, the more troublesome and problematic becomes the relationship between education, economic well-being, and the reduction of poverty”. This is because, he contends, “all important effects are interaction effects”.  

To underscore his statement about interaction effects, Professor Elmore argues that there are three points of intervention that achieve the greatest impact on student performance in U.S. public schools: the level of “content”, the knowledge and skill that teachers bring to instruction, and the role students play in the instructional process.  Policy actions that don’t take all three things into account (as well as the impact they have on each other) are likely to fall short.

Elmore identifies what he calls the “fallacy of main effects” (and its corollary, the “fallacy of attractive nuisances”), which characterizes the pressure to identify policy interventions that “work”. The attraction of simple answers to decision makers and other stakeholders is obvious: such solutions are easy to communicate, motivate, scale up, and replicate in different contexts. However, the pursuit of main effects is often accompanied by a temptation to ignore evidence that effects vary from one setting to another, explaining away the influence of other factors with what Elmore refers to as the “vapid truism” that “context matters”.
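
Elmore’s statistical point can be made concrete with a toy example. The sketch below is my own illustration, not from his essay, and it assumes a hypothetical data-generating process in which teacher skill raises student outcomes only when the content is rigorous; a regression that includes main effects alone reports small, unconditional “effects” and misses the interaction that actually drives results.

```python
# Toy illustration of the "fallacy of main effects" (hypothetical data, for intuition only).
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hypothetical standardized measures of two intervention points.
teacher_skill = rng.normal(size=n)
content_rigor = rng.normal(size=n)

# Assumed data-generating process: teacher skill pays off mainly when content is rigorous.
outcome = (0.1 * teacher_skill + 0.1 * content_rigor
           + 0.8 * teacher_skill * content_rigor
           + rng.normal(size=n))

def ols(X, y):
    """Ordinary least squares via a least-squares solve."""
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.round(coef, 2)

ones = np.ones(n)

# Model 1: main effects only -- the interaction term is omitted.
X_main = np.column_stack([ones, teacher_skill, content_rigor])
print("main effects only:    ", ols(X_main, outcome))

# Model 2: main effects plus the interaction term.
X_full = np.column_stack([ones, teacher_skill, content_rigor,
                          teacher_skill * content_rigor])
print("with interaction term:", ols(X_full, outcome))

# The first model suggests each input has only a modest, unconditional effect;
# the second recovers the large interaction, i.e. the conditions under which the
# intervention actually works.
```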

Elmore’s frustration with the conduct and use of policy research and analysis is further revealed in his assertion that researchers have been so preoccupied with trying to figure out what policies “work” that they have neglected to study the complex interaction effects of policies on the ground, which could teach us about the conditions that determine how organizations respond to policy changes.

Combined with what Elmore calls the “logic of policy making”, this pushes policy makers toward less complex, more immediate, and more visible solutions that follow the timing of institutional cycles (e.g., elections, replenishment efforts, important stakeholder meetings). The tendency is reinforced when policy makers become frustrated with inconclusive or disappointing results from policy analysis and when the predominant institutional culture rewards simple solutions. While Elmore is writing about efforts to close the achievement gap in U.S. public schools, his concerns resonate in the world of international development and development evaluation.

Elmore concludes that what is needed are “more powerful theories about what actually happens and how people, institutions, and policies interact”. Coming to this kind of deeper understanding requires overcoming the seemingly insurmountable challenge of accurately capturing heterogeneous organizational settings, diverse incentives, and complex structures. This is starting to happen in policy making with the integration of insights from behavioral science, which draw on multiple disciplines, including economics, psychology, and sociology. The World Bank, for its part, has established a Mind, Behavior and Development Unit in its Poverty and Equity Global Practice, with a mandate to better understand how context affects behavior and policy impact, but we have a way to go in breaking down barriers between disciplines in pursuit of a more nuanced and accurate understanding of “what works”.

Elmore’s essay is sobering, and his emphasis on “interaction effects” is highly relevant to development policy making and evaluation, as is his caution about the “fallacy of main effects” and the temptation to gravitate toward simple answers. It follows that if the evaluation community is to consider interaction effects in the search for “what works” in developing countries, it must pay closer attention to the actual processes and incentives that operate within the institutions it supports and influences, which in turn requires a closer dialogue between practitioners and analysts. This realization is part of what underpins IEG’s expanded outreach in the design and conduct of its evaluations.

Another implication of acknowledging the importance of interaction effects is that institutional silos, particularly those between sectors and policy areas (such as those discussed in IEG’s recent evaluation of Knowledge Flow and under the World Bank’s New Operating Model), can seriously undermine the efficacy of policy reform. This makes it essential that the World Bank Group address the internal incentives that reinforce silos. That is not easy: experience suggests that successful collaboration cannot be mandated from above or approached mechanistically. It derives from a genuinely collaborative culture, mutual professional respect, an open flow of information, and an exchange of ideas that is incentivized and rewarded. The greater good must override bureaucratic and professional incentives if the World Bank Group is to meet the challenges before it, including its commitments under IDA19 and the IBRD and IFC capital increases. It will have no choice but to dig deeper, including by reaching across disciplines.

 

[1] Chapter 6 in Poverty and Poverty Alleviation Strategies in North America, Mary Jo Bane and Rene Zenteno (editors), Harvard University Press, 2009.

Comments

I'll call this the PIP model, i.e., evaluation that considers the complex interaction effects between and among people, institutions, and policy. The model sounds good for dealing with socioeconomic problems and their evaluation, particularly in the context of education. I would, however, go for a PIPE model that also includes interactions with the environment, or nature, as we think about sustainability and planetary limits to economic growth and social inclusion.
