The world’s leading development economists and evaluators have been engaged for years in a passionate argument over Randomized Controlled Trials (RCTs) versus observational studies. For those who want the four-minute version of the debate, check out last year’s exchange at the NYU Development Research Institute between economist gurus Abhijit Banerjee and Angus Deaton.
Unfortunately, the tendency toward binary choices is common in many fields, not least the evaluation profession. I remember the debates back in the early ‘90s, when adherents of quantitative and qualitative methods each argued that theirs was the only valid methodology. Then, in 2006, the report “When Will We Ever Learn?” was instrumental in launching the movement for rigorous impact evaluations based on RCTs. This, in turn, prompted a counter-movement advocating qualitative methods, pushing the envelope for participatory evaluations to go beyond focus groups and ask whose reality counts.
This was the good side: at each end of the spectrum, people challenged themselves to sharpen their methods and deepen the debate. But how much time and energy did we have to invest to come out in a better place? Eventually the Network of Networks on Impact Evaluation – an initiative to bring the two sides together – helped reconcile positions, at least among evaluators. But many donors still demand RCTs, and universities churn out students who are sold on the idea that this is the only way to go.
IEG’s evaluation of impact evaluations undertaken by the World Bank Group flagged some of the weaknesses of this approach: the quality of impact evaluations is uneven, the choice of topics is not strategic but clustered around a few subjects like conditional cash transfers, and – perhaps most importantly – the use of their results was negligible, especially in the early years.
So, is it time to abandon ship on RCTs? Development economist Lant Pritchett delivered a searing critique of the RCT randomistas a few weeks ago at the fall meeting of the Evaluation Cooperation Group. His main concern was external validity: the danger of extrapolating from one context, often at small scale, to another, very different context. Instead, he urged more “structured experiential learning,” which allows implementing agencies to rigorously search across alternative project designs, using monitoring data that provides real-time performance information with direct feedback into project design and implementation.
My view? What matters to me, more than who is right or wrong, is that we need to draw on each and every method to deepen our understanding of what happened, how, and why. And not just the independent evaluation folks, but also those implementing projects, who monitor, observe, and evaluate – whether with RCTs, quasi-experimental designs, or qualitative methods – and can do so in real time for greater learning.
What this means for our work in IEG is that we are now combining systematic reviews of existing impact evaluations with portfolio analyses and findings from qualitative evaluations, and tapping into big data and social media to capture more of the information that is out there. And we combine all of that with our own interviews and site visits. The range of data points and opportunities for triangulation is incredible, and each perspective enriches our understanding, not just of the “what” but, more importantly, the “why and how” that will help us – as development practitioners – replicate success, adapt from one situation to another, and avoid failure as much as possible.