For some time now, mixed methods approaches have been part and parcel of mainstream debates and practices in evaluation (and other branches of the applied social sciences). The mixed methods tradition goes back to at least the 1950s and picked up momentum in the 2000s. A number of textbooks (e.g. Tashakkori and Teddlie, 2003, 2010) and a dedicated journal (the Journal of Mixed Methods Research) are clear signs that mixed methods research has become a research paradigm in its own right.

Despite significant progress on mixed methods approaches, their application remains (partly) shrouded in mystery for many evaluation stakeholders. To use a simple analogy, the concept of flying is intuitively understood and appreciated by most, yet when asked to explain the underlying principles, many would fall short. This is not to say that evaluators (and researchers in general, for that matter) are lost in the dark. Far from it: evaluators tend to have very sensible ideas about combining qualitative and quantitative methods.

Yet the lack of an explicit (and comprehensive) understanding of the principles underlying mixed methods inquiry has led to some confusion and even misuse of the concept in the international evaluation community. Let me briefly highlight three manifestations of this:

  1. Mixed methods as a justification for suboptimal and less expensive evaluation designs. With the ongoing risk of ‘ritualization’ in evaluation processes and the pressure to deliver under resource constraints, rigorous designs (e.g. for causal inference) are sometimes rejected and replaced by less costly mixed methods designs that are presented as rigorous alternatives without proper explanation.
     
  2. Mixed methods as a justification for rejecting particular methods. The epistemological debates in the evaluation community are far from over. The concept of mixed methods is sometimes invoked as an excuse to reject particular (reasonable) approaches (e.g. quasi-experimental designs, process tracing) for reasons other than those relating to evaluation design in the context of real-world constraints.
     
  3. Mixed methods as a justification for ‘anything goes’. Given the intuitive appeal of mixed methods approaches, the mere mention of the term satisfies some evaluation stakeholders that the validity of the evaluation’s findings has been safeguarded. In reality, much of the evaluation work that claims to be underpinned by a mixed methods approach fails to make the rationale for doing so explicit. The unfortunate consequence is that the meaning of mixed methods has become diluted, as it continues to represent too much evaluative work that is not supported by a credible and transparent methodological design.

Notwithstanding the continued lack of understanding and misuse of the term by some, the imperative for using mixed methods designs to strengthen the validity of evaluation findings is strong. IEG is continuously looking for ways to strengthen its mixed methods evaluation designs. The overall purpose of a mixed methods approach is to offset the biases and limitations of one method with the strengths of another. In their seminal article on the principles of mixed methods in evaluation, Greene et al. (1989) present five more precise purposes:

  • Triangulation. To seek convergence, corroboration or correspondence in the findings from different methods. For example, to understand the causes of low performance among primary school pupils from different perspectives, one could survey pupils, observe classes, and study pupils’ family situations using interviews, observation or other methods.
     
  • Complementarity. To elaborate or build on the findings of one method with another method. For example, the analysis of survey data might identify associations between awareness-raising campaigns and health-seeking behavior of citizens. This could prompt a series of in-depth interviews with (particular groups of) people to understand their views on staying healthy and the influence of the media.
     
  • Development. To use the findings of one method for developing another. For example, in-depth case studies (including observation and interviews) in different communities could be used to understand the multi-dimensional and context-specific nature of poverty. These insights could then be used to develop questions and variables for a nationwide income survey.
     
  • Initiation. To seek paradoxes, contradictions, or fresh insights on a particular phenomenon. For example, using one set of methods to explore theory A and another set for theory B. The evidence gathered with the different methods could then help adjudicate between the two theories. See one of my previous blog posts for an example.
     
  • Expansion. To expand the breadth and scope of inquiry using different methods. For example, focused documentary analysis and semi-structured interviews could be used to evaluate the institutional capacity development component of an education project, while (multiple waves of) questionnaires administered to teachers could be used to evaluate teacher training activities supported by the same project.

The art of developing mixed methods designs goes beyond an understanding of these principles. It is now widely understood in the field of international development evaluation that different methods have particular comparative advantages (NONIE, 2009; Bamberger et al., 2010). The ongoing challenge of bringing such comparative advantages to bear in the design and sequencing of methods in multi-level and multi-site evaluations, constrained by time, budget and data considerations, will continue to require context-specific and creative solutions. In that sense, some of the mystery will always remain to challenge our thinking.

References

  • Bamberger, M., V. Rao and M. Woolcock (2010). Using mixed methods in monitoring and evaluation. Policy Research Working Paper, 5245. Washington DC: World Bank.
  • Greene, J.C., V.J. Caracelli and W.F. Graham (1989). Toward a conceptual framework for mixed-method evaluation designs. Educational Evaluation and Policy Analysis, 11(3), 255-274.
  • NONIE (2009). Impact evaluations and development – NONIE guidance on impact evaluation. Network of Networks for Impact Evaluation. Washington DC: World Bank.
  • Tashakkori, A. and C. Teddlie (2003, 2010). SAGE handbook of mixed methods in social & behavioral research. Thousand Oaks: SAGE.
