For some time now, mixed methods approaches have been part and parcel of mainstream debates and practices in evaluation (and other branches of the applied social sciences). With roots going back to at least the 1950s, the mixed methods tradition picked up momentum in the 2000s. A number of textbooks (e.g. Tashakkori and Teddlie, 2003, 2010) and a dedicated journal (the Journal of Mixed Methods Research) are clear signs that mixed methods research has become a research paradigm in its own right.

Despite significant progress in mixed methods approaches, their application remains (partly) shrouded in mystery for many evaluation stakeholders. To use a simple analogy, the concept of flying is intuitively understood and appreciated by most, yet when asked to explain the underlying principles, many would fall short. This is not to say that evaluators (and researchers in general, for that matter) are lost in the dark. Far from it: evaluators tend to have very sensible ideas about combining qualitative and quantitative methods.

Yet, the lack of an explicit (and comprehensive) understanding of the principles underlying mixed methods inquiry has led to some confusion and even misuses of the concept in the international evaluation community. Let me briefly highlight three manifestations of this:

  1. Mixed methods as a justification for suboptimal and less expensive evaluation designs. With the ongoing risk of ‘ritualization’ in evaluation processes and the pressure to deliver under resource constraints, rigorous designs (e.g. for causal inference) are sometimes rejected and replaced by less costly mixed methods designs that are presented as rigorous alternatives without proper explanation.
     
  2. Mixed methods as a justification for rejecting particular methods. The epistemological debates in the evaluation community are far from over. The concept of mixed methods is sometimes invoked as an excuse to reject particular (reasonable) approaches (e.g. quasi-experimental designs, process tracing) for reasons other than those relating to evaluation design in the context of real-world constraints.
     
  3. Mixed methods as a justification for ‘anything goes’. Given the intuitive appeal of mixed methods approaches, some evaluation stakeholders are satisfied that merely mentioning them safeguards the validity of the evaluation’s findings. In reality, much of the evaluation work that claims to be underpinned by a mixed methods approach fails to make explicit the rationale for doing so. The unfortunate consequence is that the meaning of mixed methods has become diluted, as the term continues to be attached to evaluative work that is not supported by a credible and transparent methodological design.

Notwithstanding a continued lack of understanding, as well as misuse of the term by some, the imperative for using mixed methods designs to strengthen the validity of evaluation findings is strong. IEG is continuously looking for ways to strengthen its mixed methods evaluation designs. The overall purpose of a mixed methods approach is to offset the biases and limitations of one method with the strengths of another. In their seminal article on the principles of mixed methods in evaluation, Greene et al. (1989) present five more precise purposes:

  • Triangulation. To seek convergence, corroboration or correspondence in the findings from different methods. For example, to understand the causes of low student performance among primary school pupils from different perspectives, one could survey pupils, conduct classroom observation, and study pupils’ family situations through interviews, observation or other methods.
     
  • Complementarity. To elaborate or build on the findings of one method with another method. For example, the analysis of survey data might identify associations between awareness-raising campaigns and health-seeking behavior of citizens. This could prompt a series of in-depth interviews with (particular groups of) people to understand their views on staying healthy and the influence of the media.
     
  • Development. To use the findings of one method to develop another. For example, in-depth case studies (including observation and interviews) in different communities could be used to understand the multi-dimensional and context-specific nature of poverty. These insights could then be used to develop questions and variables for a nation-wide income survey.
     
  • Initiation. To seek paradoxes, contradictions, or fresh insights on a particular phenomenon. For example, one set of methods could be used to explore theory A and another set to explore theory B. The evidence gathered with the different methods could subsequently help to adjudicate between the two theories. See one of my previous blog posts for an example.
     
  • Expansion. To expand the breadth and scope of inquiry using different methods. For example, focused documentary analysis and semi-structured interviews could be used to evaluate the institutional capacity development component of an education project, while (multiple waves of) questionnaires administered to teachers could be used to evaluate teacher training activities supported by the same project.

The art of developing mixed methods designs goes beyond an understanding of these principles. It is now widely understood in international development evaluation that different methods have particular comparative advantages (NONIE, 2009; Bamberger et al., 2010). The ongoing challenge of bringing these comparative advantages to bear in the design and sequencing of methods in multi-level and multi-site evaluations constrained by time, budget and data considerations will continue to require context-specific and creative solutions. In that sense, some of the mystery will always remain to challenge our thinking.

References

  • Bamberger, M., V. Rao and M. Woolcock (2010). Using mixed methods in monitoring and evaluation. Policy Research Working Paper, 5245. Washington DC: World Bank.
  • Greene, J.C., V.J. Caracelli and W.F. Graham (1989). Toward a conceptual framework for mixed-method evaluation designs. Educational Evaluation and Policy Analysis, 11(3), 255-274.
  • NONIE (2009). Impact evaluations and development – NONIE guidance on impact evaluation. Network of Networks for Impact Evaluation. Washington DC: World Bank.
  • Tashakkori, A. and C. Teddlie (2003, 2010). SAGE handbook of mixed methods in social & behavioral research. Thousand Oaks: SAGE.

Comments

Thanks for sharing this post. I do not agree with the characterization of mixed methods research as "mysterious". Researchers trained in a single tradition may find it difficult to fully understand mixed methods research and, therefore, tend to be quite suspicious of it. However, this has less to do with mixed methods being mysterious and more to do with the researcher in question being trained in only a single school of thought. One should remember that good and bad practices exist in every strand of research -- be it qualitative, quantitative or mixed methods. One of the best parts of being a mixed methods researcher is exposing yourself to criticism from qualitative and quantitative folks alike. When it is done well, the benefits of mixed methods research outweigh the costs by a large margin.

Dear Emcet, thanks for your comment and your interest in the blog post. I agree with your main point. My use of the word ‘mystery’ with regard to mixed methods approaches stems not so much from a perceived lack of clarity in the literature or a lack of guidance on the topic, but from the fact that many evaluators (and policy-oriented researchers) have been trained in a particular discipline and methodological tradition, which tends to frame their thinking and practice. Methodological specialization is an important skill that is needed in practice. At the same time, we need sufficient evaluators (and policy-oriented researchers) who have a broad understanding of different designs and methods and the capacity to use and integrate them in logical and efficient ways.

An excellent, clarifying article, Jos, thank you. I sometimes observe that all the evaluations we do are 'mixed methods' to some degree, in that they capture a number of the purposes you outline. However, evaluations often simply mention that mixed methods were used and stop there, without making explicit the interlinkages between the different methods adopted and how they are used in combination to answer a particular evaluation question.

Great piece! Many of the points raised ring true. But the use of sweeping, unevidenced claims and false equivalence does not change the reality that all of these charges apply equally to purely quantitative or purely qualitative work, putting all three on an equal footing when it comes to "flying high shrouded in mystery". Just a 'pari passu' thing.

Dear Romeo, that is a valid point. A key challenge in mixed methods designs (one could argue that most methodological designs are to some extent mixed methods designs) is how we select, integrate and sequence methods. This challenge is further compounded by methodological challenges that relate to specific (qualitative or quantitative) methods. Thanks for your comment.

The mixed methods paradigm is in a cul-de-sac because of one tiny problem: it is usually assumed that it needs to produce scalar results with a uniform message for complex questions, such as the 42 in The Hitchhiker's Guide. I think the challenge is different: not only are the methods mixed but the results are mixed as well, so the methodological challenge is raised a level - how to link mixed results into a consistent interpretation. I think that the bridge between the production of mixed results with a mixed approach in science and their understanding is evaluation, which makes sense of contradictory meanings.

Interesting point, Boran. Thanks. If I understand you correctly, you highlight the role of evaluation as an exercise in making sense of different types of evidence of varying quality (and its relation to the different types of mixed methods approaches that generate the evidence). I would certainly agree with that.

I'm not sure that I have seen widespread misuse of methodologies purporting to mix methods, but I do see a limited use of the possibilities open to researchers by doing so. I like Gerald Midgley's term 'creative mix' of methods, alluding to mixed methods going beyond a simple qual/quant distinction. In work with my colleagues Ellen Lewis (Uni of Hull, UK) and Shravanti Reddy (UN Women) we encourage the use of transdisciplinary mixed methods as a way to bring to the methodological design a genuine and meaningful engagement with participants and stakeholders. Such efforts can enhance the rigor and credibility of evaluation and research. It is risky for those schooled in positivist methods. It also may be more expensive to conduct, but this is traded against obtaining outcomes, even impacts, that are longer-lasting and inspire greater confidence in the study. (Forthcoming: Guidance Inclusive Systemic Evaluation for Gender, Environments and voices from the Margins #ISE4GEMs)

Thank you, Anne, for sharing an interesting example of a mixed methods approach in evaluation. There are, as you probably know, numerous interesting examples out there, often involving academics. Many of those could in a way be characterized as ‘boutique’ evaluations rather than being representative of large strands of institutionalized evaluation work. My points largely relate to the latter.

I came across this blog while googling for mixed methods with a focus on gender. The blog as well as the participants' comments were informative and made me wonder if we should have an additional advisory expert on our panel of co-authors. We (an authors' consortium) are writing a review, 'The effects of transport infrastructure and logistics services interventions for increasing women participation in formal labor markets in low and middle-income countries'. In our review methodology, we plan to use a mixed methods/triangulation approach. I am wondering if you or anyone from your team would be interested in assisting us in an advisory capacity.

Thanks Manisha for your interest in this blog post. Unfortunately, we do not have the time to take on the role you suggested, but I am happy to have a conversation with you on using mixed methods in a (systematic) review study.

Thank you, Jos Vaessen, for responding to my post. I understand the time constraint and appreciate your offer to have a conversation on using mixed methods in our systematic review (SR). This is my third SR; however, I am using mixed methods in an SR for the first time. I have knowledge of mixed methods in qualitative research, and two co-authors have expertise in qualitative methods - thematic synthesis. Nevertheless, as the lead co-author I feel the need to have at least one senior expert who could guide/critique our approach in the protocol. With that said, I am not sure if you would be open to discussing our approach in the conversation. If so, please let me know.

Excellent clarifying article. Even for those who are trained in Mixed Methods, it is good to be reminded of the core principles and purposes. Thanks for sharing.

Thank you very much, Jos, for your thoughtful piece. It's quite true that "mixed methods" can all too easily become an empty mantra, at any stage of an evaluation but particularly in design. I have found that many colleagues, including clients as well as fellow evaluators, are increasingly skeptical of such recitations, and do look beyond the first-round language in search of just how complementarity or expansion (for example) might work in a particular evaluation design. I've found that once I dig into the options and the steps for getting to "something more" it is not necessarily so simple or straightforward (the data don't always behave as we would like them to). It can be a real challenge but doing "deep mixed methods" can reap real payoffs if the effort is invested wisely. Thanks again for your contribution.

Your first list of three reasons scrapes the top of the lack-of-capacity iceberg. You, me, and those who read this are in something of an ivory tower: outside is the vast mass of development sector workers. They typically administer small projects, often for INGOs, with an impact evaluation budget of maybe $20,000. They draft the TORs from previous templates. They copy out the OECD DAC criteria, and then add 40+ more questions. Colleagues might add more questions. They say do "mixed methods", specifying KIIs, FGDs, and a quantitative survey. They do not know or understand any other methods. They do not understand control groups, counterfactuals, etc. My company, Mekong Economics Ltd., does 10+ of these meaningless evaluations every year. Help. We have to do proposals that respond to TORs, or we do not win jobs. Can the Bank pop out of the academic ivory tower and think through this dull vocational problem? Training is an obvious solution, but why not also encourage such organisations to put out a TOR that just says: "We have $20,000 and these five key evaluation questions - please propose your approach"?

Dear Adam, thanks for your remarks. As advocates for (quality) evaluation, we should always strive to engage in conversations with colleagues across the globe on how to design meaningful evaluations. Training is important and we are doing this through multiple channels. At the same time, we need to engage in conversations with governmental, non-governmental and private sector organizations on sensible evaluation strategies. Not everything needs to be evaluated all the time. Strategic selectivity and sensible scoping, in line with the needs for evaluative evidence and available resources, are very important in this regard.
