Evaluation of International Development Interventions

Chapter 1 | Guidance to the Reader

Scope and Aim

This guide is intended as a quick reference to evaluation approaches and methods.1 Its aim is to provide easily accessible, jargon-light descriptions of a broad yet select set of methodological approaches and methods used in evaluations in the field of international development (and beyond). It covers both established and emerging approaches and methods, chosen especially for readers interested in independent evaluation in international development. At the same time, we think that the approaches and methods reflected in the guide will be relevant to a much broader audience of evaluators and policy researchers.

The guide is inevitably selective in its coverage and by no means all-inclusive, reflecting what we consider some of the most salient current trends in development evaluation practice.2, 3

In our discussion of the selected methodological approaches, we have tried to keep the level of complexity and technical detail to a minimum, focusing on the following key aspects: a short description of the approach or method, the main steps involved in its application, variations in methodological principles or application, advantages and disadvantages, and applicability. In addition, we provide examples of applications of each approach or method and references to the literature (both basic and more advanced). The guide will thus help evaluation stakeholders become more aware of different methods and approaches, gain practical insights into their applicability, and know where to look for additional guidance.

The guide is not intended as a step-by-step manual on how to design and conduct evaluations. This type of guidance is already provided in a number of widely used publications (see, for example, Bamberger, Rugh, and Mabry 2006; Morra Imas and Rist 2009). Similarly, we will not discuss the ontological and epistemological foundations of the included approaches and methods. These debates, although interesting in their own right, have been covered well in other publications (see, for example, Pawson and Tilley 1997; Alkin 2004; Stern et al. 2012) and are outside the scope of this guide. In the end, and despite these boundaries, our modest hope is that the guide will broaden readers’ methodological knowledge and inform their future design, conduct, and use of evaluations.

Finally, a central message of the guide is that there is no single “best” evaluation approach or method. The approach should be determined by the nature of the intervention being evaluated, the types of questions the evaluation addresses, and the opportunities and constraints under which the evaluation is conducted, including available time, data, budget, and institutional preferences.

The Intended Audience of the Guide

We expect that a variety of professionals who are involved with evaluation in some way will find this guide useful: novices entering the evaluation field; experienced evaluators interested in quick-access summaries of a range of established and emerging approaches and methods; project managers or commissioners of evaluations who might not have a background in evaluation methods but are nevertheless involved in the evaluation function; and professionals working in program planning, management, monitoring, and related roles. Most of the approaches and methods will be of interest to evaluation stakeholders in a range of institutional settings: multilateral or bilateral organizations, government agencies, nongovernmental organizations, private sector organizations, academia, and other bodies. Policy-oriented researchers in international development may likewise find the guide a handy quick reference. There is, however, a clear and intentional bias toward the work of independent evaluation offices (IEOs)4 as found in many multilateral development organizations (for example, multilateral development banks, United Nations agencies and programs), bilateral organizations, international nongovernmental organizations, or foundations.

The Selected Approaches and Methods in the Guide

Because this guide is explicitly biased toward the work of IEOs, much attention is given to summative evaluation approaches and methods and relatively less to formative (including developmental) approaches and methods.5

The mandate of most IEOs influences the kinds of evaluation methods they are likely to use. Most evaluations are conducted either after the intervention (for example, a project, sector program, or policy) has been completed (retrospective or ex post evaluation) or during an ongoing program or portfolio of interventions. By definition, independence implies that the evaluators are not directly involved in the design or implementation of the organization’s projects, programs, or policies. Furthermore, independence often means that the IEO has little control over the operational arm of the organization and hence over the kinds of information (useful to retrospective evaluation) that are collected during project design or implementation. Finally, IEO evaluations often operate at higher levels of analysis (for example, country or regional programs, thematic strategies), which influences the extent to which participatory methods can be (comprehensively) applied. There is also often a trade-off between breadth and depth of analysis that shapes evaluation design and the scope for in-depth (causal) analysis. For these reasons, a number of approaches and methods, some of which are included in this guide (for example, experimental designs), are often less suited to and less commonly applied in IEO evaluations.

Recognizing these methodological “quasi-boundaries,” the guide mainly focuses on approaches and methods that can be used in retrospective (ex post) evaluations. At the same time, there are many exceptions: even though many IEOs cannot regularly use some of the evaluation approaches and methods described in the guide, the increasing diversity of evaluation modalities and levels of analysis that IEOs engage in (for example, global strategy, country program, thematic area of work, project) requires the application of a broader range of evaluation approaches.

This guide is intended to be a living document. As relevant new methods and applications emerge, we aim to update the guidance notes periodically.

The Structure of the Guide

The remainder of the guide is structured in two chapters. In chapter 2, Methodological Principles of Evaluation Design, we discuss seven guiding principles for designing quality evaluations in a development context. These include allowing evaluation questions to drive methodological decisions, building program theory on stakeholder and substantive theory, mixing methods and approaches, balancing scope and depth of analysis, attending to context, and adapting approaches and methods to real-world constraints. The chapter is not intended as a comprehensive guide to evaluation design. Rather, it examines methods choice according to a number of core methodological principles that evaluators may wish to reflect on.

Chapter 3, Guidance Notes on Evaluation Approaches and Methods in Development, presents an overview of select methodological approaches and more specific methods and tools. Each guidance note briefly describes an approach and its main variations, procedural steps, advantages and disadvantages, and applicability, and provides case examples and additional references and resources.

References

Alkin, M., ed. 2004. Evaluation Roots: Tracing Theorists’ Views and Influences. Thousand Oaks, CA: SAGE.

Bamberger, M., J. Rugh, and L. Mabry. 2006. RealWorld Evaluation: Working under Budget, Time, Data, and Political Constraints. Thousand Oaks, CA: SAGE.

Morra Imas, L., and R. Rist. 2009. The Road to Results: Designing and Conducting Effective Development Evaluations. Washington, DC: World Bank.

Pawson, R., and N. Tilley. 1997. Realistic Evaluation. Thousand Oaks, CA: SAGE.

Stern, E., N. Stame, J. Mayne, K. Forss, R. Davies, and B. Befani. 2012. “Broadening the Range of Designs and Methods for Impact Evaluations.” Working Paper 38, Department for International Development, London. https://www.oecd.org/derec/50399683.pdf.

  1. For simplification purposes, we define method as a particular technique involving a set of principles to collect or analyze data, or both. The term approach can be situated at a more aggregate level, that is, at the level of methodology, and usually involves a combination of methods within a unified framework. Methodology provides the structure and principles for developing and supporting a particular knowledge claim.
  2. Development evaluation is not to be confused with developmental evaluation. The latter is a specific evaluation approach developed by Michael Quinn Patton.
  3. Especially in independent evaluations conducted by independent evaluation units or departments in national or international nongovernmental, governmental, and multilateral organizations. Although a broader range of evaluation approaches may be relevant to the practice of development evaluation, we consider the current selection to be at the core of evaluative practice in independent evaluation.
  4. Evaluation functions of organizations that are (to a large extent) structurally, organizationally, and behaviorally independent from management. Structural independence, the most distinguishing feature of independent evaluation offices, includes such aspects as independent budgets, independent human resource management, and no reporting line to management but rather to some type of oversight body (for example, an executive board).
  5. The latter are not fully excluded from this guide but are not widely covered.