In recent years, evaluation in the field of international development has undergone significant changes. First and foremost, the world we live in has become increasingly complex and interconnected. In the current era of the Sustainable Development Goals, governments, international organizations, private corporations, civil society organizations, and others are increasingly aware of the challenges surrounding transboundary and global issues such as climate change, migration, and international trade. At the same time, evaluation as a source of independent inquiry into the merit and worth of policy interventions to address global, national, and local challenges has grown in importance. The increased number of evaluation functions in governmental, nongovernmental, and private sector organizations; the growing number of countries with voluntary professional organizations for evaluators; and the growth in repositories of knowledge on policy interventions and their (potential) effects are all signs of this trend.
How should evaluators deal with the increasing complexity of policy interventions and the contexts in which they potentially influence change? How can evaluation as a function, as a practice, effectively contribute to the evolving learning and accountability needs of decision makers, practitioners, financiers, and citizens? These questions have no easy answers and to some extent require a departure from how things were done in the past. For example, interventions that influence the lives of the poor, the distribution of wealth, or the sustainable use and conservation of natural resources are often strongly interconnected. Ideally, such interventions should not be assessed in isolation from one another. Similarly, decision makers and other stakeholders no longer rely solely on activity- or project-level evaluations. Assessments of programs, strategies, or even a comprehensive range of interventions that have a bearing on the same phenomenon are becoming more important as “evaluands.” Multisectoral, multidimensional, and multistakeholder perspectives on change, and on the ways policy interventions affect change, are called for. Finally, new technologies for data collection and analysis (for example, machine learning) and new types of data (for example, “big data”) are slowly but steadily making their way into the practice of evaluation.
In light of these challenges, evaluators should broaden their methodological repertoire so that they are better able to match the evaluation questions and the operational constraints of the evaluation to the right methodological approach. Eminent development thinkers and evaluation scholars, such as Albert Hirschman, Peter Rossi, and Ray Pawson, have described evaluation as applied social science research. Evaluators should look to the social sciences when developing their methodological designs and realize that even within the boundaries of their mandates and institutions, there are many opportunities to develop, test, and apply modern methods of data collection and analysis. By doing so, and by combining a mix of methods suited to the specifics of the evaluation at hand, evaluators can provide new insights into development interventions and their consequences.
This guide provides an overview of evaluation approaches and methods that have been used in the field of international development evaluation. Although by no means exhaustive, its modules provide accessible and concise information on useful approaches and methods, selected for both their actual use and their potential in evaluation. The reading lists at the end of each module point the reader to resources for learning more about the applicability and utility of the methods described. Neither the choice of approaches and methods nor the associated guidance is definitive. We hope to update this information as evaluation practices evolve.
We hope that this resource will be helpful to evaluators and other evaluation stakeholders alike and will inform their practice.