2015 brought together three important development agendas. Central to them are the Sustainable Development Goals (SDGs), which replaced the Millennium Development Goals (MDGs). Created through an extensive consultation process, the SDGs set out a vision for 2030 and were adopted at the UN Summit in September. The SDG summit was preceded by the Addis Ababa conference on Financing for Development (Fin4Dev), which brought together key development actors to discuss options for funding the ambitious SDGs. The Paris climate conference (COP21) brought a particular focus on commitments – both goals and finance – to achieve greater sustainability of development processes and outcomes. We summarized IEG's findings relevant to the SDG Summit and the Fin4Dev conference in short publications.

The agendas agreed at these three conferences are large and complex, involving many different stakeholders, actions, funding sources, and new partnerships. Together they have implications for evaluation that pose challenges as much as they provide opportunities.

I see five opportunities to change what we evaluate, and how. They will require growth and development of the evaluation profession, in terms of skills, methods, and practices. They offer opportunities for partnerships with new professions – scientists working on behavioral change, complexity, Big Data, and game theory – that will strengthen evaluation without compromising its independence.

1. Overcome Fragmentation. As the MDG experience taught us, fragmentation – parceling the SDGs out into sector silos – is a major risk. In the case of the MDGs, goals like reducing maternal mortality were quickly relegated to the health sector, even though they clearly required multi-sectoral responses. Progress suffered because interventions from across sectors did not come together toward the goal. The SDGs try to preempt this risk of fragmentation: some goals cross-reference others to make connections explicit. While well intentioned, this may not stem institutional incentives to act otherwise.

A further challenge lies in areas that cut across the SDGs and various aspects of climate change. Here evaluators need to join up efforts to shed light on complex, interrelated development challenges and outcomes. At IEG we are realizing that one of our comparative advantages lies in evaluating complex issues that cut across sectors. For instance, an evaluation of the World Bank's work on health finance brought together practitioners from a number of sectors. Our evaluation of the World Bank Group's assistance to resource-rich countries showcased the different approaches taken and what can be learned from them. These evaluations stimulated dialogue across the three institutions of the World Bank Group, and across different practices within them.

In a similar vein, we have developed Strategic Engagement Areas that focus on the World Bank Group's twin goals – reducing extreme poverty to 3% by 2030 and boosting shared prosperity – but break them down into more cohesive sets of issues. This approach gives us the opportunity to increase synergy between evaluations, enhancing their collective impact.

2. Assess Trade-Offs among Competing Priorities. Parts of the discussion in 2015 made it sound as if it would be easy to achieve growth, zero poverty, and sustainability all at the same time, even under adverse climatic and economic conditions. Integrating these challenges requires a much better understanding of synergies and trade-offs. Efforts are being made to optimize positive synergies, but in many instances such synergies will be hard to identify. Competing needs and interests will put pressure on resources, institutions, and goals, and deeper changes in, for instance, consumption patterns will be needed. Policy-makers have always had to make trade-offs, but balancing growth, poverty reduction, and sustainability will make them tougher. For evaluators, the challenge is to assess how trade-offs were made, whether they were the right ones, and what the consequences were for development progress and outcomes. Our theory-based approaches, which assess interventions against their own objectives, are important, but they do not shed sufficient light on the values that drive choices, the techniques that inform decision-making, or their consequences.

3. Deepen Data and Understanding. Data science, especially Big Data and game theory, offers new opportunities to undertake "virtual" experiments in controlled environments. Technology, game theory, and behavioral science can help evaluators fill data gaps by complementing traditional evaluation methods, identify patterns, and test theories of change. Such applications will require testing and adaptation. They also create opportunities to make evaluations faster and less costly, and to test whether and how past experience predicts the future. IEG has evaluated cost-benefit analyses in the past and has done recent work on value-for-money that needs to be expanded and incorporated into our evaluations.

4. Faster Feedback Loops. Data science also holds the promise of shorter, faster feedback loops in monitoring, self-evaluation, and independent evaluation. Potential risks from, for instance, Big Data need to be managed (as with any other method) to ensure results do not mislead policy-makers and practitioners. Evaluators will need to grow into two roles: becoming educated consumers of Big Data and users of game theory, and becoming proficient at evaluating the use of Big Data and data science to determine whether and how these technologies affect development progress and outcomes.

5. Evaluation Capacity Development. Last but not least, the future of evaluation involves a much stronger focus on evaluation capacity development. The Fin4Dev conference set out that domestic public finance will become a more important source of development finance by 2030. Both governments and citizens will want to know how effective and sustainable their investments are. Together with the long-term trend of increasing interest in evaluation in client/partner countries, this will generate continuing and growing demand for evaluation capacity development. I see evaluation capacity as a counterpart to the statistical capacity that the World Bank Group and the UN have committed to strengthen; it needs to cover both the demand side (educating users of evaluation) and the supply side (developing evaluators' skills, professional standards, and good practice). IEG has strategically repositioned its evaluation capacity development work through CLEAR, and is working on a curriculum review of the International Program for Development Evaluation Training (IPDET). We are also exploring opportunities for the World Bank to embed evaluation capacity development in its country-level work.