The Results Agenda Needs a Steer—What Could Be its New Course?
Is it time to rethink Results Based Management in international development? We imagine a system that favors learning over compliance.
By: Alison Evans, Estelle Raimondo, and Stephen Hutton

The Results Agenda was never meant to be an end in itself, but somewhere along the way it became one. Don’t get us wrong, we appreciate the value of well-chosen indicators, outcome evidence, and data-rich M&E systems.
Yet, when these systems tick all the boxes of “best practice” Results Based Management (RBM) but fall short in generating feedback loops with intended users—Boards, management, operational teams, and clients—something is wrong.
Is it time to rethink what RBM systems are for and how they serve intended users? IEG’s evaluation of The World Bank Group Outcome Orientation at the Country Level grapples with these issues.
We focused our review on the results system the World Bank Group uses to manage its country engagements, as distinct from its project level or corporate results systems. In country engagements, the Bank Group’s value lies in its capacity to deploy, combine, or sequence a wide range of lending instruments, analytics, advice, policy dialogue, and convening. As such, it is at the country level that the clearest picture of the Bank Group’s development impact should emerge to inform decision-making.
The Bank Group’s country-level results system has evolved in line with RBM “best practices”: country strategies frame their objectives in terms of outcomes; they use results frameworks as their primary tool for tracking program implementation and measuring performance, premised on the trinity of quantification, attribution and time-boundedness; there is a mid-term review, where teams take stock of progress and adjust the frameworks accordingly; there is also a final self-assessment which generates outcome ratings that are then validated by IEG. All well in line with conventional RBM wisdom.
Yet, we find that the results system, while prioritizing reporting and upward accountability, has become dislodged from the critical cycle of feedback, learning, and improving. This has happened for several reasons:
The Board and Management are interested in whether the Bank Group is contributing positively to country-level outcomes. The current results system for country engagements is poorly suited to capture this, because such outcomes are hard to quantify, lack clear attribution – being the result of interventions by many actors – and may not be achieved by the end of a strategy cycle. At the same time, relatively little attention or discussion time is devoted to the terminal evaluation of country programs. This broken feedback loop fails to establish results measurement and management at the country level as a priority.
Country teams must make adaptive management decisions to navigate changing contexts, address operational problems, and ensure synergies across interventions. We found that country teams practice many facets of adaptive management, but they don’t use the results system to help them do so. Instead, they rely on tacit knowledge, professional experience, and advice from networks when making adaptive decisions.
Country teams find that the results system does not give them a sufficiently timely or substantive readout of a country’s progress or of whether the Bank Group is hitting key milestones on its results chain. Nor does the information fill the project-level evaluation system’s blind spots on the contributions of ASA, convening, or policy dialogue efforts.
At the mid-term review stage—a key moment for evidence-based reflection and adaptation—teams spend most of their time on documenting past decisions and revising results indicators for reporting purposes. Staff find that their incentives focus on project approvals and output delivery rather than results achievement and management.
Country clients are engaged in country strategy design but less in other aspects of country-level results management. Frequent turnover among officials encourages a focus on short-term gains rather than longer-term outcome measurement and management, especially when client governments do not use a results-based approach to drive their own decision-making. Bank Group teams and other development partners rarely harmonize their efforts or use country systems for monitoring and evaluation, which leads to a fragmented M&E landscape and weakly developed feedback loops.
How has all this occurred? In building our results systems, maybe we have focused too much on making them “rigorous” and not enough on making them useful. Generating, reporting, and scrutinizing results data have almost become ends in themselves, with little attention to whether the results system generates constructive feedback loops and helps agencies make better decisions.
The challenges identified in our evaluation are well known, and hardly unique to the Bank Group (OECD 2019). But past efforts at correction have doubled down on attribution and added more performance measures, creating a cascade of indicators that have become ever less useful. Is it time to try something different?
What could an alternative model that puts users first look like? The evaluation proposes adopting a Monitoring, Evaluation and Learning (MEL) approach.
Monitoring could be tailored to track key country outcomes of interest to the authorizing environment. A selective evaluation approach could allow deeper inquiry into critical areas that support country teams’ adaptive management and learning needs. A system that reduced the time spent on reporting and adapting results frameworks would free up space for collective reflection. A model that prioritized the needs of the users would promote greater ownership by teams and more use of evaluative thinking, data and evidence to support decision making.
The evaluation also points the way to a different interpretation of RBM based on notions of mutual accountability, collective learning, informed risk taking, and trust maintained through rewards and effective challenge mechanisms.
We could step back from the reliance on results frameworks and uniform approaches and enable a system that is selective and tailored to the needs of decision-makers. We could shift institutional incentives toward a better balance between measuring and managing for higher-order outcomes, and put evidence of learning and adapting at the heart of what it means to be accountable.