The Results Agenda was never meant to be an end in itself, but somewhere along the way it became one. Don’t get us wrong, we appreciate the value of well-chosen indicators, outcome evidence, and data-rich M&E systems.

Yet, when these systems tick all the boxes of “best practice” Results Based Management (RBM) but fall short in generating feedback loops with intended users—Boards, management, operational teams, and clients—something is wrong.

Is it time to rethink what RBM systems are for and how they serve intended users? IEG's evaluation of The World Bank Group Outcome Orientation at the Country Level grapples with these issues.

A broken feedback loop in the World Bank Group’s country programs

We focused our review on the results system the World Bank Group uses to manage its country engagements, as distinct from its project-level or corporate results systems. In country engagements, the Bank Group’s value lies in its capacity to deploy, combine, or sequence a wide range of lending instruments, analytics, advice, policy dialogue, and convening. As such, it is at the country level that the clearest picture of the Bank Group’s development impact should emerge to inform decision-making.

The Bank Group’s country-level results system has evolved in line with RBM “best practices”: country strategies frame their objectives in terms of outcomes; they use results frameworks as their primary tool for tracking program implementation and measuring performance, premised on the trinity of quantification, attribution, and time-boundedness; there is a mid-term review, where teams take stock of progress and adjust the frameworks accordingly; and there is a final self-assessment which generates outcome ratings that are then validated by IEG. All well in line with conventional RBM wisdom.

Yet, we find that the results system, while prioritizing reporting and upward accountability, has become dislodged from the critical cycle of feedback, learning and improving. There are two main reasons for this:

  • a results measurement toolkit that is more suited to tracking outputs and direct outcomes of individual operations rather than results achieved through multifaceted country engagements, and
  • incentives and signals that focus on meeting targets and reporting, rather than creating space for collective learning or making course corrections, and that at times run counter to staff’s intrinsic motivation to pursue high-level outcomes.

Looking at the needs of different user groups  

The Board and Management are interested in whether the Bank Group is positively contributing to country-level outcomes. The current results system for country engagements is poorly suited to capture this, because such outcomes are hard to quantify, lack clear attribution – being the result of interventions by many actors – and may not be achieved by the end of a strategy cycle. At the same time, relatively little attention and discussion time are devoted to the terminal evaluation of country programs. This broken feedback loop fails to establish results measurement and management at the country level as a priority.

Having to revise indicators for reporting purposes crowds out opportunities for reflecting on evidence and adjusting programs. Image Credit: Jess3

Country teams must make adaptive management decisions to navigate changing contexts, address operational problems, and ensure synergies across interventions. We found that country teams practice many facets of adaptive management, but they don’t use the results system to help them do so. Instead they rely on tacit knowledge, professional experience, and advice from networks when making adaptive decisions.

Country teams find the results system does not give them a sufficiently timely or substantive readout of the country’s progress or of whether the Bank Group is hitting key milestones on its results chain. The information does not compensate for the project-level evaluation system’s blind spots on the contributions of ASA, convening, or policy dialogue efforts.

At the mid-term review stage—a key moment for evidence-based reflection and adaptation—teams spend most of their time documenting past decisions and revising results indicators for reporting purposes. Staff find that their incentives focus on project approvals and output delivery rather than results achievement and management.

Country clients are engaged in country strategy design but less so in other aspects of country-level results management. Frequent turnover among officials encourages a focus on short-term gains rather than longer-term outcome measurement and management, especially when client governments tend not to use a results-based approach to drive their own decision-making. Bank Group teams and other development partners rarely harmonize their efforts or use country systems for monitoring and evaluation, which leads to a fragmented monitoring and evaluation landscape and weakly developed feedback loops.

How has all this occurred? In building our results systems, maybe we have focused too much on making them “rigorous” and not enough on making them useful. Generating data, reporting, and scrutinizing results have almost become ends in themselves, with too little attention to whether the results system generates constructive feedback loops and helps agencies make better decisions.

The challenges identified in our evaluation are well known, and hardly unique to the Bank Group (OECD 2019). But past efforts at correction have doubled down on attribution and added more performance measures, creating a cascade of indicators that have become ever less useful. Is it time to try something different?

Towards an alternative model

What could an alternative model that puts users first look like? The evaluation proposes adopting a Monitoring, Evaluation and Learning (MEL) approach.

Monitoring could be tailored to track key country outcomes of interest to the authorizing environment. A selective evaluation approach could allow deeper inquiry into critical areas that support country teams’ adaptive management and learning needs. A system that reduced the time spent on reporting and adapting results frameworks would free up space for collective reflection. A model that prioritized the needs of the users would promote greater ownership by teams and more use of evaluative thinking, data and evidence to support decision making.

A new IEG evaluation envisions a results management system that helps development practitioners adapt programs according to evidence. Image Credit: Jess3

The evaluation also points the way to a different interpretation of RBM based on notions of mutual accountability, collective learning, informed risk taking, and trust maintained through rewards and effective challenge mechanisms.

We could step back from the reliance on results frameworks and uniform approaches and enable a system that is selective and tailored to the needs of decision-makers. We could shift institutional incentives toward a better balance between measuring and managing for higher-order outcomes and put evidence of learning and adapting at the heart of what it means to be accountable.

Read The World Bank Group Outcome Orientation at the Country Level: an Independent Evaluation

Comments

Submitted by Jack van Holst… on Fri, 02/05/2021 - 11:55


Stephen - I read your blog with Alison and Estelle this morning. I wish I had the time at present to read the full report, but I am really busy with a few tasks. I very much agree with the thrust of your interesting blog. My reaction to your request for feedback is that IEG and the evaluation community needs to articulate a short and clear message that summarizes the conclusions of your report. I assume that the last sentence in your blog is your summary of a better way ahead on results-based measurement. With great respect for you and your co-authors it contains too much jargon and may not jolt our colleagues out of "this is what we always do" syndrome. Borrowing from President Kennedy’s inaugural address, a clearer message could be something like “Ask not only about shortcomings in results, but ask about what you can learn about how to improve results”. This is obviously not perfect and you will be able to improve on it (or craft something else) but it is my sincere suggestion on how to clarify and strengthen the point I think you are making at the end of your blog.

Jack,
Thanks so much for the comment. We agree completely on the need for simplicity and clarity. This blog is aimed at the evaluation community so is a more advanced expression of our thinking, trying to get at some of the big picture. Our main report has a short overview which hopefully is clear, but we're also planning a range of other dissemination approaches and discussions with different stakeholders. We take your point that a very simple articulation of what staff can do differently would also help. I think the simplest expression is that results systems need to be useful to the staff and teams engaged in delivering them.
- Stephen
