The World Bank Group Outcome Orientation at the Country Level
Chapter 4 | Managing for Outcomes
Highlights
Country teams practice adaptive management by closely monitoring the health of country portfolios, with a focus on disbursements; addressing delivery issues; and navigating changes in country contexts.
However, teams rarely use the country-level results system to inform adaptive management decisions. Instead, they rely on tacit knowledge, professional experiences, and advice from personal networks when making adaptive decisions. This reliance has some disadvantages.
Country strategies’ midterm review instrument, the Performance and Learning Review (PLR), largely documents decisions that were already made and adjusts results frameworks; however, the extent to which teams use it for collective reflection and adaptive management depends on the initiative of country team leaders.
Clients do not engage much in the World Bank Group’s country-level results system and the World Bank seldom helps countries develop their own monitoring, evaluation, and results management systems. Moreover, there is little coordination of results management activities among development partners.
This chapter reviews how country teams practice adaptive management and whether country-level systems support them in making course corrections. It uses a typology of adaptive management practices for country portfolios (Buffardi et al. 2019) to examine the types of adaptation that are most common in Bank Group country engagements. The chapter then looks at how country teams practice adaptive management: whether they use evidence from the country-level results system to inform their judgment, whether they use the PLR as a key adaptation moment, and how they engage clients throughout the engagement cycle on outcome measurement and adaptive management. In general, this chapter finds that country teams adaptively manage their portfolios to meet evolving client needs, but the country-level results measurement system does not effectively support teams in making course corrections to country programs.
Types of Adaptation
Managing country engagements includes a range of adaptive management decisions. In short, adaptive management is an iterative approach to decision-making whereby interventions and portfolios are adjusted based on evidence and evolving context. The literature shows that most adaptive management approaches were developed for individual projects (Sweetman and Conboy 2018) and later mimicked with little adjustment at the country portfolio level. At the Bank Group, the types of adaptive portfolio decisions country teams can make include establishing and managing the operations pipeline; approving or not approving project extensions, restructuring, and additional financing; determining whether government reforms are sufficient to proceed with policy financing; deciding which pieces of analytics are needed to inform policy or support the future operations pipeline; and (re)allocating resources across the portfolio. To organize and understand these decisions, IEG adapted a typology developed by the Global Learning for Adaptive Management initiative (2019). As shown in table 4.1, the typology identifies six main categories of portfolio analysis: health checks, context responsiveness, sum of the parts, scale-ups, risk and outcome hedging, and comparative advantages. Not all of these analyses require the same type of evidence.
Table 4.1. Portfolio Adaptation and Evidence Needs
Purpose of Analysis | Type of Adaptation | Type of Evidence
Health checks: What is happening? How are things going? | Attention of management and clients to specific projects and key actions to resolve problems during the cycle | Performance and delivery indicators, risk information
Context responsiveness: What do all activities need to consider or change in response to shifts in the context? | Shifts in implementation practices or change in composition of the portfolio during the cycle | Understanding of the external operating context
Sum of the parts: What are the combined effects of multiple projects? | Results reporting and organizational learning rather than adaptation | Outcome evidence
Scale up or down: What should be scaled up, scaled down, or discontinued? | Adaptation in implementation of current projects or future engagement, resource allocation shifts, client dialogue | Outcome and value-for-money evidence
Hedging: How can the portfolio maintain a pipeline of outcomes over different time frames, and how likely is the outcome to be achieved, given the risks? | Adjustment in composition of the portfolio at the beginning of, throughout, and across the cycle | Expected outcome trajectory, risks, strength of outcome evidence in other contexts
Comparative advantage: How can we maximize our unique contribution relative to others? How should future resources be allocated? | Inform development of future portfolio | Widest range of evidence: outcome evidence, stakeholder mapping, evidence of others’ effectiveness
Source: Adapted from Buffardi et al. 2019.
Note: This typology was used as a basis of a content analysis of the Performance and Learning Review and the Completion and Learning Review and as an underlying framework for interviews and case studies on adaptive management practices.
Health Checks
Country teams closely monitor the health of country portfolios. Through various exercises, including the Country Portfolio Performance Review (CPPR), teams track the portfolio’s safeguards, project risks, output delivery, and disbursement indicators. In all sampled PLRs, country teams documented their systematic efforts to identify portfolio delivery concerns early and remedy them with their clients. Portfolio officers in IFC’s regional industry departments conduct regular portfolio reviews that assess financial performance and how evolving macroeconomic conditions affect the portfolio’s health. MIGA’s country ratings committee meets quarterly to review country risk ratings for its noncommercial risk coverages and to consider how these ratings feed into MIGA’s assessments of risks in upcoming projects. In sampled CLRs, country teams reflect on the creative solutions they have applied to bolster delivery. For example, in Haiti the country team established a country implementation strategy drawing lessons from CPPRs. The strategy enshrined specific measures for project preparation, supervision, and performance management, including increasing face time with the client through country-based task team leaders and intensifying scrutiny of portfolio performance through monthly Haiti management team reviews. In Ghana, the country team set up an Implementation Support Team composed of Ministry of Finance and CMU staff to identify underperforming projects and help fiduciary and project teams tackle project issues as they arise rather than solely during supervision missions. The CMU informs the Implementation Support Team’s actions by preparing a monthly portfolio analysis.
Country teams consider health checks an essential but insufficient part of managing country engagements. In interviews, staff emphasized that regular health checks of the portfolio are important for achieving results, especially in contexts where delivery is difficult. At the same time, country teams recognized that too little attention is paid to results information and that emphasis falls disproportionately on disbursement and output delivery at the Bank Group and on financial returns at IFC. As several interviewees put it, “If we don’t disburse or deliver, we can’t expect outcomes.” Teams attributed the limited attention to results information to (i) country-level results systems generating inadequate information; (ii) difficulty engaging clients on the topic of results; and (iii) country leaders’ incentives to prioritize delivery over results management. These three factors are explored further in the second part of this chapter and in chapter 5.
Some teams have tried to incorporate results evidence into routine health checks through portfolio reviews at IFC or results-focused CPPRs at the World Bank. Vietnam is a good example of this practice. The country team opted for a “Programmatic CPPR” informed by a set of indicators that track progress toward outcomes, and it felt this practice improved the portfolio’s performance. It also enabled closer coordination with other partners, notably the Asian Development Bank and the Japan International Cooperation Agency, through joint reviews. Other country staff were more ambivalent about the utility of results-focused CPPRs, saying the practice is ad hoc. Staff also said it was difficult to translate the resulting data into visual, actionable, and understandable information. These CPPRs were deemed useful when the client demonstrated a keen interest in results information, which is rare, or when country managers used the results information as a tool for dialogue with clients. For its part, IFC recently introduced a new type of portfolio review that assesses progress on the reforms identified in its country strategies (CSs) as critical (the “ifs”) to enable IFC investments in a country’s specific sectors (the “then”). The first set of portfolio reviews gathered industry teams, country teams, and IFC’s senior management, including the chief operating officer and sometimes the chief executive officer. Interviewed IFC staff were positive about these reviews, saying they sent a strong signal from management to use IFC CSs as strategic road maps for understanding IFC’s contributions to country development outcomes.
Context Responsiveness
Country teams adapt country engagements to external shocks and evolving government priorities. In interviews, country teams highlighted the importance of adjusting portfolios to evolving client demands while remaining focused on achieving the program’s overall objectives. Country team leaders said these adaptive management decisions require foresight and informed judgment but cannot be entirely data driven. Interviewed clients highlighted the World Bank’s responsiveness to changing country dynamics as a strength relative to their other development partners. In PLRs, country teams document how they adapt portfolios to changing country contexts. More than half of the sampled countries with an available PLR (14 out of 25) described adapting portfolios to changing government priorities, government staff turnover, and the presence or absence of key reform champions. Country teams often have to adjust their engagements to external shocks: 13 of the 25 sampled countries with an available PLR documented adapting to unexpected events such as epidemics, civil unrest, natural disasters, economic shocks, or sudden increases in migration. These events often lead to short-lived intensification of donor collaboration, rapid impact assessments, and a series of short-term adaptive actions to simplify processes, deploy emergency funds, increase risk mitigation measures, and refocus interventions on immediate impacts. For example, in Colombia, the World Bank’s country team conducted evaluative exercises to assess the immediate impact of the Venezuelan migration crisis before making program course corrections.
In FCV countries, country teams constantly adapt to changing circumstances amid great uncertainty and low institutional capacity. In these countries, achieving development impact can mean maintaining or expanding development gains or preventing them from slipping backward. Four examples illustrate the need to adjust constantly to shocks to avert development losses. In Haiti, the World Bank poured in additional resources after a catastrophic earthquake and used existing projects as platforms to address new challenges. When Chad’s security situation deteriorated and government resources were diverted to security, the World Bank provided budget support and restructured an education project to sustain teacher and civil servant salaries and associated service delivery. The World Bank also responded to a crisis in neighboring Central African Republic by using an existing agriculture project to provide short-term assistance to refugees and returnees until a larger project could be approved. In Afghanistan, the World Bank has frequently adapted to changing security situations and evolving donor priorities and expectations. In the Solomon Islands, the World Bank used the turnover of senior government officials as an opportunity to deepen policy dialogue with the new government on macroeconomic issues, which helped renew opportunities for joint donor budget support.
Evidence-Informed Adaptation
Other types of adaptive management require evaluative evidence that the country-level results system does not provide, including strong evaluative analysis and evidence on the outcomes of decisions. As shown in table 4.1, such evidence is needed to analyze program expansions or contractions, the World Bank’s comparative advantages, the cumulative effects of multiple interventions, and the trade-offs between risks and rewards in achieving outcomes. IEG found little evidence that the country results system supplies what is needed to support and inform these portfolio adaptations. Staff interviews, case studies, and document analyses suggest that decisions to scale up Bank Group interventions are based more on observed client need and satisfaction, and on whether implementation and output delivery have been smooth, than on evaluative evidence.
Decisions on balancing risks and rewards are also not well informed by the results system. Country teams pay close attention to evolving country risks across the portfolio. However, country teams reported that their risk analysis is largely separate from the information they gather on results and that the two should be combined to inform decisions on whether and when to support risky interventions with high potential impact. Country engagement documents rarely discuss how to balance risks against payoffs. In only three of 25 countries with available PLRs were there details on how the Bank Group “hedged” against different risks; for example, if engagement in sector X were to fail, how would the Bank Group adapt to achieve at least a minimum level of results within the strategy period?
How Teams Practice Adaptive Management
Country teams rarely use the country-level results system to inform adaptive management. Country teams find the information captured by country-level results systems unhelpful for decision-making for several reasons. First, the information does not give them a sense of the country’s progress in a sector or outcome area or show whether the Bank Group is hitting key milestones on its results chain. Nor does it compensate for the project-level evaluation system’s blind spots on ASA, convening, or policy dialogue efforts, because these are also not well covered at the country level. Second, the information is not up to date: only project monitoring systems track progress during the CPF cycle, and country-level results information is updated only at the PLR and CLR stages. Third, results frameworks are not user friendly. In the absence of a portal that can track country-level indicators, country teams are unable to access, update, and visualize results information. IEG found that only a few country teams attempted to adapt formal CPF results frameworks into a results-gathering tool for internal decision-making and client engagement. One team that did this was the Western Balkans CMU, which converted the CPF results framework’s indicators into a traffic light tool highlighting good, medium, or poor progress on CPF objectives. Results updates based on this tool were discussed internally on a quarterly basis and externally with clients at the time of the CPPR.
Instead, country teams rely primarily on tacit knowledge, professional experience, and advice from personal networks when making adaptive decisions. Country teams’ adaptive actions are shaped by professional judgment and tacit, informal knowledge rather than formal evidence from country results systems. Teams rely on advice from friends and colleagues, their personal reading of the country’s political economy, and their understanding of government power dynamics to assess the likelihood that adaptive actions will succeed and to navigate accordingly. Country teams use “soft information” that they can observe and learn from but cannot verify with metrics. As one country director put it, “Data, however robust they are, are no substitute for informed judgment.” When program objectives and intended results are multifaceted and not well captured through metrics—such as objectives related to systems change or institution building—or when the operating environment is highly uncertain, navigating by judgment can be an effective strategy for adaptive management (Aghion and Tirole 1997; Honig 2018, 2020; Muller 2018).
This reliance on tacit knowledge and personal judgment has some disadvantages. The World Development Report 2015 highlighted that development professionals, including Bank Group staff, can be susceptible to confirmation bias and fall into decision traps when their choices are influenced by their social environment, mental models, and groupthink (World Bank 2015k). The report shows that “development professionals can make consequential mistakes even when they are diligent, sincere, technically competent and experienced” (189). The IEG report on how the World Bank learns also highlighted several limits of relying heavily on tacit knowledge exchange to drive action (World Bank 2015g). Both reports emphasized the importance of measures to counteract the risks of groupthink. They provide examples of “challenge functions”: deliberative environments in which teams are exposed to opposing views and invited to defend their own, and in which an independent voice specializing in challenging assumptions has been shown to counter ingrained biases effectively.
Country teams sometimes use evidence from sources outside the formal results reporting systems to inform adaptive management. To inform program adaptation, country teams sometimes carry out internal stocktaking exercises designed to maximize candor and self-reflection. Country teams also find rapid feedback gathering useful when shaping action. Several mentioned the Survey of Well-Being via Instant and Frequent Tracking and Iterative Beneficiary Monitoring as particularly useful. “Listening to Tajikistan” is an example of a high-frequency survey that allows the country team to monitor shocks, fragility, and the life satisfaction of Tajiks throughout the country’s engagement cycle. The Morocco “portfolio footprint analysis” is another example of targeted information gathering to inform outcome orientation. The Morocco team was concerned that projects focused too much on geographic areas where implementation would be easy rather than on areas where needs are high. In collaboration with project task team leaders, the country team overlaid poverty data on the geographical scope of the projects to gauge the poverty-reducing potential of the World Bank’s operations. This exercise equipped the team with visual tools to improve project targeting and led to a poverty targeting index that ranks CPF priority areas at the provincial and communal levels. Country teams consider this type of feedback much more useful than information provided by the formal country-level results system.
Frequent turnover of staff and managers makes it difficult to retain knowledge over time and weakens both tacit knowledge and accountability. Interviewees suggested that turnover hinders outcome management. Country teams rely on staff’s tacit knowledge of the country context, their personal relationships with stakeholders, their access to networks, and their field experience to make informed decisions. Frequent turnover in key staff and managerial positions without a proper handover or succession plan can therefore be detrimental and complicate the Bank Group’s ability to focus on long-term results. The combination of truncated institutional memory and the lack of results systems focused on long-term success makes repeating mistakes more likely. Interviewees noted that there are few formal mechanisms to transfer knowledge during staff handovers and that the quality of this transfer depends on the behavior of the incoming and outgoing staff. Country teams also argued that staff rotation undermines accountability and responsibility. Managers and staff who work on country issues during the CPF design stage have sometimes moved on by the time of the PLR reality check and certainly by the time results are achieved or evaluated. Several country managers acknowledged this as a weakness of the system. As one put it: “I have no incentive to invest in a results approach that is useful for managerial or reporting purposes, as I will have moved to another country by the time we have to make adjustments or have to show what we’ve achieved.” Among IEG’s sample of cases, there were only a few instances where the same country managers who designed the CPF stayed on until the PLR stage or beyond. In these cases, there was stronger ownership of the strategy and commitment to achieving results.
Some country teams have improved their adaptive management capacity by finding ways to build and retain institutional knowledge. Interviews and case studies revealed that country teams must find ways to overcome the frequent turnover of staff and management. Methods to pass on knowledge and hand over responsibilities from departing to arriving staff are considered very important by country teams and counterparts alike. However, these methods are too often neglected because there are no formal handover mechanisms and, as a result, handovers occur based on staff’s individual discretion and motivation. To mitigate this challenge, some teams have used past country managers or directors as peer reviewers for current CPFs or PLRs to ensure some continuity in thinking. Another way to overcome this challenge is to empower local staff to take an active role in portfolio and outcome management. Local staff can be powerful conduits of institutional and portfolio memory, can maintain strong networks across a range of stakeholders, and can carry a long-term view of the program’s outcome orientation. In the Western Balkans, for instance, the cross-country portfolio analysis is implemented primarily by local staff who play an increasingly active role in the countries’ engagement cycles and contribute significantly to the results management of country portfolios. Yet another way to maintain institutional knowledge is by establishing regular safe spaces to discuss the successes and failures of the portfolio. In Peru, for instance, the country director frequently convenes the full country team and GP members to brainstorm lessons from past actions and experiences.
Performance and Learning Reviews
Country teams use PLRs to document changes in the country engagement. The Guidance on Country Engagement expects that “the WBG [World Bank Group] carries out a continuous process of monitoring and learning from implementation” (World Bank Group 2019u, 16). The PLR is envisaged as a key adaptation moment with feedback loops between evidence and action. In interviews, the majority (83 percent) of staff who expressed an opinion on PLRs found them useful for revising the program’s ambition, and many (40 percent) found them useful for adjusting CPF results frameworks. IEG’s document review confirmed that teams use PLRs well for that purpose: of the 60 CPF objectives tracked by IEG, 27 were changed at the PLR stage. Most of these changes (17) adjusted the objectives’ scope or expected outcome levels; another nine responded to project delays or failures and changes in the country’s situation. MIGA and IFC also use PLRs to update targets and indicators and to review the pipeline.
Much of the time and effort put into PLRs is taken up by revising the results frameworks, and 81 percent of indicators are revised or dropped. It makes sense that program indicators from the CPF results framework are updated or dropped at the PLR stage, given the difficulty of forecasting these results at the CPF stage. IEG reviewed 334 indicators from all sampled CPFs that had a PLR. After the PLR stage, only 19 percent of these CPF indicators were maintained and tracked as originally formulated. Most of the CPF indicators were either dropped (38 percent) or revised (43 percent). Thirty percent of PLR adjustments address shortcomings in the initial program design or align results frameworks to portfolio changes caused by new activities, project cancellations, and changes to the scope of work, among other reasons. Country teams considered these adjustments to be tedious and mechanical, crowding out the space to use PLRs for learning and reflection. Having the PLR as a Board deliverable adds to the transaction costs of the exercise, with a high level of scrutiny and multiple drafts going back and forth. Country teams also emphasized that the process was time-consuming, especially without online portals that link CPF results frameworks to underlying indicator data sets. As a result, country teams described having to “run after project TTLs [task team leaders] and IOs [Investment Officers]” for indicators. Overall, teams did not see the value of having a fully fledged results framework at the CPF design stage when there is uncertainty surrounding the pipeline. Instead, they felt that the PLR stage was a more opportune time to elaborate the country program’s results framework.
Country management significantly influences how much PLRs are used as a platform for collective reflection and adaptive management. For the most part, PLRs report past decisions to the Board rather than serve as an inflection point toward future change. Country teams, especially those from IFC, view the PLR primarily as a reporting requirement, not an adaptive management platform. Fewer than half of PLRs contain any information on the adaptive management actions that IFC and MIGA take to maintain or improve the health of their portfolios. Unless country management convenes GP and IFC industry staff for collective learning at the PLR, those groups are rarely involved beyond providing updates to CPF results frameworks, and GPs generally found little value in the exercise. That said, IEG found examples of country leaders who proactively used PLRs to reflect collectively within the CMU team and with GPs and clients to uncover meaningful lessons. In those cases, country teams appreciated this moment for “taking a step back,” as several put it. For instance, in the Maldives, the PLR was used to reorient the portfolio’s focus from developing the capital city to developing secondary centers. However, this change in the country team’s narrative and tactics was serendipitous rather than planned, because the PLR process coincided with elections and a new government whose priorities differed from the previous one’s.
Client Engagement
Clients engage country teams primarily when designing CPFs and carrying out portfolio health checks rather than when measuring and managing results. Clients that are most engaged with country teams at the CPF stage and when projects are being designed tend to have less interest in other aspects of the country engagement cycle, such as the PLR and the CLR. Country teams said that frequent government turnover incentivized officials to focus on short-term issues and “quick wins,” which made it difficult to engage them on medium-term outcomes and longer-term results measurement. Out of 25 CPFs, 18 rated political and governance risks as high in the Systematic Operations Risk-Rating Tool framework. Peru and Sri Lanka are examples of high-capacity, high-turnover governments, where a CPF strategy can quickly become obsolete. As a response, Peru’s country team has relied on signing Memorandums of Understanding with each incoming government because they serve as more realistic road maps for action than the CPF. By contrast, country teams more easily engage clients on results management issues in certain scenarios, including when the government has a strong “planning” track record (for example, in Colombia, Mexico, or Vietnam) or when the Bank Group contributes to a high-profile government priority, for example, Honduras’s Dry Corridor Alliance.
The Bank Group plays a key role in strengthening clients’ statistical systems but is little involved in developing their monitoring, evaluation, and results management systems. The Bank Group is among the world’s largest providers of development cooperation for building the capacity of governments’ national statistical offices and country statistical systems (World Bank 2017a, 21). This type of support is mainstreamed across country engagements: all sampled CPFs except one clearly articulated the Bank Group’s support, through projects and ASA, for statistical capacity. However, only in rare cases, such as Serbia, does this support extend to strengthening national evaluation or results management systems (World Bank 2018c, 2019d). In interviews, Bank Group staff said the Bank Group does not rely on country-owned results frameworks to supply outcome evidence but instead relies on World Bank–funded project-level results frameworks, which it considers to have higher capacity. IEG could not identify any CPF that clearly uses country systems for monitoring and evaluation. Interviewed clients highlighted that most development partners similarly rely on their own systems, which leads to a highly fragmented monitoring and evaluation landscape. This is a well-known and well-documented issue in the Sustainable Development Goal era: country-owned results frameworks and underlying monitoring and evaluation systems are supposed to be a core mechanism for measuring the Sustainable Development Goals, yet they are underperforming and overlooked by the donor community (OECD 2019; OECD and UNDP 2019; Vähämäki, Schmidt, and Molander 2011).
There is little coordination of results management activities among development partners. All 29 reviewed CPFs mention, often in general terms, some form of collaboration with donors to achieve CPF objectives; however, only one CPF mentions monitoring and evaluation coordination. Country teams and clients report that development partners’ results frameworks and indicators are driven by the partners’ own internal policies, practices, and approval cycles. Staff noted that the Bank Group and its development partners rarely align their operational systems even when there is an obvious upside, such as in procurement, so they saw little likelihood of aligning monitoring and evaluation frameworks. This lack of alignment makes it more difficult for governments to aggregate and assess donor contributions. Country teams see advantages when coordination takes place, often at times of crisis or in relation to budget support, but in their view the transaction costs of alignment rarely justify the payoff. In Haiti, for example, joint learning exercises and portfolio reviews were undertaken with the Inter-American Development Bank when both donors increased their programs after a catastrophic earthquake. In the Pacific Islands, major donors—including the World Bank, the European Union, the Asian Development Bank, the Australian Department of Foreign Affairs and Trade, and the New Zealand Ministry of Foreign Affairs—have jointly provided budget support tied to policy reforms, using a shared policy matrix and indicators. This approach helped focus efforts and achieve far-reaching policy changes.