Learning in World Bank Lending
Chapter 2 | Knowledge and Learning in the Linear Model
Highlights
The World Bank uses a linear knowledge model that excels at producing technical knowledge sourced from both inside and outside the World Bank to inform project designs.
However, the linear model is not ideal for fostering learning during the project implementation phase and underemphasizes country knowledge.
Some of the most valuable learning comes from early, less formal opportunities that unpack tacit knowledge, such as informal discussions with project peer reviewers and early, low-stakes (“safe space”) reviews. Managers vary widely in the extent to which they create such opportunities to share knowledge.
Case study projects relied far more on tacit knowledge from previous or ongoing projects than on formal lessons from Implementation Completion and Results Reports.
The World Bank learns little from canceled and dropped operations and from failure.
This chapter evaluates the World Bank’s successes and failures in embedding knowledge in projects. It shows that the World Bank uses a linear model to embed knowledge in its financing through key entry points in a project’s preparation, approval, implementation, and completion phases, and that the policies, procedures, and guidance for lending create defined learning moments for staff that vary with each financing instrument’s design (table 2.1). It finds that this linear approach suffers from weak lesson learning and from imbalances: it emphasizes producing reports over applying knowledge, concentrates knowledge work in a project’s design phase rather than in learning during implementation, and focuses explicit knowledge work on sectoral knowledge rather than country knowledge, which tends to remain tacit and unsystematic. The chapter examines each project phase in turn, from preparation to completion.
Table 2.1. Overview of World Bank Financing Instruments
| Aspect | IPF | PforR | DPF |
|---|---|---|---|
| Support of | Ring-fenced project activities | A wider government program | A set of policy and institutional actions |
| Disbursement based on | Borrower incurring eligible investment project expenditures | Verified achievement of program’s DLIs with no tracing of specific expenditures | Achievement of development policy actions |
| Disbursement to | Designated accounts or specific accounts for reimbursement | General budget (exceptionally to implementer account) | General budget |
| Implementation mechanism | Bank IPF rules and procedures | Borrower program systems | Country policy processes |
| Knowledge that underpins | Technical assessments of projects’ technical design, approach, and appropriateness | At design: integrated risk assessments to help projects address technical, fiduciary, environmental, and social risks. Annually during implementation: data to verify achievement of DLIs | Analysis of economy-wide or sectoral policies and institutions, and the poverty and social impacts of proposed policies |
Source: Independent Evaluation Group, based on Operations Policy and Country Services guidance.
Note: DLI = disbursement-linked indicator; DPF = development policy financing; IPF = investment project financing; PforR = Program-for-Results.
Project Preparation
All project teams in the evaluation’s sample generated and used diagnostic studies to inform project designs during project preparation. This analytic work was highly diverse and had strategic, instrumental, and conceptual applications depending on the lending instrument (table 2.2). Different financing instruments generate and use analytic work differently during project design and, in so doing, adhere to guidance for these instruments:
- The 20 IPF operations in the evaluation’s sample used studies to identify new projects or design specific pieces of infrastructure and financial arrangements. This approach adheres to the World Bank’s corporate guidance on IPF operations that recommends technical assessments on a project’s technical design, approach, and appropriateness. Teams often used needs assessments or global sector studies strategically to justify projects—at least 53 percent of IPF in the evaluation’s cases had such strategic knowledge use. For example, a needs assessment in Benin built the business case for investing in nutrition. Teams also often used project-commissioned studies for highly specific instrumental purposes—at least 80 percent of IPF in the evaluation’s sample used studies this way. For example, a study helped integrate nature-based solutions into a drainage and solid waste project in Côte d’Ivoire, and another study assessed flood risk and climate scenarios for embankment specifications in Viet Nam (table 2.2).
- All eight DPF operations in the evaluation’s sample conducted policy impact modeling, studies on the benefits of specific policy actions, and other reform analyses. Some of the DPF operations also had conceptual knowledge uses, for example, to understand policy reforms’ distributional consequences. This conforms to corporate guidance for DPF, which recommends analysis on a country’s economy-wide or sectoral policies and institutions, and the poverty and social impacts of any proposed policies.
- The six PforR operations in the sample used integrated risk assessments to bolster these projects’ technical soundness and their support for clients’ systems and capacity within a sectoral program. In accordance with corporate guidance, these integrated risk assessments focused on ensuring that projects addressed technical, fiduciary, environmental, and social risks. According to the team leaders, the PforR assessments’ focus on technical soundness and reinforcing country systems helped orient teams toward achieving outcomes.
Table 2.2. Examples of Instrumental, Strategic, and Conceptual Knowledge by Lending Instrument
| Financing Instrument | Instrumental Knowledge | Strategic Knowledge | Conceptual Knowledge |
|---|---|---|---|
| IPF | Studies of future flood risk and how to build embankments to withstand long-term climatic changes for infrastructure resilience (Viet Nam). | Studies showing the benefits of immunization and vaccine uptake promotion (Pakistan’s National Immunization Support Project). | A study of global brownfield remediation and redevelopment best practices helped conceptualize risk-based approaches and legal liability issues (China). |
| DPF | Monitoring of policy reform implementation (all sample DPF). | Studies demonstrated the benefits of green growth, covering fisheries, energy, pollution, and climate change (Morocco Green Growth). | Studies on air pollution, urban infrastructure, social housing, carbon trading, and urban forest conservation helped identify and prioritize environmental and urban resilience issues (Mexico). |
| PforR | Dialogues with clients on the annual verification of the DLIs led to inclusion of additional institutions (Kenya) and other outcome-oriented changes (all PforR projects in the sample). | Cities’ urban mobility plans, done as part of the DLIs, led to the development of a national urban transport strategy (Morocco Urban Transport Program). | Studies helped conceptualize and incorporate environmental objectives in the country’s Green Agricultural and Rural Revitalization (China). |
Source: Independent Evaluation Group’s case studies conducted for this evaluation.
Note: Refer to chapter 1 for the definitions of the terms. DLI = disbursement-linked indicator; DPF = development policy financing; IPF = investment project financing; PforR = Program-for-Results.
Project teams use a broad spectrum of knowledge, not just from World Bank sources, to inform project preparation. The sample projects’ technical assessments were often conducted collaboratively with clients, at times incorporating the clients’ own research. For example, in Viet Nam, the World Bank’s technical expertise filled gaps in government agencies’ knowledge capacity. Conversely, in China, where government agencies possess more expertise, the World Bank used analytics to inform project designs and to integrate global knowledge and best practices into country strategies. IEG used artificial intelligence and text mining to examine the sources of the explicit knowledge cited in Project Appraisal Documents (PADs; appendix D). The analysis found that one-third of these citations were authored by the World Bank, another third by individually listed authors (internal and external to the World Bank), 13 percent by client government sources, and 9 percent by United Nations agencies, with the remainder coming from other multilateral institutions and other sources (table 2.3). The relatively frequent use of client and partner sources is encouraging. Overall, IEG’s analysis found that 41 percent (1,271) of the 3,102 project document citations were published by the World Bank or included current World Bank staff as author or coauthor. Many cited references also stemmed from World Bank knowledge collaborations: 25 of the 1,271 World Bank references were coauthored with client governments, another 20 with United Nations agencies, and 237 by a mix of individual authors from the World Bank and other institutions.
Table 2.3. Distribution of All Project and Program References by Authorship
| Author Type | References (no.) |
|---|---|
| World Bank Group | 1,033 |
| Individual authors | 994 |
| Client government | 396 |
| United Nations agency | 284 |
| Other multilateral institutions | 87 |
| Private sector | 79 |
| International Monetary Fund | 79 |
| Nongovernmental organization | 77 |
| Bilateral donor | 52 |
| Multilateral development bank | 46 |
| University | 22 |
| Research organization | 20 |
| News organization | 6 |
| Organization type could not be assigned | 76 |
| Total | 3,102 |
Source: Independent Evaluation Group.
Note: The number of projects in the sample is 1,020.
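To make the arithmetic behind these shares concrete, the sketch below tallies authorship shares from a classified citation list like the one summarized in table 2.3. It is a minimal illustration, not IEG’s actual text-mining pipeline; the record layout, the example entries, and the World Bank coauthorship flag are assumptions for demonstration only.

```python
# Minimal sketch (not IEG's pipeline): tallying citation shares by author type.
# Assumes each PAD reference has already been classified by author type and
# flagged if the World Bank or current staff authored or coauthored it.
from collections import Counter

# Hypothetical records: (author_type, world_bank_authored_or_coauthored)
citations = [
    ("World Bank Group", True),
    ("Individual authors", True),   # e.g., a current staff member as coauthor
    ("Client government", False),
    ("United Nations agency", False),
    # ... one entry per PAD reference (3,102 in the evaluation sample)
]

counts = Counter(author_type for author_type, _ in citations)
total = len(citations)

for author_type, n in counts.most_common():
    print(f"{author_type}: {n} ({n / total:.0%})")

# Share of references published by the World Bank or with current staff as
# (co)author -- reported as 41 percent (1,271 of 3,102) in the text.
wb_share = sum(1 for _, wb in citations if wb) / total
print(f"World Bank-authored or coauthored: {wb_share:.0%}")
```

Run over the full set of 3,102 classified references, such a tally would yield the authorship shares reported above.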
PADs cite a variety of World Bank documents, but few cite core advisory services and analytics (ASA). World Bank management has designated specific categories of country-focused reports as core and extended core ASA, a defined set of World Bank reports meant to inform country programs and help clients advance development objectives, and the Knowledge Compact prioritizes producing core ASA for all client countries. Project documents refer to Enterprise Surveys, global flagship reports, partnership strategies, sector studies, economic updates, and project-related technical assessments. However, as table 2.4 shows, project documents cite core ASA only sparingly: just 2 percent of all references and 5 percent of Bank Group references are to core and extended core ASA. The project case studies show that DPF used core ASA more than IPF, which relied more often on project-commissioned technical assessments. This is unsurprising: DPF teams are expected to use and be familiar with core ASA. Core ASA inform decision-making and policy making at strategic levels for the World Bank, its clients, and other partners and stakeholders in ways that transcend specific operations, and their broad scope and condensed format may not match projects’ specific needs. Core ASA may therefore still hold indirect value for operations even if project teams cite them sparingly, perhaps because they are less familiar with them.
Regarding types of knowledge, the project teams in the evaluation’s sample relied mostly on tacit knowledge to understand country dynamics. In stark contrast to the explicit sectoral knowledge that informed project designs, teams relied on talking to the right individuals to understand a country’s context, political economy, implementing agency capacity, and so on. The cases found little documentation or dissemination of this country knowledge except in the Systematic Country Diagnostics, which are no longer mandatory. A handful of team leaders acknowledged receiving tacit country knowledge from Country Management Unit staff. National staff often maintained strong connections with key national figures to stay abreast of country knowledge, which was useful for teams. A few national staff even alternated between positions as World Bank staff and counterpart staff; for example, one of the World Bank’s national staff in Sri Lanka became the project director for the Sri Lanka biodiversity project. Projects also relied on partners for country knowledge; for example, the Moldova tax project team relied on a resident European Union official as a peer reviewer and informal adviser, even after the official had left the country.
Table 2.4. Core and Extended Core Advisory Services and Analytics Referenced in Project Appraisal Documents
| ASA Type | References (no.) |
|---|---|
| Core | 55 |
| Country Climate and Development Report | 6 |
| Country Economic Memorandum | 8 |
| Country Private Sector Diagnostic | 9 |
| Poverty Assessment | 11 |
| Public Expenditure Review | 21 |
| Extended core | 11 |
| Agriculture Sector Review | 1 |
| Financial Sector Assessment Program | 2 |
| Infrastructure Sector Assessment Program | 1 |
| Public Expenditure and Financial Accountability | 3 |
| Risk and Resilience Assessment | 4 |
| Total | 66 |
Source: Independent Evaluation Group.
Note: The number of projects in the sample is 1,020.
Teams composed of global and in-country staff helped make projects technically sound and anchored in country contexts. Project teams often had members who specialized in different technical areas. For example, the Jordan PforR had one team member for each of its three disbursement-linked indicators (DLIs). Many project teams had task team leaders (TTLs) or co-TTLs stationed in country offices, which made them well placed to engage with counterparts and grasp the nuances of the country context. For example, in Brazil, a local TTL for the Amazonas Fiscal and Environmental Sustainability Programmatic DPF, the World Bank’s first engagement with the state, facilitated the integration of state-specific insights into the project design. In the Philippines, having a financial sector DPF co-led by country office staff helped underpin policy reforms with a deep understanding of the country’s context, which kept the project relevant and prevented the reforms from being reversed after the government changed. Similarly, in Pakistan, an immunization project’s success was partly due to the leadership of national staff who were well versed in the political dynamics surrounding the country’s ongoing devolution efforts. In some instances, travel restrictions or the limited physical presence of in-country staff in smaller countries impeded teams’ ability to forge client relationships and devise optimal implementation strategies. For example, the World Bank’s ability to coordinate multiple projects in Eswatini was constrained by having only one staff member in the country. The Marshall Islands encountered similar obstacles because of World Bank staff’s limited on-the-ground presence.
Project Appraisal and Approval
The most valuable learning opportunities during the project appraisal and approval process come at its least formal moments. IEG analyzed project approval meeting minutes and comments, observed some concept and decision meetings, interviewed TTLs and peer reviewers, and examined the selection of peer reviewers quantitatively. The structured nature and hierarchical dynamics of internal Concept Note and decision meetings limited the free flow of ideas and lessons learned, according to IEG’s direct observations and case study interviews. Decision meetings, the final step in the approval process, determine whether a project is ready for appraisal and negotiations but are neither designed to provide learning opportunities nor do so in practice. That said, the decision meeting’s agenda and the specific guidance teams seek can enhance these meetings’ usefulness. By contrast, early-stage peer reviews and quality enhancement reviews, the least formal parts of the design and approval processes, often contribute the most valuable knowledge and learning to teams. This section discusses these dynamics further.
The World Bank has a structured peer review process. Appointed peer reviewers of project Concept Notes and appraisal documents impart instrumental and conceptual knowledge during the project approval process. They endorse or critique project objectives, results frameworks, and technical designs, contributing to coherent project designs. IEG interviews show that managers rely on peer reviewers to lend credibility to the quality of projects’ technical designs. Case studies show that the World Bank’s introduction of a more streamlined peer review process in 2017 improved the quality of peer review feedback by making it more targeted and succinct.
When peer reviewers provide timely advice, they are more likely to add value; however, many comment only shortly before meetings. The case studies’ assessment of peer review advice stored in the Operations Workspace shows that timely reviews, especially those provided well ahead of the concept review or decision review meetings, contributed to improved project designs. For example, peer review comments at the concept review meetings led the Côte d’Ivoire Urban Resilience and Solid Waste Management Project to remove a risky landfill component and the Türkiye Climate and Disaster Resilient Cities Project to simplify its design by concentrating on earthquake risk instead of multiple hazards. By contrast, in the Panama water and sewerage project, peer reviewers advised the project team to promote a tariff scheme instead of relying on expected government subsidies for the system’s financial viability; the team ignored this advice and the subsidies never materialized, eventually contributing to the project’s cancellation. IEG’s analysis of the timing of peer review advice at the concept review stage shows that 25 percent of comments arrived the same day as or one day before the concept review meeting, and another 16 percent only two days before (figure 2.1). When teams receive review comments this late, it can be hard for them to fully absorb and act on the advice before the meeting, according to the case studies; in interviews, TTLs complained about receiving reviews late. Sectors also vary in the timeliness of peer review advice: the median lead time for projects mapped to the Environment, Natural Resources, and Blue Economy Global Practice (GP) was seven and a half days, whereas for projects mapped to the Social Protection and Jobs GP it was only two days, with other GPs falling between those extremes.
Early and informal knowledge inputs maximize project teams’ learning. Informal discussions held well before projects are ready for approval are highly valuable, and many TTLs appreciated in-depth, off-the-record conversations with project peer reviewers and others on project design. For example, the TTL for the West Africa Unique Identification for Regional Integration and Inclusion project praised the benefits of informally engaging with peers for technical advice. One reason these exchanges are so valuable is that peer reviewers often give more candid feedback in informal settings. In addition, the evaluation’s case studies found that less formal meetings led by practice managers, such as quality enhancement reviews, are more conducive to open learning because they focus on technical details that may not be addressed in more formal settings.
Figure 2.1. Timing of Peer Reviewers’ Advice Before Project Concept Review Meetings

Source: Independent Evaluation Group.
Note: The number of projects in the sample is 725. 20+ = more than 20.
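As a rough illustration of how such lead times can be measured, the sketch below computes the gap between each peer review comment and its concept review meeting and summarizes the distribution. It is only a sketch: the project IDs, dates, and record layout are hypothetical and do not reflect the Operations Workspace schema or IEG’s actual analysis, which is described in the text and figure 2.1.

```python
# Minimal sketch (assumed data layout): how many days before the concept
# review meeting did each peer review comment arrive?
from datetime import date
from statistics import median

# Hypothetical records: (project_id, comment_date, concept_review_meeting_date)
reviews = [
    ("P001", date(2021, 3, 1), date(2021, 3, 2)),    # one day ahead
    ("P002", date(2021, 5, 10), date(2021, 5, 10)),  # same day
    ("P003", date(2021, 6, 1), date(2021, 6, 9)),    # eight days ahead
]

lead_days = [(meeting - comment).days for _, comment, meeting in reviews]

share_0_1 = sum(1 for d in lead_days if d <= 1) / len(lead_days)
share_2 = sum(1 for d in lead_days if d == 2) / len(lead_days)

print(f"Median lead time: {median(lead_days)} days")
print(f"Same day or one day before: {share_0_1:.0%}")
print(f"Exactly two days before: {share_2:.0%}")
```

Grouping the same lead times by lead GP and taking the median per group would produce the sectoral comparison reported above.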
There is much variation in the extent to which managers have embraced the value of early, low-stakes processes to unpack tacit knowledge. The World Bank has a standard early review meeting, quality enhancement reviews, and nonstandard informal meetings variously referred to as preproject Concept Note meetings, clinics, and safe space meetings. These meetings, held at managers’ discretion, are all opportunities for a small group of advisers to provide candid feedback on projects under preparation. Managers use quality enhancement reviews for 91 percent of PforR, 72 percent of IPF, and 1 percent of DPF operations under preparation (figure 2.2). The early, low-stakes meetings represent managers’ intentional efforts to unpack tacit knowledge. For example, the Social, Urban, Rural, and Resilience GP began organizing such safe spaces. The GP’s director at the time explained the value of these safe space sessions: “These brainstorming sessions offered a forum to explore new operational frontiers by leveraging global tacit knowledge from experts working in different regions on similar issues. As the reviews were designed to influence the design of new operations under preparation, the connection between knowledge and solutions was strong” (Ijjasz-Vasquez et al. 2024). The Governance GP has also created informal spaces, referred to as clinics, to brainstorm technical project designs before approval.
Figure 2.2. Operations with a Quality Enhancement Review by Lending Instrument Type and Lead Global Practice

Source: Independent Evaluation Group.
Note: The sample comprises 1,942 projects approved since FY19. DPF = development policy financing; IPF = investment project financing; PforR = Program-for-Results.
The World Bank has curated a peer reviewer database, but managers do not always use it when selecting reviewers. Setting up the database was part of the World Bank’s action plan to implement its Strategic Framework for Knowledge, and it includes World Bank staff vetted as qualified to serve as peer reviewers. According to IEG’s analysis of project peer reviews, 64 percent of lending project peer reviews in FY18–23 were conducted by reviewers listed in the database. GPs use the database unevenly—the share of reviewers drawn from the database was higher for the GPs in People (previously Human Development) and Prosperity (previously Equitable Growth, Finance, and Institutions) than for Infrastructure and Planet (previously Sustainable Development; figure 2.3). Some interviewed managers readily admitted to never using the database.
Figure 2.3. Share of Peer Review Advice from the Peer Reviewer Database

Source: Independent Evaluation Group.
Note: The number of projects in the sample is 1,011. PRDB = peer reviewer database.
Project Implementation
During project implementation, staff use operational and other knowledge to improve projects via restructurings. The case studies revealed examples of teams using Mid-Term Reviews for IPF and PforR to adjust or restructure projects. Several IPF teams conducted assessments that contributed to improving project results frameworks, scaling up operations, or reallocating funds among project components. Past IEG studies found that when project teams engage effectively in adaptive learning, they can overcome implementation challenges—for example, by identifying risks early, eliciting support from managers, and acting quickly to restructure projects or mitigate risks in other ways—and linked such adaptive learning to improved project performance (World Bank 2020b, 2023). However, IEG studies and research also concluded that the World Bank’s incentives, results measurement systems, and risk-averse corporate culture do not support adaptive management well. Incentives for project staff sometimes focus on checking the box—that is, meeting targets and feeding the demands for corporate monitoring data—more than on promoting learning (Honig 2018, 2020; World Bank 2016, 2020b, 2020d).
Routine project documents do not report on staff’s knowledge use or learning. Adhering to World Bank guidelines, all PforR and IPF operations in the sample conducted twice-yearly in-country visits or missions, documented in external management letters, aide-mémoire, and Implementation Status and Results Reports. DPF operations function differently because all prior actions must be completed before these operations are approved, but they too have missions, Implementation Status and Results Reports, and aide-mémoire. Interviewed TTLs for IPF and PforR reported that they gained valuable tacit knowledge from missions, often on practical operational matters such as procurement, financial management, and implementation specifics, including the strengths and weaknesses of client-appointed project managers. The evaluation’s case studies had many examples of teams collaborating with clients and using knowledge to solve implementation challenges and build capacity, which was often reflected in positive shifts in supervision ratings. However, Implementation Status and Results Reports and aide-mémoire did not document this knowledge and learning; instead, they focused on the project’s status and compliance with World Bank policies and procedures. In this, teams adhered to these documents’ standardized reporting templates, which have no fields to capture learning.
Project teams are often too busy or underresourced to carry out knowledge work and promote learning during implementation. The project preparation phase has formal requirements to produce specific knowledge inputs and technical assessments as shown in table 2.1, but the implementation phase has few such requirements. In addition, teams rarely have the budgets to produce studies during implementation. As a result, the case studies show that project teams develop far fewer knowledge inputs and obtain less learning during implementation than during preparation. Among lending instruments, IPF tended to carry out the least knowledge work. Twenty-seven percent of the TTLs managing IPF projects, whom IEG interviewed as part of the case studies, stated that there were barriers—such as work pressures, compliance requirements, immediate problem-solving needs, and budget constraints—to conducting or using analytic work during the implementation phase. These TTLs described project implementation challenges as barriers to generating and using knowledge, but, ironically, the urgency to address implementation challenges, and ultimately pursue outcomes, is precisely why relevant knowledge is so valuable. That said, some projects from the sample collected data or conducted informal studies, often financed by trust funds, to help them address operational challenges or introduce innovations in pursuit of development outcomes. For example, the Morocco Urban Transport Program PforR commissioned a gender survey to improve women’s access to public transport. Pakistan’s National Immunization Support Project developed advocacy plans and innovative mechanisms to improve vaccination coverage. The Sri Lanka Ecosystem Conservation and Management Project produced and shared publications on landscape approaches and managing human–elephant conflicts. However, the World Bank lacks a systematic way to store, share, or reuse studies and innovations produced by operations, and project teams lack time, budgets, and incentives to do so.
International staff rotations have pros and cons. The World Bank Human Resources Career Development and Mobility Framework mandates that most international staff rotate positions every three to four years, a deliberate strategy to transfer knowledge across Regions, among other goals. Indeed, the case studies showed examples of staff rotations facilitating knowledge sharing. For example, some case study TTLs had recently rotated out of global units known as centers of expertise, such as the Global Facility for Disaster Risk Reduction and Recovery, where they had gained knowledge that they then applied to projects in their next position. However, this strategy also has a downside, as IEG’s evaluation of the World Bank’s global footprint found (World Bank 2022a). TTLs accumulate a wealth of tacit sector and country knowledge and establish trusting client relationships while in a given position or geographic location; when they rotate out, this tends to create knowledge gaps and disrupt client relationships. Econometric studies have correlated TTL turnover with weaker project performance as measured by IEG outcome ratings (Ashton et al. 2023).
TTL turnover rates are high and increasing, which can exacerbate losses of tacit knowledge. IEG developed an indicator that measures annual TTL turnover per project using panel data on project team composition for all projects within the evaluation period (appendix F). The analysis shows that TTL turnover averaged 0.85 rotations per year over FY14–23 and tripled from 0.4 per year in FY14 to 1.2 per year in FY23 (figure 2.4). Because the average World Bank operation had 2.2 TTLs in FY23, this means that 1.2 of an average operation’s co-TTLs rotate out each year while 1 remains. This is an alarming statistic. The case studies found several instances of discontinuity among successive TTLs. In addition, TTLs explained that handover notes were of uneven quality and that they found tacit exchanges more helpful and better suited to conveying sensitive information about clients.
An increase in overlapping co-TTL arrangements may have mitigated the associated disruptions. Co-TTL arrangements can offer valuable mentoring opportunities between senior and junior co-TTLs, as observed in several case studies, and overlapping transition periods between outgoing and incoming TTLs can reduce knowledge gaps. IEG adjusted the TTL turnover indicator to account for the presence of overlapping co-TTLs. The adjusted indicator shows that out-rotations not mitigated by a co-TTL overlap were much rarer, at about 0.1 per project per year, and that this rate has remained stable since FY18. In effect, the World Bank’s growing use of co-TTL arrangements has kept pace with the increase in staff rotations. Among instruments, DPF operations had marginally higher adjusted TTL turnover rates than IPF and PforR projects. Unadjusted TTL turnover rates were notably higher for countries affected by fragility, conflict, and violence, likely because of shorter rotation cycles, but the adjusted rates showed minimal difference (appendix F). These findings suggest that the World Bank is actively managing the trade-off between maintaining knowledge continuity within projects and rotating staff for knowledge transfer and other reasons. That said, relationships of trust between staff and clients are critical to achieving results, and the high and rising TTL turnover rates should be cause for concern.
Figure 2.4. Task Team Leader Turnover Rates

Source: Independent Evaluation Group.
Note: The number of projects in the sample is 2,785. TTL = task team leader.
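The sketch below illustrates, under simplified assumptions, how turnover indicators of this kind can be computed from panel data on team composition: a raw count of TTL out-rotations per project-year and an adjusted count that ignores out-rotations mitigated by an overlapping co-TTL. It is not the appendix F methodology; the panel structure, staff identifiers, and mitigation rule are assumptions for illustration.

```python
# Minimal sketch (illustrative only): raw and co-TTL-adjusted turnover
# per project-year from a panel of team composition.

# Hypothetical panel: project -> fiscal year -> set of TTL/co-TTL staff IDs
panel = {
    "P001": {2022: {"A", "B"}, 2023: {"B", "C"}},  # A rotated out, B stayed on
    "P002": {2022: {"D"}, 2023: {"E"}},            # D rotated out, no overlap
}

raw_rotations = 0
unmitigated_rotations = 0
project_years = 0

for project, years in panel.items():
    ordered = sorted(years)
    for prev_year, year in zip(ordered, ordered[1:]):
        prev, curr = years[prev_year], years[year]
        departed = prev - curr
        raw_rotations += len(departed)
        # Treat an out-rotation as mitigated if at least one co-TTL stayed on.
        if departed and not (prev & curr):
            unmitigated_rotations += len(departed)
        project_years += 1

print(f"Raw TTL turnover per project-year: {raw_rotations / project_years:.2f}")
print(f"Adjusted (unmitigated) turnover:   {unmitigated_rotations / project_years:.2f}")
```

Averaged over all projects and years, the raw measure corresponds to the 0.85 rotations per year cited above, and the adjusted measure to the roughly 0.1 unmitigated rotations per year.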
PforR operations used learning to pursue project results. In all six case study PforR projects, the need to verify DLIs every year meant that teams used integrated reviews and annual verifications to maintain a strong focus on learning during implementation. Learning from these projects strengthened country fiduciary systems, built client capacity, refocused the government’s sector strategy, or enhanced the project’s focus on results and sustainability. For example, in Tanzania, workshops on the DLI verification findings that involved the World Bank, the government, and an independent verification agency generated feedback and reflection on the project’s results, according to aide-mémoire and interviews. In other PforR operations, annual verifications informed course corrections. For example, in a PforR in Kenya, the verification process brought in additional institutions to enhance environmental and social management midway through implementation. Moreover, by disbursing funding against outcomes rather than project inputs, PforR operations give implementing agencies more room to reflect, learn, adapt, and innovate. In the West Africa Unique Identification for Regional Integration and Inclusion PforR project, ongoing learning in the five participating countries promoted peer-to-peer exchange across countries and led to adjustments in the program’s direction, policies, and protocols.
Some of the sample DPF operations supported policy monitoring. Much explicit knowledge goes into preparing DPF operations, as already mentioned. During implementation, half of the evaluation sample’s eight DPF operations used monitoring and evaluation systems to gain knowledge on the programs’ implementation. In some cases, the focus was on monitoring whether government agencies complied with the agreed policy reforms; in others, it extended to assessing the policy reforms’ impacts. For example, in the Mexico Environmental Sustainability and Urban Resilience DPF, the World Bank used trust funds to design monitoring frameworks, generate estimates of policies’ expected distributional effects, collect new data, and assess policy impacts, thereby bringing positive policy impacts to light. The learning continued after the operation closed and eventually informed Mexico’s Country Climate and Development Report and other products.
Multiphase Programmatic Approach (MPA) programs’ learning agendas hold promise. MPAs are not a lending instrument but an approach to sequencing or combining projects over multiple years. MPA programs foster learning during implementation by mandating learning agendas that cover the lifespan of phased, long-term, or multicountry programs. When well designed, these learning agendas identify knowledge gaps, monitor progress, and use adaptive learning to make program adjustments (table 2.5). IEG assessed 34 MPA projects’ learning agendas against seven criteria of a well-designed learning agenda.1 Four of the 34 learning agendas met just one of these criteria and another 4 met two; 23 met at least three criteria, 13 at least four, 5 at least five, and 2 met six. None met all seven criteria. Only 2 of the learning agendas identified learning outcomes, the least commonly met criterion (figure 2.5). The review of these 34 learning agendas concluded that MPAs offer a structured approach to generating and using knowledge; however, not all MPA programs have comprehensive learning agendas in place at project approval.
Table 2.5. Purposes of Knowledge in Multiphase Programmatic Approach Programs
| MPA Program Aspect | Instrumental Knowledge | Strategic Knowledge | Conceptual Knowledge |
|---|---|---|---|
| MPA overall | Use cross-fertilization of lessons to problem-solve, develop standardized documentation, and increase harmonization of regulations (regional MPAs). | Provide continuous support for institutional development, capacity building of implementing agencies, and stakeholder coalitions (India River Basin Development). | An ethnographic study of beneficiaries’ needs and concerns was used to conceptualize the design (West Africa Unique Identification for Regional Integration and Inclusion). |
| MPA learning agenda | Continually improve implementation quality through periodic M&E assessments that feed back into design of activities (Kenya Digital Economy Acceleration). | Prepare a feasibility study for a statewide flash flood forecasting system during phase 1, which is then piloted in phase 2, and, based on insights, expanded sitewide (India River Basin Development). | Use pilot programs in private sector development to create approaches that are suitable, fair, and effective in supporting women entrepreneurs (Fiji Tourism Development). |
Source: Independent Evaluation Group’s review of 34 MPA learning agendas.
Note: M&E = monitoring and evaluation; MPA = Multiphase Programmatic Approach.
Early indications suggest that learning during MPA program implementation is below potential. MPAs do not monitor the learning agendas—MPA operational systems and guidance do not require or support such monitoring—and the majority of MPA programs are still early in implementation (see also IEG’s evaluation of the MPAs; World Bank 2024). As a result, it is premature to reach firm conclusions on how well MPA programs implement their learning agendas, support learning with clients, and use this learning to make changes to the program. IEG reviewed three MPAs that were at an advanced stage of implementation: the West Africa Unique Identification for Regional Integration and Inclusion, Western Balkans Global COVID-19, and Madagascar Health projects. Only one of these, the West Africa Unique Identification for Regional Integration and Inclusion project, generated robust learning across countries and project phases. It did so by creating frequent opportunities for tacit learning across the program’s countries, for both clients and World Bank staff, on topics such as how to conduct know-your-customer compliance during pandemic lockdowns. In the Madagascar Health MPA, the implementing agency’s capacity shortcomings limited its ability to apply the knowledge in the reports produced by the learning agenda, and COVID-19 travel restrictions hindered the World Bank team’s ability to support learning. As a result, the project team added capacity building to the learning agenda for the project’s second phase. In the Western Balkans regional MPA, the implementing agencies’ insufficient readiness to execute the project, coupled with World Bank management’s urgency to advance swiftly to the program’s second phase, did not allow enough time to generate and incorporate lessons from the first phase.
Figure 2.5. Reviewed Multiphase Programmatic Approach Learning Agendas That Met Select Quality Criteria

Source: Independent Evaluation Group.
Note: N = 34. See appendix C. MPA = Multiphase Programmatic Approach.
Completion and Lesson Learning
The case study projects seldom used formal lessons from Implementation Completion and Results Reports (ICRs). The World Bank has a long-established self-evaluation system: after a project closes, the project team completes an ICR, which IEG validates through an ICR Review, and compliance with the process’s requirements is high. However, the case studies’ document reviews and TTL interviews found that new projects rarely used lessons from previous projects’ ICRs. The case studies showed that many staff perceive the formal self-evaluation of projects as an administrative task rather than a valuable knowledge input. This observation is consistent with the findings of IEG’s evaluation of the Bank Group’s self-evaluation systems (World Bank 2016). ICRs tend to focus on ratings, and the lessons they capture vary in quality, validity, and relevance. Past World Bank initiatives to create a database of project “delivery challenges,” termed DeCODE (Delivery Challenges in Operations for Development Effectiveness), and to provide TTLs with an automated, curated “knowledge package” of lessons and other information were discontinued, but the ongoing reform of the ICR provides an opportunity to improve the quality and use of lessons. Extracting, synthesizing, and applying lessons requires judgment to fit the context and is hard to automate. Many evaluations have documented weaknesses in how the World Bank evaluates and learns from its projects (Ravallion 2016; World Bank 2023). Weak formal lesson learning limits outcome orientation because staff rely instead on process information or informal evidence of lower quality.
In contrast, the World Bank often uses tacit knowledge from previous or ongoing projects, particularly sector- and country-specific knowledge. The case studies found that World Bank teams far more often used tacit lessons than they did explicit lessons from ICRs, Completion and Learning Reviews, or other sources. Tacit lesson learning from prior projects and peers’ experiences contributed instrumental, conceptual, and strategic knowledge in the early stages of project development. For example, project teams applied tacit lessons from a previous IPF operation to different financial instruments, leading the World Bank to adopt a hybrid PforR and IPF model in Tanzania’s water sector. This model allowed for a more sustainable and results-focused sector strategy. Similarly, in China’s agriculture sector, the World Bank’s strategic approach was shaped by tacit past experiences, leading to a PforR that aligned with the government’s Green Agricultural and Rural Revitalization program. Cross-country lesson learning is less common, but multicountry MPAs stood out as an exception by facilitating peer-to-peer learning across countries.
The World Bank has no established system or safe space for capturing lessons from canceled and dropped operations, which hinders learning from these experiences. Political and institutional sensitivities lead to some approved projects being fully or partially canceled, or dropped before approval, sometimes after years of preparation effort. Understanding such political and institutional sensitivities is important tacit knowledge. For example, in the Iraq Emergency Social Stabilization and Resilience Project, management urged the project team to include a component that the team, with its clearer understanding of on-the-ground realities, considered ill-suited to the context; the component was ultimately deemed unsuitable and had to be dropped during implementation. However, the World Bank has no space for sharing such experiences except through occasional IEG evaluations and cursory notes in project files. IEG’s Nepal Country Program Evaluation similarly found that the World Bank’s country team had no mechanism to learn from the political economy obstacles that led the World Bank to drop or cancel several projects and components. Such a mechanism could have helped identify, and mitigate, the reasons for the repeated cancellations and dropped operations in the Nepal program. This is part of a larger organizational culture in the World Bank and other multilateral development banks that sometimes focuses on compliance, disbursements, and meeting targets and that tends to project progress and success. Such a corporate culture can induce risk aversion and reduce openness about problems, mistakes, failures, and shortcomings (EBRD 2021; World Bank 2020b). Yet mistakes and failures are important for learning and innovation, perhaps more so than successes. Some foundations and civil society organizations actively promote learning from mistakes. BRAC publishes an annual Failure Report with examples of programs that did not scale, did not meet beneficiaries’ needs, or failed to make a dent in big problems (BRAC 2024). The World Bank has tried to promote learning from failure but with limited success.
1. IEG identified the following seven criteria for a comprehensive learning agenda: (i) setting explicit goals and outcomes for learning, (ii) identifying knowledge gaps to understand what is missing, (iii) characterizing data sources to use in the learning process or to assess learning progress, (iv) listing mechanisms to capture lessons and use these to improve and inform subsequent phases, (v) providing capacity building for clients to participate in learning, (vi) identifying types of knowledge generated and needed for progressing through the different phases, and (vii) specifying the clients and partners who will be part of feedback loops or support the learning.