
Private Sector Advisory Projects

Chapter 3 | Assessing Project Effectiveness: Challenges and Perspectives

Assessing Project Effectiveness: Achievement of Outcomes and Impacts

The guidelines for evaluating IFC advisory services projects require the project’s development objectives to focus on outcomes and impacts, rather than outputs. Outputs are the activities or tasks completed by IFC. Outcomes, by contrast, reflect changes in clients’ behaviors, knowledge, and practices driven by or connected to those outputs, whereas impacts capture the broader effects of those changes on clients, stakeholders (for example, suppliers, borrowers, and the public), and the market (refer to table 3.1 for definitions). For example, outcomes may include a company implementing energy efficiency measures based on IFC’s recommendations, a government enacting and implementing an IFC-supported regulation, or a financial institution launching gender-focused financial products developed with IFC’s assistance. Corresponding impacts may include greater access to essential services among beneficiaries, cost savings, or increased productivity for businesses adopting IFC-driven changes. IEG evaluators focus significantly on these two dimensions because they capture projects’ core development results.

Table 3.1. Outputs, Outcomes, and Impacts in the Advisory Services Self-Evaluation System

Outputs: The activities or tasks completed by IFC.

Outcomes: Changes in clients’ behaviors, knowledge, and practices resulting from the project’s outputs.

Impacts: The broader effects of those changes on clients, stakeholders (for example, suppliers, borrowers, and the public), and the market.

Source: IFC 2020.

Therefore, when assessing outcomes of advisory services projects, IEG considers reported changes in the behavior, practice, and organization of IFC’s clients (that is, the recipients of the advice), looking for evidence that supports each reported change. IEG evaluators rely on information captured in various project reports as their primary source of evidence. These include clients’ reports explaining the changes they adopted because of the advice provided and IFC monitoring documents, which include key performance indicators and qualitative information. Sometimes there is insufficient (or no) evidence to support a particular outcome change the project team has reported. In such cases, an evaluator requests more specific information from the project team—who, in turn, might request it from the client—or attempts to gather the needed information from secondary sources, which may include publicly available information such as annual reports, websites, news reports, official publications, or announcements by the government in a client country.

For example, for a project that sought to improve a commercial bank’s management of credit risk, the project team provided, as evidence of behavioral and practice changes, several changes that the bank made to its credit risk processes, without identifying changes the bank made specifically because of IFC’s recommendations. Solid evidence goes beyond general claims (for example, “recommendations were adopted”) and provides specific, observable changes that can be directly attributed to project interventions. In this case, relevant evidence would include any bank communications or documents that clearly reflect the adoption of IFC’s recommendations, such as newly adopted credit risk policies or manuals, revisions to other internal policies, documentation of staff training and corresponding assessments, and establishment or restructuring of a risk management unit.

When assessing a project’s impact, IEG looks at the effects of the changes (that is, the project’s outcomes) on the client’s operational or financial performance and the effects on beneficiaries beyond the client (refer to box 3.1 for types of impacts). To continue with the example just introduced, the improvement in the bank’s risk management was expected to improve the quality of the bank’s loan portfolio. Loan portfolio quality is usually tracked in IFC’s monitoring reports and can also be assessed based on information available in the bank’s audited financial statements. In this case, the evaluators could confirm the improvement in loan portfolio quality through the bank’s financial statements.

However, the assessment of impact may become complicated if a project’s effects have not materialized by project completion or if there is limited evidence of effects among beneficiaries or in markets. In some cases, the impact on direct beneficiaries can still be established. For example, a project supporting the upgrade of a power distribution system can claim it has had an impact (in the form of citizens benefiting from improved access and use of power services) as soon as the physical work is completed given that, shortly afterward, a reduction in power outages, a reduced cost of electricity, and similar indicators can demonstrate the improvement in power provision. However, in most cases, evidence of impact beyond clients is challenging to find at project completion and in the period shortly thereafter.

Therefore, assessing outcomes and impacts usually requires working with imperfect and incomplete evidence. In the next section, we outline the main obstacles and provide insights into types of evidence on outcomes and levels of impact.

Box 3.1. Types of Impact by Stakeholder

Impact at the client level. There is evidence that changes in client behavior (products, services, and practices) contribute to clients’ commercial or financial sustainability or operational improvements. The Independent Evaluation Group (IEG) generally has a sizable amount of data on client-level impacts, either because the International Finance Corporation collects postcompletion data or because the advisory client is also an investment client. In such cases, time series data on the client’s financial performance, operations, and strategic direction are readily available. The client’s public annual and financial reports are another good source of information on impacts. Finally, the sustainability of behavioral change is also considered under impact. Sustainability is understood as the client continuing the new behaviors, products, services, or practices on its own, without the support of the project. This demonstrates the business case for the client: the new behaviors increased revenues or reduced costs through, for example, access to new markets, an improved supply chain, cleaner production updates, improved operations, increased efficiency, or access to new financiers.

Impacts beyond the client. These are impacts on direct beneficiaries, for example, borrowers; distributors; farmers; micro, small, and medium enterprises; or the general population (in the case of public services) via increased productivity or better quality of production, increased access to and use of basic services or better-quality products or services, or access to new markets or clients. IEG often lacks data at this level because they require surveys of or interviews with ultimate beneficiaries or similar activities. Only a few advisory services projects collect data on impacts beyond the direct client, who is often the intermediary (for example, financial institution, manufacturing company, or government unit). IEG reviews generally involve no primary data collection because of their office- and desk-based nature.

Market-level and demonstration effects. These are impacts, beyond the client and direct beneficiaries, on the overall private or public sector in the country where the project was conducted, such as a new market niche being opened, other companies following best practices, and wider adoption of higher standards (for example, other companies making the same changes that the client made because of the project intervention). IEG tries to gather market-level information through the project team, online research, and externally available data and sources. However, IEG sometimes cannot obtain sufficient data at this level to effectively evaluate a project’s effects.

Source: Independent Evaluation Group.

Challenges Related to Relevance, Reliability, and Availability of Evidence Regarding Performance

To assess outcomes and impacts, IEG evaluators must balance qualitative and quantitative evidence, using qualitative descriptions of change alongside the quantitative indicators tracked by IFC as primary evidence. Even though qualitative information is essential for properly assessing whether a project has achieved its development objective, operational teams sometimes rely on the achievement of quantitative targets to such an extent that they use indicators and their targets interchangeably with projects’ development objectives. Although this problem is common in results-based management (see Vähämäki and Verger [2019] for a review and analysis of evaluation evidence on results-based management implementation challenges across Development Assistance Committee members and other agencies), it is particularly salient in capacity-building interventions, in which quantitative metrics are often inadequate for capturing processes of behavioral and institutional change. Thus, in some instances, a project is perceived as successful because quantitative targets have been reached, even though key qualitative information may be lacking.

For example, if a project intended to improve a company’s corporate governance practices, a completion report stating that the company implemented 9 of its 16 recommendations (versus a target of 5 recommendations implemented) would be insufficient to enable evaluators to assign a positive rating for the project’s outcome, even though the quantitative target was not only met but exceeded. To judge whether the company has substantially improved its practices (the intended objective), evaluators need to understand the nature of the recommendations implemented and how important they were. Not all recommendations have the same value, nor do all contribute in the same way to improving overall corporate governance; for example, the establishment of an internal audit function reporting to the board, rather than to the management, has a higher impact on overall corporate governance than the disclosure of corporate governance policies on the company’s website. Project Completion Reports (PCRs) should include qualitative analysis of this type, or, alternatively, project teams should provide it at the time of IEG validation.

However, for qualitative information to be useful, it must meet minimum quality requirements. For instance, a common question from operational teams is whether client feedback and self-reported data suffice as evidence. IEG accepts client feedback as evidence that IFC’s recommendations were implemented, but the feedback must be specific, describing in detail the changes adopted (that is, outcomes), and it should ideally be supported by documentation (for example, copies of revised internal procedures, records of newly established units, or staff training materials). Evidence provided by clients is not always comprehensive or of high quality. Clients sometimes provide only a general statement that “most changes have been implemented.” In such cases, evaluators require more specific evidence. For example, for a project supporting the adoption of energy efficiency measures among private companies, the project team, at the evaluator’s request, provided evidence of companies’ investments in energy efficiency machinery that included invoices, installation reports, and technical audits.

Well-structured survey tools are crucial for ensuring reliable client-reported outcomes. When IFC uses client surveys or feedback forms (common when a project is providing advice to a group of clients), evaluators consider the quality of the survey or feedback tool to help determine the reliability of claims. For example, a questionnaire should not include leading questions (such as “Because of the project, what changes has your company introduced?”); instead, open-ended questions (such as “Can you describe changes, if any, that occurred during or after the project, and what may have influenced them?”) provide objective information that has greater reliability for evaluation purposes. In addition, IFC verification of clients’ self-reported changes (that is, project outcomes) is ideal, but project teams perform such verification only in selected cases; it is not common.

The use of standard quantitative indicators, rather than indicators customized to a particular project, may also limit the quality and availability of evidence for validation. Although IFC develops standard results frameworks with indicators for its main advisory services products and services, in some instances, these indicators may lack the specificity needed to fully capture a project’s intended outcomes and impacts. In addition, projects may use standard indicators that are not relevant for evaluation purposes. For example, projects supporting the institutional development of credit bureaus usually include indicators such as the number of inquiries received and the number of financial institutions participating. Although useful in most instances, these indicators are not relevant for projects whose key objective is improving data quality, whether by enhancing the quality of data submitted by institutions that report to the bureaus or by building the data management capabilities of the bureaus themselves. For evaluation purposes, projects of this type should include, in addition to the standard indicators, indicators that capture data quality, such as the number of lender complaints, the number of files submitted by lenders but rejected by the credit bureaus, and hit ratios (the percentage of applications for which a credit report is found in the credit database and used to assess creditworthiness). The absence of relevant indicators from standard reporting does not necessarily mean IFC has not been tracking them.
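As a concrete illustration of the hit ratio indicator just described, the short sketch below computes it from two monitoring counts. The function name and the figures are hypothetical and shown only to make the arithmetic explicit; they are not drawn from any actual project.

```python
# Illustrative (hypothetical) calculation of a credit bureau "hit ratio":
# the percentage of credit applications for which a credit report is found
# in the bureau's database and used to assess creditworthiness.

def hit_ratio(applications_queried: int, reports_found: int) -> float:
    """Return the hit ratio as a percentage of queried applications."""
    if applications_queried <= 0:
        raise ValueError("applications_queried must be positive")
    return 100.0 * reports_found / applications_queried

# Hypothetical monitoring figures for one reporting period:
# 12,500 applications queried, 9,250 matched to a credit report.
print(hit_ratio(12_500, 9_250))  # 74.0
```

A rising hit ratio over successive reporting periods would be one quantitative signal that the quality and coverage of the bureau’s database are improving.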

Although IFC projects collect a substantial amount of information during project implementation, PCRs tend to report only a few selected standard indicators. Hence, it is common to find that more relevant indicators are available and that data regarding them have been routinely gathered through mandatory client reports. Even in cases in which needed information was not collected during project implementation, project teams might be able to reach out to clients during the validation process and obtain it. IEG’s validation is not constrained to the indicators selected by project teams; rather, IEG considers all qualitative and quantitative information available internally and externally if that information is aligned with a project’s causal chain.

Triangulation of information is key to addressing gaps in evidence and obtaining a comprehensive view of project results. Triangulation relies on secondary data, obtained from internal (that is, Bank Group) or external sources. IEG commonly relies on internal information to fill in any gaps in the evidence or to verify project data. IFC investment documentation is particularly useful for evaluation purposes when advisory services are delivered to an IFC client, thanks to the significant amount of operational; financial; and environmental, social, and governance information collected and the long-term nature of the investment engagement, which facilitates access to postcompletion information.

IFC’s environmental and social experts review and assess the environmental, social, and governance performance of investment clients (possibly making field visits to do so), and IEG uses their reports to establish whether clients indeed implemented environmental, social, and governance improvements recommended by advisory services projects. For example, in East Asia, IFC provided advisory services to a company preparing to issue social bonds, which required a sound environmental, social, and governance system to be in place. Thanks to the project, the company developed an integrated system for managing environmental and social risks and impacts that enabled it to monitor the proceeds of the bonds it issued. IEG used data on the bonds gathered from IFC’s yearly environmental and social supervision reports as evidence (internal IFC document). Similarly, World Bank project documents, diagnostic reports, or analytical reports might include results of IFC’s advisory services activities, particularly those related to the investment climate or the business environment.

Beyond information available within Bank Group documents, IEG routinely uses secondary data from external sources such as companies’ or government websites, companies’ annual reports, news or articles discussing changes in legal and regulatory frameworks, and reports from rating agencies (in the case of financial institutions). Evaluators check these sources for discrepancies regarding results or timelines of events; if discrepancies emerge, IEG seeks clarification from project teams. For example, a project that trained a local partner in conducting a certain type of certification in the local market reported as an achievement the number of certifications conducted and the number of active certifiers. However, these numbers did not match the information in the local partner’s annual reports posted on its website, which showed what appeared to be declining certification activity. IFC teams reached out to the local partner, which clarified the discrepancy and provided detailed reports of its activities confirming the achievement.

To keep projects from taking credit for any observed outcome or impact regardless of whether it can legitimately be attributed to the project’s intervention, evaluators conduct contribution analysis. This involves taking a close look at project outputs and how they might have enabled project outcomes. This, in turn, requires a clear theory of change; determining the nature of outputs or activities (types, quality, and recipients of outputs and timeline of delivery) and their links to reported or intended outcomes is a crucial step on which an evaluator might spend a significant amount of time. For example, claims that workshops and training delivered by a project raised awareness (and triggered subsequent adoption) of energy-efficient measures among private sector companies may be weakened if, on close review, the list of project trainees includes mostly officials from government and nongovernmental organizations.

Identification of externalities is another key step in contribution analysis, and identifying external factors that might have influenced a project’s observed outcomes is a difficult task. Because validation is office based, an evaluator’s knowledge is constrained by the information collected through IFC’s monitoring work, which is biased toward confirming the causal chain. This weakness can be mitigated through conversations with IFC teams, the use of secondary sources of evidence, the engagement of local consultants to collect evidence, or external research.

However, to properly identify externalities, evaluators must either be knowledgeable enough regarding the subject or sector of the project they are evaluating or have experience evaluating previous similar interventions. It is in these instances that IEG peer reviewers, with their sector specialization and years of experience in the function, add the most value. For example, a project might claim that the project intervention generated a substantial increase in a client bank’s portfolio of small and medium enterprises (SMEs), when in reality directives introduced by the governing central bank, or funding or technical assistance provided by other donors or investors, rather than IFC’s support, might have triggered the improvement. This is particularly true if advisory services support was limited to diagnostic assessments or light-touch training (such as short workshops on broad subjects).

Although all the issues just discussed regarding evidence apply to assessment of both outcomes and impacts, data and evidence on impacts present some particularities worth mentioning. As noted earlier, for most projects, evidence on impacts may not be available on project completion. Unlike outcomes, which must typically be observed within a project’s lifespan, impacts involve longer-term effects on stakeholders and may require extended time to materialize. This can make them difficult to capture at project closure. The fact that IEG validation usually occurs several months or up to a year after the PCR has been prepared partly mitigates this challenge. The lag between report completion and validation provides extra time for projects to show emerging impacts. In the best-case scenario, IFC has collected information since project completion (a possibility IFC’s system allows) through monitoring reports or by commissioning an external evaluation. For example, IFC conducted ad hoc reviews of impact achievement in a group of projects involving public-private partnerships (privatization of highways, power sectors, and the like) years after their completion. It hired a consultant, who traveled to the project sites, verified whether the expected improvements in services had materialized, and collected relevant metrics. IEG used the consultant’s report to assess impact for a few of these projects selected for validation. If evidence of this kind is not collected, then IEG usually asks IFC teams to reach out directly to clients to request information on impacts.

Because of the long-term nature of the engagement with investment clients, IFC investment documentation can also provide information on impacts, especially those related to the financial performance of a client. For example, for a financial institution that received IFC banking advice regarding SMEs, IEG can check whether the bank subsequently showed strong growth and financial performance in its SME portfolio (profitability, quality of loans, business involving SMEs, and so on). Additionally, if IFC continued to work with the client in follow-on advisory services projects, those follow-up projects’ diagnostics and assessments could provide useful information on the original project’s impact on the client or market. Such was the case of an IFC project aimed at promoting construction of resource-efficient buildings in a client country by introducing cost-effective green building certification into the market. When the project was completed, no information on market impacts was available. However, by the time IEG validated the PCR, IFC had launched a follow-on project in that same country that included a market study providing data on the status of the market for green buildings (internal IFC document). The study confirmed that market players had adopted the IFC-promoted green certification.

Yet given the limited information on impacts collected at the operational level, the desk-based nature of IEG validations restricts evaluators’ ability to assess impacts beyond project clients (refer to box 3.1). Broader effects on beneficiaries and the market are difficult to capture if operations have not collected the information through ad hoc monitoring or evaluation work. Expanding evaluation methodologies to include direct engagement with stakeholders and field assessments could enhance the credibility and completeness of impact assessments. As part of the management and evaluation of some projects, IFC has commissioned external evaluations of impacts. For example, one IFC client wanted to increase overall performance and sales among retailers. The intended impacts were that farmers would increase their net incomes by using more effective crop protection and that retailers would increase their sales and loyalty payments (financial incentives) to the farmers. An IFC-commissioned study of the project’s impact by a research institution compared the change in retailer sales and profits and farmer perceptions before and after the intervention and with those in a control group (internal IFC document). In addition, in selected cases, IEG conducts project evaluations to complement its validations of PCRs and to delve deeper into impact trajectories, and it tackles longer-term issues and broader effects in its Country Program Evaluations and thematic evaluations.

Similarly, advisory services projects often operationalize impact narrowly, reducing it to the achievement of a few quantitative indicators. However, this is often insufficient to provide a comprehensive understanding of projects’ broader and deeper effects across different levels. For example, an IFC project provided a country with firm-, regulatory-, and market-level support that aimed to encourage private sector development through improved performance and increased access to finance among companies in the country (internal IFC document). The project team justified the positive impact ratings claimed in the PCR based solely on quantitative targets related to companies’ performance at project closure. However, these indicators did not reflect the project’s work in the regulatory and market areas, which was intended to establish minimum standards for corporate governance to which all players in the sector would adhere. In addition to considering the quantitative targets, IEG investigated the extent to which local governments had enforced and provided guidance on adoption of corporate governance codes.

As noted, IEG validation is desk based and therefore relies primarily on project documentation, which presents two key limitations that evaluators must consider. First, with few exceptions, evaluators have little or no direct interaction with the main recipients of advisory services outputs—that is, clients. Although, in a few cases, IEG evaluators have contacted clients or conducted field visits to gather stakeholder perspectives and deepen their understanding of a project’s impact, such interactions remain limited. For example, IEG and IFC had different views on the development effectiveness of a project aimed at enhancing the competitiveness of a country’s energy sector by promoting energy-efficient technologies through government regulatory work. To provide a fair assessment, the IEG team contacted two beneficiaries of the government reforms directly to understand the impact the project had on the markets in their country (internal IFC document). The information provided rich insights into the project’s activities and confirmed IEG’s view that despite the adoption of an important decree, weaknesses in enforcement of the regulations prevented the project from achieving its intended impact.

Second, because of resource and data constraints in the validation process, evaluators focus on validating evidence that backs up observed and self-reported outcomes and impacts. Yet self-evaluations may not capture all relevant project effects, creating a risk of omission bias (intended or unintended, positive or negative). In one case, an IEG team validated the completion report for a project designed to streamline trade facilitation services. Because of the project’s complexity, IEG opted for stakeholder interviews and a field assessment. The field visit revealed that one of the project’s key contributions—a new information technology system—not only faced significant challenges during its implementation but was also unable to handle the volume of trade transactions, disrupting the clearing of imports and exports. The client had therefore abandoned the system and reverted to its previous one (internal IFC document). This unintended effect, which materialized after project completion, would have gone undetected without the field assessment.