Private Sector Advisory Projects

Methodological Reflections from the Independent Evaluation Group’s Experience and Approach

This paper looks at the key methodological challenges of evaluating advisory projects in the private sector and offers practical lessons on incorporating self-evaluation and validation across private sector advisory interventions. 

Advisory and capacity development work—whether described as technical assistance, capacity building, or knowledge support—is an integral part of development. It helps strengthen the skills, systems, and institutions needed to turn financing into real, lasting reforms. Despite the steady increase in the volume of advisory services, however, many development finance institutions lack systematic approaches to measure and evaluate the contribution of these services and to learn from their experience.

Several factors contribute to this. First, advisory services projects often yield behavioral or institutional changes that are harder to capture than tangible outputs. Second, their impacts generally emerge over a longer time horizon, making them difficult to track and evaluate within typical project cycles. Third, advisory services are often delivered alongside other interventions, making it difficult to disentangle their specific contributions.

Using the International Finance Corporation’s (IFC) Advisory Services self-evaluation system and IEG’s independent validation work, this paper takes a closer look at why evaluating advisory projects is so challenging, and what can be done to improve it. It highlights common methodological obstacles and reflects on what IEG has learned from years of validating these projects. The paper offers practical steps to strengthen evidence, learning, and the institutional systems that are needed to support credible self-evaluation and validation across advisory interventions.

Chapter 1: Framing and Measuring Capacity Development

Although widely recognized as essential for development, capacity development has no single definition or assessment framework. It spans multiple levels (individual, organizational, and systemic) and involves both tangible and intangible changes in skills, behaviors, norms, and institutions. This multidimensional and relational nature makes it hard to capture under one concept and often leads to conceptual ambiguity. To address this complexity, researchers and practitioners have created various frameworks that highlight different aspects of capacity development or focus on specific dimensions. For example, while the World Bank does not have an overall framework, it often relies on the Institutional Change Assessment Method (ICAM). Similarly, other institutions assess their capacity development interventions with frameworks rooted in the criteria established by the Organisation for Economic Co-operation and Development's (OECD) Development Assistance Committee (DAC): relevance, coherence, effectiveness, efficiency, impact, and sustainability. These frameworks are useful, but each has downsides when applied to evaluating advisory work. Read more in Chapter 1.

Chapter 2: How IFC’s Self-Evaluation of Advisory Services and IEG’s Independent Validation Work

With the growth of its advisory services portfolio, IFC needed an internal governance system that ensured accountability and transparency throughout the project cycle for its client-facing activities. It took a systematic approach, developing standard mandatory documents for the project life cycle, results frameworks, biannual supervision reports, and a self-evaluation document, the Project Completion Report (PCR). IEG validates a stratified random sample of PCRs through detailed evidence review, interviews, and triangulation of internal and external sources, producing Evaluation Notes that confirm or adjust ratings. Projects are assessed on development effectiveness, IFC's role and contribution, and work quality, using guidelines jointly developed by IFC and IEG. While IFC and IEG have adopted a collaborative approach, challenges and tension points remain. Read more in Chapter 2.

Chapter 3: Assessing Project Effectiveness—Challenges and Perspectives

The guidelines for evaluating IFC advisory services projects require the project's development objectives to focus on outcomes and impacts rather than outputs. Outputs are the activities or tasks completed by IFC. Outcomes, by contrast, reflect changes in clients' behaviors, knowledge, and practices, while impacts capture the broader effects of those changes on clients, stakeholders, and the market. Evaluating outcomes and impacts is difficult for several reasons: challenges in establishing attribution and causality, a lack of indicators that meaningfully capture behavioral or institutional change, reliance on self-reported client information, discrepancies between project reports and external sources, and limited data on impacts beyond the client, among others. Evaluators often address these gaps through triangulation, contribution analysis, and careful examination of project outputs, timelines, and external factors, but their assessments remain constrained by desk-based validation and limited post-completion data. Read more in Chapter 3.

Chapter 4: Beyond Effectiveness: Objectives, Theories of Change, and Work Quality

Clear, outcome-focused objectives are essential for evaluation, yet many IFC advisory projects present broad, output-oriented, or unclear objectives. Project objectives may also change during implementation due to new information or shifts in context. Early adjustments can replace original objectives, while later changes trigger split ratings that assess performance before and after restructuring. Weak or incomplete theories of change often require evaluators to rebuild causal pathways from project documentation and assumptions.
In addition, IFC rates work quality to assess whether teams followed required procedures; these ratings separate team performance from project results. This creates a challenge for evaluators, who must judge decisions on the basis of the information available at the time, not with hindsight. IEG evaluators generally rely on IFC's internal governance and documentation, including supervision reports, memos, and evaluations, to understand how projects were designed and managed. Read more in Chapter 4.

Chapter 5: Conclusions

As the World Bank, IFC, and the Multilateral Investment Guarantee Agency (MIGA) move toward a unified Knowledge Bank model, the ability to assess whether and how knowledge interventions generate results has become increasingly important. Yet evaluating knowledge and capacity-building activities remains difficult because they often produce intangible, hard-to-measure outcomes. Drawing on fifteen years of IFC's advisory self-evaluation system and IEG's validation work, this paper highlights the methodological challenges of evaluating advisory projects and the evidence standards needed to credibly assess and learn from them. The paper underscores the importance of better evidence collection and triangulation, selective fieldwork, clearer objectives and theories of change, and the use of programmatic and thematic evaluations to capture longer-term effects. Clearly distinguishing between project and team performance can further improve the credibility and robustness of evaluations of advisory services. Read more.