Using Evaluation to Enhance the Performance of Development Partnerships
Over the years, programs like the Global Environment Facility and the GAVI Alliance have demonstrated the value of working in partnership to address global development challenges. With the new Sustainable Development Goals, there will be even more impetus for development actors across the public, private, and non-profit sectors to work together. In this context, building credible evidence on the development effectiveness of partnership programs, and on how well they integrate into the global aid architecture, is critical.
So what do we know about the performance of partnership programs, and how can evaluation enhance their effectiveness? This was the subject of a recent workshop hosted by the Independent Evaluation Group and attended by over 40 representatives of partnership programs, private foundations, and international and regional agencies. The workshop was a follow-up to the OECD-DAC Network on Development Evaluation meeting held last year in Paris, where many participants expressed interest in strengthening the monitoring and evaluation of partnership programs.
A major theme emerging from the discussions was the need to develop a culture of evaluation within partnership programs, so that evaluation becomes part of the program's life cycle and lessons from evaluation are used to improve programs.
Evaluation and, by extension, monitoring need to be integral to the program from the start. This is somewhat easier for large partnerships and for partners with existing evaluation functions than it is for smaller programs and for partners with limited evaluation capacity. Workshop participants suggested that it would be useful to have guidelines for a minimum acceptable level of evaluation (and monitoring) for various program sizes or types, to which end a typology of programs would need to be developed.
Adapting evaluation practice and process is important for ensuring effective evaluation of partnership programs. This must start at the program initiation stage by establishing mutually acceptable, clear rules for evaluation arrangements, regardless of the program size. There are many partnerships that do not have or need their own evaluation functions. These programs currently lack the guidance they need to commission and use evaluation effectively.
While the IEG/OECD-DAC sourcebook (2007) provides guidance on the conduct of evaluations, and many large development institutions have well-developed evaluation functions, gaps remain, particularly with regard to the capabilities and competencies required of evaluators of partnership programs. Guidance addressing these gaps would help those who commission evaluations, those who conduct them, and those who use the findings.
Another gap was insufficient attention to assessing the partnerships themselves. For example, in evaluating partnership effectiveness, evaluators need to be able to gauge the contribution of the partnership to the outcomes of the programs and activities in which it engages.
Collaboration around evaluation is required within each partnership program. Donors often have differing reporting and information requirements that can lead to multiple evaluations of a program. Such duplication of efforts can also arise from the political economy of the partnership when there is a lack of trust between partners.
A common theme running through the discussions was the need for some authority to act as the arbiter of good evaluation practice for partnership programs. One suggestion was the creation of a global partnership forum that would serve as a mechanism for the exchange of perspectives on the issues involved in partnership program evaluation.
While a global forum might fill some gaps, other participants suggested that some authority is needed to identify and promulgate good practice, including good governance standards. A peer review process was suggested as another way to ensure adherence to good practice.
To be valued, evaluations need to have a demonstrable impact on the operation of partnership programs and the outcomes they produce. Most evaluations offer recommendations to the programs and their stakeholders. A mechanism for tracking the implementation of recommendations in partnership programs would help ensure that evaluations result in actions to improve program processes and results.
Workshop participants noted that the planning for a partnership evaluation should start with an assessment of the potential for using the findings and identification of the points of influence that might be used to ensure that recommendations are acted upon.
Despite the diversity of viewpoints, there was general agreement on two clear major messages:
First, partnership programs need guidance regarding norms and principles that they can apply systematically to carry out high-quality assessments of a program's development effectiveness.
Second, evaluators of partnership programs would benefit from a systematic effort to identify and share good practices that are unique to these programs.
The workshop report includes participants' recommendations on how to advance the use and practice of evaluation in partnership programs.