Working in partnership is more common and more important today than ever before, and given the complex challenges the development community is dealing with, the need for effective partnerships is only going to increase.

For IEG, a longstanding partnership forum has been the Network on Development Evaluation, which regularly brings together bilateral development partners. During the network’s most recent meeting last month, I spoke about what IEG has learned about evaluating partnership programs, based on a large number of evaluations of programs in which the World Bank Group is involved.

1. To improve the authorizing environment, make sure there is a mutually agreed evaluation policy.

Many partners will have different ideas about evaluation. Agreeing, right from the start, on what will be evaluated, by whom, when, and why is important to ensure evaluations are conducted in a timely manner. If it is a self-evaluation, then arrangements for an independent validation or review are needed to ensure credibility.

2. To improve credibility, ensure evaluation independence.

How much independence is necessary? The answer to that question will vary considerably across multilateral development institutions and bilateral donors. Varying standards of independence among partners are often a source of confusion and have a direct impact on evaluations of partnership programs. That’s why it is important that partners develop a shared understanding of what the acceptable standards for independent evaluation are.

3. Invest time in planning the evaluation.

Partnership programs often cover a wide range of activities, all of which need to be assessed by the evaluation. Frequently, the partnership arrangements themselves, from governance to funding and staffing issues, will also be evaluated. Defining the scope and coverage of the evaluation clearly is necessary to ensure that partners share expectations, that their most important questions are answered, and that adequate resources are allocated for the exercise. The clearer the boundaries, the more focused the evaluation will be.

4. Choose criteria that fit the purpose.

Evaluations of partnership programs use two broad sets of criteria: one for the development outcomes of the partnership program itself, and the other for its organizational effectiveness, to measure how well the partnership is functioning. In assessing outcomes, evaluations build on the standard evaluation criteria established by the OECD/DAC in 1991. For questions of organizational effectiveness, the focus is on governance structures (the sharing of voice and power on one hand, and financial burden-sharing on the other) and on whether the partners are in fact getting what they expected out of the partnership.

5. Make sure the evaluation is transparent and that key stakeholders are consulted.

It’s not enough for an evaluation to be properly carried out – if it is to have an impact, its findings need to be well received, discussed, and acted upon. Transparency is paramount in this context. If any of the partners or stakeholders do not understand the evaluation process or the criteria against which activities are evaluated, they may reject the findings, however sensible they may be. Consultations with stakeholders – from all different perspectives and interest groups – are essential. They will also help to ensure that the findings don’t come as a surprise.

6. Ensure that recommendations are agreed on and followed up.

As with all evaluations, recommendations should derive from findings and conclusions based on evidence. In partnership programs, these recommendations might be directed more towards one group than another. In general, the wider the scope of these evaluations, the harder it can be to limit the recommendations to those that are the highest priority and whose implementation can be monitored.

7. Plan dissemination in advance.

Making a difference with an evaluation can also require thinking beyond the main stakeholders.  Broader sharing of the evaluation’s findings through the internet and various other means is an important aid to learning. Unfortunately, we’ve found that this is something that too few of these evaluations do.

These principles are just a starting point in evaluating the effectiveness of partnership programs. Ensuring that both partners in these programs are in fact “better together” is something that we at IEG together with EvalNet will continue to work on.


Comments

Submitted by M.I.Zuberi on Tue, 12/09/2014 - 21:09

Definitely, it is 'better together', especially in the modern era, when things have become much more complicated and interlinked. And it is even more true for anything related to environment and sustainability: these are systems requiring a holistic approach for any consideration.

Submitted by Caroline Heider on Wed, 12/10/2014 - 07:53

In reply to by M.I.Zuberi

Yes, this is definitely true. And, because of the interlinkages, it is coalitions of different actors that play an increasingly important role.

Submitted by Anna Guerraggio on Wed, 12/10/2014 - 04:53

Caroline, thank you very much for sharing the IEG lessons learnt on how to evaluate partnership programs. If I may, as a sort of #3bis to the list, I would like to highlight the importance of ensuring enough time for reflection and adjustment on the evaluation methodology and emerging evidence along the way. I think we too often have the tendency to think of the evaluation process as compartmentalized. The different phases following the definition of the terms of reference (data collection, data analysis, drafting) follow a linear path, and the risk is to arrive at the drafting stage and realize we do not have the data or the time to understand in depth some of the emerging evidence. This will have an impact not only on the relevance and credibility of the evaluation, but also on its capacity to formulate highly pertinent recommendations.

Submitted by Caroline Heider on Wed, 12/10/2014 - 07:58

In reply to by Anna Guerraggio

Anna, as always, a very good observation. In an ideal world, reflecting on what success looks like at the end would start at the time a partnership is conceived. That vision of success would align partners, help shape what the partnership should achieve and how, and define milestones for monitoring progress along the way. In short: an evaluative mindset would be a great foundation for evaluation. And yes, an evaluation needs time to digest data and information collected from different stakeholders while still being relevant and timely to the decision-making process. We try to achieve this by getting a head start on our major evaluations. What do others do?

Submitted by Marco Lorenzoni on Tue, 12/09/2014 - 20:25

A perfect synthesis of 'THE' guiding principles that should apply not only when evaluating partnership interventions, but more widely when evaluating any intervention. There is a point that is surely implicit in this article but that I would further stress: local partners in the 'beneficiary' countries should be involved, together with donors, in all these seven steps, as they are full members of the 'community of partners'. Thanks for these reflections, Caroline!

Submitted by Caroline Heider on Wed, 12/10/2014 - 01:44

In reply to by Marco Lorenzoni

Marco, you make a good point, but my own experience is that the engagement of local partners in an evaluation process varies considerably. And here I mean not simply as resource people who provide information, but as partners in the evaluation process. When it works, it is great, and adds a lot of value and context specificity that helps interpret evaluation findings. But there can also be tensions when evaluation findings are not all that positive, and sometimes these tensions are culturally difficult to manage.

Submitted by Tessie Catsambas on Wed, 12/10/2014 - 01:18

Thank you for this sensible list of good practices, Caroline. I agree with Marco Lorenzoni that this is a list of good practices for any evaluation. In fact, even in the same organization, individual stakeholders have different perspectives about the evaluation's purpose and questions. In that way, even an intact group is a "partnership" of people who bring their different talents and skills together to achieve common outcomes. This is why well-designed participatory methods lead to better evaluations, and contribute to more useful evaluations for clients. In some ways, this blog is linked to the one on "value for money" in evaluation. Good to keep these reflections in the forefront of our work, Caroline!

Submitted by Caroline Heider on Thu, 12/11/2014 - 03:24

In reply to by Tessie Catsambas

Thank you, Tessie, for flagging another important thing: that even within a group we believe to be homogenous, we will find different perspectives and need to build "partnerships" (or a team, for that matter). The exercise you did with us to surface values associated with shared prosperity was really helpful in this regard.

Submitted by Neeli Satyanarayana on Thu, 12/11/2014 - 07:20

Independence and transparency play a vital role in evaluation.

Submitted by Caroline Heider on Thu, 12/11/2014 - 00:51

In reply to by Neeli Satyanarayana

Couldn't agree more, Neeli.

Submitted by Sandrine Beaujean on Fri, 12/12/2014 - 21:34

This is an excellent analysis. Thank you. I was very interested by the 4th point, "Choose criteria that fit the purpose." The OECD/DAC criteria for assessing the outcomes of a partnership are globally known. But, on the other hand, I have the feeling that there is no consensus yet on the criteria or indicators to use to assess the organizational effectiveness of a partnership. Am I wrong?

Submitted by Caroline Heider on Wed, 12/17/2014 - 21:55

In reply to by Sandrine Beaujean

You are right. There are no mutually agreed standards and indicators for evaluating how well a partnership program functions. Many issues related to governance effectiveness are process issues, for which objective indicators are difficult to design. It is often left to evaluators to decide how to evaluate the organizational effectiveness of partnership programs. This makes the need for some indicative principles even more compelling. As international public sector institutions using taxpayers’ resources, partnership programs involving the World Bank and the UN should live up to some standards of good governance. Donors, beneficiaries, and other stakeholders increasingly push for better governance of partnerships as well. Over the last six years, IEG, in collaboration with the OECD/DAC Network on Development Evaluation, has been championing the application of a set of generally accepted indicative principles of good governance (legitimacy, accountability, efficiency, transparency, and fairness) based on the OECD’s Principles of Corporate Governance.

Submitted by Dan on Mon, 12/15/2014 - 23:03

These are very important principles. I like no. 6: evaluations are conducted in order to learn something, and if the recommendations are not agreed on and followed up, the goals of the evaluation will not have been fully met.

Submitted by Caroline Heider on Wed, 12/17/2014 - 21:55

In reply to by Dan

Yes, another key ingredient to ensure learning from evaluation is to engage the stakeholders at key stages of the evaluation so they can take ownership of the findings and recommendations.

Submitted by Shabnum Budhwani on Wed, 12/17/2014 - 20:26

I also believe communication plays a very vital role and is instrumental in the success of any partnership. As has been mentioned earlier, even in a so-called homogeneous team there may be different interpretations. Therefore open and ongoing communication is crucial in order to ensure that there is common understanding and agreement, rather than moving forward on presumptions. In any dynamic project, things evolve, and embracing change based on ongoing evaluation is the key to success. However, this understanding has to be shared and, most importantly, communicated. Thank you for the thought-provoking article.

Submitted by Caroline Heider on Wed, 12/17/2014 - 21:56

In reply to by Shabnum Budhwani

Yes, that is true, and in all the strong partnerships we looked at there was an effective secretariat that facilitated communication. However, we also feel that partnerships could have stronger results frameworks so that all partners agree on what they – collectively – are trying to accomplish. Otherwise, chances are they will all interpret their goals slightly differently and will often have to relitigate them.

Submitted by Gabriele Quinti on Thu, 12/18/2014 - 02:12

I agree on all 7 points. I just have a few remarks to suggest. On point 3: we need adequate time not only in planning the evaluation, but also in analysing and understanding the results as well as possible, since they often have to be "contextualised"; a "brute", "immediate" use of results can sometimes be misleading. On point 5: as far as possible, key stakeholders should not only be consulted, but also involved in every step. It is true that "there can also be tensions when evaluation findings are not all that positive, and sometimes these tensions are culturally difficult to manage", but, in my opinion, an evaluation is important not only for its findings, but also as an improvement exercise. In many cases, it may be better to manage tension (for instance through "interpretative negotiation" procedures) than to use an approach that avoids these tensions; thanks to an interpretative negotiation, among other things, recommendations (point 6) could be more easily agreed on and followed up.

Submitted by Caroline Heider on Mon, 01/12/2015 - 23:14

In reply to by Gabriele Quinti

Great suggestions, Gabriele. Yes, if these tensions can be resolved through evaluation, that's very useful. At times, though, I have found that the time evaluators spend in the communities, and the time needed to digest the information gathered (through those field visits and compared with documents, existing research, data, etc.), might mean that the evaluation team is no longer in the field when all of the pieces of information come together. That makes it hard to deal with the issues you suggest. A different type of evaluation, one that is ongoing and participatory, could play the role you describe.

Submitted by Tara Sharafudeen on Fri, 01/09/2015 - 06:08

From my experience of managing partnerships, I found that in many cases where we arranged evaluations, the independent evaluators were former Bank staff. This actually affects the credibility of the evaluation as far as the partners are concerned, though it works better for us, since former staff understand the Bank and its constraints better. I completely agree that we should agree in advance on what should be evaluated, how, and by whom. In cases where this was not done, we ended up with an evaluation by the partner that did not capture the whole impact of the partnership. Another issue we faced was that the partner often combined the evaluations of several partnerships, with the Bank, ADB, etc., together. This included field visits, discussions with clients, and so on, and resulted in a lopsided evaluation. The countries where ADB had its programs were not countries where we had a large program funded by the partner. In the interest of time and money, they chose to go to countries where ADB was active, leading to an evaluation we could not agree with. Under organizational impact, it is also useful to look, from the partner's perspective, at what the impact of the partnership has been on the Bank and its programs.

Submitted by Caroline Heider on Mon, 01/12/2015 - 23:16

In reply to by Tara Sharafudeen

Tara, many thanks for sharing your experience. Very useful.

Submitted by Fanny Nyaunga on Sun, 01/11/2015 - 01:56

A very insightful article and very revealing observations and comments. Great!

Submitted by Caroline Heider on Mon, 01/12/2015 - 23:17

In reply to by Fanny Nyaunga

Many thanks for the feedback!

Submitted by Dilki on Fri, 07/10/2015 - 01:56

As others have said, these 7 points apply well to any evaluation; thanks for putting them on one page. Going through the points gave me space to reflect again on my own experiences in partnership evaluation. In my experience, managing a partnership evaluation is a really frustrating task. When it comes to evaluating a partnership programme, the more powerful partner automatically gets more authority through the evaluation. Not every partner sees the evaluation through a holistic view; they focus only on their own part of the programme (compartmentalized), which is highly visible at stages such as giving feedback on the inception and draft reports. If we are not well partnered during programme design, monitoring, and evaluation, how can we get meaningful partnership in engaging in the evaluation?

Submitted by Beede Amare on Tue, 03/08/2016 - 06:21

The comments given here are all valuable. Currently we are evaluating a professional educational training partnership between one European university and three universities in the South. We found no appropriate evaluation guideline for our work. We felt the best approach would be self-evaluation, without undermining an independent group. All the stakeholders are very satisfied with the outcomes of the program, and we were able to meet the most important criteria. We feel that the best evaluation is planned along with program planning. In the face of the vast dimensions of this kind of partnership, we have to work to come up with close-to-perfect evaluation criteria. We need that not only for outcome evaluation but also to compare different programs.
