When I became Director of Evaluation at the Norwegian Agency for Development Cooperation (Norad) in 2011, my natural instinct was to look for evidence of the impact of Norwegian aid. What had the money that Norwegian citizens were investing in far-away countries actually achieved?

I sat down one afternoon and went through all of the evaluation reports our Evaluation Department had published over the previous year, and to my disappointment had to conclude that none of the evaluations and studies commissioned by the department and finalized in 2011 could report sufficiently on results at the level of outcomes or impact.

The reports showed clearly how money was being spent and what direct activities or services were being delivered. But answers to the critical question of whether those services gave rise to real benefits for poor people and other target groups proved elusive. As it turned out, this was not a very popular insight. Why?

First, while it was clear to me that “no evidence of effect” does not mean “there was no effect”, equating these two very different statements was a misunderstanding common in the press, among colleagues, and among the general public.

This was not a harmless misunderstanding: there was a substantiated fear that it would be used to support political voices calling for cuts to Norway’s generous aid. Others found my statement difficult to digest because it implied that Norwegian aid should demonstrably have an effect, while they saw measuring results as irrelevant, perceiving aid as merely a foreign policy tool. And finally, it is simply human to resist criticism and scrutiny, and my statement indeed suggested that we needed to do a better job of ensuring the evaluability and monitoring of the programs we funded.

Out of this realization, we decided to carry out the Evaluation Department’s first ‘corporate evaluation’. The resulting report, titled Can we demonstrate the difference that Norwegian aid makes? (1), was very successfully launched last week in Norway and presented at the World Bank Group this week, hosted by the Independent Evaluation Group (IEG). Vibecke Dixon, Senior Advisor and manager of the evaluation, and Ida Lindkvist, an advisor in the Evaluation Department, explained how the evaluation had been framed around a series of testable hypotheses: were the arrangements for planning results in grants adequately designed and specified; were staff adequately trained to manage for results in grant management; were policies and systems correctly implemented when grants were approved?

The study also checked to see if problems were arising in the way the evaluation department designed and managed evaluations: did it ensure evaluation designs placed an appropriate emphasis on measuring results; and were the consultants recruited sufficiently competent?


Organizational culture and incentives - the silver bullet?

Limitations were found in all of the above-mentioned dimensions, and concrete recommendations for improvement were made. The main take-away was the need for a clear commitment from senior management to results measurement, through role modeling, signaling, resources, and incentives. This finding is consistent with IEG’s evaluation of learning in World Bank lending (part I), which will be discussed at the Bank’s Board on May 12th. That report finds that tinkering with systems and organizational structures will not ensure that learning and knowledge sharing flourish in the organization’s operations unless measures are taken to transform the organizational culture and incentives.


The partner-led approach – an excuse for not reporting on outcomes or an opportunity to strengthen the systems?

One of the major differences between the comparator agencies in the report (DFID and the World Bank Group) and the Norwegian aid system is that while the former impose strict procedures on their borrowers or grant recipients, with potentially significant transaction costs, Norway has long followed a partner-led approach built mainly on trust.

This approach could be strengthened by underpinning it with a clear theory of change to drive any supported initiative or policy, and by planning, evaluating, and reporting on results without losing the recipient-responsibility principle. This lesson is highly relevant to a World Bank Group that is moving towards a more partner-led approach. Indeed, as IEG’s Director General, Caroline Heider, pointed out during the panel discussion: “There is no contradiction between a partner-led approach and a strong emphasis on outcomes – the client should be looking to get the most out of the money, not just the most money.”


IEG’s “no benefit of doubt” approach not well-understood

It’s interesting to note that the misunderstanding that ‘no evidence of effect’ means ‘evidence of no effect’ is also common across the World Bank Group. In order to incentivize results measurement and evaluation of World Bank projects, the Independent Evaluation Group and the Bank’s Operational Policy teams harmonized guidelines and introduced the concept of ‘no benefit of the doubt’ in 2006.

In practice this means that a lack of evidence leads to a downgrade of a project’s efficacy rating in the same way that an evidence-based lack of effect does. This practice is perhaps the most important cause of a persistent disconnect between World Bank internal ratings and IEG ratings, and it causes disagreements as well as hurt feelings.

Better communications and more outreach to World Bank staff will be needed to enhance the understanding of the underlying concept and to get broader buy-in for the approach.

Whether it is Norwegian aid or World Bank lending, we have to know that what we do works, that it has no adverse effects and that it could not be achieved more efficiently through other means – people’s lives and well-being are at stake. Meaning well doesn’t make it well. Only knowing well will lead to improvements.

1. The report was prepared by a team drawn from Itad Ltd working in association with the Christian Michelsen Institute (CMI).

Comments


I agree with the points raised in this article. Money provided for aid by donors should be systematically and independently evaluated for effectiveness, demonstrating how the results benefit the poor. From a young age in Africa, I used to hear that our villages had benefited from multiple projects from different donors (IFAD, WB, AfDB, etc.), but our conditions of living were never impacted at all. But that does not mean that results didn’t exist. They existed, but were redirected to others and not to us. Managing grants for results should be embedded in policy and program design, project cycle management, and independent evaluation operations. Otherwise money from donors is a waste if no one is accountable for results! Caring more for their careers than for the condition of the poor, country managers work with corrupt government officers in a vacuum of results-based accountability systems, and money never gets where it should be! Back to my African village: if I were asked for the single aggregated indicator that could show that an agricultural, education, or health World Bank project had impacted our household, it would be the positive change in our household income, since better health, education, and agricultural practices could drive increased production and revenues. This happened to our household, but the increase in income was attributed to other factors and not to the projects! During evaluation design and methodology implementation, this question should be meticulously studied and solved.

The attempt at self-critique by the Norwegian agency is a welcome development. It cannot be denied that most aid flowing from development agencies, whether bilateral or multilateral, never benefits the intended recipients. Those who count the blessings of such assistance are the operators: in some cases the agencies’ field staff, and most times the operators of aid programmes in recipient nations. To bring about an improvement, there is a need to evolve a community-based approach to aid delivery and monitoring. As reported in Andre Bucumi’s comment, most projects meant to improve the lives of members of poor communities never progress beyond site inspection and some initial mobilization. Due to corruption and embezzlement, where they are initiated at all, within a few months of their initiation they become abandoned entities that rust away without making a single impact on the poor. Often, owing to the parochial inclinations of officials and facilitators within the government, facilities are located in very awkward places, where they are rarely noticed or needed and therefore rarely maintained. For example, all over the country, in the quest to provide safe water, boreholes have been sunk, but many of them never get started; some function for barely a few weeks or months; and some, due to corruption, never come into effect at all. If, within a nation, money constitutionally meant for an arm of government never reaches it through a whole year, and budget implementation rarely reaches 40% in actual terms even when the targeted revenues are available but misappropriated by state officials, one can then imagine what will happen to funds made available by far-away donors. As it is for water provision, so it is for health and agro-related services.
Except when, by sheer luck, projects result from the demand of community members and are strategically located in the midst of the targeted community of the poor, who are also effectively conscientized and mobilized to ensure success, the effects of aid projects are rarely felt. Where wrongly sited, they become official symbols or monuments that make no contribution to the life of community members. This accounts for the high failure rate of such projects. While this is not to say that aid to developing countries should be stopped, there is an urgent need for a change of paradigm. Evaluation of needs and delivery of services should explore the possibility of a non-governmental, community-based approach, with government involvement limited to supervision. Since most aid projects are based on established and proven prototypes, the offices of country administrators of aid should be in a position to deal directly with benefiting communities, taking advantage of financial intermediaries and related institutions for the disbursement of funds. Monitoring should be done by agency staff and independent bodies. For aid to achieve its aim of restoring hope to the poor in the developing world, a paradigm change is a must.

Reading this article and the comments, I tend to think that the main problem is how we design and monitor the impact of our programs. Using the Participatory Learning and Action (PLA) tool in designing programs has always proved effective. Involving the community more at the design stage lets them share their poverty-related needs to inform the design of the program. Community participation also offers a platform for monitoring the impact of the program through community-set structures whose members participated in the design. Involving the community across project cycle management should help establish whether the program has had any impact or not. Through community participation you are also able to manage expectations; that way, you evaluate only the expected results of the program that the community helped design.

Thanks for the engaging comments. As you pointed out, it is not always that projects benefit those intended, but often we simply don’t know. This is why it is crucial to establish good monitoring and evaluation systems, check that they are in fact correctly collecting and reporting on the situation on the ground, and use them actively for decision-making. This will ensure that we know both whether the intended program is delivered to the intended beneficiaries and whether it is working in the sense of delivering the intended outcomes (which doesn’t necessarily follow, if the theory of change was flawed). You make very good points about the importance of beneficiary involvement in both the design of projects and their monitoring. IEG is now looking into innovative ways of increasing beneficiary involvement and consultation in its evaluations. This is not straightforward, as many of the evaluations we do cover more than one project, often a whole sector across regions, but new technologies such as mobile phones are providing exciting opportunities for reaching more beneficiaries than ever before at an affordable price.

It is very encouraging to see evaluation departments like Norad’s going for the gold standard of impact measurement. For too long now, development cooperation and humanitarian policies have been shaped by perception studies. Taxpayers and the electorate are increasingly sceptical about whether their money actually provides value for money and makes a difference for the people we say we are helping. They want to see real evidence (not just perceptions) that what we do works. In order to embark on an evidence-based programming agenda, as Marie suggests, we need to improve our monitoring and evaluation systems. But how? I have been associated with some recent work on impact in the humanitarian sector, and on that basis I would suggest Norad invest in the following areas:
  • a well-defined theory of change;
  • good formative research to understand the context and background of the initiative;
  • explicit or implicit counterfactuals that help measure what would have happened in the absence of the intervention;
  • qualitative and quantitative baseline and endline data;
  • a well-defined set of beneficiaries and outcome variables;
  • identification methods that use these data to quantifiably measure changes in outcomes that may have occurred due to the intervention;
  • the ability to use the evidence in other situations and contexts.
