
Evaluation could complement the estimates made at design with data on actual costs and benefits.
The hidden costs of social and environmental impacts need to be factored into the cost of interventions.
Evaluation methods for efficiency will need to become more sophisticated to deal with waste.

Efficiency is often defined in terms of “measuring the outputs – qualitative and quantitative – in relation to the inputs. It is an economic term which signifies that the aid uses the least costly resources possible in order to achieve the desired results. This generally requires comparing alternative approaches to achieving the same outputs, to see whether the most efficient process has been adopted.” (OECD/DAC key terms for evaluation)

Way back when I was evaluating development projects at the Asian Development Bank, we used a definition that focused on the economic efficiency of projects, a practice shared across the multilateral development banks. It is implicit in the definition above (note the reference to efficiency as an economic term and to least-cost approaches). Efficiency is calculated as an economic rate of return, the discount rate at which the “net present value” of the investment falls to zero, and is judged against a standard rate that represents the return on alternative investment opportunities. This approach goes beyond the narrow definition of efficiency that compares input-output relationships, which is perhaps more often used in grant-funded aid projects.
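To make that calculation concrete, here is a minimal sketch in Python: it finds the rate at which the net present value of a stream of net benefits falls to zero and compares it with a hurdle rate. The cash flows and the 12 percent hurdle rate are invented for illustration; this is not the appraisal methodology of ADB, the World Bank, or any other institution.

```python
# Minimal sketch: economic rate of return (ERR) vs. a hurdle rate.
# All figures are hypothetical; this is an illustration, not the appraisal
# methodology of any development bank.

def npv(rate, cash_flows):
    """Net present value of yearly net benefits, year 0 first."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def err(cash_flows, lo=0.0, hi=1.0, tol=1e-6):
    """Discount rate at which NPV falls to zero, found by bisection
    (assumes NPV is positive at `lo` and negative at `hi`)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

flows = [-100, 30, 30, 30, 30, 30]   # invest 100 now, net benefits of 30/year
hurdle = 0.12                        # assumed return on alternative investments
rate = err(flows)
print(f"ERR = {rate:.1%}; exceeds the {hurdle:.0%} hurdle: {rate > hurdle}")
```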

But, as a 2010 IEG evaluation pointed out, the practice of cost-benefit analysis has been in decline at the World Bank for several decades: the share of projects that included calculations of economic rates of return dropped from 70% in the 1970s to 25% in the 1990s. The drop was partly explained by a growing number of projects in sectors for which this kind of cost-benefit analysis is not feasible. Even when the analyses were undertaken, their results were not used in deciding whether to fund a project, undermining the rationale for doing the calculations in the first place. Another study, commissioned by the German Federal Ministry for Economic Cooperation and Development (BMZ), compared methods for assessing efficiency at both appraisal and evaluation. It concluded that many of these methods were little known and little used.

One might think that evaluating efficiency does not matter, in spite of resource scarcity and the ever-increasing need for improved cost-effectiveness. If anything, however, we need to get better at assessing efficiency, for a number of reasons.

The systems approach that complexity requires us to use has the potential for comparing different intervention options, and combinations of them. If we could model a development challenge the way the US Army mapped the conflict in Afghanistan (see the TED Talk by Eric Berlow), development practitioners could identify not only the options that would generate the highest impact, but also which options are more or less costly, and so determine the most cost-effective package of interventions. Evaluation could assess the quality of those assessments and whether they were used in decision-making, as well as complement the estimates made at design with data on actual costs and benefits at the time of evaluation.
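As a toy sketch of what such a comparison could look like: the intervention names, costs, impact scores, and budget below are entirely made up, and a real systems model would also have to capture interactions between interventions, which this simple additive example ignores.

```python
# Toy comparison of intervention packages by estimated cost and impact.
# Names, numbers and the budget are invented; interactions between
# interventions (which a real systems model would capture) are ignored.
from itertools import combinations

options = {                      # option: (estimated cost, estimated impact)
    "irrigation":  (40, 55),
    "rural_roads": (60, 70),
    "extension":   (25, 35),
    "microcredit": (30, 30),
}
budget = 100

best = None
for r in range(1, len(options) + 1):
    for combo in combinations(options, r):
        cost = sum(options[o][0] for o in combo)
        impact = sum(options[o][1] for o in combo)
        if cost <= budget and (best is None or impact > best[2]):
            best = (combo, cost, impact)

print("Most impactful package within budget:", best)
```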

Less futuristically, there is a great need to factor the hidden costs of social and environmental impacts into the cost of interventions. Today, the cost of pollution is more often factored into investments, especially when mitigation measures have to be taken or technology has to be adapted to clean up pollutants rather than releasing them unfiltered into the atmosphere. But more will need to be done in evaluating the efficiency of these investments against alternative choices.

Finally, evaluation methods for efficiency will need to become more sophisticated in dealing with waste. Losses, such as those in electricity or water distribution systems, are already accounted for in evaluations of economic efficiency. However, as the SDGs call for a change in consumption patterns, methods will need to develop a better understanding of the consumption patterns an intervention implicitly (and, hopefully, increasingly explicitly) promotes, determine when they are wasteful, and signal when incentives need to be rethought.

Is evaluation ready to rise to these challenges?  Comment below and share your opinion with us.

Read other #Whatworks posts in this series, Rethinking Evaluation:

Have we had enough of R/E/E/I/S?,  Is Relevance Still Relevant?, and Agility and Responsiveness are Key to Success

Comments

Submitted by Jindra Cekan on Tue, 02/28/2017 - 14:17

Imagine a #ReturnOnInvestment #ROI valuing the sustained impact and return on our projects' investments plus those of local communities vs the original investment. That is the way to be #ValuingVoices and #DoDevelopmentDifferently

Submitted by Klaus Zimmermann on Thu, 03/02/2017 - 10:52

Efficiency in ODA projects is often defined in terms of “measuring the outputs – qualitative and quantitative – in relation to the inputs. It is an economic term which signifies that the aid uses the least costly resources possible in order to achieve the desired results. This generally requires comparing alternative approaches to achieving the same outputs, to see whether the most efficient process has been adopted” (OECD/DAC key terms for evaluation). There is now a great need to factor into the costs of interventions the hidden costs of social and environmental impacts. The systems approach to evaluation that today's complexity requires us to use needs to have the potential for comparing different intervention options and combinations of them.

Submitted by Kenneth Watson on Thu, 03/02/2017 - 13:15

It is true that "efficiency" has been somewhat slighted in the North American tradition of evaluation, where it has been seen, in Canada particularly, as a distraction from focused attention on outcomes and impacts. This has not been true of the Australian/New Zealand/UK traditions where the "new public management" movement emphasized CEO performance contracts that are substantially composed of output targets - the idea being that the link between the target outputs and the desired outcomes has been established prior.

As well, the term "efficiency" has several meanings.

As used by economists it means that a project returns more than its opportunity cost. In this usage there is little or no difference between "efficiency" and "effectiveness".

The second and more colloquial meaning of "efficiency" is measured by the amount of outputs per unit cost (or, much the same metric, the cost per unit output). This metric is quite different from the economist's "efficiency".

The third meaning of efficiency is exemplified by the "efficiency ratio" as used by IFAD and several other major Funds. It is a "deliverology" concept: what administrative resources does it take to deliver a unit of grants or loans? (The resources in question are sometimes financial and sometimes human -- for example, the metric might be the weighted number of loans made annually per loan officer.) This meaning of "efficiency" can be very close to "productivity".
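For concreteness, a small sketch with invented numbers contrasting the second meaning (cost per unit output) with the third (a deliverology-style ratio of administrative resources to units delivered); none of the figures refer to IFAD or any real Fund.

```python
# Invented numbers contrasting two of the meanings above.

# Second meaning: cost per unit output (the colloquial "efficiency").
outputs_delivered = 500            # e.g. boreholes drilled (hypothetical)
total_cost = 2_000_000
print(f"Cost per output: {total_cost / outputs_delivered:,.0f}")

# Third meaning: a deliverology-style "efficiency ratio".
loans_made = 120
loan_officers = 8
admin_budget = 600_000
print(f"Loans per loan officer: {loans_made / loan_officers:.1f}")
print(f"Administrative cost per loan: {admin_budget / loans_made:,.0f}")
```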

Submitted by Anonymous on Thu, 03/02/2017 - 17:49

I couldn't agree more. The thing I've struggled with in the past decade is cost-benefit evaluation of child protection programs, i.e., cost per child per annum versus outcomes and impact. It's easy to measure results per output as against inputs, but what does that mean long-term for each child and each community in terms of efficiency? Is the effort borne out in the end?

Submitted by Steve Montague on Sun, 03/05/2017 - 11:42

Ha! Ken Watson, from whom I have learned much of what I know about this subject, beat me to the punch in the commentary, so I both defer to and support the points he makes above. For me this is a very important definitional problem, and we need to address it with more consistency worldwide before it completely distorts and ultimately discredits work in this area.

Submitted by Michel Laurendeau on Sun, 03/05/2017 - 16:57

Kenneth, great job at clarifying the concepts underlying the various approaches to Cost-Benefit Analysis. It shows that the solution to factoring the hidden costs of social and environmental impacts into the cost of interventions simply requires extending the analysis to outcomes/impacts. In that sense, the article should perhaps have been "Rethinking Evaluation: Effectiveness, Effectiveness, Effectiveness."

Submitted by Keri Culver on Mon, 03/06/2017 - 05:03

The way DfID thinks about and works with efficiency in evaluation, and even in program management, seems more realistic to me than the more strictly economic inputs/outputs comparisons. As I understand it (and I've worked with DfID only once), Value for Money, in creative hands, allows for thinking through what is valuable from the perspective of the intervention. VfM also brings in a conceptualization at different levels: what is efficient programmatically may easily not be efficient in terms of key outcomes. For example, keeping psychosocial counseling costs down by using trainees makes good economic sense and gives the trainees opportunities. But if the program works with victims of sexual violence in war, perhaps the value of more seasoned and specialized counselors outweighs their cost. That's just an off-the-cuff example to make the point that efficiency looks different in different contexts, and the VfM methodology seems to promote thinking this through explicitly.

Submitted by Isaac Galiwango on Thu, 03/09/2017 - 02:52

This echoes rethinking evaluation in general. Reflecting on the accountability, purpose and objectives of the project and of the evaluation would help to determine the boundaries of efficiency. True, the OECD/DAC definition is silent about outcomes and impact: simply transporting an official to a venue would be more efficient, for example, but we tend to consider facilitation because of the impact we want to achieve. Efficiency would have to take into consideration all the other evaluation criteria, to understand the relationship of each finding to the program objectives in comparison with possible alternative approaches and results/outcomes. Rather than treating it as a criterion, we could transition it into a type of evaluation; the other criteria could likewise be reviewed in the same way.

Submitted by Kelly Hewitt on Sun, 03/12/2017 - 18:44

Thank you for the article. However, I find that the discussion on greater efficiency, particularly upstream in the first instance (that is, looking at a series of options and helping to decide which is best), is indeed what is prescribed by the World Bank. Yet the prescription is not in IEG per se, but rather in the Bank's new procurement policy and the cascading effect of its value-for-money analyses. Economists and procurement professionals, unfortunately, do not always speak the same language. Hence, important particulars get lost in translation, indeed.

Submitted by Ian C Davies on Mon, 03/13/2017 - 12:49

Thanks Ken (and Steve) for the useful reminder about the differences in meaning of terminology across disciplinary perspectives; e.g., operational efficiency in economics usually corresponds to effectiveness in programme evaluation.

WRT efficiency, the Auditor General of British Columbia found it most practical and meaningful (in the eighties) to construct efficiency as a measure of waste, e.g. a programme is efficient when it minimises waste, given that for most public programmes the "factors of production" are not well known. In other words, most public programmes are "low probability technologies" (credit to Dr. James McDavid for the term). In the eighties the OAGBC pioneered VFM audit, based on simple constructions of economy, efficiency and effectiveness that were clear and well communicated, and that contributed to the development of what became known as performance audit in a host of legislative audit jurisdictions, provincially and nationally in Canada, as well as in a number of other legislative audit offices such as the UK NAO and the ECA.

WRT the question of "rethinking evaluation", I think it is more about questioning traditional development evaluation of ODA as practised by multilateral and bilateral organisations such as the WB, within a criterion-based framework initiated and maintained by the DAC of the OECD. There is a vibrant and much larger universe of evaluation, beyond that of the development industry, that is continuously evolving and flourishing, and for which "rethink, reframe, revalue, relearn, retool and engage" is an embedded and ongoing process. Cheers!

Submitted by Amparo on Mon, 03/13/2017 - 14:09

One problem that is almost always disregarded is the need to include operating costs in the economic rate of return (ERR) calculations. Let's assume that the ERR we calculate is the ERR to the government, not to the Bank. These are very different in the large majority of cases. The reason is that the Bank only finances investment costs, but the government has to (a) repay the Bank and (b) incur the operating costs associated with the investment. For example, if the Bank finances building a road, the government has to spend an annual amount maintaining it. The ERR of the investment alone will be very different from the ERR calculated on the NPV over the life of the road. Since we don't do the latter, roads often go without maintenance and the cost of rebuilding them several years later is higher. Of course, even if we did calculate the ERR including operating costs, that would not automatically mean that national budgets will appropriate maintenance funds, but that is another story. The point is, if we are going to do ERRs for all the projects that are amenable to this methodology (and I would argue that most are), we need to include operating costs.
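A rough numeric sketch of this point, with entirely invented figures: a hypothetical road costing 100, yielding benefits of 25 a year over a 20-year life, with maintenance of 8 a year that the investment-only calculation ignores.

```python
# Invented figures: a road costing 100, benefits of 25/year over 20 years,
# and maintenance of 8/year that the investment-only calculation ignores.

def irr(cash_flows, lo=0.0, hi=1.0, tol=1e-6):
    """Discount rate at which the NPV of the flows is zero (bisection)."""
    npv = lambda r: sum(cf / (1 + r) ** t for t, cf in enumerate(cash_flows))
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if npv(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

benefits = [25] * 20
investment_only = [-100] + benefits                      # ignores maintenance
with_maintenance = [-100] + [b - 8 for b in benefits]    # nets out maintenance

print(f"ERR ignoring maintenance:  {irr(investment_only):.1%}")
print(f"ERR including maintenance: {irr(with_maintenance):.1%}")
```

Including the recurrent cost stream visibly lowers the return, which is the gap described above.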

Submitted by Patrick Osodo on Mon, 03/13/2017 - 14:18

I could not agree more! At the very least, a focus on efficiency evaluation aligns with the concept of the "realm of managerial responsibility": beyond the output level, management's role becomes limited. You are only able to control what you are able to do. So plan and execute very well, considering all that could happen, for good or for worse!

Submitted by Adam McCarty on Thu, 03/16/2017 - 03:28

I have been running Mekong Economics in Hanoi for the past 20 years. My people and I "evaluate" 10+ projects around ASEAN every year. I also ran MDF Indochina for some years recently, where I designed and delivered an Impact Evaluation course. I have long argued for a return to CBA, but of course it must begin at the planning stage (as a hypothesis about net impact). Also, a rough and just-about-justified ROI estimate, arrived at after many bold assumptions, should be all we can expect. The idea that this "actual" ROI performance is then compared to hypothetically measured alternative scenarios to determine counterfactual efficiency is surely silly.

Anyway, no donor does CBA because most donor staff do not think and blindly follow OECD REEIS. To the REEIS they then add a shopping list of whatever other questions pop into their heads. I detest REEIS, because it distracts from what really matters: results = impact (over time) = I x S = need to do post-evaluations to determine real impact/results. Why are post-evaluations relatively rare? Because donors are bureaucrats and not focused on results. The incentive structures are wrong.

If you do not understand, analyse and address incentive structures then nothing changes. Donor discussions are riddled with good ideas (such as above) based on the assumption that we are dealing with an "awareness raising issue". Whilst many donor officials are stupid, many are not, but they still resist such changes because they and their organisations must respond to incentive structures: to tie aid but pretend not to; to count military donations as ODA; to pretend but not really cooperate much with other donors; to avoid any bad news; etc. They are not interested to know that there was a more efficient way to implement a project just finished, or to understand the real impact of projects through post-evaluations. Those designing and managing projects are input-obsessed glorified accountants tied up in their own "accountability" red tape.

The whole ODA value-chain contains only organisations committed to hiding bad news: donors will lose budgets; implementing INGOs will lose new jobs; consulting firms will not win new jobs; etc. Cutting across this swirling mess is an intrinsic (i.e. inevitable, despite best efforts) inability to precisely measure and value attributable results: a reality of ODA we must live with (unlike profits for businesses). We ignore such realities, and just shamefully accept the roughest of "best guesses" as precise answers: so we consultants can deliver whatever best guess you would like. What we can measure are financial costs (inputs) and (poorly attributed) key results indicators - which are about all that most key people (i.e. who control flows of funds) can understand anyway. That is why so many donors rely on childish indicator sets - not because they are stupid, but because they are enough to convince.

Academics (including DFID and WB) huff and puff about complex issues of better measurement of efficiency, or RCTs, or whatever, but in the end such technical band aids remain marginal, as they have for decades, coming or going in and out of fashion (CBA, general budget support, etc). Stop it: start mapping incentives and understanding how groups of not very bright people convince each other and hide failures, lie and cheat, and are shamelessly hypocritical in the ODA value-chain. Learn from Donald Trump!

Submitted by NDOUBA GUELEO ROMAIN on Fri, 03/17/2017 - 06:14

I think it is better also to carry out an "ex-ante evaluation", which can give us reference indicators. The "post-evaluation" can then allow us to confirm whether or not those indicators have evolved.

Submitted by Bob Williams on Sun, 03/19/2017 - 21:39

What an excellent conversation, and thanks to everyone for their thoughts. Like others, I'm skeptical about any proposal that seeks to reorientate any endeavour to a single task. However, I want to raise two things that Caroline's thesis either gets wrong or ignores. I'm not entirely sure where the information on systems theory and practice comes from, but one of the basic tenets of the area of systems I know best is that you cannot replicate and compare unless you have specific conditions that are rare in the kinds of programs and projects we mostly deal with. Furthermore, by all means use models, but forget at your peril that models are abstractions: mental constructs, from a particular point of view, of how reality might behave. At best they reflect reality from that particular point of view, but they do not represent reality.

My second comment is to remember what the purpose and situation of many of these programs are: they are experiments. In terms of the three concepts of efficiency, effectiveness and (the often forgotten) efficacy, experiments cannot be judged in terms of efficiency. Experiments are inherently inefficient when compared to something non-experimental. Which is not to say you should seek to run experiments inefficiently, or ignore efficiency (using whatever definition) in your evaluation, but it is a different kettle of fish.
