Frequent readers know that I’m obsessed with making evaluation as effective as it can be. But as passionate as I am about evaluation and the learning it allows, its usefulness lies in influencing change, hopefully for the better.

As promised in my blog When Rating Performance, Start with Yourself, we at IEG worked hard to develop objectives and metrics that tell us whether we are achieving what we want. Our new work program, approved in June, is now online for everyone to see.

Our metrics were built on a two-tier system: 

  • intermediate outcomes, over which we have greater control, and
  • medium-term outcomes, which describe how we affect the World Bank Group.

The first is about our ability to generate meaningful evidence from evaluations and, through that, a greater understanding of what works. The second is about whether the World Bank Group uses this information to improve its services for clients and, in turn, enhance its development effectiveness.

The intermediate outcomes include indicators such as feedback from clients and independent assessments of the quality of our evaluations (our meta-evaluation panel, described in Who Evaluates the Evaluators?, is in the process of testing quality criteria). Other indicators are the strategic choice of our evaluations, their timeliness, and our efficiency. In the medium term, we hope to see that policy and operational choices are better informed by evidence from independent evaluation, and that this leads to better outcomes for clients.

Some of these might work better than others, and we will revise them as we learn more.

But there is one indicator that we want to expose ourselves to that is particularly interesting and tricky: the value-for-money of evaluation. The concept is easier when we think of goods and services we buy: is this car worth the sticker price; is that meal worth the money I spent? Yet even these simple examples rest on an intangible metric, our taste, which is the yardstick against which we measure “worth.”

This becomes more difficult when it comes to knowledge. The payoff of knowledge might not be obvious at the time it is generated. The results of better knowledge might take time to materialize, and people might internalize knowledge so thoroughly that they no longer remember where it came from. Yet for some knowledge products the equation is clearer: each time we buy a book, a report, or a software application, we acquire knowledge with a clear value-for-money proposition. In today’s Internet age, that proposition is challenged by free applications, open data, and reports that put far more knowledge in people’s hands without charging them for it.

For evaluation the challenge becomes even greater. Many people still have negative associations with evaluation: a bad scorecard, an embarrassing assessment, or worse, fear of losing a project, a job, or a funder! In many institutions where I have worked, control over the evaluation budget was one way to ensure independence, or to curtail it. So, could someone argue that an evaluation wasn’t value for money when it actually contains some inconvenient truths? Or, how about those evaluations that shed light on facts that are known but that no one dares talk about: how does one value the transparency such an evaluation brings about? And then there are evaluations that generate genuinely new insights, but are they worth the price?

If people were free to put a price tag on evaluation, how many would say it is too expensive? What would be their willingness to pay?

To determine value-for-money, we need the cost (which we as evaluators know, at least for producing the evaluation) and the value. How can we best assess the value when, just as with other knowledge products, it is hard to determine whether policy or operational choices would have been made differently had the evaluation not been available? And how does the value relate to the cost?

We don’t have the answers. Do any of you?

IEG’s What Works blog will return on Tuesday, September 2.

Comments

Submitted by Tessie Catsambas on Tue, 07/29/2014 - 04:09

It is good to see the transparency put forth by IEG, even in making public your theory of change and the indicators that will measure your impact, for all of us to see and discuss. It would be so interesting to do an ROI or cost-benefit analysis of evaluation with a sample of IEG evaluations. Some things can be quantified, but "value" would include the elements that cannot be quantified. Maybe we can ask for "value perceptions" from evaluation users. We might also think about the cost of not doing a (good quality) evaluation, with a probability analysis of the different possible outcomes that might result from actions based on false conclusions. (I am picturing that big decision tree...) Hopefully, Caroline, you can commission such a study for World Bank Group evaluations, so we can gain some insight into the value-for-money issue for evaluations. USAID is now competing a fascinating contract on evaluation utilization, where they want to explore how their evaluations are (or are not) used, and the factors that influence it. Maybe there is room for a value analysis for them as well! Thank you for making us think about the possibilities for more effective evaluations.
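
To make the decision-tree idea above concrete, here is a minimal sketch in Python. Every probability and dollar figure is a hypothetical placeholder rather than IEG data; the point is only the mechanics of comparing the expected loss of acting on possibly false conclusions with the expected loss once an evaluation (and its cost) enters the tree.

```python
# A toy decision tree: compare the expected loss of acting on possibly false
# conclusions with the expected loss once an evaluation (and its cost) is added.
# Every probability and dollar figure below is a hypothetical placeholder.

def expected_value(branches):
    """Expected value of a chance node: sum of probability * payoff."""
    assert abs(sum(p for p, _ in branches) - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(p * payoff for p, payoff in branches)

# Without an evaluation: some chance the program rests on false conclusions.
no_eval = expected_value([
    (0.70, 0),            # conclusions happen to be sound: no loss
    (0.30, -10_000_000),  # false conclusions lead to a failed scale-up
])

# With a $500k evaluation: most design flaws are caught before they do damage.
with_eval = expected_value([
    (0.95, -500_000),               # flaws caught: only the evaluation's cost
    (0.05, -500_000 - 10_000_000),  # a flaw slips through despite the evaluation
])

print(f"Expected loss without evaluation: ${-no_eval:,.0f}")    # $3,000,000
print(f"Expected loss with evaluation:    ${-with_eval:,.0f}")  # $1,000,000
print(f"Expected value of evaluating:     ${with_eval - no_eval:,.0f}")  # $2,000,000
```

With these made-up numbers the evaluation is "worth" about $2 million in avoided expected losses; the hard part, as the post itself says, is putting defensible numbers on the branches.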

Submitted by Caroline Heider on Wed, 07/30/2014 - 04:29

In reply to Tessie Catsambas

Many thanks, Tessie, for your ideas. We'll collect a number of them and then decide how to take things forward.

Submitted by Elijah Lim on Tue, 07/29/2014 - 06:00

The cost, or the fee, is in direct proportion to the value of the evaluation in the eye of the one who holds the purse strings. If evaluations help curtail greed such that genuine development can take place in a particular region, then they have more than paid for themselves. It all depends on how the buyer sees it.

Submitted by Caroline Heider on Wed, 07/30/2014 - 05:16

In reply to Elijah Lim

You raise an interesting question that reiterates the importance of independence: if the paymaster for evaluation is also the manager of the program, s/he might want to see certain evaluation results, whether supported by evidence or not. Your second point, that evaluation should help maximize development outcomes, is well taken: that is why we do our work, but the value thereof might not be understood or accepted if it runs up against vested interests.

Submitted by Rakesh Mohan on Wed, 07/30/2014 - 04:19

Enjoyed your post, Caroline. As you write, value-for-money issues are challenging -- difficult to assess and even more difficult to communicate to policymakers, taxpayers, funders, stakeholders, and the public. Often value-for-money is a subjective assessment, heavily influenced by the socio-political culture of the people and entities involved. In public policy environments, it goes beyond the funder or sponsor of the evaluation. The public, the taxpayer, also gets to judge the value of the evaluation. It is interesting that your post came out on the same day as my guest blog post published by the Evaluation Capacity Development Group (ECDG), "Wish I had done this 11 years ago" (http://www.ecdg.net/2014/07/28/wish-i-had-done-this-11-years-ago/). Both posts are very similar in nature.

Submitted by Caroline Heider on Wed, 07/30/2014 - 21:41

In reply to Rakesh Mohan

Rakesh, many thanks for your comment and for sharing your own blog. Great posting, and I agree with you about the importance of communicating in addition to all the other important work we need to do.

Submitted by Seetharam on Thu, 07/31/2014 - 08:20

I have learnt that what is given free is either useless or priceless. The discussion on value for money for evaluation products is related to the ethos of the organization. Current statistics reveal that evaluation reports are not among the top downloaded reports of these institutions. Evaluation report findings are cited in various internal project processing documents, but often for reasons such as fulfilling a reporting criterion or supporting the approval of a proposed project, rather than acknowledging the value of a lesson learned from the evaluation report. In light of this, one could ask what the "value for money" indicator for evaluation reports should be. I do not have a definite answer yet. One suggestion would be for IEG to commission an independent review of a set of past evaluation reports to see how the parent institution and relevant partners, both governments and development partners, have used the lessons, and whether it would be possible to directly attribute improvements in development results to the evaluation reports. I suppose yes -- to what degree is the question.

Submitted by Caroline Heider on Wed, 07/30/2014 - 21:47

In reply to Seetharam

Seetharam, thanks for the suggestion and the thoughtful email. You are right about the challenges of increasing the use of evaluations, though at times it is more about recognizing that knowledge came from an evaluation than about the use as such. We have done follow-up studies on the effectiveness of some of our evaluations, but as you would know, it sometimes takes several years before an evaluation recommendation is fully internalized. The value-for-money question, though, is even trickier: it's not just whether the evaluation has made a difference, but whether the cost to get that done was worth it, or whether other (cheaper) ways could have led to the same result. I'm hoping this blog discussion will generate ideas that we can take further.

Submitted by Tessie Catsambas on Sun, 08/03/2014 - 05:22

In reply to Seetharam

Seetharam, your post raises an interesting question for me. When thinking about the value-for-money proposition for a specific evaluation, do we need to make different assessments for each stakeholder group, and an additional one for society? If the purpose of an evaluation is symbolic--i.e., approving the next contract--should we evaluate it against this goal, or are there some more "standard" goals that we should use consistently across all evaluations regardless of their stated goals? (Standard goals might be: learning, performance improvement, efficiency, etc.)

Submitted by Charles Lor on Thu, 07/31/2014 - 05:16

Dear Caroline, It is really important that IEG holds itself to the same standard as it does the World Bank Group agencies - this is a welcome step. If we apply that standard, then the quality and timeliness of evaluations are not an outcome but merely a different dimension of the output than the pure delivery of reports. The outcome would be a change in behavior or practice on the part of the stakeholders whose accountability and learning needs IEG seeks to respond to. Hopefully, policy and practice change should not be a 4-6 year "medium-term" outcome. In private sector development, for example, programs and markets will have moved on to a new agenda if we wait 4-6 years after an evaluation of a completed program or project is delivered - meaning the issue was pressing a decade ago. On value for money, this is then not an issue of opening the door to curtailing budgets for fear of inconvenient truths - I see little risk in that. It is about maintaining a disciplined evaluation agenda. Can we spend $300,000 to $500,000 on a report that answers a question nobody is asking, that will come too late to influence an agenda and a market that will have moved on to another phase, or that uses methodologies and samples that are disproportionate compared to what we spend in the field? It will force us to ask: Who is the client? What do we need to know? What is good enough?

Submitted by Caroline Heider on Fri, 08/01/2014 - 05:44

Charles, great comments and questions. We are addressing the last batch of your questions by making strategic choices about what we evaluate, continuously working on processes and methods, and enhancing knowledge sharing from evaluation. We do this at the level of IEG's systems as well as for each evaluation. And, as you say, we are hoping to see some effects take hold earlier; but as the recommendations from evaluations first need to be accepted and then implemented by others, and only then produce effects on the ground, we have given this a medium-term timeline. That doesn't mean we are "waiting around" but working towards that objective over the whole period.

Submitted by Bojan Radej on Mon, 08/04/2014 - 01:08

Thank you for another thought-provoking challenge. In my view, this note raises not one but three questions, asked at three independent levels, so an answer probably requires first saying at which level the question is asked and an answer expected: What is the effective Value, what is the necessary Cost, and what is a fair Price of evaluation? One cannot consistently measure apples against oranges, or ask what is "value for money", at least in evaluation (if one could calculate value for money, who would need evaluation?). I think we usually aim to achieve: Value ≥ Price and Price ≥ Cost. The value of evaluation is usually assessed against its effectiveness, as measured by its contribution to improving the impacts of the decision or intervention within a previously set frame of conditions. The discussion question as specified may be problematic for the evaluator: "can we do things differently?" is not a sufficiently specific evaluation question. In the constructivist approach, an evaluation's response is appropriate, and actually even the best achievable in the given context, only when all stakeholders are involved (not only those prescribed by the client), when the long-term view is considered in addition to the short term, and when indirect impacts are taken into account alongside directly measurable ones. To return to the question as posed, I think that the answer (enhancing the value of evaluation for money) depends on:
- how the evaluation is commissioned by the client (definition of the evaluation question: stakeholders, time and spatial frame; and the price of evaluation); if it is so narrow that it rules out alternatives from the start, then the client's question about the value-for-money of evaluation seems not entirely legitimate; and
- how the evaluation is accomplished by the evaluator (cost and value): is the selected approach appropriate to answer questions about alternatives; is it sufficiently inclusive to be cohesive; is it balanced among its contradictory aspects; and are incommensurable evaluation results effectively synthesized to present alternatives at the strategic level?
The first mission of evaluation (according to the ethical standards of your national evaluation society in the USA) is not optimization of efficiency (value for money) but enhancing public benefits (the value of change).

Submitted by Rick Scobey on Tue, 08/05/2014 - 21:45

Bojan, many thanks for the very thoughtful comment. Caroline is taking a well-deserved break right now, so let me respond as her Deputy. I very much like how you have unpacked the issue into three different dimensions: effective value, necessary cost, and fair price. And I think you raise a very valuable point: upfront clarity and transparency about the overall approach of an evaluation (i.e., what is the purpose, who is the audience, what are the evaluation questions and analytical methods, what is the theory of change, what are the cost and timeline, etc.), together with upfront review of and agreement on that approach by the different parties involved, will promote the cost-effectiveness of evaluation work. This is why IEG has been investing heavily in quality standards for Approach Papers. We have commissioned meta-evaluations of our completed reports to assess how well we have complied with our quality standards -- and to assess the overall utility, validity, feasibility, and propriety of our work -- which will help us better answer the three questions that you pose!

Submitted by Zachariah Falc… on Thu, 08/07/2014 - 06:20

This is a very pertinent topic. I get the question "Is our evaluation design sufficient?" all the time, and my response is starting to feel like a mantra: "Only you can decide what is sufficient; any design can be made more rigorous, but that doesn't mean it's worth the investment." Given finite resources, we all face trade-offs. Assuming evaluation services are subject to diminishing marginal returns, as is much other consumption, at some point the marginal utility of a dollar of evaluation services is less than the marginal utility of a dollar of intervention programming. Thinking about this in theoretical terms, it would be fascinating to know how many organizations that commission evaluations consume at their point of maximum utility: in this case, where their organizational budget line is tangent to their indifference curve for evaluation services versus all other activities. As evaluators, we help ensure that programming resources are used efficiently; it's a credit to the field that IEG is holding evaluation resources to the same standard!
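
For readers who want the textbook condition behind this point spelled out, the standard consumer-optimum statement (general microeconomics, nothing IEG-specific) is that utility is maximized where the budget line is tangent to the indifference curve:

```latex
\frac{MU_E}{MU_X} = \frac{p_E}{p_X}
\quad\Longleftrightarrow\quad
\frac{MU_E}{p_E} = \frac{MU_X}{p_X}
```

Here E stands for evaluation services, X for all other activities, and p_E, p_X for their prices: at the optimum, the last dollar spent on evaluation yields exactly as much utility as the last dollar spent on anything else. Beyond that point, an additional evaluation dollar buys less than it would elsewhere in the budget.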

Submitted by Rick Scobey on Thu, 08/07/2014 - 01:16

Great comment, Zachariah -- I always like it when people cross-fertilize with concepts from other disciplines! You've asked an important and difficult question, which we grapple with all the time at IEG. We have many different evaluation products and activities, and always have to ask the question: at the margin, where will a dollar of evaluation spending have the greatest impact? As Caroline points out, it's hard to quantify the answer. Here's a sign that the marginal returns to M&E are indeed high: almost all of the completed projects that we review that have highly rated M&E performance have outcomes that are rated as "satisfactory" or better. Almost no projects with M&E rated as negligible have outcomes rated as satisfactory. Of course there are many factors behind that correlation, but it suggests that investments in, and use of, M&E have huge payoffs. It is our hope and goal that IEG evaluations, at the margin, continue to pay for themselves in increased development effectiveness of WBG lending. But we continue to seek ways to increase our own efficiency and effectiveness.

Submitted by Rick Davies on Fri, 08/08/2014 - 01:59

Re "But there is one indicator that we want to expose ourselves to that’s particularly interesting and tricky: value-for-money of evaluation" In my view "value for money" is not an indicator, its a ratio or relationships between indicators - of cost and value achieved, with its own built in performance criteria i.e. value should be in proportion to cost. The more cost, the more value is expected. If so, then the learning opportunities will be in the outliers, where value is unexpected low relative to cost, and vice versa. I am surprised at the focus on the problematic nature of intangible values, when you already have said you have identified specific intermediate outcomes that you are interested in. Would it not be best to start with the analysis of VfM relationships there and then later worry about the more intangible aspects of value? Re "If people were free to put a price-tag on evaluation, how many would say it’s too expensive, what would be their willingness to pay?" It will surely depend on the comparator they are using explicitly or explicitly. Value for money, to me, is an intrinsically relative judgement. The same view seems to be built into our day to day expectations that the more cost, the more value that is expected. Contra IEG ordinary people seem to manage to make value for money judgements everyday. And in the process they manage to compare apples and oranges, about which here is a good cartoon from the Melbourne Age...http://mandenews.blogspot.co.uk/

Submitted by Rick Scobey on Sun, 08/10/2014 - 22:38

In reply to Rick Davies

Rick -- Many thanks for weighing in, since you've done a lot of relevant work in this area. You make a very important point that VfM is an intrinsically relative judgement, and that people make these kinds of comparisons every day without an overly complicated "methodological framework." I think what makes the concept a bit more complicated when it comes to evaluation is that we operate within a complex political economy of aid effectiveness that has a wide range of stakeholders with dramatically different valuations of evaluation and therefore implicit/explicit comparators -- so a transparent and clear framework of how to assess value becomes more important. Love the cartoon -- and the message to keep things straightforward and simple!

Submitted by Geeta Shivdasani on Sun, 08/10/2014 - 05:21

VfM -- an excellent and very valid concept! How does one measure it? For starters, I am curious how one defines *value*. As a package of complex benefits over a period of time, relative to its cost?

Submitted by Dirk Petersen on Mon, 08/11/2014 - 04:22

Evaluation creates value the moment it pulls a person, project, or unit toward improving their methods. The need for evaluation is without question: with over 60 years of projects, there is ample material to learn from. And the tenor of the article is great, because it suggests a positive approach to evaluation: evaluation as a partner in learning. Where evaluation becomes problematic is if the evaluation organization sees its job as 'doing evaluations.' Its 'job' is to drive change by 'pulling' it. Pulling, rather than pushing: the work needs to be compelling, interesting, and insightful enough that it convinces at the TTL level, even if no mandate or enforced checklist exists. TTLs, and the ability of an evaluation to convince them to change behavior, are where the measurement should aim, despite Caroline's good point that TTLs may not even know where they got the idea for changing their behavior. This approach would argue for less focus on enforcement and top-level policy change, and more on communication and bottom-up culture change. It would argue for bringing the best, most experienced TTLs together to share learning with each other and the next generation, and for setting up the incentive structure to make a posting in IEG highly desirable as a career step or an end-of-career destination.

Submitted by Rick Scobey on Sun, 08/10/2014 - 22:38

In reply to Dirk Petersen

Extremely thoughtful -- thanks. We very much share your focus on the importance of the evaluation process itself as an opportunity for collaborative learning and knowledge sharing with various stakeholders (including fellow evaluators), as a way to drive behavioral change and incentivize the use of evaluation findings and lessons. In fact, we are focusing right now on how we can enhance the "user experience" of evaluations through the "Design Thinking" approach to innovation -- more on this in a future blog. And your point on "bottom-up culture change" is spot on and very much on our radar -- our current work program is focusing on enhancing the effectiveness of IEG communication, scaling up knowledge sharing and the preparation of learning products, and building a stronger community of practice among all staff in the World Bank Group working on evaluation and results monitoring.

Submitted by Ian Goldman on Thu, 08/14/2014 - 23:31

This is an interesting issue. We have 39 evaluations underway covering around $5 billion of government expenditure over 3 years. If we make a 10% difference (very conservative), that is $500 million. And the cost of the evaluation system is around $5 million per year, i.e., $15 million over 3 years. So the value for money of evaluation should be huge. Obviously the challenge is doing the benefit calculation. We will be doing this in a year or so's time, which should be interesting.
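
Written out, the back-of-envelope calculation above (using the commenter's own figures) is:

```latex
\text{benefit} \approx 10\% \times \$5\,\text{billion} = \$500\,\text{million}, \qquad
\text{cost} \approx 3 \times \$5\,\text{million} = \$15\,\text{million}, \qquad
\frac{\text{benefit}}{\text{cost}} \approx \frac{500}{15} \approx 33
```

So even if the true difference made were a tenth of the conservative 10% guess, the system would still cover its cost roughly three times over; as the comment notes, the hard part is substantiating the benefit side.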

Submitted by Rick Scobey on Sun, 08/17/2014 - 22:46

In reply to Ian Goldman

Great way to crystallize the issue, Ian. Your earlier blog (https://ieg.worldbankgroup.org/blog/in-south-africa-using-evaluation-improve-government-effectiveness) highlighted the pioneering work underway in South Africa to use monitoring and evaluation to transform public sector service delivery -- we look forward to seeing the next steps in the coming year to quantify the benefits.

Submitted by Daniel Ticehurst on Mon, 09/08/2014 - 05:29

In reply to Ian Goldman

Dear Ian, Hi - could you clarify: a 10% difference in what? And how, and on what basis, would you assess it?

Submitted by Daniel Ticehurst on Thu, 09/04/2014 - 01:14

Good debate. A partial answer to the question of what makes for, or defines, the 'value' of evaluation lies, or should lie, in its object: who is it for and how will they benefit? Too few evaluation designs articulate the reason for evaluation in this way. Rather, many stop at simply generating questions, the answers to which are assumed to be useful and of value to often unspecified 'beneficiaries'. Perhaps, and I tread carefully here, this is one difference between research, which generates answers to interesting questions relating to indicators in some logframe, and evaluation, which generates answers to questions that reflect decision uncertainties among specific, intended users. I agree with Rick that V4M is not an indicator. (DFID appears to think otherwise, judging by the resources it has spent on so-called V4M experts producing contrived frameworks and indicators that run in parallel to the logical or results framework.) Moving on, assessing the V4M of an evaluation boils down to a value judgement, balanced across the factors that make it up (economy, efficiency, and effectiveness), and it depends on which V4M questions you want answered; these typically involve benchmarking or comparing like-for-like evaluations.

Submitted by Caroline Heider on Thu, 09/04/2014 - 07:14

In reply to Daniel Ticehurst

Daniel, thanks for homing in on the point of "value for whom" and on how evaluation questions need to be framed so that they address specific concerns and generate the most value. These are important points, also made by others. While evaluation fulfills the purposes of accountability and learning, hence the focus on whether objectives were attained, etc., there is room to focus evaluation questions more precisely on learning needs. In practice, we share draft approach papers (the evaluation design) with stakeholders to get their feedback. In my own practice, I have observed several things: some stakeholders knew they wanted an evaluation but had a hard time articulating what they wanted to get out of it; some had questions that had little to do with the past interventions and were more hoping to get an appraisal of new ideas (so not really evaluation questions); and finally there were others who had a clear idea about the problems they wanted the evaluation to address. The latter were great clients for evaluation.

Submitted by Daniel on Sat, 09/06/2014 - 00:33

Caroline, thanks -- useful points on why evaluations are commissioned and how differentiated the clients of evaluation are. On the issue of value, I always wonder why those who recruit and pay evaluators seldom consider how well they perform in generating an effect. Many paid to do evaluations get by on academic criteria. I guess that is why many academics successfully masquerade as evaluators.
