Value for money has become a much-buzzed-about concept in international development in recent years. How can we apply it to evaluation, and how would it change the way we do business if we did?
Frequent readers know that I’m obsessed with evaluation being as effective as it can be. As much as I am passionate about evaluation and the learning that it allows, its usefulness lies in influencing change, hopefully for the better.
As promised in my blog When Rating Performance, Start with Yourself, we at IEG worked hard to develop objectives and metrics that tell us whether we are achieving what we want. Our new work program, approved in June, is now online for everyone to see.
Our metrics were built on a two-tier system:
intermediate outcomes, over which we have greater control, and
medium-term outcomes, which describe how we affect the World Bank Group.
The first is about our ability to generate meaningful evidence from evaluations and through that a greater understanding of what works. The second is about whether the World Bank Group uses the information to improve its services for clients and, in turn, enhance its development effectiveness.
The intermediate outcomes include indicators such as feedback from clients and independent assessments of the quality of our evaluations (our meta-evaluation panel, see Who Evaluates the Evaluators? is in the process of testing quality criteria). Other indicators are strategic choice of our evaluations, timeliness, and efficiency. In the medium-term, we are hoping to see that policy and operational choices are better informed by evidence from independent evaluation, and that this leads to better outcomes for clients.
Some of these might work better than others, and we will revise them as we learn more.
But there is one indicator to which we want to expose ourselves that is particularly interesting and tricky: the value-for-money of evaluation. The concept is easier when we think of goods and services we buy: is this car worth the sticker price; is that meal worth the money I spent? Yet even in these examples an intangible metric is at play: our taste, the yardstick against which we measure "worth."
This becomes more difficult when it comes to knowledge. The payoff of knowledge might not be obvious at the time it is generated. The results of better knowledge can take time to materialize, and people might internalize the knowledge to the extent that they no longer remember where they got it. Yet for some knowledge products the equation is clearer: each time we buy a book, a report, or a software application, we acquire knowledge with a clear value-for-money proposition. In today's Internet age, that proposition is challenged by free applications, open data, and reports that put far more knowledge in people's hands without charging them for it.
For evaluation the challenge becomes even greater. Many people still have negative associations with evaluation: a bad scorecard, an embarrassing assessment, or worse, fear of losing a project, a job, or a funder. In many institutions where I have worked, control over the evaluation budget was one way to ensure independence, or to curtail it. So, could someone argue that an evaluation wasn't value for money simply because it contains some inconvenient truths? Or how about those evaluations that shed light on facts that are known but that no one dares talk about: how does one value the transparency such an evaluation brings? And then there are evaluations that generate genuinely new insights, but are they worth the price?
If people were free to put a price tag on evaluation, how many would say it's too expensive? What would they be willing to pay?
To determine value for money, we need the cost (which we as evaluators know, at least for producing the evaluation) and the value. How can we best assess that value when, as with other knowledge products, it is hard to determine whether policy or operational choices would have been made differently had the evaluation not been available? And how does the value relate to the cost?
We don’t have the answers. Do any of you?
IEG’s What Works blog will return on Tuesday September 2.