Intangible value and indirect costs are intertwined in an interdependent relationship.

Over the last six months we had a great run with our blog series on the Value-for-Money of Evaluation. Thanks to the many of you who viewed, shared, and commented on the blogs. The participation was truly extraordinary.

Now, here comes the challenge: how do we estimate, or even calculate, the value for money (VfM) of any given evaluation?

In previous blogs, I mapped out the points at which value and costs are incurred. I suggested a number of ways to increase value and reduce the costs of an evaluation. These suggestions are valuable in their own right, as they improve evaluation practice. But, they do not yet spell out how value or cost can be estimated.

Many of the costs can be calculated, for instance the direct cost of an evaluation, or the expense associated with outreach activities. But, there are indirect costs that often go unaccounted for. They are noticed in particular when evaluations uncover controversial issues and create the need for difficult discussions. These meetings are often experienced as an additional cost by stakeholders (those whose programs are being evaluated as much as the evaluators themselves). Other costs, often hard to calculate, are those of reputational risk, feared or actual, that can arise if there is no evaluation at all, or if an evaluation is inaccurate.

The value, however, is harder to calculate. An evaluation that is not timely is a missed opportunity to make better informed choices. But, estimating the value would have to compare the choices made with and without evaluation evidence, and assume the evaluation would have influenced decision-making.

In addition, choices along the life-cycle of an evaluation can improve the VfM equation at some points but worsen it at others. The most obvious case is when cost-savings lead to evaluation findings that are not robust. But, just spending more money is not the answer; wise choices about evaluation design (objectives, scope, and questions) are.

Ultimately, the value of evaluation needs to be estimated through the changes that occur as a result of it.

If, for instance, a project normally generates a return of “100” but, as a result of an evaluation, introduces changes that increase its return to “120,” the value of the evaluation would be “20,” assuming all changes introduced in the project were motivated by the evaluation.
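The arithmetic above can be written down as a minimal sketch. This is purely illustrative: the function name, the attribution share, and the cost deduction are my own additions, not a method proposed in the text, and real figures would of course be far harder to obtain (as the points below explain).

```python
def evaluation_value(return_without, return_with,
                     attribution_share=1.0, evaluation_cost=0.0):
    """Net value attributed to an evaluation.

    return_without:    the project's return absent the evaluation
    return_with:       the return after evaluation-motivated changes
    attribution_share: fraction of the change credited to the evaluation
                       (1.0 reproduces the simple example in the text)
    evaluation_cost:   direct cost of conducting the evaluation
    """
    gross_value = (return_with - return_without) * attribution_share
    return gross_value - evaluation_cost

# The example from the text: return rises from 100 to 120, all changes
# motivated by the evaluation, and cost ignored.
print(evaluation_value(100, 120))          # 20.0
# A more cautious view: credit half the change, subtract a cost of 5.
print(evaluation_value(100, 120, 0.5, 5))  # 5.0
```

The `attribution_share` parameter is one (hypothetical) way to express the attribution problem discussed below: rather than crediting the full change to the evaluation, only a justifiable fraction is counted.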

Typically, however, the value increase of a project is neither calculated nor attributed to an evaluation as easily as in this example for a number of reasons, including the following:

  1. Quantification. Economic and financial returns on an investment include some things that are easily quantifiable—for instance, reduced vehicle operating costs as a result of an improved road surface—but others are less so. Establishing a link from an evaluation finding, its recommendation, and the implementation thereof to a financial return is even harder for larger programs or sector strategies: the returns might be higher, especially when systemic problems get resolved, but they are harder to calculate.
     
  2. Attribution. Evaluations are similar to other knowledge work: they stimulate discussion and help rethink approaches. But, many factors play a role when it comes to making course-corrections or decisions about new policies and strategies. Suggesting that all outcomes of these changes are attributable to evaluation would not be justifiable.
     
  3. Time Lag. The effects of an evaluation can often be seen only after years and depend on how soon recommendations get implemented, how quickly they result in behavior change, and how well that translates into changes in policies, strategies, or projects.

None of these points should stop us from attempting to assess the influence, impact, and value-for-money of our evaluation work. Please share your examples if you have attempted this.

 

Comments

Submitted by Oscar A. Garcia on Wed, 07/20/2016 - 03:20


Excellent question, Caroline, and many thanks for raising it. To estimate the value for money of any given evaluation you will need to pose a counterfactual question: what would have happened to the programme or development intervention without conducting the evaluation? If the answer is nothing, then the value for money of that particular evaluation is less than its direct cost. However, if some changes took place, then this is a different story and your three points are highly relevant. Let me add that the value of an evaluation does not reside exclusively in the changes that took place in the specific programme under evaluation but also in similar types of programmes, thus reinforcing the contribution that evaluation makes to learning and knowledge management.

Oscar, thanks for your contribution. Yes, the counterfactual to the evaluation would be necessary -- so to say, an ex-ex-post evaluation that looks at how programs or policies have changed since the evaluation. The complicating matter is that there are many other drivers of change in addition to an evaluation, while at the same time evaluation can make much deeper systemic changes. Looking forward to discussing this further.

Submitted by Tauhidur Rahman on Sun, 07/24/2016 - 14:45


An excellent summary. Given the complexities of quantifying the benefits and costs of evaluation, I think the discussion should be oriented towards lower-bound estimates of the value.
