

Regular readers will recognize this piece as part of a series of blogs discussing the challenges that evaluation must meet in the near future if it is to avoid becoming redundant. For those joining the series now, please look back at our first two Rethinking Evaluation posts – Have we had enough of R/E/E/I/S? and Is Relevance Still Relevant? – and join the debate by commenting below. We are looking for your ideas, feedback, and debate.

Development practitioners have, for some time, argued that they are held accountable to objectives set several years earlier, in a context that may have changed dramatically since. We evaluators typically offer at least two arguments in response. The problem might arise from poorly defined objectives at the outset, which did not allow the flexibility to adjust tactics while continuing to pursue a higher (and still valid) objective. Or, in the absence of redefined objectives, it is not clear when, or what kind of, course-corrections were actually introduced that would provide a new basis for evaluation. In addition, rigid bureaucratic systems often create disincentives to revising objectives, and misunderstandings persist about how changed objectives are reflected in evaluations.

But, even if we resolved these problems, the pantheon of evaluation criteria – relevance, effectiveness, efficiency, impact, and sustainability – does not address the question of whether timely and responsive course-corrections were made when needed. In today’s world – with a “new normal” of rapidly changing contexts, be it due to political economy, instability and involuntary migration, or climate change – this might seem surprising. But, 15 years ago development contexts seemed more stable, and the pace at which they changed was (or appeared to be) much slower than today. Hence, the leaders in evaluation did not think, at the time, about the need for assessing agility and responsiveness.

This gap has been a larger issue in the humanitarian world. Rapidly evolving emergencies demand timely responses and challenge responders to remain agile as circumstances change. In these situations, stakeholders – from managers who must make quick decisions to donors who need to prioritize scarce resources – would benefit greatly from evaluative evidence that answers questions about the timeliness and appropriateness of course-corrections.

This demand, however, is poorly recognized and hence rarely satisfied. Evaluators could address it by adapting the questions and tools of the craft. Questions that could enter the evaluator's repertoire include:

  1. Was the need for change anticipated at project design? Clearly, this is not possible for sudden-onset disasters like earthquakes. But in other cases, an evaluation should be able to determine whether the potential need for future changes was recognized and built into adaptive management and corresponding monitoring systems.
  2. What drove the adaptation process? Here, an evaluation should seek to understand whether development partners proactively monitored relevant indicators and situational information and how that information was used in deciding on course-corrections.
  3. Was adaptation timely? Establishing timelines of events and tracing when course-corrections were undertaken will be essential to determine whether solutions were sought proactively or were forced by circumstances.
  4. And what would have happened if….? This is a classic question of establishing counterfactuals, but in this case one that needs to determine whether outcomes were better or worse because course-corrections were made or failed to be made.

These are tough challenges to grapple with in evaluation, in particular because many of the details, processes, and conversations that led to course-corrections are not documented.

Nonetheless, as agility and responsiveness are important determinants of success or failure, evaluation needs to adopt a specific focus on them in order to provide feedback, be it by giving credit for responsiveness and agility when due or by identifying opportunities to improve when needed. This alone, I believe, will incentivize debates and actions within institutions to anticipate the need for timely and responsive adaptation.

Will that be enough to overcome inertia where it exists? Maybe not, but it is a contribution that evaluation can make.

Read other #Whatworks posts in this series, Rethinking Evaluation:

Have we had enough of R/E/E/I/S?, Is Relevance Still Relevant?, and, following this post in the series, Efficiency, Efficiency, Efficiency