Evaluation must identify whether timely and responsive course-corrections were made when needed.
Emergency situations need timely responses and challenge responders to be agile.


Regular readers will recognize this piece as part of a series of blogs that discuss the challenges and changes that evaluation needs to live up to in the near future if it wants to avoid becoming redundant. For those who are joining the series now, please have a look back at our first two Rethinking Evaluation posts - Have we had enough of R/E/E/I/S?, and Is Relevance Still Relevant?  - and join the debate by commenting below. We are looking for your ideas, feedback, and debate.

Development practitioners have, for some time, argued that they are held accountable to objectives set several years earlier, in a context that may have changed dramatically since. We evaluators typically offer at least two responses. The problem may arise from poorly defined objectives at the outset that did not allow the flexibility to adjust tactics while continuing to pursue a higher (and still valid) objective. Or, in the absence of redefined objectives, it is not clear when, or what kind of, course-corrections were actually introduced that would provide a new basis for evaluation. Rigid bureaucratic systems often create disincentives to revising objectives, or misunderstandings persist about how changes to objectives are reflected in evaluations.

But, even if we resolved these problems, the pantheon of evaluation criteria – relevance, effectiveness, efficiency, impact, and sustainability – does not address the question of whether timely and responsive course-corrections were made when needed. In today’s world – with a “new normal” of rapidly changing contexts, be it due to political economy, instability and involuntary migration, or climate change – this might seem surprising. But 15 years ago, development contexts seemed more stable, and the pace at which they changed was (or appeared to be) much slower than today. Hence, at the time, the leaders in evaluation did not consider the need to assess agility and responsiveness.

This gap is felt even more acutely in the humanitarian world. Rapidly evolving emergency situations need timely responses and challenge responders to stay agile amid constantly changing conditions. In these situations, stakeholders – from managers who must make quick decisions to donors who need to prioritize scarce resources – would benefit greatly from evaluative evidence that answers questions about the timeliness and appropriateness of course-corrections.

This demand, however, is poorly recognized and hence rarely satisfied. Evaluators could address it by adapting the questions and tools of the craft. Questions that could enter the evaluator’s repertoire include:

  1. Was the need for change anticipated at project design? Clearly, this is not possible for sudden-onset disasters like earthquakes. But in other cases, an evaluation should be able to determine whether the potential need for future changes was recognized and built into adaptive management and corresponding monitoring systems.
  2. What drove the adaptation process? Here, an evaluation should seek to understand whether development partners proactively monitored relevant indicators and situational information, and how that information was used in deciding on course-corrections.
  3. Was adaptation timely? Establishing timelines of events and tracing when course-corrections were undertaken will be essential to determine whether solutions were sought proactively or were forced by circumstances.
  4. And what would have happened if…? This is the classic question of establishing a counterfactual, but in this case one that needs to determine whether outcomes were better or worse because course-corrections were made, or because they failed to be made.

These are tough challenges to grapple with in evaluation, particularly as many of the details, processes, and conversations that lead to course-corrections are not documented.

Nonetheless, as agility and responsiveness are important determinants of success or failure, evaluation needs to adopt a specific focus on them to provide feedback, be it by giving credit for responsiveness and agility when it is due, or by identifying opportunities to improve when needed. This alone, I believe, will incentivize debates and actions within institutions to anticipate the need for timely and responsive adaptation.

Will that be enough to overcome inertia where it exists? Maybe not, but it is a contribution that evaluation can make.

Read other #Whatworks posts in this series, Rethinking Evaluation:

Have we had enough of R/E/E/I/S?,  Is Relevance Still Relevant?, and, following this post in the series, Efficiency, Efficiency, Efficiency



Incorporating the new criterion into evaluations will add complexity to the art and science of evaluation. For example, while part of the team may see responsiveness and agility as an imperative for achieving success, key stakeholders may disagree on the what, how, and when. Also, while it would seem natural for a leader to be empowered to change course on policies and programs in a timely manner, there are different incentives and numerous approaches to leadership that will also need to be considered. This implies a need for additional training for evaluators, as well as new requirements for self-evaluation to capture some of the context and thinking while still in the moment.

Entirely right: it would take new skills and tools to do this, and to do it in ways that identify where and when systems limit the capacity of leaders to make necessary course-corrections, as well as to understand when wrong choices were made, so as to avoid repeating the same mistakes.


Dear Caroline, agility and responsiveness are indeed very important, especially in humanitarian and peacekeeping contexts. I find the argument that development practitioners often advance about ‘objectives no longer valid’ a bit prosaic, as much depends on the level at which objectives are defined. Expected accomplishments and longer-term goals might remain the same even when the environment changes, while immediate outcomes could vary significantly. That being said, I think it is very important, as you put it so well, that evaluators maintain an equal level of flexibility in assessing the relevance, efficiency, and effectiveness of programs, and acknowledge managers' capacity to adapt to change when it occurred. Equally important, I think there is room for evaluators to further assess any resistance or 'immunity to change', not only at the systemic and institutional level but also at the team and individual level. This is where evaluators could join efforts with organizational psychologists to better understand internal dynamics and inform more sustainable patterns of change.


Thanks Caroline for raising this very important but challenging aspect. It resonates with the previous topic in the series about relevance (in the evaluation criteria chain): being responsive and agile in order to remain relevant. This requires an enabling evaluative culture and environment, at both the institutional and individual level, as well as methodological diversity and flexibility. However, the current donor environment shows an apparent preference, and even a requirement, for rigid tools such as logframes. A balance is needed on a manageable degree of responsiveness and agility, which is perhaps the most challenging part and needs some underlying transformation. Challenging as it is, we still need to face it, discuss it, and better our responses.

Thank you Ting for your comments. I agree that the discussion is needed beyond the evaluation community, but am focused on the professional group where I have the greatest responsibility and maybe some influence. Wouldn't it be great if the discussion among evaluators would influence practitioners, including donors, to adopt new ways of working?

Thanks Caroline for your prompt reply. Yes, true. People tend to stay in their familiar comfort zone, so it's great to have such thought-provoking discussions initiated and led by IEG, which certainly benefits not only the evaluation community but also a wider range of development practitioners.


I have recently worked on a number of assignments evaluating humanitarian programmes. After reading this blog post, I feel that nowhere are the indicators of agility and responsiveness more relevant than in humanitarian programmes. We try to cover these things under the rubric of relevance, but the programmes are in fact either responsive to the needs or they waste resources. The same holds for development interventions. Responsiveness could possibly be measured while understanding "the most significant changes" and while unpacking the logic model. Thanks for the blog and the ideas. I shall try to experiment with them in my next assignment.


Dear Caroline, many thanks for sharing another perspective on the evidence-building agenda. I will test these additional questions in upcoming evaluation studies. These are very useful points for understanding programme management and how often programme teams use evidence to inform decision-making. Along similar lines, we have developed process indicators to ensure the utilisation of evaluation studies. We will gather more evidence on the utility of this system; let's see how these things work in the future.
I must say that you have made a great contribution in bringing a fresh and interesting perspective to the development sector. I really find your blogs excellent.

Thanks for your support and contribution.


Rethinking Evaluation should be a dynamic process. Caroline has, in this series, raised serious issues that deserve the serious attention of policy makers and decision makers within and beyond evaluation. In the "Have we had enough of R/E/E/I/S?" blog article, Hans's (OECD DAC) historical insight shed light on priorities and direction for moving this dialogue forward in the global interest.

Hans's contribution underlines the urgent need to take applied history more seriously and to better appreciate that, in the first 50 years of international development cooperation (1960-2009), the overarching lesson learnt is that no lessons have been learnt; the failure to build a bridge between lesson learning and lesson forgetting was a recurring pattern. As the second 50 years of international development cooperation reach year 8, these flaws and failures persist. This is a scar on the conscience of the relevant authorities - world leaders, regional leaders in each of the 5 continents, and national leaders in each of the 306/193 UN Member States.

The SDGs, which apply equally to all North and South countries (unlike the MDGs, which applied to South countries only), underline the need for national leaders and world leaders - UN Member States' governments (executive, legislature, judiciary at all tiers); the UN System (UNO, WBG, IMF entities, including IEG-WBG); CSOs/NGOs; farmers' and processors' organizations; the private sector (micro, small, medium, large, and multinational enterprises); academics and researchers; and internal and external consultants - to individually and jointly address Rethinking Evaluation's real and complex problems on the ground at the community, sub-national, national, sub-regional, regional, and global levels.

2017 is Year 2 of implementation and the 1st quarter ends in a few days, yet there is no evidence that national leaders and world leaders are seriously committed to addressing Rethinking Evaluation's real and complex problems on the ground in each community, each of the 306/193 UN Member States, each of the 5 continents, and worldwide.

The logframe has been identified as part of the problem. Is this really the case? Again, going back to history: Hellmut Eggers at EC Evaluation created Project Cycle Management (PCM), a benefits-focused approach to evaluation, in 1987. He retired in 1993, before the idea could firmly take root. Although the PCM approach is widely used worldwide, what is being used is far from the original PCM. The original PCM answers many of the questions that this dialogue is grappling with.

Working independently, Lanre Rotimi created Policy, Program, Project Cycle Management and Comprehensive Systems Reform (3PCM and CSR), a benefits-focused approach to development, M&E, performance management (service delivery), procurement, and human rights, in 1993. 3PCM and CSR is a significant improvement on PCM. In March 2009, the Hellmut and Lanre versions of PCM were merged into the 3PCM version of PCM.

3PCM has 4 principles; 4 instruments/tools corresponding to each principle, two of which are the ToR and the logframe; 4 practices; and a database. It is pertinent to note that a major evaluation challenge is an evaluation having a different ToR from the ToR of the policy/program/project being evaluated.

The point made by Hans underlines the urgent need for national leaders and world leaders to adopt one worldwide approach to evaluation - a common and systemic approach known and practiced from the community to the global level in each of the 306/193 UN Member States, and not a one-cap-fits-all approach.

The points made by Caroline and many contributors to this dialogue underline the urgent need for national leaders and world leaders to adopt one worldwide approach to development, diplomacy, defense, democracy, data, and digitization - research (research, planning, statistics/data); implementation; evaluation (monitoring, evaluation); and success (learning, results, success) - treating policy, program, and project interventions (3PIs) and 3PIs training as one.

It is our hope that this series will not be about talking and thinking but about action and results as the dialogue progresses in grappling with the Rethinking Evaluation challenge in our world as it is, and not as any stakeholder, no matter how powerful, wishes it to be.


Hi Caroline, please find attached a link to an article moving forward the points made earlier by Davis and Lanre…

The points raised highlight the acid test of this IEG initiative's credibility - how it helps to deliver:
1. Better development, diplomacy, defense, democracy, data, and digitization
2. Better trade, aid, debt, anti-corruption, anti-terrorism, and migration
3. Better efforts to end hunger, malnutrition, and poverty
4. Better trust, integrity, openness, and transparency

The 2nd quarter of the 2017 Year of Implementation is racing to an end, yet many fundamental issues that ought to have been settled by the end of the 2nd quarter of the 2015 Year of Implementation are still outstanding. As long as national leaders and world leaders do not face a new direction and adopt new priorities, these information, research, and knowledge gaps will remain recurring. If they are allowed to persist, the ultimate consequences for the over 2 billion poor the UN System (UNO, WBG, IMF) serves could be catastrophic.

The fundamental issues we consistently raise cannot be wished away. Stakeholders in the 7 blocks identified need to agree to work jointly as they discuss, negotiate, and make all necessary arrangements for the sustainable success of 2030 Agenda implementation and evaluation in each specific community-to-global location context.

We have thrown down the gauntlet. There is a need for you, other authors, and contributors to genuinely demonstrate that you "walk your talk". Will you and others pick it up?
