Learning from Evaluation: How Can We Stay at the Top of the Game?
Why is there not more organizational learning from self-evaluation in the World Bank Group?
By: Caroline Heider and Rasmus Heltberg
Top chess players spend more time analyzing their completed games than actually playing. They spend hours dissecting every weak move, mistake, and blunder to figure out why they made it. They ask themselves tough questions: Did I miss an opportunity to launch an attack? Did I press my advantage too hard? Did I underestimate my opponent? Did I succumb to psychological pressure? Do I have a blind spot? They do this not because it is fun and easy (it is hard work and requires a lot of discipline) but because it is the only way to grow as a chess player and learn to avoid similar mistakes in future games.
Managing for results has been official dogma in the WBG for the last 15-20 years, and systems are in place to track results from all projects and country strategies and display them in the corporate scorecards and the President’s delivery indicators. These systems draw on hundreds of self-evaluation reports written by World Bank Group staff and validated by IEG. The reports measure the results of our investments, assess how well we performed, and formulate lessons intended to help us learn. Writing these reports costs the WBG millions of dollars annually.
The design and operation of the systems adhere to relevant good practice standards, coverage is comprehensive, and many evaluation experts consider the Bank Group’s systems as good as or better than those in comparable organizations. The systems produce corporate results measures that are easy to report externally and to compare across time, contexts, and sectors.
In theory, this is the equivalent of the chess player analyzing past games for clues to causes of weak moves. Yet in reality organizational learning from these systems is disappointing, as we document in a new IEG evaluation, Behind the Mirror.
Sure, individual authors of self-evaluation reports often learn something from visiting the project and writing up their analysis (it would be strange otherwise), but little knowledge flows beyond the authors themselves. It is rare that business units analyze completed self-evaluations or mine their lessons. The WBG conducts a lot of research and hosts many seminars every day, but almost none draw on data from mandatory self-evaluation. Lessons rarely turn into revised policies, guidelines, or procedures. IEG's evaluations point out the same weak spots and missed opportunities, year after year (see Alison Evans' blog here).
Why is there not more organizational learning from self-evaluation? We can list numerous proximate reasons: self-evaluations are done too late, their lessons are of the wrong type, the processes of assigning and validating ratings distract from real learning, and they are sometimes based on weak evidence. But we submit that the ultimate cause is that learning has taken a backseat to accountability.
The systems' focus on accountability and corporate reporting (generating ratings that can be aggregated in scorecards and so on) drives the shape, scope, timing, and content of reporting and limits the usefulness of the exercise for learning. If the self-evaluation systems had been set up primarily to serve learning, they would be more solution-oriented (how can we do better?), more selective (which projects offer the greatest learning opportunities?), more programmatic (are there synergies across activities and countries?), better attuned to unintended positive and negative consequences, and done sooner (the median time from approval to review of the Implementation Completion Report for Bank investment projects is nine years).
Lessons contained in self-evaluations rarely touch on internal organizational issues such as flaws in deliberative processes that led to approval of weak projects.
Parts of the system not focused on corporate reporting, such as impact evaluations and other voluntary self-evaluations, tend to be more valued by staff and managers as tools that can help increase effectiveness. Impact evaluations are not mandatory: they are undertaken selectively, require investments in monitoring, and are generally seen as technically credible. What this shows is that when conditions are right, the World Bank Group has strong demand for evaluative information and the ability to supply it.
Operational units could tap into this intellectual energy more systematically.
Already, the Bank and IFC conduct various retrospectives aimed at learning. These could be scaled up to cover all WBG activities (investments, knowledge, partnerships, and so on) over a period of time in a given sector and country, yielding a broader perspective on results and on whether different WBG engagements pull in the same direction.
Comments
Well done. We should document lessons learned so as to avoid the same mistakes in future development projects and programs. You provided a good example of chess players, who are regularly improving themselves for new games. Thanks for sharing this valuable and meaningful article on LinkedIn.
Thanks for your kind words, Mr Khan! To document and learn these lessons we need to identify them first, and that can be impeded if the organizational culture focuses excessively on accountability and reporting at the cost of fostering learning. Many organizations, public and private alike, struggle with this problem. At IEG we are very interested in how to change the organizational culture in a more learning-oriented direction and believe that there are many bright spots, including around impact evaluation and some of the retrospectives. These are voluntary, unlike the mandated self-evaluations, which follow prescribed formats geared to uniform reporting. This is all discussed in the "Behind the Mirror" report. Glad you like the analogy to chess, my favorite sport. I must confess that I rarely spend time analyzing my own past games, though...
Back in 1995, twenty years ago, in the second edition of the journal Evaluation, Christopher Pollitt warned of the dangers of what was then called New Public Sector Management in a paper called "Justification by Works or by Faith? Evaluating the New Public Management." The warning is as relevant today as it was then.

But this goes back even earlier. In 1989 the policy academic James Wilson developed a matrix showing how few public sector activities can be held 'accountable' for what they do, purely because the nature of their activities prevents a straight line being drawn between independently observable action and independently observable result. The matrix formed the basis of a fascinating paper by Robert Gregory in the mid-90s ("Accountability, Responsibility and Corruption: Managing the 'Public Production Process'"), where the word 'corruption' refers to how the accountability focus 'corrupted' the work of the New Zealand Police force.

Around the same time, a major figure in the European Union policy field was recruited by the New Zealand government to assess its own very purist version of New Public Sector Management. Asked to comment on his findings, he said that in demanding greater accountability from managers, the new system of public management could, paradoxically, weaken responsibility: "I really believe that the hard edge of contractualism [i.e. high levels of accountability] in New Zealand can unwittingly diminish that sense of responsibility."

About ten years ago, a review of the relative failure of Results Based Management in four major UN agencies came to more or less the same conclusion. The formal response from those agencies was revealing: it basically rejected the criticism that the idea was fundamentally flawed and essentially blamed staff for not taking it seriously enough. Which, as my old colleague Jerry Winston used to say, is like telling people jumping off buildings that if they only flapped their arms harder they would be able to fly.

None of this undermines your work; it merely says that a lot of very skilled and knowledgeable people predicted this would happen (and of course a lot of people, who turned out to be more influential, argued against them). So while your report adds to the literature on the topic, it may be timely. Somebody once told me that the difference between being 'right' and 'righteous' is often more a matter of 'time' and 'timing'.
Thanks for your thoughtful comments and references! I was not aware of this literature and the broader debate about new public management. Your comment reminds me of Goodhart's Law: "When a measure becomes a target, it ceases to be a good measure." (In other words, when the indicator we use to evaluate performance is the same as the target being optimized by those who are being measured, it is no longer a reliable measure of performance.) I think you are probably right to imply that this lesson has taken a long time to learn. In fact, I am not sure it has yet been internalized. I should say, though, that our report has been well received by staff and management in the World Bank Group, who appreciate its analysis of behavior, incentives, and motivations, the very drivers of the tension between learning and accountability.
The accountability process should, as a matter of obligation, include analysis and validation of recommendations by a limited panel, making it more fluid in practice, as in the concept of operational research. Specifically, the aim is to systematize and lighten the learning process coupled with accountability.
Thanks for your comment. I agree that making the learning process lighter, simpler, more intuitive would be helpful. But I am not sure that more mandates and oversight and accountability would help with this. In fact, the report cited in this blog post documents a tension between accountability and learning which has to be resolved through, for example, more agile, purposeful learning-oriented evaluation and stronger managerial signals that results, learning and self-evaluation matter.
Thanks very much for the document and insightful information. This challenge is very common across development institutions. As an M&E practitioner, I have often wondered why, after investing so many resources (time, money, and energy) into conducting project assessments, the findings and lessons that could have helped enhance the effectiveness of implementation and the impact of the intervention are overlooked and neglected by implementation teams. I believe there is a need to take a careful look at the definition and drivers of what is considered a successful project or program. Surprisingly, in my experience project teams tend to focus on "getting things done and ticking the box" rather than reflecting on whether things could have been done differently with better results. There seems to be too much focus on contractual obligation and delivering according to the initial agreement. There is a need to focus on "changing the mindset" of project teams from compliance and accountability to learning and the achievement of development results.
Couldn't agree more! We need to move away from "check the box" approaches toward reflection and course correction based on insightful use of M&E data, identification of issues and challenges, and candid discussion within teams. Doing so will require a change in mindset in how M&E and project implementation are approached, from a compliance focus to a learning and results mindset. As you state, the challenge is common across a number of agencies. I will be curious to learn of examples of how other institutions have tackled this challenge!
Thank you for this thought-provoking blog. It rings true that the pendulum has swung far towards compliance in the Bank's operations rather than learning. But based on my operational experience, I would submit that there is a huge amount of operational learning, other than through ICRs, that feeds into the design of programs with much faster feedback loops: team learning, informal and formal transfer of knowledge from past to new TTs, constant TTL-team learning, manager-to-team guidance, and OPCS clinics, to name a few. I suspect that some of the self-evaluation knowledge feeds into these learning processes, though it may not be so obvious. So maybe we need to ask: what is operational learning, and how best can we promote it in the new Bank for greater effectiveness?
Dear Zeljko, Very true that there is a lot of operational learning and knowledge sharing outside the formal self-evaluation systems. Of course there is. As you know, IEG evaluated this in the two reports on Learning and Results in World Bank Operations: How the Bank Learns (https://ieg.worldbankgroup.org/evaluations/learning-and-results) and found, among other things, that the prevailing oral culture of informal knowledge sharing has its downsides: knowledge is easily lost when people move or leave; knowledge transfer depends on who you know; and in a big, global, decentralized institution, exchanging knowledge face to face is not always feasible. The question is therefore also how the formal self-evaluation systems can help identify and document relevant knowledge.
There are many in Cambodia who do not understand why the World Bank decided to resume funding when the original issues, explained very clearly by its own Inspection Panel, have not been addressed. This is one of the reasons why impunity remains such a problem, affecting all aspects of a country's development and human rights. https://www.cambodiadaily.com/news/world-bank-will-resume-funding-to-ca… http://www.worldbank.org/en/news/press-release/2011/03/08/world-bank-bo…
Thanks for the comment! The World Bank Group has multiple mechanisms of accountability: us here at IEG; the audit function; the Inspection Panel; and more. I am not in a position to comment on the Inspection Panel report that you reference, or on Management's response to it. The Bank's office in Cambodia would be better placed.
Nice job, Rasmus and Caroline. I could not agree with you more that 'more agile, purposeful learning-oriented evaluations' are needed. Each project that I have evaluated could have benefitted greatly from a real-time M&E system built into the project design. By the time the MTE or final evaluations are done, it is too late to change course and people have moved on to new things; and of course, accountability is easily swept under a rug bulging with mistakes. Timely assessment of results, adaptive learning, and course corrections throughout implementation are key ingredients for creating a learning culture and should be discussed in new project formulations.
Dear Joe, Great to hear from you. There is a vast missed potential for M&E to really inform projects and add value during implementation. Would better M&E create the learning culture, or do we need organizations to adopt a more learning-oriented culture to enable them to better seize the opportunities afforded by M&E?
The creation of a more holistic organisational learning culture seems critical for taking the next steps in policy and program improvement. While self-evaluation is important for reorienting a project that is off track, a deeper strategic understanding, and the will to act on it, may be required to measure performance in a way that shows what is needed to improve effectiveness.
Rex: Yes, M&E systems need to measure the right things, and teams and their managers need the incentives, opportunities, and organizational culture to use M&E data and information and to act accordingly. In the World Bank, formal restructuring of projects involving changes to the objective is sometimes very time consuming, because some client governments require approval at the highest level, by President or Parliament. This makes teams avoid restructuring.
I like the message and tone of this posting: Learning from Evaluation. It shows that the authors are keenly aware of the failure of the Bank's management in charge of operations to implement effectively what is the new goal of the institution: that the WB should function as a knowledge bank. If learning from the institution's own experience cannot be disseminated effectively among its staff, and the findings reflected in more efficacious project or program designs, monitoring, and supervision, then there is something clearly faulty with its working processes and incentives.

I used to think that the World Bank faced a unique set of operational problems. But the reality is that even institutions operating in competitive markets face similar problems, because they too have to work with multiple stakeholders to achieve their short- and longer-term objectives. Short-termism is a major problem afflicting most organizations, but solutions need to be found.

Bank staff are not uniquely responsible for a project's good or poor performance; a variety of other players are involved. Also, if there is a nine-year lag between project approval and self-evaluation (ICR) preparation, it becomes much more complicated to pin blame, since the characters have changed and even the country context is different. So pursuing accountability at the cost of learning may not serve the best interest of anyone, unless of course there is fraud or malfeasance, which is a different topic. But the good news is that one can always learn important programmatic lessons about a project, even after a long delay.

IEG staff are evaluating a large portfolio of projects and interventions at any point in time. Collectively, they provide valuable programmatic lessons that are applicable in a wide variety of contexts. Their work and findings also need to be integrated better with the work operational staff are currently engaged in. It is easy to state that the institution needs a cultural change, which is undoubtedly true to some extent. But hopefully the managers in IEG will outline a sustainable and practical plan to educate operational staff on the lessons. This may require not only IEG staff spending more time in educational seminars for staff and managers, but also closer involvement in reviewing project briefs and ensuring that programmatic lessons have been taken into consideration. There should be no conflict of interest from this participation, as nine-plus years would have elapsed before the project is evaluated, and the staff involved might no longer be in evaluation.
Thank you for the kind words about our posting and about IEG's larger work! Readers of this blog know that we make a lot of effort to promote learning and uptake from IEG's work, while preserving our independence and accountability functions. This blog post is really about use of and learning from self-evaluations done by or for operational units, that is, the portions of the WBG's evaluation architecture not led by IEG. As you indicate, these systems are not really meant or designed to pin blame on individuals. They are intended to serve broad organizational reporting and accountability; project and portfolio performance monitoring; and organizational learning. The "Behind the Mirror" evaluation finds that the reporting purpose is much better served than the performance monitoring and learning purposes. This is because of the prevailing "compliance mindset": fulfilling reporting requirements but not learning from the process. I find it interesting that so many people are commenting that other organizations face similar challenges, and I am keen to hear if anyone has experience with driving organization-wide behavior change in a context similar to the one we discuss here.
The balance between accountability and learning from self-evaluations is a problem for every organisation. However, I think evaluators should design new approaches for presenting their evaluation findings that could have a direct influence on learning. We are in a fast, multitasking world where there are too many demands on the players, and allotting time for learning is the challenge. Learning would be facilitated if the manner in which findings are presented did not require too much of the learner's time to decipher what is new and what is old. Therefore, we need a practical way of depicting findings to facilitate learning by doing.
Thanks for your comment! The templates for the WBG's self-evaluations actually try to do this, with sections on "lessons learned" that are meant to distill what's new into short, bite-sized pieces of information. Unfortunately, the reality is that lessons have a justified reputation for being quite obvious and generic, and therefore of low value. We found that in World Bank completion reports, lessons are often written in very general terms, without specific recommendations on how to do things differently in the future. That said, I also think it is too easy for practitioners to blame non-learning on external factors such as being busy or information not being packaged in just the right way. People in professional jobs also need to take responsibility for their own continued learning and professional growth. Evaluators can and should package and present information in attractive ways, but there has to be demand for the information.
Dear Caroline and Rasmus, Many thanks for sharing a very thought-provoking blog. I think you highlighted a fundamental issue faced by several organisations in the sector, and I picked up a few lessons. I have been trying to address the same issue and struggling to ensure that we consistently use the learning to improve programme design. We also disseminate learnings from evaluation studies by producing evaluation summaries: two-page documents focused on recommendations and learning. Thank you once again for your great work. Regards, EH
Thanks, glad you liked it! As others have also commented, the issues we discuss here affect evaluation uptake in many organizations.