Patience and persistence are necessary virtues for evaluators, as evidence-based change usually takes time to deliver.

Regular readers of this blog know how passionate I am about influencing change. We do not undertake evaluations solely to establish a track record of what happened - we also want to help people and organizations understand how and why things came about, so they can learn from experience to improve future results and performance.

Previous blogs, like "What makes evaluations influential?" and "Know your stakeholders", mapped out a couple of essential ingredients for influencing change. Today I want to talk about another: time!

Human beings are impatient creatures: fads and fashions are fast changing, and are driven ever faster by technology and the flood of information available to most of us at our fingertips. In the development field, many pressing needs seem to compete with each other in a rapid cycle within which certain issues can top the agenda for a while before being rotated out, only to rise to the forefront again. Meanwhile, other issues, dictated by trends and events, are frequently added to the ongoing churn.

All of these are indications that we - as people, development practitioners, and evaluators - are often impatient when it comes to change. We want to see quick results, observe more immediate cause and effect, and experience the uplifting effects of transformation. But when change takes time, as it almost always does for complex development challenges, it takes persistence and systematic follow-up to deepen both the evidence base and the dialogue needed to stimulate change.

Occasionally, if you are lucky enough to catch the right wave, things may happen quickly, as was the case for our Procurement evaluation (2013). That evaluation was completed with a view to feeding into a planned procurement review. It attracted rapid and wide attention from internal stakeholders (Board, management, and operational level) and external audiences (OECD, WTO, AfDB, IADB), and the team was invited, for example, to deliver a series of internal learning seminars and to present findings at a number of major conferences. The lead author of the report was invited to serve as a reviewer for an evaluation of AfDB's procurement systems and shared findings and experience with the UK Cabinet Office for Procurement. More importantly, however, the evaluation findings have fed into Board packages on the WBG procurement reform and are informing M&E and staff training in procurement as part of the work of the Governance Global Practice.

But that is more the exception; the pace of engagement and change is typically much slower. Take, for example, our work on environment-related issues, where through a series of evaluations we reinforced and deepened our analysis and evaluation evidence to support a set of interrelated recommendations. Through our Management Action Record (MAR), designed to track management's follow-up to our recommendations, we noticed important changes following our evaluations of Environmental Sustainability (2008) and Safeguards (2010).

The 2008 report pointed to the need for IFC to ensure that clients, especially financial intermediaries, develop and implement sound environmental policies and practices. The 2010 evaluation moved the issue on and recommended that IFC enhance the supervision of financial intermediaries (FIs) at the subproject level. Following the first report we noted (2010) a ramping up of supervision and other developments. In 2012 IFC introduced a revised Sustainability Framework. The policy governing the framework includes the following provision: "IFC supervision may include visits at the FI level, as well as to recipients of FI loans/investments, particularly high risk subprojects." Through our engagement with IFC management (2013), we noted progress but stressed the need for IFC to address how independent third-party monitoring can be used for high-risk subprojects. In short, an ongoing process of communication, negotiation, and persuasion.

This might sound frustrating to some readers impatient for change. Why is it that problems do not get fixed faster? Why do we have to come back to the same issues over and over again?

In my experience, evaluation is often about the long game. We can identify issues and put them on the agenda. We can do more work to keep those issues on the agenda over time. We can stimulate the debate by demonstrating patterns and trends, and urging change. But, as evaluators, it is not our role to make change happen. That is the role of decision-makers who decide policy and programming at Board and management levels. So, if you are interested in evaluation, please be sure you can cope with a good helping of deferred gratification!

What is your experience with the process of change based on evaluation findings? And how can we best track the influence of evaluation over time?

Comments

Submitted by Carl Kalwan on Thu, 06/04/2015 - 23:29

Having read Ms Caroline Heider's article on the subject of "Evaluation" of donor funded projects, I agree that such evaluation is part of a good "accountability and reporting" process and that Boards or senior management should take note and implement the recommendations of such evaluation reports. We have experience in part of our country (Papua New Guinea), where investors provide "Environmental Impact Plans" to get Parliament to approve such projects but at times fail to evaluate their plans. The result: resource owners complain about environmental damage, or the people who own the land and resources like gold or oil/gas do not achieve maximum (financial) benefits. Therefore, how can the Project Management Team help in the evaluation process? Thank You! Carl Kalwan - Freelance Local Level Government Advisor on Community Projects (Health & Education)

Submitted by Caroline Heider on Wed, 06/10/2015 - 05:53

In reply to by Carl Kalwan

Carl, many thanks for your comment. When it comes to environmental and social impacts, the World Bank Group has standards that it follows. These standards, or safeguards, were evaluated by us in 2010 and right now a process of consultation about updating them is coming to an end (check the World Bank's safeguards page for details - http://web.worldbank.org/WBSITE/EXTERNAL/PROJECTS/EXTPOLICIES/EXTSAFEPOL/0,,menuPK:584441~pagePK:64168427~piPK:64168435~theSitePK:584435,00.html). We are systematizing our coverage of these impacts in our regular evaluation work so that we can generate a consistent flow of information over time and can track how things are evolving. This should help the Bank Group do a better job and, as you suggest, track critical elements of project design and implementation.
