Speaking at the World Bank, Ben Ramalingam, author of Aid on the Edge of Chaos, set out a challenge to those working in the humanitarian and development fields: move away from a narrow focus on what we think is important and take a more wide-angle approach to the issues we’re dealing with.
“Responses to complex challenges need to be adaptive,” he said. “Rather than strategies for best practice we should be looking at strategies for best fit.”
Ben discussed his plans to push the debate on change in the development system during a lunch about two years ago, where we continued a conversation we had started years earlier, when we both worked on evaluating humanitarian assistance.
We both agreed that in a world riddled with unpredictability the usefulness of linear models is limited, and that there is a pressing need to rethink how development assistance works.
Ben’s book argues that our models rest on simplifications: they make assumptions, eliminate real-life factors, and fail to reflect that the world is a complex maze of interrelationships that affect each other.
Tools like the logical framework, as Ben says, can if “[d]one right...make users think carefully and systematically about their plans, and how activities will contribute to goals.” Drawing on many evaluations, however, he observes that the tool is often used mechanistically: results-based management and M&E systems are typically focused at the input-output level and are based on linear relations that ignore rather than recognize complexity.
Is the tool to blame? You might think this is a funny question for an evaluator to ask. After all, don’t we use logframes as the basis for our assessments? Yet, as people struggle to put together meaningful results frameworks, the question is inescapable.
Arguments that the logframe is too limiting, that it doesn’t take other factors into account or cater to the complexity of situations, are true. But only in part.
The tool actually requires planners to clarify their assumptions and assess risks. In other words: to think about a networked and chaotic reality and then choose a more linear set of goals, objectives, and outputs. Without it, one is left trying to develop from first principles what the appropriate systems for planning, learning, and evaluation under complex circumstances might be.
For us at the World Bank Group the challenge is two-fold:
- We understand the world is complex. The new model – the Solutions Bank Group – has been conceived precisely to correspond to this reality and aims to bring about transformational change in how we work.
- To support these changes, we need practical measures to demonstrate – and evaluate independently – whether multi-dimensional development solutions are working, what changes they bring about, and how problems are fixed as they arise.
So how do we get there and what are the risks? Three stand out:
Oversimplification. Past experience is riddled with examples of results that are simply outputs. Take road construction. The simplest measure is the distance that has been built. But how will this tell us what the road will achieve? In an earlier role I evaluated road projects. Some resulted in transformational change, economic empowerment, and a reduction in roadside robberies. Others ended up as roads less traveled, with no economic or social value. So what, then, is a measure that is simple enough to add up and yet meaningful enough to tell us about results?
Over-Abstraction. Wouldn’t it be great if we had a simple index that told us whether things are improving or not? It’s a very seductive thought: a number that indicates how well or how badly things are going. But will the new construct again revert to simplifying models – the ones that Ben points to as the crux of the development matter – in order to capture in a single number what is a complex process?
Undefined. So, if an iterative learning process is more appropriate in this age of complexity, should we not simply leave our targets undefined and figure things out as we go along? If so, how would we manage the risk, aptly discussed in Ben’s book, of errors that creep in because we are unaware of our assumptions, tend to simplify models, and repeatedly follow the same path? How will we know whether we are wasting valuable time, effort, and resources instead of investing them effectively?
During next week’s Spring Meetings, we will sponsor a panel of eminent thinkers and pose this challenge to them, so that we can take a practical approach to the new science of delivery. I urge you to make your voice heard.