Speaking at the World Bank, Ben Ramalingam, author of Aid on the Edge of Chaos, set out a challenge to those working in the humanitarian and development fields: move away from a narrow focus on what we think is important and take a more wide-angle approach to the issues we’re dealing with.

“Responses to complex challenges need to be adaptive,” he said. “Rather than strategies for best practice we should be looking at strategies for best fit.”

Ben discussed his plans to push the debate on change in the development system during a lunch about two years ago, where we continued a conversation we had started years earlier, when both of us worked on evaluating humanitarian assistance.

We both agreed that, in a world riddled with unpredictability, the usefulness of linear models was limited and that there was a pressing need to rethink how development assistance works.

Ben’s book argues that our models are based on simplifications that rest on assumptions, strip out real-life factors, and fail to reflect that the world is a complex maze of interrelationships.

Tools like the logical framework can, as Ben says, if “[d]one right...make users think carefully and systematically about their plans, and how activities will contribute to goals.” Drawing on many evaluations, he observes that the tool is often used mechanistically: results-based management and M&E systems are typically focused at the input-output level and rest on linear relations that ignore rather than recognize complexity.

Is the tool to blame? You might think this is an odd question for an evaluator to ask. After all, don’t we use logframes as the basis for our assessments? Yet, as people struggle to put together meaningful results frameworks, the question is inescapable.

Arguments that the logframe is too limiting, that it doesn’t take other factors into account, or that it cannot cater to the complexity of situations are true. But only in part.

The tool actually requires planners to clarify their assumptions and assess risks. In other words: think about a networked and chaotic reality and choose a more linear set of goals, objectives, and outputs. Without it, one is left trying to develop from first principles what might be the appropriate systems for adequate planning, learning and evaluation under complex circumstances.

For us at the World Bank Group the challenge is two-fold:

  • We understand the world is complex. The new model – the Solutions Bank Group – has been conceived precisely to correspond to this reality and aims to bring about transformational change in how we work. 
     
  • To support these changes, we need practical measures to demonstrate – and evaluate independently – whether multi-dimensional development solutions are working, what changes they bring about, and how problems are fixed as they arise.


So how do we get there and what are the risks? Three stand out:

Oversimplification. Past experience is riddled with examples of results that are simply outputs. Take road construction. The simplest measure is the distance built. But how will this tell us what the road will achieve? In an earlier role I evaluated road projects. Some resulted in transformational change, economic empowerment, and a reduction in roadside robberies. Others ended up as roads less traveled, with no economic or social value. So what, then, is a measure that is simple enough to add up and yet meaningful enough to tell us about results?

Over-Abstraction. Wouldn’t it be great if we had a simple index that tells us whether things are improving or not? It’s a seductive thought: a number that indicates how well or how badly things are going. But will the new construct again revert to simplifying models – the ones that Ben points to as the crux of the development matter – in order to capture a complex process in a number?
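The danger of over-abstraction can be made concrete with a small sketch. The indicator names and weights below are purely illustrative, not an actual Bank index; the point is only that a weighted composite collapses very different situations into the same number.

```python
# A hypothetical composite index: a weighted average of three indicators,
# each scored 0-100. Names and weights are illustrative only.
WEIGHTS = {"access_to_roads": 0.4, "income_growth": 0.4, "safety": 0.2}

def composite(scores):
    """Weighted average of indicator scores (each 0-100)."""
    return sum(WEIGHTS[k] * v for k, v in scores.items())

# Two very different situations: one with excellent roads but stagnant
# incomes, one middling across the board...
country_a = {"access_to_roads": 90, "income_growth": 10, "safety": 50}
country_b = {"access_to_roads": 50, "income_growth": 50, "safety": 50}

# ...collapse to the same score, hiding the difference that matters.
print(composite(country_a))  # 50.0
print(composite(country_b))  # 50.0
```

The index is easy to compute and compare, which is exactly its seduction; what it cannot do is tell us which of the underlying stories is unfolding.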

Undefined. So, if an iterative learning process is more appropriate in this age of complexity, should we not simply leave our targets undefined and figure things out as we go along? If so, how would we manage the risk, aptly discussed in Ben’s book, of errors creeping in because we are unaware of our assumptions, tend to simplify models, and repeatedly follow the same path? How will we know if we are wasting valuable time, effort, and resources instead of investing them effectively?

During next week’s Spring Meetings, we will be sponsoring a panel of eminent thinkers and posing this challenge to them so that we can take a practical approach to the new science of delivery. I urge you to make your voice heard.

Comments

Submitted by rick davies on Fri, 04/04/2014 - 01:41

Permalink
How to operationalise a complexity perspective is the big question, it seems to me. I have no simple solutions, but these are the directions I am interested in:

1. Pay attention to Ashby’s Law of Requisite Variety as well as Occam’s Razor. That is, the models we develop need a sufficient degree of complexity before they can adequately represent the world. For example, multiple conjunctural causation models.

2. Pay as much attention to real-time monitoring as to episodic evaluations. In complex, unpredictable settings we need more data, not less.

3. Pay as much attention to inductive thinking (pattern finding) about what has happened as to deductive thinking (hypothesis development and testing) about what will happen. Where the range of possible outcomes is big, we need useful search mechanisms (e.g. data mining algorithms) before focusing in with rigorous tests.

4. Apply more eyeballs to the problem: make data sets publicly accessible wherever possible. So much survey data from development projects is woefully under-analysed and under-utilised. Eric Raymond famously claimed that “given enough eyeballs, all bugs are shallow.”

Submitted by Caroline Heider on Tue, 04/08/2014 - 07:58

In reply to by rick davies

Permalink
Great ideas, Rick, and I believe in line with the World Bank Group's vision of the Solutions Bank: a dynamic system that constantly searches for and adapts to find solutions, and learns while testing them. The big question, though, is: how do we define success from the outset? For instance, the World Bank Group has set two ambitious goals -- 3% poverty by 2030 and boosted growth among the bottom 40%. How do we define meaningful targets at the interim that will both channel resources and work in the right direction to achieve these goals, and serve as a basis for evaluation?

Submitted by bojan on Sat, 04/05/2014 - 03:17

Permalink
A Simple Measure of Success in a Complex World? Mission impossible. All we can hope for is a complex measure of complexity. This does not mean that the newly developed measure is 'hard to apply, understand...'; it means that a complex phenomenon can be simplified only in a complex way. More on this thesis: - http://www.sdeval.si/Publikacije-za-komisijo-za-vrednotenje/Meso-Matrical-Synthesis-of-the-Incommensurable.html - http://www.sdeval.si/Objave/Divided-we-stand.html

Submitted by Caroline Heider on Tue, 04/08/2014 - 08:20

In reply to by bojan

Permalink
Thank you, Bojan, for sharing your paper. Your point about simplification is the same that Ben Ramalingam makes: it has to be done in a complex way. At the same time, we need to come up with tools that help us grasp and share our understanding of the complexity of a specific phenomenon, to get a better sense of whether and how it will change as a result of an intervention, and how we would measure the change. I found this TED Talk (http://www.ted.com/talks/eric_berlow_how_complexity_leads_to_simplicity ) very helpful in terms of visualizing a problem in ways that help determine nodes that are instrumental and pathways of interrelationships that otherwise might not be visible. Obviously, the question would be whether an application like this could capture the questions of complexity that you mention in your article. Something to explore?

Submitted by Flavio Roberto… on Mon, 04/07/2014 - 05:46

Permalink
I agree that the world we live in is complex. I found this topic very interesting. Greetings!

Submitted by bojan on Tue, 04/08/2014 - 02:56

Permalink
@Rick Davies, how do you find Occam's razor relevant for dealing with complexity? Occam's razor says that reality is simple, so scientific tools that produce simple results can be taken as a proof of truth. I am also not convinced that big data is a road to better understanding complexity, since complexity is not a microscopic issue but equally a macroscopic one; this is exactly the problem. Our proposal is to develop a mesoscopic answer to this question.

Submitted by Philipp Grunewald on Tue, 04/08/2014 - 06:56

Permalink
I believe this is not the right question. Saying "we understand the world is complex" and then going on to ask for the same things one has always asked for is a contradiction. Understanding something means acting differently afterwards, and realising that different questions become valuable while others become useless and counterproductive. But that is not usually what happens, since that would mean questioning the rules of the game.

Submitted by Caroline Heider on Tue, 04/08/2014 - 00:42

In reply to by Philipp Grunewald

Permalink
So what would be better questions to ask?

Submitted by bojan (radej) on Sun, 04/13/2014 - 00:58

Permalink
First of all I wish to thank you, Caroline, for your blog. I have been following it only recently, but I immediately found it very thought-provoking, opening cutting-edge topics (in my narrow focus) and offering an excellent structuration of argument and commentaries. Your blog is one of the best I have ever happened upon. The challenge of presenting a complex world simply is, in my view, one of the most acute matters at present, and it is visible to us exactly through the paradox in the title of your post. It is a paradox because nobody really expects that a complex world can be, or should be, presented in a simple measure. The challenge is not to kill complexity but to find out how to live with it, taking into account the limited cognitive capacities of humans as well as constrained institutional, regulatory and management possibilities. So the challenge poses itself as a question: how do we simplify a complex world without decomplexifying it through measurement, conceptualisation, communication and management? Or, analogously, how do we simplify complexity in a complex way? Which concept offers the simplest view of complexity? Very recently I had the delightful opportunity to comment on one of Ramalingam's older papers (obtained via Patricia Rogers, Better Evaluation; Ramalingam B., H. Jones, T. Reba, J. Young. 2008. Exploring the Science of Complexity: Ideas and Implications for Development and Humanitarian Efforts. London: Overseas Development Institute, Working Paper 285, 78 pp.). I certainly support their diagnosis of the challenge and fully share their aspirations. I found it very relevant for my study, so I will certainly refer to it in my forthcoming book. My criticism is that the methodological aspect is mostly absent, so one cannot see how the theoretical results can be practically applied in the evaluation of development and humanitarian efforts. This is probably just a consequence of a standard problem of studies which aim at operationally defining a concept of complexity.
The theoretical aspect of their study is stretched between systems theory, the theory of chaos and the theory of complexity. But these need to be distinguished as three different 'theories of truth' (James W. 2002. Pragmatism: Will to Believe, 1906, Lowell Institute; 1907, Columbia University). So they should be shown not only as co-present but also as strictly separated. Some processes are primarily systemic, others are primarily chaotic, and a third group may be mainly complex. Only the latter is our concern: the concept of ordered complexity (Prigogine). If the concept of complexity is complexified too much, it becomes disordered and chaotic, which largely places it beyond our (non-scientific) comprehension. Let me illustrate with an example from the paper: 'The concept of phase space and attractors are central to understanding complexity, as complexity relates to specific kinds of system trajectories through phase space over time.' True, this centrality has been recognised for complex processes in nature. The problem is that ‘phase space’ and ‘attractors’ are not central for policy-makers; they don't apply these scientific concepts. Complex processes are specific to systems in transformation. For this reason, it is most relevant to study transformative changes and transformative mechanisms, not attractors as fields of relative balance, stability and predictability (or resilience, for the same reason, versus adaptability). The concept of phase space frames all hypothetical states of the system, but policy-makers are usually concerned with very specific states of the system, normally located in the neighbourhood. Our challenge is probably not to develop the most exact copy of complexity science in nature and then translate it to explain social processes, but to find the simplest version of a complex methodology that describes the widest possible range of social processes.
Next example: 'Accepting the notion of chaos … encourages an acknowledgement of the continual change in social systems'. As far as I understand the concept of chaos (Gleick), this is not really a creative but a very replicative process: scale-invariant iteration of fractals. Creativity is usually attributed to evolutionary processes and even more to complex processes; as such, there is no interference from chaotic processes. Next example: 'Self-organisation is where macro-scale patterns of behaviour occur as the result of the interactions of individuals who act according to their own goals and aims and based on their limited information and perspective on the situation.' This is perfectly in line with Hollings, Giddens and mainstream science in general. What distinguishes complexity theory from the theory of chaos is that the latter deals with processes which maintain a direct relationship between the micro and macro levels of the system – just like the old-style Newtonian paradigm of science, which is linear and in which the elements of the system are commensurable, aggregatable from micro to macro. Complexity, on the other hand, essentially describes mesoscopic-level processes (meso 1, 2, 3, as in Dopfer, Potts and Foster, and meso 2a and 2b as I have proposed): emergent mechanisms of qualitative change, selection, translation and intermediation between the polar oppositions of micro and macro. I have tried to approach complex social systems from the meso level of their structuration, since this seems the simplest way to observe complex systems and has much more potential explanatory power. To *Rick Davies: I am somehow not convinced that dealing with complexity demands big data. The big data concept is basically meant for dealing with chaotic processes (see the Cynefin model). Linking complexity to larger amounts of data reflects a quantitative idea of complexity, which is reductionist and so wrong.
The concept of complexity is paradigmatically different from the concept of simplicity, so the difference is first of all qualitative, unbridgeable with ever larger piles of data. The challenge is how to reorganise and reconceptualise the available data to obtain a complex picture. We already live in an information society, with gigatons of data available at hand, or at least for not too many dollars, which nevertheless remain rather poorly exploited. This can be seen in the absence of synthesis, since we live in a very analytical age where each detail is terribly important. To *Caroline Heider: How do we define success from the outset? Complex measures of success are mesoscopic in nature, in my view: the definition of success must be a set of partly compatible indicators, carefully integrated (with the help of a mesoscopic algorithm of synthesis) to enable a qualitative, summative interpretation of the complex matter in a holistic way. However, the question of the nature of complexity measures remains: they are certainly not measures of effectiveness, efficiency or relevance. What are they? I believe the answer will be context-specific, depending on concrete evaluation situations. We have proposed measures of synergy: in one of our evaluations we developed three integral measures of integration – a measure of cohesion and two measures of balance, strong and weak. What is most important to observe is that measures of complexity are mostly of a hybrid (composite) nature, as they are obtained at the meso level of evaluation. I am pretty much convinced that the real challenge (How do we define success simply in complex operations?) is that mainstream thinking is micro- or macro-founded, while complexity demands mesoscopic reasoning. The answer is pretty simple; the transforming logic which enables it is the really tough thing in our shared challenge.

Submitted by Caroline Heider on Tue, 04/22/2014 - 03:52

In reply to by bojan (radej)

Permalink
Bojan, many thanks for the compliments and the extensive contribution to the conversation. Yes, transforming logic, as you put it, and the way in which we work, interact with each other are tough challenges for those who design and implement interventions, as much as for evaluators. Let's continue the dialogue, as it will take many good thinkers and different perspectives to come up with approaches that help us move in a better direction. In the meantime, I would be interested in learning more about your "measures of synergy".

Submitted by rick davies on Fri, 04/11/2014 - 06:27

Permalink
Re “How do we define meaningful targets at the interim that will both channel resources and work in the right direction to achieve these goals, and serve as a basis for evaluation?” Defining targets does not seem to have been a problem up to now :-(, but defining meaningful targets would presumably mean taking local context into account in the process, which could lead us in the direction of very customised targets. That solution would then present problems for a portfolio level evaluation that tried to make aggregate judgements about overall performance. [Unless performance was assessed in more meta terms e.g. as percentage of target value achieved] Alternately, a common across-the-board target could be retained if the evaluation process was able to include one or more weightings that took into account local contextual factors. This may not solve the problem because it could be argued that some contextual factors may be more important in some settings than others - so context factor X could not be given the same weighting across all settings :-(. There is another possible alternative that I have been interested in, which is pair comparisons, which allow detailed comparisons of individual cases, but not necessarily the same specific achievement criteria across all cases. Individual pair comparison judgements can be aggregated into overall performance rankings. Pair comparison is one means of aggregating votes and the analysis of their merits and limitations has been given some attention over the years.
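The pair-comparison idea above can be sketched in a few lines. This is a minimal illustration with made-up project names, not the formal vote-aggregation methods alluded to: given pairwise judgements between cases (each pair compared on criteria relevant to its own context), counting each case's wins yields an overall ranking without requiring a common metric across all cases.

```python
from collections import defaultdict

def rank_by_wins(judgements):
    """Aggregate pairwise judgements (winner, loser) into an overall
    ranking by counting each case's wins (a Copeland-style score)."""
    wins = defaultdict(int)
    for winner, loser in judgements:
        wins[winner] += 1
        wins[loser] += 0  # ensure losers also appear in the ranking
    return sorted(wins, key=lambda case: wins[case], reverse=True)

# Hypothetical pairwise judgements among four projects, each pair
# judged on whatever criteria fit its own context.
judgements = [("P1", "P2"), ("P1", "P3"), ("P2", "P3"),
              ("P4", "P1"), ("P4", "P2"), ("P4", "P3")]

print(rank_by_wins(judgements))  # ['P4', 'P1', 'P2', 'P3']
```

A simple win count sidesteps the weighting problem described above, although it inherits the known limitations of pairwise vote aggregation (intransitive cycles, ties) that would need handling in practice.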

Submitted by Caroline Heider on Wed, 04/16/2014 - 02:19

Permalink
Rick, I see your points about global goals and targets and translating them into something meaningful for specific country contexts or using country specific targets and finding ways to aggregate them up. But, maybe more so: high-level goals still need to be translated into outcomes that can be operationalized. Otherwise the leap from a program activity to a high-level goal, such as 3% poverty by 2030, might be too large, too distant.

Add new comment