In last week's #WhatWorks post, I argued that it was perhaps time for us in the evaluation community to rethink our evaluation criteria. After nearly 15 years of applying relevance, effectiveness, efficiency, impact, and sustainability as our foundational evaluation criteria, is it now time to change or adapt?

The evaluation criterion “relevance” has troubled me for quite some time. In many development settings, a project is considered relevant when “the aid activity is suited to the priorities and policies of the target group, recipient and donor.” Of course, this is important. In plain language, it makes us question whether the intervention aimed to address real needs.

But that is exactly where the challenge lies: the needs of whom?

In an ideal world, the needs of the target population are aligned within the community, with the government’s priorities, and with the policies of donors. In reality, such a theory makes a large number of assumptions: for instance, that the target community is homogeneous, which it often is not. Nor are priorities at central and decentralized levels identical, whether because of real differences in needs or for political reasons.

In practice, evaluators often use policies of governments, donors, and aid agencies to assess whether an intervention is relevant in that context. More often than not, these policies are written in ways that can justify a whole slew of different activities. Hence, meeting the bar for relevance is not all that hard.

In addition, I would argue, this criterion might be irrelevant in today’s world of complexity.

Look at network analyses that map out situational problems and how they are interlinked. The TED Talk by Eric Berlow illustrates in less than four minutes how complexity theory and technology allow us to map and understand development challenges in completely new ways. Being a visual person, I am fascinated by the modeling capacity that technology now provides.

More importantly, techniques like these could change the process through which we seek and find solutions to development challenges. They provide us with an opportunity to live up to the values of a more inclusive world, where the voices and perspectives of a much broader group of people matter in defining goals, solutions, and the pathways that will get us there. This modeling capacity could help bring together the views of a broader set of stakeholders, add perspectives to our understanding of a particular development challenge and its interrelated factors, and surface different solutions than, say, a group of experts might see from the vantage point of their technical expertise.

And an approach like this can help anticipate potential amplifiers of success, as well as what we used to call “killer assumptions” – factors that are strong predictors of failure or of diminished development outcomes. Such assumptions are often embedded in project or policy design without being recognized.

Impractical? Watch the video and look at the model the US military had developed for the situation in Afghanistan. Berlow maps all of these factors into an interactive model and then identifies nodes that have much larger ripple effects throughout the system than others.
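To make this concrete, here is a minimal sketch of what such an analysis could look like in code, using Python and the networkx library. The factor map and node names are hypothetical, invented purely for illustration – this is not Berlow's model, only the general technique of ranking nodes by how much of the network runs through them.

```python
# A minimal sketch of the kind of network analysis described above, using the
# networkx library. The factor map and node names are hypothetical, purely
# for illustration -- not the actual Afghanistan model from the talk.
import networkx as nx

# Directed graph: an edge A -> B means factor A influences factor B.
factor_map = nx.DiGraph()
factor_map.add_edges_from([
    ("insecurity", "low school attendance"),
    ("low school attendance", "limited livelihoods"),
    ("limited livelihoods", "insecurity"),
    ("poor roads", "limited livelihoods"),
    ("poor roads", "low access to clinics"),
    ("low access to clinics", "poor health"),
    ("poor health", "limited livelihoods"),
])

# Betweenness centrality is one simple proxy for "ripple effect" nodes:
# factors that sit on many of the paths linking other factors together.
centrality = nx.betweenness_centrality(factor_map)
for node, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{node:25s} {score:.2f}")
```

In a real application the factor map would of course come from stakeholders rather than a hard-coded list, and one would test several centrality measures rather than rely on a single one.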

What does all of this have to do with the simple evaluation criterion called relevance?

If we apply relevance to a more complex reality in the same way we have up to now, with the policy context as the yardstick, any intervention will meet the criterion as long as it falls anywhere in the network of interrelated factors.

But that is not important for decision-makers! Instead, as evaluators we need to shed light on whether an intervention’s focus is on nodes in the network that matter, that can have large multiplier effects, or that are peripheral to the desired solution. That is a lot more than “relevance”.

I suggest, therefore, that we fundamentally rethink the “relevance” criterion and replace it with something that helps assess whether:

  • Diverse perspectives were taken into account in identifying and implementing solutions; that is, whether the networked analysis of the development challenge captures parameters outside a linear project logic that are essential to the success or failure of the intervention;
  • Development interventions address key entry points – the significant nodes that are bottlenecks or opportunities for multiplier effects – in a networked analysis of the development challenge at hand; and
  • There are synergies across – or joining up of – a multitude of interventions aimed at the same development challenge (a rough sketch of how one might check these last two points follows this list).
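To illustrate those last two points, here is a rough, hypothetical sketch of how one might check whether individual interventions – and a portfolio of them – focus on the key entry points identified in such a networked analysis. The factor map, the choice of betweenness centrality, and the intervention names are all assumptions made purely for illustration.

```python
import networkx as nx

# Same hypothetical factor map as in the earlier sketch.
factor_map = nx.DiGraph([
    ("insecurity", "low school attendance"),
    ("low school attendance", "limited livelihoods"),
    ("limited livelihoods", "insecurity"),
    ("poor roads", "limited livelihoods"),
    ("poor roads", "low access to clinics"),
    ("low access to clinics", "poor health"),
    ("poor health", "limited livelihoods"),
])

# Treat the top-ranked nodes (by betweenness centrality) as the key entry points.
centrality = nx.betweenness_centrality(factor_map)
key_entry_points = set(sorted(centrality, key=centrality.get, reverse=True)[:3])

# Hypothetical focus areas of three interventions aimed at the same challenge.
interventions = {
    "rural roads project": {"poor roads"},
    "livelihoods programme": {"limited livelihoods"},
    "mobile clinics project": {"low access to clinics"},
}

for name, focus in interventions.items():
    hits = sorted(focus & key_entry_points)
    print(f"{name}: key entry points addressed -> {hits if hits else 'none'}")

# Synergy check: which key entry points does the portfolio as a whole leave out?
covered = set().union(*interventions.values())
print("Key entry points not addressed by any intervention:",
      sorted(key_entry_points - covered))
```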

Doable? Add your thoughts on what it would take.


The Rethinking Evaluation series is dedicated to unpacking and debating the evaluation criteria by which we judge success and failure, and asking whether they are fit for the future. Stay tuned and contribute your views.

Read other #WhatWorks posts in this series, Rethinking Evaluation:

Have we had enough of R/E/E/I/S? and, following this post in the series, Agility and Responsiveness are Key to Success and Efficiency, Efficiency, Efficiency.

Comments

Submitted by Ting on Tue, 01/17/2017 - 21:20

Thanks, Caroline, for the informative blog and perspective. Systems thinking is well embedded in those key points, through taking diverse perspectives and connections/links into account. In reality, and in systems-thinking terms, how do we then make the boundary choices that determine what is relevant and what is not? Perhaps actor-centered thinking or a theory of change might add some value by making explicit the major actors involved, their inter-relationships, and their domains of interest, to help identify the focus and priority? For instance, the actor-centered logic of Outcome Mapping -- differentiating various groups of project actors located in different project domains (sphere of control, sphere of influence, sphere of interest).

Many thanks, Ting, for this interesting contribution. We will collect a number of suggestions and reflect on what they mean for our work going forward. Until then I hope you keep sharing your ideas.

Submitted by Ashwini Sathnur on Wed, 01/18/2017 - 01:30

As evaluators, our focus should also span the evolution and progress of an intervention's implementation over a specified duration of time. This means that, along with applying relevance, effectiveness, impact, and sustainability as evaluation criteria, we would need to add "Evolution/Progress" as one more evaluation criterion. This progress would be calculated and measured at the country or regional level, on the basis of the year-on-year increase or decrease in the implementation of the intervention under focus. It would be calculated using the formula below:

Evolution/Progress = {[Year n's contribution - Year (n-1)'s contribution] / [Year (n-1)'s contribution]} * 100%

This contribution would be the nation's contribution, or the measured collective contribution of the region, for that particular year (n) and for the year (n-1).

Then, based on the values of Evolution/Progress deduced above, countries as well as regions across the world would be ranked, leading to the creation of rankings based on the evaluation criterion "Evolution/Progress"!
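A minimal worked example of the calculation proposed above, using hypothetical figures rather than real country or regional data:

```python
# Hypothetical contributions for one country; not real data.
contribution_previous_year = 80.0   # year n-1
contribution_current_year = 100.0   # year n

evolution_progress = (
    (contribution_current_year - contribution_previous_year)
    / contribution_previous_year
) * 100
print(f"Evolution/Progress: {evolution_progress:.1f}%")  # prints 25.0%
```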

Ashwini, as you can see from my subsequent blog, I agree with you that we need tools and a focus on assessing change over time, and on whether adaptation is timely and responsive. I am not sure this can always be calculated as you suggest, but even a more qualitative approach would be useful.

Submitted by Juha on Sun, 01/22/2017 - 19:47

Dear Caroline,
I fully agree with you and would like to add that for an intervention to be relevant it must make a difference in the development problem we would like to address. To paraphrase a former colleague, all of our projects do good things but whether they have an impact is an entirely different question. Seen from that perspective, relevance must be linked to impact. In fact, your idea about the need to address the key entry points is important.

Submitted by Jindra Cekan, PhD on Mon, 01/23/2017 - 13:29

Dear IEG and Caroline - With all due respect, "After nearly 15 years of applying relevance, effectiveness, efficiency, impact, and sustainability as our foundational evaluation criteria" and the Bank still cannot tell us which project outcomes and impacts were still standing, or newly emerged, after the Bank ended its project and programme funding?! We seem to have found that you rarely consult the very participants your President extols (see http://valuingvoices.com/ieg-blog-series-part-ii-theory-vs-practice-at-…), and you want to shift the focus away from evaluating sustained impact (done on less than 1% of all projects) rather than doing far more of it, which is true accountability to those you serve? Help me to understand!

Jindra, I am confused by your comment, as it doesn't seem to relate to the blog. I also think it is a rather sweeping statement that we do not consult stakeholders and civil society at all. Of course there is always room to do more, but as you would know, there are many players, and those who are not consulted can be very vocal, while those who have been might not be.

Submitted by Kevin Billing on Tue, 01/24/2017 - 09:10

I liked the vision of adding a LOT MORE to 'relevance', and I'll be interested to follow up on some of the new lines of thinking in evaluation opened up by the article. As a market systems development specialist, we have for a number of years been interested in identifying the nodes of intervention that have the biggest chance of influencing whole value chains or inducing the systemic changes that create the biggest impact. I agree - we all need to make what we do more relevant.

Submitted by Petra on Tue, 01/31/2017 - 15:39

I like the thoughts about "is relevance still relevant?". The paradox is: if a project was relevant for whoever at the start, and if it has been successful, it would ideally become obsolete or "irrelevant". How are we to assess whether the project is still relevant after it has supposedly achieved its objectives?

Petra, you raise an important point. I think it illustrates well the difference between a project that builds a service capacity and the continued provision of services. For instance, if a project develops the capacities to deliver health services, the project that set these up might no longer be needed/relevant once they are up and running, but the health services themselves continue to be relevant to the needs of the population. One could imagine another example, outdated by now but useful as an illustration: a project that built capacities for delivering IT services. Let's assume it aimed to develop landlines (anyone remember those?), because that was the technology at the time of approval. By the time the evaluation comes around, the objective -- to provide affordable IT services to people -- might still be relevant, but the technical solution no longer is.

Submitted by Zehra on Sat, 07/01/2017 - 16:56

Thanks for sharing your thoughts on this - I have been thinking about this criterion a lot, particularly lately. I am inclined to say that, as it stands now (and with the usual questions used within it), relevance is obsolete and does not contribute significantly to the evaluation. This is because, in a development context (even where government policies are absent or not elaborated), all interventions are 'relevant', as there are so many needs and so many gaps. From that point of view, your suggested points are going in the right direction, though for a start I would call it something different, to highlight the different angle on what we actually want to assess under relevance. In most cases, even if an evaluation's findings point to an intervention having diminished relevance (for many reasons, including the efficiency of the process from programming to contracting to implementation), this cannot be well reflected, due to the way the questions are set. Even when evaluators are in charge of setting the evaluation questions, there is a tendency to 'go with the flow' and pose the same old questions, meaning that we will get the same set of answers. The result is that, for the vast majority of evaluation reports, relevance is always positive. A review of ROM monitoring reports (the EC monitoring tool) also shows that most, if not all, projects are rated A or (soft) B. This, to me, is a good indicator that something needs to change quickly, or we will just continue to pay 'lip service' to our clients.

Submitted by Justus Kamwesigye on Mon, 06/11/2018 - 04:52

1. Looking at relevance at the evaluation stage is too late. The focus should be on whether and how the intervention (project, program, or whatever) was monitoring what was happening in its context – that is what should be measured.
2. Relevance is more important in design, less important in implementation, and much less important in evaluation. The analysis of relevance should be done at the design stage and included in the monitoring systems and processes.
