Ensuring all views are considered in participatory evaluation

The World Bank Group wants to improve its development effectiveness by, among other things, engaging citizens throughout the operational project cycle. It has set itself an ambitious target: 100% citizen engagement in projects that have clearly identifiable beneficiaries.

Participation in development is not a new concept. It goes back some 40 years, to when practitioners realized the importance of participation and its link to ownership and development impact. In today's world, citizens voice their concerns: protests in the streets from Istanbul to Cairo and Rio are just one way of doing so. More importantly, new and affordable technology vastly expands the possibilities citizens have to raise issues and ask to be heard. These same technologies can be used by development agencies to reach citizens - including the poor - to give them a voice in project planning, monitoring, and reporting.

Participatory Evaluation: We have come a long way

I'm a great fan of the Most Significant Change method that Rick Davies started promoting years ago. While the authors of the method suggested that it should be embedded in projects from design through evaluation, I also used it successfully in an evaluation over 10 years ago in Papua New Guinea. We asked people in project-influence areas what they saw as the big changes in their communities over time and whether they felt these were good or bad, regardless of what the projects had aimed to do. Their assessment of what success looked like was complemented with more traditional evaluations that together gave us a much deeper understanding about what was achieved or not, and why.

As we know from Robert Chambers' work, and that of others, participatory work requires being mindful of whose voices are being heard and influencing the process. The way of engaging people in the field needs to be adapted to what makes sense to them - for instance, in Papua New Guinea, we worked with our local researchers to test and agree on participatory methods they felt would work best in the local context - and ensure all voices are heard. We as evaluators need to be aware of risks that one group or another will dominate the discussion - whether it's because of their wealth and status, gender, ethnicity, age group, or sexual orientation - and design methods to ensure broad participation and differentiated collection of information and feedback.

Community Driven Development: Ideal for participatory evaluation

Today, at IEG we are conducting participatory evaluation work in a number of projects. Community-driven development projects are a good case in point. They are based on beneficiary participation from design through implementation, which makes them good candidates for citizen-centered assessment techniques in evaluation.

In our evaluation, we wanted to:

  • understand how the beneficiaries defined and characterized their own development processes at the individual, household and village level;
  • get a sense of how people in the communities defined the meaning of "empowerment", and the livelihood impacts that were most important to them;
  • capture benefits that are less tangible and often lost in surveys or techniques that rely on quantitative methods, such as how access to finance affected women's confidence in using money; and
  • ensure that the learning cycle begins and ends with those who were intended to benefit, as well as those who have been left out.

Employing technology to evaluate services delivery

We have used a range of evaluation technology to reach citizens and community members to gather their feedback on service provision.

In Afghanistan, we hired a company to run a local radio campaign that invited people to send an SMS if they wanted to participate in a phone survey. The company called people and asked them for feedback on health and education services. The advantages were clear: we reached areas we could never have traveled to and gathered information from people directly affected by projects and the services they were to enhance. The downside is that we could reach only those who had mobile phones and were willing to participate.

In Senegal, we used an innovative smartphone-based platform to gather beneficiary feedback on the utility and maintenance of sanitation equipment. Working with researchers at the University of Dakar, we carried out a large-scale, randomized, and low-cost survey that allowed us to confidently report that, at present, 80 percent of the sanitation equipment constructed is still functioning and is considered useful by the households that use it. We also learned that sanitation facilities were more likely to be maintained in households with an able-bodied female member present (wife, daughter or sister), since they were charged with cleaning them.
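To illustrate how a randomized survey supports a confident point estimate like the 80 percent figure, here is a minimal sketch of a normal-approximation confidence interval for a survey proportion. The numbers are made up for illustration and are not the actual Senegal data:

```python
import math

def proportion_ci(successes, n, z=1.96):
    """Normal-approximation 95% confidence interval for a survey proportion."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)  # standard error of the estimate
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

# Hypothetical numbers: 800 of 1,000 sampled facilities still functioning.
p, lo, hi = proportion_ci(800, 1000)
print(f"{p:.0%} functioning (95% CI: {lo:.1%} to {hi:.1%})")
```

With a sample of this size the interval is only a couple of percentage points wide, which is what makes a statement like "80 percent are still functioning" defensible.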

Citizen engagement across the project cycle

These examples demonstrate the value of citizen engagement for evaluation and some of the challenges that come with it. The Bank Group's commitment to engage citizens early will certainly benefit evaluation as well, as we can build on their views and reconfirm how things have worked out at the end.



Great article, and kudos to IEG for embracing the use of technology in its evaluation strategy! The Senegal smartphone example is particularly enticing. While transformational, the use of technology is rarely seen in the evaluation field. Hopefully, technology for enhancing beneficiary participation in evaluation becomes further ingrained.

In reply to by Peter McCullin


Peter, thanks for your appreciation. I agree with you that it is important to expand the use of technology in evaluation. I hope other readers of the blog can contribute other examples so that we get a better sense of what is used.

Technology can go a long way toward aiding participatory evaluation, provided the challenges can be identified and addressed early. From my experience in Nigeria, high rates of illiteracy and poor infrastructure in rural settings are of great concern.

In reply to by Hassan Ishaq Ibrahim


Hassan, important points, thanks for raising them. That's why using a range of methods and technologies -- modern ones as well as the old-fashioned approach of sitting down and talking to people -- is important. We also found that sometimes proxy information works really well; for instance, satellite data on forest fires (available for free) provided inputs that corroborated findings from on-the-ground fieldwork and analysis of documentation. The exciting thing is that today all of these strands of information and evidence can be pulled together with much greater ease than 25 years ago.

Dear IEG: great to hear that there is some interest in participatory evaluation techniques. In China, the World Bank conducted a large impact evaluation for the Poor Rural Communities Development Project, which included a participatory impact evaluation funded by a US$1 million DFID grant. Community score cards were used to provide feedback on the quality of project services (infrastructure, training, etc.), and mapping techniques were used to analyse outcome-level project results, such as social capital and knowledge and skills. Participatory evaluation helped to better understand livelihood changes and added to the analysis of poverty data provided by the household-level survey. The interesting thing about this evaluation is how it combined different (qualitative and quantitative) techniques to systematically assess changes in livelihoods across a very large and diverse project area. The Government of China appreciated the usefulness of this approach because it provided timely feedback on the effectiveness of project services. A study worth revisiting!

In reply to by Johanna Pennarz


Johanna, many thanks for sharing your experience. Sounds very interesting! Can you tell us more about what happened with the results, and how they were used in policy and implementation?

One of the problems associated with user / citizen participation is the challenge of getting people to participate. Besides logistical issues -- getting people to attend meetings and workshops -- there is the 'voter apathy' syndrome. It may be explained, in part, by the perception that the rewards for participation are too low (or nonexistent) relative to the time commitments, and that there is often little or no visible connection between the merit of participants' contributions (their validity, relevance, and creativity in producing better outcomes) and the eventual decisions.

I have long been working on a method of evaluating 'planning arguments' -- the kind of 'pro and con' arguments we routinely use in discussing design, planning, or policy proposals. The method is explained in my article 'The structure and evaluation of planning arguments' in Informal Logic Journal, Dec. 2010. Other papers describe a game based on this approach, designed to familiarize people with the concept. A side benefit of the approach is that the evaluation results can be used to generate 'rewards' -- contribution merit points -- for participants. In the game context, these would be used to determine 'winners', but that is less the purpose than rewarding participants for cooperative contributions leading to better final solutions.

The method can currently be used with existing tools for small groups and trained support staff; for large-scale projects, a 'planning discourse support system' with integrated software to handle the contributions, manage the evaluation process, and provide overview displays (issue and argument maps) will be needed; there is currently no software on the market that offers all the needed functions. More details are available in various papers on Academia.edu or on my Wordpress 'Abbe Boulah' blog.

Thorbjoern, you raise a number of important issues that ring true to me from the time when I was doing evaluation fieldwork myself. We will look into your publication and see whether and how we could use your ideas. But, let me just say: the biggest reward of participating in monitoring and evaluation should be the course-corrections that are made to a project in order to produce better results -- better services and opportunities -- for the people who should benefit. That naturally requires empowerment and delegation of certain decision-making combined with a powerful feedback loop that reports "up" those things that need more systemic solutions. Unfortunately, I have hardly ever seen an M&E system that is designed to do that.
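As a loose, hypothetical illustration of the kind of weighted pro/con scoring the comment above describes (this is an editor's sketch, not the commenter's actual formalism, and the argument names, weights, and plausibilities are invented):

```python
def proposal_score(arguments):
    """Combine weighted pro/con planning arguments into one score.

    Each argument is a (stance, plausibility, weight) tuple: stance is
    'pro' or 'con', plausibility is in [0, 1], and weight reflects the
    argument's importance. 'Con' arguments count negatively, weights are
    normalized, so the result falls in [-1, 1].
    """
    total_weight = sum(weight for _, _, weight in arguments)
    score = 0.0
    for stance, plausibility, weight in arguments:
        sign = 1 if stance == "pro" else -1
        score += sign * plausibility * (weight / total_weight)
    return score

# Hypothetical arguments about a planning proposal:
args = [
    ("pro", 0.9, 3),  # strong, important argument in favor
    ("con", 0.6, 2),  # moderately plausible objection
    ("pro", 0.4, 1),  # weak supporting point
]
print(f"{proposal_score(args):.3f}")
```

The per-argument contributions computed here could also serve as the 'contribution merit points' the comment mentions, rewarding participants whose arguments carry the evaluation.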

In reply to by Caroline Heider


Incorporating participatory methods that include stakeholders' viewpoints has long been an effective way of reducing project gestation periods. Advanced methods of prioritizing project alternatives and ranking sub-elements and interventions are another effective approach; they serve in subsequent analysis as a baseline for evaluation and help refine and improve impact criteria and monitoring indicators.

The biggest challenges with using technology, i.e., gathering group responses via mobile phones, are that:

  1. very few people use mobile phones relative to the overall population;
  2. the method is currently limited to only two options, viz. yes/no or go/no-go situations; and
  3. respondents need time to consider the question and provide their answers.

On the other hand, such evaluation methods are also being practiced through web-enabled tools. Fortunately, with the emergence of smartphones and social media, the response cycle is far shorter, and statistical analysis of group responses is instantaneous. Responses can be improved by providing more alternatives and the option of ranking them, so that a further step of analysis can examine the distribution pattern of respondents before and after the project. For such an analysis, a mobile-enabled application could be provided through a download link, with the app built around project options, ranking criteria, etc. Responses would flow back to a web database for subsequent analysis. One could go further and show the group response in a second round so respondents can modify their rankings, and after another such iteration reach a consensus on the outcome. Such tools could be useful for urban projects, where mobile penetration is very high.
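The multi-option ranking and aggregation the comment above describes could be sketched, for illustration, as a simple Borda-count aggregation of respondents' rankings. The option names and responses below are invented, and a real deployment would add the iterative feedback rounds the comment proposes:

```python
from collections import defaultdict

def borda_aggregate(rankings):
    """Aggregate individual preference rankings into a group ranking.

    Each ranking is an ordered list of options, most preferred first.
    An option in position i of an n-item ranking earns n - 1 - i points;
    options are returned sorted by total points, highest first.
    """
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for i, option in enumerate(ranking):
            scores[option] += n - 1 - i
    return sorted(scores, key=lambda option: -scores[option])

# Hypothetical responses ranking three project options:
responses = [
    ["water", "roads", "school"],
    ["roads", "water", "school"],
    ["water", "school", "roads"],
]
print(borda_aggregate(responses))  # 'water' ranks first overall
```

Showing this aggregate back to respondents and collecting revised rankings would implement the second-round consensus step the comment suggests.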

This is great insight into your evaluation, and I would like to see even more on the methodology. I do agree that it is a concern to get people to participate fully; typically, time constraints are part of the issue.

In reply to by Dr Haag


Dr Haag, thanks for the feedback on wanting to see more on methodology. Certainly will follow through.

It is just a little anecdote, but I think it highlights the point made about how participatory evaluation needs to make sense to the beneficiaries. Our project is using MandE's Most Significant Change approach in schools across Turkey. A large stumbling block has been enabling school coordinators to understand what MSC stories should contain. "MSC" as a description of the activity should never have been used, because potential voices were lost due to the daunting task of expressing something "significant". "Every little change" is perhaps better at the author level. I think the point also resonates with evaluators needing to ensure all voices are heard, in their rich diversity.

In reply to by Malcolm Cox


Thanks, Malcolm, for sharing this example of MSC in action. I agree on capturing diverse perspectives, including the "small" voices; it is also important to discern larger patterns to give those smaller voices a larger impact.

Sriniva, many thanks for the many good examples. Great contribution.
