In an earlier blog, Technology and Evaluation: The Evaluator's Perspective, I argued that the needs of stakeholders, not technology, should drive evaluation questions. I also shared my thoughts on the implications of information and communication technologies for evaluation (ICT4Eval) for our profession.

As evaluators, we also have to consider what matters to the users of our evaluations. Many of them play a dual role: as providers of data and as users of evaluation insights. The value they can derive from evaluation is an understanding of the perspectives of different stakeholders, documented evidence they might not otherwise be aware of, and the expert assessment of the evaluator.

ICT4Eval offers great opportunities for people to provide feedback and gives them a greater voice. I imagine a future where online systems exist through which people can share their views and at the same time learn about the bigger picture and diverse perspectives, as well as receive information that empowers them to demand better services or greater accountability.

Will that eliminate the need for the evaluator, or change the evaluator's role, in this "brave new world"?

This possibility of providing feedback to people, communities, and other actors in the development process is one of the most thrilling prospects of ICT4Eval. Doing so has been notoriously challenging: resources are often insufficient to break down insights in ways that are meaningful to this stakeholder group, and funding is even scarcer when it comes to taking lessons back to where the information came from. I hope that technology can close these gaps and empower people and communities to better understand their development processes and to take action when needed.

To get there, quite a bit of work will be needed, and not just by evaluators, tech geeks, and data scientists.

Governance structures, the rules of the game, and ethics, already touched upon in my previous blog, will matter even more when evaluation systems are created that allow self-evaluation and immediate feedback. Vulnerabilities exist at both ends of the spectrum. For instance,

  • How will providers of information be protected from repercussions, and at the same time, how will systems be protected from deliberately distorted data?
  • How will data scientists use their considerable skill to design data systems and algorithms that make sense in the context where they will be used?
  • Will we rely on the data literacy of all users or will we need to think of strategies to empower everyone to read and interpret findings, be alert to data weaknesses or biases, and make sense of it all?

Even assuming these and other challenges on both the data supply and demand sides can be addressed, leading to better-informed stakeholders, it does not follow that evaluative evidence will be used and that necessary actions will be taken.

ICT systems can help track progress in implementing evaluation recommendations, or facilitate reporting back on changes that happen following an evaluation. But ICTs cannot address questions of incentives, which are driven by value systems, the authorizing environment, and what motivates people to act. Sometimes the necessary actions are beyond their authority; sometimes other behaviors are more advantageous to them.
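To make the first point a little more concrete, a tracking system of this kind could be nothing more elaborate than a shared record of each recommendation, its owner, and the changes reported back after the evaluation. The Python sketch below is purely illustrative; the Recommendation class, the Status values, and the report_back method are hypothetical names I am using for the idea, not an existing tool or API.

```python
# Hypothetical sketch: a minimal tracker for evaluation recommendations.
# All names here are illustrative assumptions, not part of any real system.
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class Status(Enum):
    OPEN = "open"
    IN_PROGRESS = "in progress"
    IMPLEMENTED = "implemented"


@dataclass
class Recommendation:
    text: str
    owner: str                      # who is accountable for acting on it
    due: date
    status: Status = Status.OPEN
    updates: list = field(default_factory=list)

    def report_back(self, note: str, new_status: Status) -> None:
        """Record a change reported after the evaluation."""
        self.updates.append(f"{date.today().isoformat()}: {note}")
        self.status = new_status


# Example: track one recommendation and report progress back to stakeholders.
rec = Recommendation(
    text="Share evaluation findings with participating communities",
    owner="Programme team",
    due=date(2020, 6, 30),
)
rec.report_back("Community feedback sessions scheduled", Status.IN_PROGRESS)
print(rec.status.value, rec.updates)
```

Even a simple record like this makes visible who owes what by when; what it cannot supply are the incentives to act on it, which is exactly the limit described above.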

Are you an evaluator or evaluation user? In what ways has technology impacted your work? Let us know in the comments below.