Recently, I was invited to attend the ICT4Eval conference organized by the evaluation office of the International Fund for Agricultural Development (IFAD). It was a well-orchestrated event with an excellent choice of presenters and participants. At the end of the conference, I was asked, along with Marco Segone, director of evaluation at UN Women, to reflect on the stimulating and rich discussions.

Evaluation typically touches on a broad set of actors: from people affected by development interventions, to governments and public servants, the private sector, and evaluators. Their roles span from providing and using information, to developing and testing technology, and making decisions about programs and their implementation. Across the spectrum, technology offers opportunities and entails risks that need to be managed. In this new blog series, we will explore some of the interesting ways in which information and communication technology (ICT) is impacting the world of evaluation.

But before we get there, let me address a question that loomed large in some of the conference participants' minds: will ICT replace the evaluator? Will machines, with their increasing intelligence and ability to learn, take over, leaving humans no longer needed? Who is master and who is servant in this new age of technology?

As with so many things in life, it is not a question of "either-or" but rather of finding the right combination of man and machine, and the right division of labor between them, to optimize evaluation design, implementation, and use. It worries me to hear the argument that data scientists, as the masters of machines, will be pitted against evaluators, as the guardians of evaluation methods. As evaluators, we have already gone through at least two cycles in which two camps formed and fought for supremacy: first quantitative versus qualitative methods, then the randomistas against the rest of the world. We should have learned by now that dichotomies like these limit choices and unnecessarily lower the quality of data, insights, and value of evaluations. Let us not waste another 10 years arguing whether there is one method that can explain all there is to know. The world is too complex for such a simplistic approach, and resources are too scarce to waste on an unnecessary argument.

Instead, as we seek to understand ever more complex development processes and phenomena, we need a larger toolkit that allows us to combine relevant evaluation methods. It is the smart deployment of people from different professional backgrounds, working together to design and apply a range of relevant evaluation methods, that will help us shed more light on what is happening and why, on whether and under which circumstances successes can be replicated, and on how we can avoid unnecessarily repeating mistakes.

It is mankind and machine together that can work towards that goal.

In addition to this fundamental question, we discussed plenty of other challenges for ICT4Eval. On the data supply side, questions of ethics, governance, biases, and capacity need to be addressed, while on the data use side there are equally important questions of capacity, incentives, and follow-up actions. Technology can help in many spheres, but we have to consider and manage the associated risks.

We will be posting a couple of blogs on the topic to stimulate a discussion among our readership and invite practitioners with experience to contribute. Do you have any interesting experiences or thoughts on how ICT is impacting evaluation? If so, please let us know in the comments section below.

Learn more about the ICT4Eval Conference and watch the full proceedings.

Comments

Submitted by Alejandro Uriza on Thu, 07/20/2017 - 13:04


The use of ICT in evaluation processes not only facilitates data collection: it reduces processing times and makes it possible to spot gaps in the data while they are being collected. Combining methods is still always necessary, and the evaluator's direct contact with people is not replaceable. The use of ICT is an integral part of evaluation processes and makes survey data collection more efficient. But I agree that, on the data supply side, questions of ethics, governance, biases, etc. should be addressed.

Submitted by Jane Massy on Fri, 07/21/2017 - 07:38


ICTs also enable us to innovate in how data is collected, with opportunities to use graphics, photos, sound, and a range of other media. However, this challenges us as evaluators to build our own skills in designing and implementing new approaches to both collecting and analysing data. I have found that ICTs have helped us to map relationships and to begin to collect empirical evidence on how supporting relationship building, through mechanisms such as networks of participants, contributes (if at all) towards outcomes. It is a given that ethics, governance, biases, etc. will need to be an integral part of the adoption of these technologies. And we need to learn to use the technologies to help us avoid bias, for example by using analytics to highlight bias where it might not be evident even to the expert eye.

Submitted by Mwiru Sima on Sun, 07/23/2017 - 08:50


I agree with the contributors above, and I largely see ICT as a tool to complement what evaluators are currently doing in terms of data collection and analysis. It will make evaluation turnaround quicker and, more importantly, cheaper, by processing large quantities of information; this will make evaluations even more valuable, as conclusions can be reached with larger amounts of information triangulated. In all of this, though, human intervention is key to interpreting the findings and even to designing the evaluations themselves. The interconnectedness of the SDGs, for instance, needs careful human judgment in the design of the evaluation.

Submitted by Omid Hassannejad on Tue, 07/25/2017 - 23:08


A very interesting topic to me, as I am currently doing research on the interactions and relations between human and non-human actors in evaluation practice, using data from four of the World Bank's international development projects, such as project documents and evaluation reports, as my data set. I am particularly trying to understand how the interactions and relations of non-human actors (such as technology, objects, artifacts, tools, software, and events) with a project's human actors (such as project managers, project sponsors, funders, evaluators, and other key stakeholders) influence the project's outcomes and results. For more info please check the following link:
https://www.linkedin.com/pulse/how-large-complex-projects-evaluated-iba…

Submitted by Leonardo Bravo on Thu, 08/10/2017 - 09:44


Technology is already helping evaluators to perform their tasks faster, cheaper, and better. It helps to deliver products of consistent quality, with more in-depth analysis and perspective. The ideal is to enhance human capability and to stay on top of the machine. When evaluations answer why and how, there is no risk of cannibalization; but when they are merely factual and offer only surface-level analysis, machines will outperform evaluators. It is time to move evaluation practice into the 21st century and use the advanced analytics available.
