Technology and Evaluation - Man versus Machine?
The first in a series of reflections on the information and communication technology (ICT) revolution and how it will impact evaluation in the not-so-distant future.
By: Caroline Heider
As development processes become more complex, we need a larger toolkit to combine evaluation methods.
On the data supply side, questions of ethics, governance, biases, and capacity must be addressed.
Questions of capacity, incentives, and follow-up actions must be addressed when using new ICT.
Recently, I was invited to attend the ICT4Eval conference organized by the evaluation office of the International Fund for Agricultural Development (IFAD). It was a well-orchestrated event with an excellent choice of presenters and participants. At the end of the conference, I was asked, along with Marco Segone, director of evaluation at UN Women, to reflect on the stimulating and rich discussions.
Evaluation typically touches on a broad set of actors: from people affected by development interventions, to governments and public servants, the private sector, and evaluators. Their roles span from providing and using information, to developing and testing technology, and making decisions about programs and their implementation. Across the spectrum, technology offers opportunities and entails risks that need to be managed. In this new blog series, we will explore some of the interesting ways in which information and communication technology (ICT) is impacting the world of evaluation.
But before we get there, let me address a question that loomed large in some conference participants' minds: will ICT replace the evaluator? Will machines, with their increasing intelligence and ability to learn, take over, leaving humans no longer needed? Who is master and who is servant in this new age of technology?
As with so many things in life, it is not a question of either-or but of the right combination of man and machine, and of a division of labor that optimizes evaluation design, implementation, and use. It worries me to hear the argument that data scientists, as the masters of machines, will be pitted against evaluators, as the guardians of evaluation methods. As evaluators, we have gone through at least two cycles in which camps formed and fought for supremacy: first quantitative versus qualitative methods, then randomistas against the rest of the world. We should have learned by now that dichotomies like these limit choices and unnecessarily lower the quality of data, insights, and value of evaluations. Let us not waste another 10 years arguing whether a single method can explain all there is to know. The world is too complex for such a simplistic approach, and resources are too scarce to waste on an unnecessary argument.
Instead, as we seek to understand ever more complex development processes and phenomena, we need a larger toolkit that allows us to combine relevant evaluation methods. It is the smart deployment of people from different professional backgrounds, working together to design and apply a range of relevant methods, that will help us shed more light on what is happening and why, on whether and under which circumstances successes can be replicated, and on how we can avoid repeating mistakes unnecessarily.
It is mankind and machine together that can work towards that goal.
In addition to this fundamental question, we discussed plenty of other challenges for ICT4Eval. On the data supply side, questions of ethics, governance, biases, and capacity need to be addressed, while on the data use side there are equally important questions of capacity, incentives, and follow-up actions. Technology can help in many spheres, but we have to consider and manage the associated risks.
We will be posting a couple of blogs on the topic to stimulate a discussion among our readership and invite practitioners with experience to contribute. Do you have any interesting experiences or thoughts on how ICT is impacting evaluation? If so, please let us know in the comments section below.