To unlock the potential of “finding out slow”, evaluators can rely on in-depth case study analysis, a method that is not being used to its full potential in current evaluation practice.

Evaluation is applied social science research conducted under time, budget, data, and institutional constraints (Rossi et al., 2004; Bamberger et al., 2011). Some branches of evaluative inquiry, notably those that lean more toward research, such as impact evaluation, tend to focus on very specific interventions and outcomes. Yet this is not common practice in (independent) institutional evaluation functions that are closely linked to corporate decision-making, learning, and accountability.

Evaluators often need to respond to broad evaluation questions about quite complex evaluands (large projects, programs, or thematic areas of work). Even in focused evaluative exercises, challenges remain in dealing with multiple levels of analysis (such as the thematic area, project, and/or intervention activity), multiple countries (or sectors or projects), and multiple stakeholder groups. And here lies one of the key challenges of evaluative inquiry: can evaluators arrive at a sufficiently credible evaluative judgement in the face of such complexity and practical constraints?

There are several arguments in favor of an affirmative answer. Systematic evaluation design and data collection, the availability and use of a growing number of relevant data sets, building on existing research and evaluative evidence, and applying principles of triangulation and adjudication between rival explanations are all examples of principles and tools available to the evaluation team.

Yet, there are also risks. Due to the constraints discussed above, the validity of evaluative findings faces, among others, two key threats: cognitive bias and lack of depth of analysis. The first is a perennial issue that is exacerbated in evaluative practice because evaluators often reconstruct and test only one particular program theory. Often, there is no time or incentive (especially in the practice of objectives-based evaluation) to do more than that. The program theory becomes the lens through which the evaluator looks at empirical reality, at the risk of (limited to extensive) cognitive bias (see my blog post on program theories). The second threat refers to the potential fragmentation of human and financial resources across the multiple analytical activities that make up the larger evaluation.

Both threats are characteristic of a prevalent practice that we could label “finding out fast”. This is not because evaluations are necessarily conducted in a rapid manner, but because they comprise a multitude of small exercises that collect data on bits and pieces of a larger puzzle, which together support the evaluative assessment at the level of the complex evaluand (e.g. a multi-level and multi-site project, a country program, a thematic area of work). Yet, each piece of the puzzle may be complex in itself.

To counteract these potential threats to validity, I briefly present the case for a different approach which, again for simplicity’s sake, we could label “finding out slow”. Enter Albert O. Hirschman, distinguished development thinker and one of the most influential social scientists of his time. He was also an evaluator avant la lettre, although he vehemently denied this, as he would reject any label (see Picciotto’s (2015) excellent discussion of the implications of Hirschman’s work for evaluation). In one of his seminal works, Development Projects Observed (Hirschman, 1967, 2015), Hirschman discusses his in-depth empirical assessment of eleven World Bank projects. Instead of following a deductive approach (as often applied in objectives-based evaluation), he followed a bottom-up, inductive approach, keeping an open mind to the intricate complexity and idiosyncrasies of each of the projects that he studied. Through critical in-depth empirical observation and inquiry, he then arrived at what he called the “structural characteristics” (Hirschman, 2015: 4) of projects, i.e. principles that would explain success or failure under certain conditions. In doing this, his thinking was very much aligned with that of another great contemporary, Robert Merton, in identifying “middle range theories” (Merton, 1967) around the workings of development projects that were neither grand theories of development effectiveness nor isolated findings lacking broader generalizability. To unlock the potential of “finding out slow”, evaluators can rely on in-depth case study analysis, a method that is not being used to its full potential in current evaluation practice (for guidance see, for example, Stake, 1995 or Yin, 2017).

Let me briefly illustrate my point with an example of an evaluative exercise I was involved in several years ago, concerning the outreach and potential impact of a rural financial institution (RFI) in Nicaragua. Over the years this RFI has received support from at least two International Financial Institutions and several other international donors. Some of the international donors who pledge capital for on-lending by the RFI have clear expectations about the use of these funds: deepening outreach among the rural poor and, consequently, alleviating rural poverty.

An evaluative exercise in “finding out fast” mode would typically look at the “official” program theory underlying outreach (e.g. the availability of physical collateral, a plan for investment, etc.) and at the evolution of the portfolio over time. It would address such questions as: “Has there been a decrease in average loan size, or an increase in the number/proportion of loans below a certain threshold level?” and “Has there been a decrease in average income/asset levels of clients, or an increase in the number/proportion of clients under a certain threshold level?” If a clear trend in the portfolio could be established that indicates greater outreach among the poor, then through deduction evaluators may conclude that there is likely to be a positive poverty effect. Such a conclusion could be drawn more confidently if existing research supported poverty reduction effects attributable to credit under similar conditions. Quasi-experimental analysis (if income/asset data were available over time for clients and potential clients) could shed even more light on this question.
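To make the portfolio-level side of this concrete, below is a minimal sketch (in Python, using pandas) of the kind of indicator calculation a “finding out fast” analysis might run. The column names (year, loan_amount, client_income), the small-loan threshold, and the toy data are hypothetical stand-ins for the RFI’s actual portfolio data, not the original exercise.

```python
# Minimal sketch of "finding out fast" portfolio indicators.
# Assumptions: a loan-level data set with hypothetical columns
# 'year', 'loan_amount', and 'client_income'; the 500 threshold
# for a "small" loan is purely illustrative.
import pandas as pd

def outreach_indicators(loans: pd.DataFrame, small_loan_threshold: float = 500.0) -> pd.DataFrame:
    """Per-year indicators commonly used as proxies for depth of outreach."""
    by_year = loans.groupby("year")
    return pd.DataFrame({
        "avg_loan_size": by_year["loan_amount"].mean(),
        "share_small_loans": by_year["loan_amount"]
            .apply(lambda s: (s < small_loan_threshold).mean()),
        "avg_client_income": by_year["client_income"].mean(),
    })

# Toy data: a declining average loan size and a rising share of small loans
# would, deductively, be read as deepening outreach among the poor.
loans = pd.DataFrame({
    "year": [2015, 2015, 2016, 2016, 2017, 2017],
    "loan_amount": [900, 450, 700, 400, 520, 350],
    "client_income": [2400, 1100, 2000, 950, 1500, 800],
})
print(outreach_indicators(loans))
```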

Yet, a “finding out slow” approach would go beyond this: it would shed the constraints of the formal program theory and enable the evaluator to go into the field with a Hirschmanian inductive lens. In our evaluative exercise years ago in northern Nicaragua, we discovered that outreach appeared to be determined not so much by formal selection criteria as by the mechanism of existing clients recommending new potential clients. When we embarked upon a social network analysis to study the connections between those who provided the recommendations and the new clients, we found a very strong pattern. The big nodes in the social network were large farmers and local community leaders, each recommending land laborers and small farmers from their own sphere of influence. We formally tested the network theory against the official program theory and found the network effect to be a significantly stronger explanatory factor of outreach and access. Given the dependency relationships between the local leaders and their “dependents”, this finding cast a whole new light on the potential poverty alleviation effect of rural credit in that region.
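As an illustration of what the network analysis step might look like in code, here is a minimal sketch using Python’s networkx. The edge list (recommender to new client), the node labels, and the use of out-degree as a simple measure of a node’s importance are hypothetical simplifications; they are not the original study’s data or its formal tests.

```python
# Minimal sketch of analysing a client-recommendation network.
# Assumptions: hypothetical directed edges (recommender -> new client)
# and node labels; the real Nicaraguan data are not reproduced here.
import networkx as nx

edges = [
    ("large_farmer_A", "laborer_1"), ("large_farmer_A", "laborer_2"),
    ("large_farmer_A", "small_farmer_1"),
    ("community_leader_B", "laborer_3"), ("community_leader_B", "small_farmer_2"),
    ("small_farmer_1", "laborer_4"),
]
G = nx.DiGraph(edges)

# Out-degree counts how many new clients each node recommended.
# Out-degree concentrated in a few nodes would signal the "big node"
# pattern of local leaders channelling access to credit.
for node, degree in sorted(G.out_degree(), key=lambda kv: kv[1], reverse=True):
    print(f"{node}: recommended {degree} new client(s)")
```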

There are good reasons for evaluators to pursue a “finding out fast” strategy. It is good enough in many cases if subject to solid principles of applied evaluation research. Yet, there is always a risk of failing to understand the real causal processes at work. There should be due opportunity and space for “finding out slow”. Hirschman showed this in a thoughtful and eloquent manner in his seminal work Development Projects Observed. It remains as relevant today as it was then.

Note

The Independent Evaluation Group, in collaboration with the A Colorni-Hirschman International Institute, recently organized the conference “A Bias for Hope - Second Conference on Albert Hirschman’s Legacy” (October 25-26, 2018; World Bank, Washington, D.C.).

References

Bamberger, M., J. Rugh and L. Mabry (2011) RealWorld evaluation: Working under budget, time, data, and political constraints, Sage, Thousand Oaks.

Hirschman, A.O. (1967, 2015) Development projects observed, The Brookings Institution, Washington D.C.

Merton, R. (1967) On sociological theories of the middle range. Chapter 2 in: On theoretical sociology, five essays old and new, Free Press, New York.

Picciotto, R. (2015) Hirschman’s ideas as evaluation tools, Journal of MultiDisciplinary Evaluation, 11(24), 1-11.

Rossi, P.H., M.W. Lipsey and H.E. Freeman (2004) Evaluation: A systematic approach, Sage, Thousand Oaks.

Stake, R. (1995) The art of case study research, Sage, Thousand Oaks.

Yin, R. (2017) Case study research and applications: Design and methods, Sage, Thousand Oaks.