It wouldn't be the first or second time we have taken this route. In the 1990s we saw extensive and heated discussions about whether quantitative methods trumped qualitative ones, or the other way round. That phase was followed by a decade of debates on whether randomized controlled trials were the only "true" evaluation of results, a claim countered by evaluators committed to other evaluation methods that are more participatory and capture qualitative evidence.

The good side of these debates has been the testing of the boundaries of existing methods. By engaging in deep arguments about the value and quality of one method over another, evaluators were able to put their methods to the test and improve them. Debates like this can be a way to grow and strengthen the evaluation profession and practice.

But anyone who has followed the profession for the last 30 years also knows the cost. Deeply entrenched debates have absorbed a great deal of energy over who is right and who is wrong. Investments have been made in the hope that one evaluation method would provide the ultimate insights or become the panacea that answers all questions. Today it is clear that no single direction will do.

Instead of this partisan approach, I am a staunch supporter of using a mix of evaluation methods. Each one has its advantages and disadvantages. Combined, and used for the right purpose, they can shed more light and generate deeper insights into what happened as a result of an intervention or other factors, and why.

So where is the next dichotomy?

Over the last couple of years, I have heard more and more often that accountability and learning are irreconcilable, and that evaluation needs to take a stance and focus on one or the other.

I am concerned that the dialogue will lead to unnecessary divisions in the evaluation profession rather than to "the day when evaluation will be considered as the intelligence of an organization, the wisdom of a society" as Ian Davies put it in the February 2016 edition of the EES Newsletter.

In my contribution to the same newsletter, I argued that the answer to this question depends on how accountability is exercised. If it is used to blame people for poor performance and shortfalls in results, or if it becomes a threat wielded by evaluators in their work, accountability will no doubt trigger defensiveness rather than learning.

But in my view, this kind of definition and behavior is not what accountability is about. And it is not what evaluation is about. At least not in my book.

Instead, evaluation brings together learning and accountability in that we look back at projects, programs, and policies that have been implemented. The intention is to understand what has happened: did we reach our intended goals? If not, what took us off course? Should we set different goals, or manage the implementation process better? All of these evaluation questions combine queries that serve accountability, in the sense of whether we delivered what was expected, and learning, in the sense of how we can replicate successful experiences and avoid mistakes that have been made before.

Jacques Toulemonde, in the same edition of the EES Newsletter, points to valid issues along the accountability-and-learning axis. Accountability for results is complicated by the many other intervening factors, so that the project designer or manager cannot be held directly accountable. Likewise, he raises the shortcomings of learning: lessons that are called "learned" even if no one absorbs and implements them. I agree with these points, and with his reiteration of Cheryl Grey's argument that evaluation should aim for accountability for learning.

But I am equally certain that this is not the end of the debate. I just hope that we use it to constructively explore the boundaries of this (supposed) dichotomy and the associated methodological challenges, and that we contribute to making the evaluation profession stronger rather than unnecessarily dividing it.

P.S. Readers may also want to read the WEF article, "Is your team in 'psychological danger'?" The article speaks directly to the issues of blame culture that I highlight above, showing why they matter for teams and organizations.

Comments


This is a useful post and a real question. The issue is who uses the evaluation, for what, and how the evaluation can contribute. All too often discussion in Parliament is punitive, blaming the executive for their failures. We need to shift from blaming for mistakes to blaming for not learning from mistakes. Evaluations that contribute to fear will seed the failure of evaluation. Departments are frightened that evaluations will lead to blame. So we have to find ways to make the culture one of how to improve, not how to blame. Our second evaluation of the impact of the reception year of schooling was a classic case. The evaluation found no impact in poor schools in poorly performing provinces. The deputy minister who presented it congratulated the department on the degree of rollout, and then said that impact was not what it should be and that quality needed to improve. There followed a lively and constructive debate. If we can engender that approach, we can get both accountability and learning.

In reply to Ian Goldman


Thank you for the example, Ian! You are entirely right: "We need to shift from blaming for mistakes to blaming for not learning from mistakes." Let's all work towards that!

A new paradigm war would be unfortunate at a time when the evaluation community should be pulling together to implement the Global Evaluation Agenda. It would also be unnecessary, since we should learn from experience. Just as the methodological divide was bridged when both sides acknowledged that no single method is equipped to answer all evaluative questions, it is time to agree on a simple proposition: neither a pure accountability model nor a pure learning model can fit all situations. Thus a properly conceived, accountability-driven function holds authority responsible for results over which the organisation can exercise control (performance), while organisational learning focuses on continuous improvements in the design of partnership arrangements and internal management protocols so as to enhance effectiveness. In most cases a mix of the two functions is desirable, and in all cases a judicious balance between self-evaluation and independent evaluation should be struck to create the right enabling environment for effective governance. Thus accountability for learning and learning to be accountable converge: two sides of the same coin. This evokes the need for combining accounting with auditing in order to meet sound financial management standards.

In reply to Robert Picciotto


Many thanks, Bob. I agree.

I am entirely supportive of the assertion that "the evaluation profession should aim for accountability and for learning". In the same vein, I really like the question of who uses the evaluation inputs. We should focus on the lessons derived from the evaluation exercise, and on what happens then. To ensure that evaluations' main findings drive better designed and implemented operations, we need to solve the issue of country ownership of them. That means evaluation findings are disseminated and used for decision making. But how do we make these findings less sophisticated for the great majority of countries with weak capacity in this area, which remains reserved to a small circle of practitioners? That's the big problem, in my sense.

In reply to Begnadehi Clau…


Begnadehi, thank you for your reflections. Yes, taking lessons back to audiences in countries is always difficult for centrally located evaluation offices like IEG. At times, we organize local outreach activities, like the one in Zambia that the WB country director blogged about. But we need to find more ways to stimulate discussion of evaluation findings effectively and efficiently.

Thank you, Caroline, for raising this issue. Indeed, it should not be either-or, but about how accountability contributes to learning. And, as Ian and Bob have added, accountability for learning from evaluation. I like to promote facilitated self-evaluation using the image of an evaluator holding up a mirror to help the evaluand reflect realistically on how things are going, rather than using a magnifying glass to examine and judge from an external, 'objective,' 'gotcha' perspective that lacks empathy and the promotion of improvement. It is the difference between performing an autopsy on a dead body to determine how it died (pure accountability) and a doctor conducting a medical exam that provides the facts to help individuals know what to do to improve their health (to promote learning). Indeed, in my experience I've seen groups resist evaluation, thinking it only serves the former (accountability) function, not realizing that evaluation can also make significant contributions to learning for improvement.

In reply to Jim Rugh


Jim, good points, though I would not put "objective" in the same category as "gotcha". Objectivity and impartiality are hallmarks of evaluation and drive its commitment to evidence-based assessments. These attributes set it apart from opinion-based assessments, where the expert's personal view is sought rather than triangulated evidence brought together from various sources. Also, I am not entirely sure about the example: isn't an autopsy done to understand why the person died, which would lead to learning for others?

Caroline, thank you for your post: it is an interesting dichotomy, especially the way you framed it, which seems to me to be from within implementing organizations and from the commissioners of evaluations. I would add a question: what if we expanded our notion of accountability to include accountability to the country nationals whom we have pledged to assist? If we facilitated their evaluation of how well our joint investments through projects have furthered their self-sufficiency during and long after closeout, what could we learn? How well are we valuing their voices in shaping the projects they are to self-sustain long after our projects end? How would development be done differently? Warmly, Jindra of Valuing Voices

In reply to Jindra Cekan


Thank you, Jindra. Yes, that is the direction of travel in which I expect evaluation, as a profession and a practice, to evolve.

Excellent point raised, Caroline. I think when we talk about evaluation for accountability and learning, the focus should be on the future rather than on looking back (which may lead to blaming). It should become part of the organization's culture to accept and respect the fact that evaluation will lead to improvement in the programme and foster performance by looking back at the journey and learning from it.

In reply to Abdul


Thank you, Abdul, for the suggestion. "Looking back" is the main thing that we do in evaluation, but with a strong commitment to understanding and interpreting what has happened in the past through the lens of the future. For instance, if there is a new policy, what can we learn from the past (from looking back) to help us make the right policy choices and implement them well?

Dear Caroline, thank you very much, as usual, for having spearheaded such an important debate on this page. I think that there are two distinct, and both important, reflections to make here. The first pertains, as colleagues have already well put it, to an empathic approach to evaluation, one that would naturally blur the line between accountability and learning. The second is that 'accountability' and 'learning' from evaluation are too often used or considered as extrinsic incentives to change. Recent research on lifestyle incentives has stressed that eudaimonic rewards (intrinsic incentives), which relate to a sense of meaning and purpose, are more powerful than hedonic rewards (extrinsic incentives). So: how can evaluation not only promote learning but also provide intrinsic incentives to change, while preserving its independence? If intrinsic incentives are about autonomy (being ourselves), competence (feeling we can make a difference), and relatedness (knowing that we can help and will be helped), this to me is a call for evaluation, at least as conceived in large international organizations, to be more participatory. This would mean not only a more extensive use of focus groups as a methodological tool, but also increased attention to the follow-up of evaluations, so that the report and the knowledge it generates are indeed used to foster change among managers and staff at all levels in the organization.

In reply to Anna Guerraggio


Dear Anna, many thanks for your thoughtful comment. Looks like your research is going very well! Hope you send me an update (by email) some time. I agree with you in many ways, but my experience has also been that evaluations that aim to address the systemic issues that undermine people's ability to do meaningful jobs can make a big difference. Once a program manager can see that the evaluation is not personal, not of him or herself, but of the underlying system that hinders his or her success, the mindset can shift. If evaluation can then demonstrate that it is actually effective in inspiring these systemic changes, it becomes a powerful and accepted tool.

I am concerned that the term "accountability," as it is used in management practice, carries the connotation of blame. This may make it difficult for accountability and learning to be used together as productive aims of evaluation. In hierarchical organizations, power structures sometimes ensure that blame is placed at the lowest possible level. I am interested in how others think about these factors.

In reply to Carey Tisdal


Carey, this is exactly the point here. If we confuse accountability and blame, the institution is stymied. And that does not just relate to evaluation, although we might be at the forefront of the discussion of this issue. The blame culture, however, is not (necessarily) caused by evaluation.

This has been quite an interesting issue, and not an entirely new one within the evaluation sector. However, my take is that the seeming dichotomy can be resolved by defining, ab initio, what the evaluation is out to achieve. If it is accountability and learning, then the approach to the evaluation and its results should be seen in that light: how accountably have the resources been used and the results delivered, and what learning can we derive from the outcomes of the evaluation conducted? I therefore agree that it should not be an either-or.

In reply to John Akuse


Thanks, John.

Caroline, this is a really important point. To be an organisation that learns, we need to experiment and take calculated risks. The IT industry has made rapid progress by creating a culture where rapid experimentation (and multiple failures) is seen as the key to success. Evaluation needs to be part of this cycle, both to identify which projects met their goals (accountability) and to learn why some things worked and others didn't (learning). This depends heavily on organisational culture: whether the management reaction makes individuals feel threatened by a negative evaluation, rather than seeing it as an opportunity to learn and move to a more productive path.

In reply to Robert Drake


Robert, I couldn't agree more. In addition, Tuesday's World Economic Forum blog speaks about the psychologically safe space that people need in order to learn, whether from evaluation or otherwise. A culture of blame, one that does not accept failure as an option, is at the root of this problem. https://www.weforum.org/agenda/2016/04/team-psychological-danger-work-performance?utm_content=buffer358e0&utm_medium=social&utm_source=facebook.com&utm_campaign=buffer
