For evaluators, the term "Theory of Change" is not just jargon but a critical element of our work. As evaluators we are often asked: What will happen once an evaluation is done? What difference will it make? How will we know whether people learned, changed behaviors, or became more successful? And are these changes commensurate with the money we spend on evaluation?

At the World Bank Group, IEG has seen a correlation in projects between strong results-based management (results frameworks, M&E, and work quality at entry and during supervision) and good outcome ratings. What explains that correlation? Does the theory of change apply to our work? If so, how?

In this blog, I try to explain how evaluation influences change, and why a theory of change matters in evaluation.

Simple Theory of Change. In its simplest form, the theory of change of an evaluation is that it will shed light on success and failure. By doing so it helps us make better decisions about what should be replicated and what should be stopped or avoided. It facilitates taking corrective actions and hence achieving better results.

A Different Theory for Independent Evaluation? Not really! Both self- and independent evaluation aim to influence change through critical reflection on what works and why, to inform immediate and future actions and increase development effectiveness. The advantage of self-evaluation lies in shorter feedback loops: lessons are generated and owned within the program management and self-evaluation process, and learning might take place faster. For independent evaluation, feedback loops are longer and learning can be impeded when evaluation findings are resisted. But self-evaluation might have blind spots for things that are not working, or be bound by institutional norms that impede questioning certain issues. Independent evaluation can overcome these with a dispassionate, arm's-length assessment that is not subject to undue pressure. In short, both have advantages and disadvantages, which can be balanced by combining the two.

Reality Is More Complex Than Theory (of Change). Embedded in such a simple theory of change are factors that are more complex and affect the way in which change will occur as a result of evaluation. Assumptions made along the results chain are often not made explicit. For instance, we assume that enough evidence exists to explain the causal relationship between an intervention and observed changes, or that evidence weighs heavily in decision-making processes, which, however, are often dominated by other considerations. Just as in a development intervention, intervening factors play an important role in explaining the influence of an evaluation. Many of them do not follow a linear results chain but form a complex system of interrelated feedback loops.

Turning Theory to Higher Value Added. The factors that can turn an evaluation into something highly influential, or make it fall flat, belong to four categories, two within and two outside the control of the evaluation: those that affect the cost of the evaluation and those that affect its value. In each of these "boxes" a factor might have a positive or a negative effect.

Clarity about what the evaluation tries to achieve, about the assumptions we make regarding its channels of influence, and about the factors that might help or hinder its value-added, or increase or lower its costs, helps improve evaluation effectiveness and, with that, its value for money.

How does that work?

Let's take a leaf out of our book of advice to colleagues on the operational side of the business. When we design an evaluation, let's:

  • Address issues that are of importance to stakeholders when defining the objectives of an evaluation, while sticking to the evaluation business of assessing past experience.
  • Be explicit about decision-points, processes, or behaviors that the evaluation aims to influence.
  • Be clear about our assumptions about how the evaluation will exercise influence. Do any of them require changes in the evaluation design (methods and/or process) to ensure the evaluation makes as convincing a case as it needs to?

Just as in our assessment of World Bank Group projects - where high-quality design and proactive management during implementation are essential contributors to success - I believe these measures increase the chance that an evaluation will influence change. They directly affect the "value" side of the equation, although increasing "value for money" will also require finding efficient ways to carry out the evaluation.

Comments


A most interesting and valuable piece on influencing change through evaluation.

In reply to Ahmed Mustafa …

Thank you, Ahmed. It's always rewarding to hear when the blogs are useful.

In my limited experience, I am quite sure that many resist the evaluation process because it will reveal the reasons for what did not work and will shed light on the level of transparency and accountability that exists in the organization. Internal evaluation would not be as reliable and fruitful as independent evaluation.

In reply to Abdul

Abdul, in many situations you are right. People worry that a "bad evaluation" will somehow harm them. Changing that perception is important to help improve learning from evaluation. As evaluators we can play a role, even if the larger piece rests with operational management, which needs to embrace feedback and realize how valuable information and insights are. I remember a case (in my past life) where a couple of our evaluations showcased and contrasted different country programs. The organization hadn't realized how much good work one particular country director/office was doing in comparison to others. Through our evaluations she got the recognition she deserved.

Great blog post with visuals and narrative! Over the years of conducting evaluation work I have found that by tapping into information through multiple lenses and understandings you are able to create a clarity about the situation not otherwise available. We all assess and analyze on an individual basis through our own lens. The narratives that emerge in evaluation activities are where the real, clear picture of the situation(s) exists. When conducting evaluation interviews, I love to see the light come on in a person's eyes, indicating a new way of thinking about or understanding a situation. This is where REAL change occurs.

In reply to Deeanna Burleson

Deeanna, thank you for sharing your perspective and experience. I share your sentiment entirely and remember exactly the same kinds of situations and reactions. Sometimes things don't work because different stakeholders don't understand each other. An evaluation that "triangulates" (more often than not, it goes beyond three points/sets of stakeholders) information from different points of view can be extremely informative for all parties and lead to the breakthrough you describe. Really rewarding!

Very valuable and timely sharing, especially for evaluation practitioners and decision makers in development aid. It would be good if these kinds of articles could be compiled as a guideline for evaluators.

In reply to Ariyasuthan - …

Thank you for the suggestion, Ariyasuthan. We will look into it, at least for us here at IEG.

Assumptions are either based on evidence/research, based on prior experience, or simply wishful thinking. I find that taking implementers through a process of assessing their own assumptions really helps with buy-in. Fantastic post!

In reply to Anonymous

Thank you! Great contribution. I agree that just becoming aware of one's own assumptions can sometimes shift perceptions, understanding, and even goals. It's a really important step in both program design and evaluation, but one that doesn't always get enough time and attention.

Great post Caroline, a couple of thoughts:
1. Social change is a challenging and complex process where evaluation can certainly play a role, but it should not expect to do it alone or in isolation. I believe evaluators, and evaluation as a practice/profession, should learn to work with other disciplines, ex-ante and ex-post, to improve the chances that learnings from our work find their way into action at various levels.
2. I believe we need to invest more to understand the change path: what is likely to happen (and actually happens) at the individual, community, and institutional levels, etc., and which level is the right one for the type of action being evaluated. Often we face interventions such as "the project organized training X....and we are asked to measure outcomes and impact at level Y" without an explanation of how the relationship is expected to work.
3. Finally, I agree with one of the comments above that the evaluation process itself is a great opportunity for change that evaluators are not using enough. It is not only the end product - the findings, the dissemination, etc. - that matters; to me the process matters equally in fostering change. In a recent evaluation I was involved in, we engaged in a process of "negotiating" and "fine tuning" findings that was so intensive, with back-and-forth information sharing, that it helped strengthen the findings in the first place and clarify the government's engagement to take action accordingly.
Thanks for the inspiration.

In reply to Oumoul

Thanks, Oumoul, for your great contribution (nice to hear from you again!). I couldn't agree more on all points, and we have made them in several of our evaluations. The only point on which I would differ to some extent is the last one. I am not sure that the process of negotiating findings leads to greater learning. Instead, my experience is that interim debriefings to keep stakeholders posted on findings help build greater understanding and confidence in the process and its results. And these interim debriefings can help the evaluators gather additional information before putting pen to paper. Maybe that's what you meant?

The effectiveness of monitoring and evaluation for driving change is actually an organizational question. A large part of the theory of change presented above assumes a lot about the information flows and decision-making systems in the client organization. M&E will be much less effective in top-down bureaucratic institutions due to inaccurate (only positive) information flowing up and decision-making authority invested in only a relatively few positions. I think it was Gary Hamel in "The Future of Management" who said that bureaucracies cannot be innovative; in a bureaucracy, there are too many ways to kill ideas too soon. On the other hand, when information (even negative/critical information) flows freely throughout the organization, and authority is shared down the organizational chain, M&E data is both more accurate and more actionable. One way of looking at this is that the majority of information collected in evaluations comes from within the system. The evaluation compiles, analyzes, and reports back this information to the organization. While this is useful, it raises the question of why this information, which is already in the system, is not being used and acted upon systemically. As an evaluator, I have often been used to get information up the chain of command from the field by folks who weren't being heard by upper management. This process is slow, inefficient, and demotivating, and it prevents timely change and innovation. The theory of change above is the basic justification for M&E, but to be more useful, I believe it should take explicit account of the client's communication flows, distribution of authority, and ability to respond to information already in the system as well as information coming from outside the system.

In reply to Frank Page

Frank, excellent points! You are entirely right that corporate or institutional culture determines how effective monitoring and evaluation can be in promoting learning and change. A great addition to the simplified theory of change in the blog, which does not show assumptions or intervening factors and risks. At the same time, I think one of the valuable parts of (independent) evaluation is to provide feedback where it cannot flow freely, or where an institution is open to criticism but has blind spots (something that happens to all of us).

I thank Caroline Heider for posting a very educative and useful piece on evaluation. Program evaluation is a tricky business. Unless one is intensely cautious, an evaluation may be heavily influenced by the biased values of those who commission it. For an unbiased independent evaluation, it is crucial that the evaluators are granted the necessary freedom in its conduct. That doesn't necessarily mean there should be no control at all. It is absolutely essential to monitor the evaluators' work and their preliminary findings at regular intervals for fact and quality checks. If an evaluator lacks adequate prior contextual experience, or fails to gain it during the course of the evaluation, there is every likelihood that he or she will draw fallacious conclusions from the available facts and figures. So, necessary guidance is a prerequisite for the success of an evaluation. The parameters of such guidance should, however, be limited to fact and quality checks. It is also important that an evaluator does not forget the context while recommending changes and how they are to be implemented. Otherwise, the recommendations may turn out to be impractical in the local context.

In reply to Anonymous

Thanks for pointing to the challenging task of keeping evaluators impartial and focused on delivering a high quality evaluation (factually correct, understanding of contextual factors, etc.). Well put.

I thank Caroline Heider for posting a very educative and useful piece on evaluation. Evaluation is a tricky business. For a successful independent evaluation, evaluators must remain free from any undue influence of those who engage them. The clients, on the other hand, should grant the evaluators the necessary freedom in the conduct of the evaluation to garner optimum benefit from it. By this I do not, however, mean there should be no control over the conduct of the evaluation. The work of the evaluators should be reviewed at regular intervals for fact and quality checks. Otherwise, evaluators lacking adequate contextual experience, or failing to gain it during the course of the evaluation, may end up drawing fallacious conclusions from the analysis of the available facts and figures. They need some guidance, but the parameters of that guidance should remain confined to fact and quality checks only. In my career as a development practitioner, I have seen many evaluations that were merely a reflection of the views of the client, with very little independent thought going into, I dare say, a so-called independent evaluation. That cannot be a desirable result of an independent evaluation. Evaluators should also remain alert to the local context while recommending changes. Otherwise, the recommendations may eventually turn out to be impractical and hence of no use to the client.

In reply to ASM Jahangir

You are right: evaluators should be free of undue influence, but should meet professional standards that ensure facts are checked, contexts understood, etc. Independence is not an excuse for compromising the accuracy and validity of findings.

For me the bottom line is that one needs to be very clear from the start about whom the evaluation is for and why, and that the organization and people concerned need to understand what value(s) the evaluation will bring to them and their partners/clients. Both the clients and the evaluators need to be clear about these issues from the very beginning, and the TOR should encompass them. This will help to effectively plan, implement, and report on the evaluation, be it a self/internal or an external evaluation. Even more, it will help to build confidence/trust in the evaluation findings.

In reply to Kebba

Kebba, I couldn't agree more. Being clear about the intention and design of the evaluation is key to its successful implementation. It's just like with projects: the better they are designed, the more likely they are going to succeed in producing their outcomes.

I agree that ToC-related discussions are generally fascinating and intellectually inspiring. But in my view it is perhaps also better to ask: what change, and by whom? Asking such questions can produce highly articulated ToCs as well as comprehensively designed program interventions. To my knowledge, until recently, when some donors began asking serious questions about Somalia's prolonged social, economic, and political problems, most ToCs for addressing the root causes of such problems drove interventions deductively rather than inductively. Thank you.

In reply to Mohamed Hussein

Couldn't agree more, Mohamed. The question of "what change" and "by whom" is essential to answer in a very context-specific way. Otherwise, the underlying logic of whether the change can be brought about is hardly relevant to the issues at hand.

Very informative, Caroline. I am a novelist trying to understand the concepts of monitoring and evaluation. I would like to ask questions rather than contribute to the discussion. Using the theory of change, how does the revelation of impact influence change in policy formulation?

In reply to Michael

Michael, hopefully I understand your question correctly. Spelling out a theory of change means that one becomes more conscious of the change one hopes to achieve and how. AND: one takes note of the intervening factors that may help or hinder the intervention one is pursuing. For instance, a couple of years ago when presenting our Mother and Child Health Systematic Review to an audience at WHO, I was asked about a health measure they wanted to introduce as a standard measure. The evidence was not well established. I suggested that they establish the standard as a "working hypothesis" and develop partnerships with researchers that would conduct credible impact evaluations to test whether the measure they wanted to set as a standard actually worked. Once sufficient evidence was gathered, they could revise the provisional standard or approve it as one that is internationally accepted.

It is important to be aware of the difference between an intervention logic ('how one arrives from input to the envisaged outcome/impact') and the Theory of Change. A ToC identifies different pathways to arrive at the outcome/impact. A ToC is about the decisions made (be they purely rational or following a logic of appropriateness) about the choices among the pathways. In consequence, the intervention logic is a narrower concept than the ToC.

In reply to Willem Cornelissen

Thanks, Willem, for the clarification.

A timely and great article. However, it is not always applicable in the context of developing countries, such as those in MENA, where basic data (statistics) are severely lacking. Without solid empirical data available, the evaluation process becomes a futile exercise!

In reply to Dr. Zakia Belhachmi

There are, unfortunately, many contexts where data is missing or sub-optimal. This has been the case even more so in the past, before we had these incredible data systems, data computing capacity, and Big Data. In contexts of extreme data paucity, it is important to think through how to best establish an information base, be it by collecting information or by identifying alternative sources of information. But, that notwithstanding: you are right, investments in statistical capacity development are much needed. And, so are evaluation capacities.

I have evaluated many projects and programmes, and my experience has been that if the evaluator is too much the expert in the approach, the results are often ignored by the project participants. Methodology is the very foundation for validating an evaluator's findings. People across organizations are very resistant to change, be it gradual or revolutionary. Part of the fear stems from the belief that a project has a starting point and an ending point; simply put, the project participants understand this to be the natural death of their project. This is why the sceptics of an evaluation often doubt the findings and rarely believe in the recommendations of an evaluator. The practice of participatory approaches to evaluation can dispel such fear, with the evaluation questions evolving from within, be it a mid-term evaluation, which I term internal, or an end-term evaluation, which is external. If an evaluator wants to be successful in the exercise, the question of whether the project has undergone organizational development (OD) since its inception is very important. Furthermore, an OD exercise removes the hurdles and informs an evaluator of the direction that the project has taken during its life span. I agree with this post that reality is not only complex but also amorphous, just like hot coffee that needs a cup to hold it before it can be drunk. Secondly, the evaluation should not be fixed in a timeframe to suit a particular interest group or party, be it the donor or the project's rank-and-file participants. An evaluation should be a well-intended, proactive venture directed to create innovations and ideas and hence to bring about change. Many times the evaluation team is pressed for time and money, neglecting the long-term intention of why the project was initiated, and therefore the findings might lose the plot.

In reply to Stephen Ojwang

Stephen, many thanks. You have shared a lot of important and interesting thoughts. I share your experience that evaluation often meets resistance and reactions that make it hard to move from evaluative observations to change. However, each project and each institution is somewhat different. The key to seeing uptake of evaluation, at least in my practice, has been to understand what matters to the people who manage the project or design the next program. Once they understand how the insights from an evaluation can help them do a better job, they take on the evaluation lessons and run with them.

Caroline, along with the other comments, I would like to add my thanks for your blog. I do, however, wonder whether the "can" in the statement "Independent evaluation can overcome these [blind spots and institutional norms] with a dispassionate, arm's-length assessment that is not subject to undue pressure" needs to be stressed. It seems that it is not unusual for the management team to go to great lengths to 'dress up' a program before the Evaluator Police, who are treated as if they are more than mere humans, might possibly uncover unsavoury results. Can management be dispassionate if they see the evaluation outcomes as a reflection of their performance and therefore a threat or an opportunity for self-promotion? If the evaluation provides constructive criticism or positive feedback, how does that flow down to all involved?

In reply to Jane

Jane, I think many evaluators and people who manage programs that are evaluated can relate to your comment. You point out that it is stressful for both sides, which is important to recognize. My own experience is that program managers are generally committed to doing a great job. They are very often very proud to speak about their program, share what they are doing, and describe their successes. They often also discuss their challenges, and share frustrations and ambitions for the future. Evaluation evidence, especially when brought together from different sources, can help people recognize why certain things are hard to do, and help overcome those hurdles. Experiences like these change attitudes.

Very educative write-up. Thanks very much

Thank you, Caroline, for a very clear and useful post. In many cases, even for big programmes, assumptions are poorly developed and inadequately analysed and assessed in the evaluation process. Probably one reason could be that they are less tangible and more complex in real-world situations.

In reply to Ram Chandra Khanal

Good point, Ram. I would add that teams don't have (or don't make) the time to unpack the assumptions, or are not aware that they are making them, and so leave them unquestioned.

Thanks for posting, Caroline. You may be interested to read and reflect upon the blog post I wrote last year called "Are you game for (Theories of) Change in an unpredictable world?". You can access the piece here: http://livestockfish.cgiar.org/2015/02/12/diana-game4toc/

People are not generally open to change and ideas that contradict what they already believe. Under certain conditions, they actively avoid such information while at the same time seeking information that bolsters their original beliefs. Research organizations and the people that work for them are not immune from this. Many who have spent their careers in research or international development resist the idea that their efforts may be ineffective or even counterproductive. Cognitive dissonance theory predicts that, based on levels of commitment to current beliefs, evidence to the contrary will be rejected and even discussion (for learning) is discouraged. Exploring Theories of Change, the interfaces of capacity development with change processes, and social psychology to understand research (uptake) and development practices, and studying ways to overcome these barriers, will have little benefit if the broader research and development field is not predisposed to carefully listen to evaluation findings.

A learning approach is critical to enable adaptive management for capacity development. Monitoring and evaluation are both vital in supporting a learning approach, particularly where organisations and their partners can engage in joint review of jointly-defined indicators. Monitoring can track what has changed and link that back to a theory of change. Monitoring is most likely to support effective capacity development when centers, implementing partners, and client organizations collaborate on definitions of indicators and targets, and joint reviews are conducted to support mutual learning and adaptation. An evaluation would be needed to gain a better understanding of how and why the theory of change worked or did not work. Evaluations can also consider unintended consequences, alternative explanations, and lessons learned in greater depth. Keep sharing and writing, best regards, Diana

In reply to Diana Brandes

Many thanks, Diana, for the great contribution!

A very useful way of approaching evaluation, thanks Caroline. In my experience, we gain the biggest lessons through many evaluations but do very little building upon that learning. Many development organizations, businesses, and UN agencies continually evaluate their projects, programmes, and organizations, but do little to follow up on their findings or to learn across evaluations. In my opinion, evaluation should be linked with change rather than with projects, and should be mainstreamed throughout the intervention cycle rather than being just an activity conducted at the project's end.

For an evaluation to lead to real changes, its findings and recommendations must first be realistic and also acceptable to stakeholders. There is much to say about this, but here I would like to limit my intervention to the specific, and often underestimated, issue raised by some participants in this discussion, namely that the acceptability of proposed changes is partly linked to the question of the independence of the evaluator and even to the position of the Task Manager (TM) for whom he performs his duties. This is particularly true in the EU context. Indeed, unlike the World Bank, which has IEG as an independent evaluation group, the EU institutions have decided not to put in place such a system that would ensure greater independence for evaluators. The fact that the Commission is both the paying agency and the entity whose funded projects/programmes must be evaluated puts the "independent" evaluator in an ambiguous situation in which he must do his job seriously while observing some caution in his criticism.

At a trivial level, we must not forget that the evaluators expect to be paid for their work, and that the companies, also anxious to be paid at the end of their contract, want their proposed experts to avoid any potential conflict with their client, which is often a delicate exercise, especially in the case of particularly problematic projects. Moreover, the experts themselves know that their reports will be evaluated in their turn. In this context, the TMs in the European Union Delegations (EUDs) are supposed to submit to the Commission a form in which they give their appreciation of the work done by the evaluator during his mission. But experience shows that the more critical the evaluation (or a monitoring report), the higher the rate of negative assessments by TMs. This fact is understandable, since a negative report somewhat calls into question the work of the TM. He therefore feels personally criticized and mostly responds with criticism of the expert. The phenomenon is of course aggravated when the TM believes the project or programme assessed to be his "child", which happens in some cases. Finally, we must not underestimate the fact that all EUD TMs are merely contract staff, whose future may therefore depend on how the quality of their work is judged.

All this leads to a number of biases and, if in principle a monitoring or evaluation report should present "the truth, the whole truth and nothing but the truth", in real life things are different. A monitoring or evaluation report is always a compromise between what should and what can be written, and this for different reasons, the most important being that the main purpose of such a report is to introduce positive changes in ongoing or future programmes, making the necessary recommendations acceptable to all stakeholders. This desire to achieve the acceptance of recommendations to initiate change implies a certain degree of diplomacy, in which the whole truth could even be counterproductive. Not that easy to move from theory to reality! Best regards to all. Gilbert

While there are, inter alia, timeline factors with self-assessment and external evaluations, I think from experience that buy-in for the evaluation process starts with self-assessment/evaluation, since the organization and its people become aware of their gaps and, on their own or with proper guidance, can own up to these shortcomings, learn how to deal with them, and thereby get evaluation as "added value" on their table as a "regular" activity for the sustainability of their organization. I am thinking of NGOs or private sector startups and the plethora of development activities/business projections and plans they carry out without realizing initially that they are vulnerable to extinction with no self-assessment and/or external evaluation as part of their modus operandi. It is also important to control for "expectations" in all evaluations and to coach people to deal with reality and the causal factors that inhere in their reality, not as punishment but as productive opportunities.

In reply to Alison Moses

I couldn't agree more on the importance of self-evaluation as the basis for independent evaluation.

"Reality Is More Complex Than Theory of Change." this statement is questionalble. Complexity is theoretical therm and can be easily presented in simple way. Complexity is ordered - of course in a way, that is completelly different from ordering simplicity. What is really a problem is that many evaluators stick to standards developed with old approach, the problem is thus not complexity as such but rigidity of standard way of thinking (positivist, linear, simplistic). When talking about complexity, the term 'theory of change' is meant seriously, even radically. This is what convenience dislikes, imperative of change that starts in the head with new concepts of social reality.

Thank you Caroline.

Thank you, Caroline, for posting this discussion point. I have seen all the comments, strongly agree with their ideas, and appreciate all the contributors. I also share the view that the role of M&E is very much challenged in various institutions, especially when M&E sits under the program department, which impedes its liberty. We have to work hard to change that tradition and make the M&E role influence change. Thanks

Thank you for the interesting read and contributions. I am a newbie in the field and would love to learn more; keep posting and sharing. Regards.

Evaluations are an important management tool for both the public and private sectors, though each sector uses different terms. In the private sector, evaluations that focus on the bottom line are generally treated with the urgency that is required. In the public sector, managing the results of an evaluation often takes precedence, in particular for internal evaluations that might negatively affect donor impressions of the effective functioning of the program.

Steve, you are absolutely right! In addition to your points, I find that the private sector is often challenged to find the right balance between its profit-maximization goal and one that gives back to community and economy. During last year's summits (SDGs, Financing for Development, and COP21), the private sector made big commitments. They will need to develop metrics and tools to track how well they are delivering on that front. On the public sector side, especially for programs that are donor funded, I have found that by now donors understand that nothing is perfect and each program faces challenges. They are more sympathetic and supportive when they see that problems are detected and fixed than when they are told all is well.

Thanks Caroline, that is a useful post. I was a bit puzzled by your actual theory though. First, doesn't a good ToC consist of variables? It's a bit hard to construe your first two items in that way: "Choice: what to evaluate when" and "Evaluation evidence".

  • Presumably the second can be understood as "More and better evaluation evidence"? Then, like the final three items, this formulation is expressed as an improvement, and it is also easier to see the variability: something which ranges from little, poor evidence to plentiful, good evidence.
  • The first item, again, makes it a bit hard to see the variability. Do you mean "Good choices are made about what to evaluate when"? But surely, if this is a ToC, this first item is your intervention? Is this all you do? You want to show how "evaluation" leads to change. But then this first item shouldn't be just about good choices, otherwise you only have a theory of "How good choices (about what to evaluate when) lead to greater development effectiveness", when what you want is presumably more general, e.g. "How more and better evaluation leads to greater development effectiveness".

So I would suggest a causal chain like this, working backwards:

  • Greater development effectiveness
  • Staff do better planning & management
  • Staff have improved, context-specific, and adequately accurate theories about how things work (in the relevant domain)
  • More and better evaluation

Of course there are lots of assumptions in each step and you could add feedback loops etc. (I made an editable template over here: http://theorymaker.info/?permalink=ToC-evaluation, just click if you want to clone it.)

The other point I am making here is just to underline what I think you already say: the essential job of evaluation is to produce context-relevant and adequately accurate theories of change, general and specific, for whatever domain we are working in. These are the quintessential output of evaluation. If we have them, we know what to do in order to get what, which routes are most effective and efficient, what the intervening variables are that we can't control, and ideally also how to manage feedback and adaptation, where to listen for diagnostic information, etc. So you could say: a theory of change for evaluation has to include "better theories of change" as a central step.
