It’s a good question that many of us ask. Training, after all, has a bad reputation in development circles: it is distinctly unfashionable, even retro, and often seen as anathema to building capacity. A major IEG evaluation showed, for example, that although training resulted in individual learning, it led to improved organizational and institutional capacity only about half of the time.

So, why have training programs in evaluation proliferated in recent years? A forthcoming paper commissioned by IEG identified more than 100 training programs advertised in English through a simple internet search. These are separate from those offered through professional evaluation associations, the number of which has more than quadrupled over the past decade, from 33 in 2003 to 158 in 2013. There is also a growing number of training programs that have become globally recognized, including those offered by the IEG-sponsored International Program for Development Evaluation Training (IPDET) and the Evaluators’ Institute, and more specialized courses such as the Jameel Poverty Action Lab’s Executive Education and the CLEAR hands-on technical course on impact evaluation, offered at different locations, often in collaboration with 3ie.

The increasing focus on accountability for development dollars, together with an emphasis on learning which interventions actually work and yield results and which don’t, means that evaluation is in demand. Many academic institutions, development agencies, and consulting firms seem to think that providing (mostly) short-term training will help build the human capacity required to meet this demand; hence the upsurge in the quantity of training.

But will this training contribute in significant ways to improving the broader capacity required to conduct and use evaluations? It depends. One thing we do know: if training continues to follow the old classroom, get-who-you-can, one-way skills-transfer model, then it deserves the reputation it has earned.

On the other hand, if training focuses on building a base that would-be evaluators and others can use as a jumping-off point for initiating or accelerating a continuum of individual, organizational, and institutional change, then perhaps this is a case where a rose called by a different name would actually smell sweeter. Participants would engage in:

  • Understanding the foundations or basics that allow them to “refresh” their skills or knowledge for jobs they need to do
     
  • Interacting with peers in the same or similar situation to exchange practical knowledge and ideas
     
  • Doing hands-on work on real-world problems of immediate relevance to their contexts and situations
     
  • Working with colleagues in their team so that they begin the learning journey together and develop the same mental maps and frames of reference to apply back on the job
     
  • Creating spaces for “aha” moments, when peers learn how the challenges and problems the profession faces can be solved
     
  • Coming to agreement on the key changes and business processes needed in their workplace
     
  • Networking and making connections with fellow participants with whom they keep in touch as a learning community, taking advantage of communication technologies and knowledge platforms

This could be training without a classroom or an instructor and where even the notion of being taught is absent.  If we look under the label of training, there might just be a quiet, or not-so-quiet, subversion going on.  Examples include:

  • The “flipped classroom” model being experimented with by CLEAR South Asia, which uses blended learning and allows participants to do hands-on work during limited class hours
     
  • The knowledge-exchange community created by IPDET, which lets participants raise questions and provide answers well after classroom time is over
     
  • The ongoing, module-based online courses provided by EvalPartners, which give participants ready access to learning and to a virtual learning community
     
  • The peer-to-peer learning models being used extensively to encourage problem-solving and innovation

While the jury is still out and solid evaluation evidence of their effectiveness is needed, these newer models and approaches appear promising for several reasons: their emphasis on active learning and the application of new knowledge; their goal of co-creating knowledge; and their strategy of creating communities of practice and identifying emerging champions who then become voices of change in the community, driving accountability for development dollars and promoting learning about interventions that work.

So, time for that question again: do we need training in evaluation?

Comments

Submitted by Anonymous on Tue, 06/10/2014 - 05:24

The online training is really good; more people will have access to effective training modules. But I recommend an introductory face-to-face training for newcomers to the evaluation team, to learn the basic principles and the best ways of engaging in evaluation.

Submitted by Diksha on Mon, 07/14/2014 - 21:34

Evaluation, and the culture around it, is still relatively new in developing countries like India. Hands-on training can go a long way in designing the right kind of evaluations, ones that capture the complexities of measurement and data on the ground.

Submitted by Taiwo on Mon, 09/01/2014 - 21:26

This is really good, especially for assessing the aftermath of programs and projects in developing nations.

Submitted by Nidhi on Wed, 09/03/2014 - 05:25

Agree - different types of learning experiences, including hands-on learning on actual evaluations and access to online modules, will help build capacity more effectively than using just one approach.
