Since 2009 the Government of South Africa has put a major emphasis on monitoring and evaluation (M&E) as a way of transforming public sector service delivery. The key champion is the newly established Department of Performance Monitoring and Evaluation (DPME), created in the Presidency. Our first priority has been to get government to focus on a limited number of strategic priorities. We have since developed tools for assessing the management performance of government departments, introduced hands-on physical monitoring of services around the country through unannounced visits, and established an evaluation system. Two approaches currently being developed are citizen-based monitoring and monitoring of the performance of local government. A lot has been accomplished in the two and a half years since we were established. But what are some of the emerging lessons around building evaluation capacity?

In research undertaken by DPME, 54 percent of departments reported that problems were not seen as an opportunity to learn, and 39 percent saw our work as a way of policing and controlling staff. So how do we introduce evaluation, and the use of evidence, as something that will help achieve better development outcomes, in an environment where M&E is about compliance and vertical accountability rather than about managers using the information to improve their own performance?

A lot of what DPME has been doing is about supply – finding better ways to deliver better evidence about how government is performing, for Cabinet, for Parliament, and ultimately for the public. In terms of evaluations, we have been building system elements such as national evaluation plans, guidelines, standards and competences, as well as developing a suite of training courses that train 300 government staff per year. Our core approach, however, is that learning happens through doing. We work with departments in evaluation processes, building both their capacity and our own. This is a long-term endeavour.

We have recently started working more with the demand side of the equation. We have been building the understanding and awareness of the parliamentary committee that we report to – the Standing Committee on Appropriations – with workshops, international study tours and regular reporting. We have also been undertaking training with Members of Parliament, as well as parliamentary researchers, so that they are able to use our M&E evidence. There is a huge opportunity here for parliamentary committees to use evaluations to inform their oversight processes.

We also know that we need senior managers to support and use evaluations. Our system asks departments to submit proposals for the evaluations they want undertaken, so that they will want to use the findings. It also requires departments to produce improvement plans once evaluations are completed, to ensure the findings are followed up. We have started running training for Directors-General and Deputy Directors-General to build awareness of why evidence is important. Of course, DPME is also working with Cabinet with the aim of improving the evidence available to it for decision making. Our hope is that this will stimulate Cabinet to demand better evidence. The President is also using M&E evidence to appraise ministers.

It is critical for Parliament and the executive to see that they are on the same mission, and to find ways to work together effectively, using evaluation evidence to inform demands for improvements in performance. With partners in Mexico and Ghana, DPME organized a seven-country roundtable in November on the use of evidence, which also involved participants from Peru, Uganda, Benin and India. Members of Parliament and senior public servants were encouraged to attend. Those who did found it very powerful, and this is clearly a partnership we need to promote going forward.

This is a major journey we are on, and we have to build this ship as we sail, but already we have 38 evaluations completed, underway, or about to start, and hopefully many of these will help to inform how we can improve what government does and the impact it has on citizens.

Comments

Submitted by Wolfgang Gruber on Mon, 02/03/2014 - 23:25

Dear Ian: As a veteran evaluator with more than 30 years' experience in evaluation with and at MDBs and bilateral agencies, I understand your problem well. It is the old dilemma: "you can lead the horses to water, but you can't make them drink!" The unwillingness or unpreparedness of your prime audiences - the project- or program-generating people - is partly based on their reluctance to accept that "history" holds advice for the future. This is, of course, wrong, but changing people's mindsets is one of the most difficult endeavors (as any parent can easily relate to). A former WBG-IEG Director General - long before dear Caroline - used to refer to Evaluation as the History Department; not to diminish the unit's importance or to attack it, quite the contrary, but to express essentially the same problem you have described.

Evaluators are continuously searching for new ways to break the Gordian knot. There is no clear-cut recipe! Evaluation's Sisyphean task is to constantly search for new inroads (a trial-and-error approach), and a solution that seems to work in one particular circumstance will not necessarily work tomorrow or in a changed situation. You are doing the right thing: search, explore, experiment.

The "new" path you are apparently walking is 'demand orientation', by which you have turned to the accountability side of the equation. That step is good, but in my opinion it should not become over-emphasized, for two reasons: (i) it does not, by itself, solve your LL-acceptance problem - the second pillar on which evaluation rests; and (ii) it could even be counter-productive, in that your LL audience could see in it 'proof' of the police-function perception.

I have come to conclude that evaluations should try out a dual approach, which would however come with some organisational changes - a 'scissors approach', if you like. One edge should remain as it is, mainly ex-post oriented (LL and accountability, the bottom-up focus); the other should work upstream (top-down) by involving the most experienced evaluators - those who can credibly refer to long-standing experience - in the (project/program) preparatory stages. Between the two, and this to me seems very important, there should be a clear demarcation line of independence - hence my reference to 'organisational consequences'. But let's be clear: this dual approach does not necessarily solve the identified dilemma either; it is simply a way worth trying out.

Good luck and best regards, Wolfgang (gruber.consult@gmail.com)

Submitted by Wolfgang Gruber on Mon, 02/03/2014 - 23:35

Dear Ian: Sorry for coming back, but I would like to comment on a second item. Your department's designation, apart from evaluation, apparently also covers the monitoring function. In the corporate evaluation world we have always argued - I hope I am not wrong on this - that evaluation and monitoring should be separated: monitoring belongs to operations for a host of good reasons, and evaluation should be independent of that project/program implementation responsibility. I don't want to go further into this area, which is a separate item, but I would recommend that you look into it. Regards, Wolfgang (gruber.consult@gmail.com)

Submitted by Ian Goldman on Tue, 02/04/2014 - 20:21

Wolfgang - monitoring and evaluation are very different functions, so the department covers both, as well as some intermediate functions, but my unit works only on evaluation. One of the challenges for the department is now to start integrating the information generated from a wide variety of sources.
