In a previous blog, we wrote about a framework that we at IEG have developed for evaluating service delivery. Since then, our teams have been busy on a number of fronts, socializing the framework within IEG and across the Bank Group. We tested the framework by applying it to our ongoing evaluations, and, on a retrospective basis, to earlier IEG evaluations completed in recent years.
At a recent presentation to Bank experts working on human development, participants reflected on the framework and provided us with useful feedback on its structure, scope, and coverage. And although we developed the framework mainly to support a post-hoc, evaluative lens, participants suggested that it could just as easily be used ex-ante, as a checklist to assist in project design. We also presented at another event on Getting Results Differently and were privileged to be joined by Keith Hansen, Vice President for Human Development at the World Bank Group, who was interviewed by Dennis Whittle, CEO of Feedback Labs, about the growing emphasis on effective service delivery in development and the important role of monitoring and evaluation.
In addition, we have continued to extract and analyze insights from the application of the evaluation framework to our ongoing sector evaluations in Urban Transport, Water and Sanitation, and Health Services. We found, for example, that capacity development is emphasized in project documentation in all three sectors; however, it tends to focus on the policy level and much less on the operational level. In other words, sectors do not equally address the capacity of all the actors along the supply chain, including front-line service providers such as bus drivers and traffic officers in urban transport. Health operations, by contrast, regularly support capacity development for health and nutrition workers.
Our ongoing work suggests that entry points that could capture beneficiary feedback often don't. While there is a sound data foundation in appraisal documents, there is insufficient attention to the needs or preferences expressed by beneficiaries. We found relatively low rates of needs assessment for disadvantaged groups referenced in project documentation across the three sectors. Given the geographical basis for targeting water and sanitation services, this is perhaps not surprising. Given the more individualized nature of health services, however, the incidence of needs assessment in that sector appears low.
When we looked at service outcomes, we found that assessment of satisfaction and affordability is relatively low in all sectors.
We also found that accountability mechanisms, which support the feedback process, are seldom referenced in urban transport project documentation, but are present in half of the water and health projects. Where these mechanisms exist across the three sectors, only half of the cases integrate beneficiary feedback through surveys, complaint hotlines, websites, exit interviews, or community scorecards. This may be a missed opportunity to improve design and implementation, as several completion reports noted that a lack of participation from beneficiaries subsequently contributed to poorer outcomes.
We are continuing to work with the data to further distill findings on a cross-sectoral basis. The working paper and framework emphasize the importance of feedback loops and accountability for effective delivery, with the beneficiary at the center. Our findings to date suggest less than optimal observance of these basic requirements.
In addition to applying the framework to ongoing evaluations, we have also tested it retrospectively against earlier IEG evaluations, specifically those that looked at the World Bank Group's support for financial inclusion and early childhood development, as well as its support in low-income fragile and conflict-affected states and in situations of fragility, conflict, and violence. Early findings reinforce what we have observed elsewhere. For example, disaggregation is a basic but often lacking feature in projects with a service delivery focus. Without being able to describe and identify intended beneficiaries, it is not possible to meaningfully engage with them, and this, in turn, can result in inappropriate design, underutilization, and, ultimately, failure to achieve desired outcomes.
We are continuing to refine our work and associated lessons, and we are planning next steps, which are likely to include a deeper dive into the comparative data we have generated and the production of a chapeau report on work and findings to date. As always, we'd like to hear your thoughts on what we're coming up with, how it chimes with your experience and practice, and any other ideas on how best to approach the evaluation of service delivery.