
Monitoring and Evaluation for Poverty Reduction

Page history last edited by Alexandra Pittman 10 years, 6 months ago

 

Giovanna Prennushi, Gloria Rubio, and Kalanidhi Subbarao. 2001. “Monitoring and Evaluation.” In Core Techniques and Cross-Cutting Issues, Chapter 3:105–130. Vol. 1 of PRS Source Book. Washington, DC: World Bank.

 

This article presents an overview of M&E systems and methods for assessing poverty reduction programs, using a results-based management (RBM) approach. Prennushi et al. (2001) highlight the need to establish both intermediate and final indicators for program monitoring, as shown in the figure below. The diagram arranges indicators in a logical chain, focusing on how intermediate indicators lead to final outcomes and impacts.

 

 

The authors note that qualitative indicators complement quantitative indicators and should be used as needed; mixed methods were cited as frequently necessary for gender analysis in particular. Consulting experts and grassroots stakeholders during the M&E design phase increases the likelihood of relevant measurement.

 

Prennushi et al. (2001) acknowledge that longer-term final indicators result from a range of factors outside policymakers' control, but note that intermediate outcomes result from actions by the government or other change agents. As such, intermediate indicators change more rapidly and can be used to assess the progress of a stated policy or program prescription. See below for an example of a poverty reduction indicator created for a program in Tanzania.

 

Example of Poverty Reduction Indicator Development in Tanzania

 

In terms of evaluation, the authors suggest conducting standard impact evaluations only when conditions support credible causal claims: when the program or policy is of considerable strategic importance for poverty reduction, when it augments knowledge of what works or does not work, or when it tests an innovative approach to poverty reduction. Process evaluations, which determine the best pathways for delivering a program to target groups, are suggested in only a few cases. Finally, the authors note that M&E should not be treated as ad hoc activities but as tools for learning and organizational feedback. As such, M&E should be integrated into all levels of an organization, and evaluations should be written up to be relevant to different groups, such as policymakers, program managers, and program beneficiaries.

 

Strengths:

  • Prennushi et al. focus on capacity building and embedding feedback systems for organizational learning. The authors also recommend that M&E learning be adapted for use by different stakeholders, such as program managers and policymakers, which enhances the likelihood that learning will be integrated into the program, project, or institution. 
  • There is some attention to the multiple social, political, and economic forces at work that can influence development outcomes.
  • The recommendation to include both qualitative and quantitative indicators for programs with a gender dimension strengthens the M&E approach. However, this recommendation should not be limited to gender analysis but applied more broadly across all programs.
  • RBM models explicitly outline program activities, inputs, outputs, and outcomes.  

 

Weaknesses (or not designed for):

  • There is an underlying assumption that changes in intermediate indicators will provide important evidence regarding the effectiveness of program implementation. However, the RBM approach does not capture or assess how the program was actually implemented, so we cannot determine whether implementation was successful, whether constraints to implementation occurred, or whether reversals or shifts occurred under changing contextual conditions.
  • Impact indicators that aim to measure changes in macro indicators, e.g., the percentage of the population whose consumption falls below the poverty line, create false attributions. Such outcomes are not solely attributable to the poverty reduction strategy of interest; attributing them to it discounts the many other programs and policies, beyond the World Bank's, that influence poverty in a given context.
  • There is no attention to contextual influences and the ways in which these might constrain or augment the poverty intervention. Similarly, the RBM approach does not account for systemic contributions to poverty or attend to discovering if the intervention attempts to reduce or mitigate systemic levers at all.
  • Some poverty reduction M&E frameworks recommend outlining an intervention’s goals and targets in a participatory manner.[1] The World Bank’s strategy recommends consultation with experts or grassroots stakeholders, but this consultation is not built into the assessment framework itself.
  • The recommendation to evaluate primarily in extreme cases, e.g., best case or innovation, limits broader lessons that can be learned from the interventions.[2]
  • There is no explicit gender analysis, limiting understanding of the differential impacts of poverty and different poverty reduction strategies on men’s and women’s lives. Moreover, a broader power analysis and its contributions to poverty in the intended target region are not included.
  • There is an embedded bias toward new changes in behavior or policies rather than toward maintaining past gains, since results are defined as changes.[3]
  • Both RBM and logframe approaches rely on implementation in stable organizational settings where planning structures are well defined. However, many development situations are not stable: organizations work in complex and rapidly shifting environments that do not allow for implementation as planned. These circumstances require more flexible and adaptive approaches to M&E.[4]
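The macro impact indicator cited above, the share of the population whose consumption falls below the poverty line, is the poverty headcount ratio. A minimal sketch of how it is computed from household survey data, using entirely illustrative numbers (the sample values and poverty line below are hypothetical, not drawn from the source):

```python
def headcount_ratio(consumption, poverty_line):
    """Fraction of individuals whose per-capita consumption is below the line."""
    below = sum(1 for c in consumption if c < poverty_line)
    return below / len(consumption)

# Hypothetical survey: per-capita daily consumption, same units as the line.
sample = [0.8, 1.1, 1.5, 2.3, 0.9, 3.0, 1.9, 0.7]
line = 1.25  # illustrative poverty line

print(headcount_ratio(sample, line))  # prints 0.5 (4 of 8 below the line)
```

The sketch makes the attribution problem concrete: the ratio moves with any shock to the consumption distribution, whatever its cause, so a change in it cannot by itself be credited to one poverty reduction strategy.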

 

 


[1] World Bank. 2000. “Poverty Monitoring and Evaluation for Poverty Reduction Strategies.” Ulaanbaatar, Mongolia. PPT.

[2] ibid.

[3] Mark Schacter. 1999. “Results-Based Management and Multilateral Programming at CIDA: A Discussion Paper.” Institute on Governance.

[4] CDRA. 2001. “Reflections on Measurement.”

 

 
