Results of the Expert Roundtables on Innovative Performance Measurement Tools

NOTE #3: Tools to Improve the Quality of Indicators

Objectives: Technical methods which focus on ensuring that performance indicators have the characteristics necessary for them to serve as quantitative or qualitative variables that allow the verification of changes produced by interventions relative to a baseline and planned goals.

Context: The development of performance indicators is typically a process completed by program managers with input from mid-level line ministry staff. Figure 2 shows a selection of four common tools which have been employed to improve the quality of indicators. Tools 1-3 are ideally implemented at the beginning of the policy cycle, during the program design and planning phases. Indicator evaluations, in contrast, are primarily an ex-post tool to evaluate the quality of existing performance indicators. Systematic evaluation of indicators is found most often as part of executive evaluation initiatives (see Note #2 of this series on executive evaluations); in keeping with the overall objective of executive evaluation methods, the evaluation of indicators is completed with an eye to providing leadership with a 'snapshot' of performance and does not go into great depth of analysis. Mexico is the first country we are aware of that has added to this predominant model an in-depth evaluation of the quality of the performance indicators of federal programs.

Figure 2: Common Tools for Developing and Implementing Quality Indicators

1. Logic Framework Methodology
One of the key objectives of the Logic Framework Methodology is the development of performance indicators which accurately reflect the logic/theory of a program, i.e. the posited relationship between inputs, outputs and outcomes. The method is characterized by its emphasis on a participative process for the development of indicators, fostering quality and ownership of the indicators.
Reference: World Bank, Performance Monitoring Indicators: A Handbook for Task Managers, World Bank, 1999.

2. Benchmarking Exercises
Benchmarking indicators can help in the identification of ambitious but achievable targets for performance indicators, based on analysis and comparison of existing experiences in that particular performance area. Various methodologies exist to estimate adequate benchmarks, depending on the information available to complete the exercise. Benchmarking of performance targets for indicators can be achieved, for example, by:
a) benchmarking based on international experiences
b) benchmarking based on historical trends of performance
c) benchmarking across agencies in the government
References: World Bank, The Real Bottom Line: Benchmarking Performance in Poverty Reduction in Latin America and the Caribbean, World Bank, 2008; World Bank, Benchmarking Water and Sanitation Utilities: A Start-Up Kit, World Bank, Washington, D.C., 1999.

3. SMART, CREAM, SPICED
These acronyms represent commonly used criteria for performance indicators. They are widely used in the public, private and civil society sectors to provide 'rule of thumb' guidance to program managers identifying performance indicators, asserting that indicators should be CREAM, SMART or SPICED.
SMART: Specific, Measurable, Achievable and Attributable, Relevant, and Time-Bound
CREAM: Clear, Relevant, Economic, Adequate and Monitorable
SPICED: Subjective, Participatory, Interpreted, Cross-checked, Empowering and Diverse
Reference: New Zealand Aid, Activity Monitoring, http://nzaidtools.nzaid.govt.nz/?q=activity-monitoring/indicators-their-role-monitoring

4. Indicator Evaluations Included in Executive Evaluations
a) The Program Assessment Rating Tool (PART) in the United States has approximately 9 out of 25 binomial (yes/no) questions relating to the quality of the indicators of a program.
b) The Government Program Evaluation (EPG) in Chile requires that evaluators assess performance indicators along four dimensions: Efficiency, Efficacy, Economy and Quality. The measurement methods used are determined on a case-by-case basis by the evaluator.
c) The Consistency and Results Evaluation (CyR) in Mexico has approximately 15 out of 100 binomial (yes/no) questions related to the quality of the indicators of a program.
References: www.expectmore.gov; www.dipres.cl; www.coneval.gob.mx

Case Study: Mexico, Indicator Evaluation Proposal*

Mexico has established the objective of developing an evaluation which concentrates solely on the indicators included in the official 'matrix of indicators' which each program in Mexico is mandated to have. For this purpose, at the request of the National Council for the Evaluation of Social Development Policy (CONEVAL), the World Bank developed an initial proposal for the evaluation methodology to serve as a concept paper for the agency.

Basic Components of the Evaluation: The methodology organizes its assessment of indicators around three general dimensions of evaluation: basic capacities, quality and use. Each dimension is articulated through questions which the evaluator must answer by assigning each indicator a score on a four-point scale between 0 and 1, where 0 is the worst and 1 is the best. There are also a small number of binomial questions. Once the indicator has a score for each question, a final score for the indicator is calculated using a weighting mechanism for each dimension. The quality dimension is in addition disaggregated into five separate criteria based on CREAM, adding another level of possible weights should stakeholders wish to prioritize quality criteria. Weights at both the dimension and criteria levels are envisioned to be determined jointly by CONEVAL's staff, academic councilors and international experts.
Selection of Questions in the Evaluation (each question scored 0-1; dimension weights TBD)

Dimension: Basic Capacities
- To what extent was the indicator formulated through a consensual process?
- To what extent does existing information capacity allow the production of quality indicators?
- Are there protocols for the access and dissemination of the indicator? Are these protocols appropriate?
- To what extent are the protocols complied with?

Dimension: Quality (sub-criteria: Clarity, Relevance, Economic, Adequate, Monitorable)
- Does the indicator express clearly that which needs to be measured?
- Does the unit of measurement correspond to what needs to be measured?
- How clear is the calculation method for the indicator?
- Does the indicator reflect an important dimension of the objective of the program?
- How precise is the indicator in reflecting the target population?
- How cost-efficient is the indicator?
- Does the indicator have a baseline?
- How timely is the basic information required for monitoring of the indicator?
- How good is the quality of the data produced by the information sources for the indicator?
- Is the metadata of the basic information needed available?
- Are the responsibilities for the process of generating the indicator defined?
- Is the indicator associated with a specific goal?
- Does the indicator have the capacity to measure the advance towards those goals?
- Is the indicator disaggregated enough to be able to explain the result?
- Is the indicator unique?

Dimension: Use
- In the planning of the indicator, is the cycle of decision making considered?
- How timely is the production of the indicator for use in decision making?
- How understandable are the results of the indicator?
- To what extent are the results of the indicator being used for decision making in the program?

*This is not a full representation of the methodology; for further information please contact Gladys Lopez Acevedo, gacevedo@worldbank.org.
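The note does not publish the exact aggregation formula for the proposed evaluation, so the scoring mechanism can only be sketched. A minimal sketch follows, under stated assumptions: the four-point scale is taken to be {0, 1/3, 2/3, 1}, each dimension's score is the plain average of its question scores, and the final indicator score is the weighted average of the dimension scores. All scores, weights and names below are invented for illustration; the actual weights were still to be determined by CONEVAL.

```python
# Hypothetical sketch of the weighted scoring mechanism described above.
# Assumptions (not from the note): four-point scale {0, 1/3, 2/3, 1};
# dimension score = average of question scores; final score = weighted
# average of dimension scores, with weights still "TBD" in the proposal.

ALLOWED_SCORES = (0.0, 1 / 3, 2 / 3, 1.0)

def dimension_score(question_scores):
    """Average the question scores for one dimension, checking the scale."""
    for s in question_scores:
        if not any(abs(s - a) < 1e-9 for a in ALLOWED_SCORES):
            raise ValueError(f"score {s} is not on the four-point scale")
    return sum(question_scores) / len(question_scores)

def final_score(scores_by_dimension, weights):
    """Combine dimension scores into one indicator score via weights."""
    total_weight = sum(weights[d] for d in scores_by_dimension)
    return sum(
        weights[d] * dimension_score(qs)
        for d, qs in scores_by_dimension.items()
    ) / total_weight

# Illustrative (invented) question scores and dimension weights:
scores = {
    "basic_capacities": [1.0, 2 / 3, 1.0, 1 / 3],
    "quality":          [2 / 3, 1.0, 1 / 3, 2 / 3, 1.0],
    "use":              [1.0, 1.0, 2 / 3],
}
weights = {"basic_capacities": 0.25, "quality": 0.50, "use": 0.25}
print(round(final_score(scores, weights), 3))  # -> 0.776
```

The same pattern would extend one level down for the quality dimension, whose CREAM sub-criteria could carry their own weights before rolling up into the dimension score.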
The evaluation proposal developed by the World Bank is currently under discussion in CONEVAL. The final methodology is expected to be published by CONEVAL in 2010 at www.coneval.gob.mx.

Considerations for Ensuring Quality Performance Indicators:

- Improving indicators is an endless process: Though including potent performance indicators at the beginning of a program is crucial for successful monitoring, it is important to remain flexible and vigilant for possible refinements and improvements to indicators in light of lessons from implementation.

- Quality of data is paramount: Performance indicators are often the first entry point for evidence-based policy making information. Ensuring that the information systems feeding indicators deliver high quality at this level is extremely important, and possibly a good place to concentrate information quality improvements as a first step in a wider initiative to strengthen basic information-for-results tools.

- Balance monitoring and evaluation: Performance indicators can provide prioritized, concise performance information at a relatively modest cost; nonetheless, it remains crucial to conduct in-depth evaluations of programs at regular intervals to illuminate and complement performance indicator information in support of evidence-based policy making. Performance indicators, as their title suggests, give 'indications' of where a program is achieving and where it is not; evaluations are necessary to obtain more in-depth information for evidence-informed program improvements.