Capacity Enhancement Indicators: Review of the Literature

Yemile Mizrahi

WBI Working Papers
World Bank Institute

Copyright 2004
The International Bank for Reconstruction and Development / The World Bank
1818 H Street, N.W.
Washington, D.C. 20433, U.S.A.

The World Bank enjoys copyright under protocol 2 of the Universal Copyright Convention. This material may nonetheless be copied for research, educational or scholarly purposes only in the member countries of The World Bank. Material in this series is subject to revision. The findings, interpretations, and conclusions expressed in this document are entirely those of the author(s) and should not be attributed in any manner to the World Bank, to its affiliated organizations, or the members of its Board of Executive Directors or the countries they represent.

Capacity Enhancement Indicators: Review of the Literature. Yemile Mizrahi. 2004. 38 pages. Stock No. 37232.

Foreword

Since the international development conferences in Doha, Johannesburg, and Monterrey, capacity enhancement has acquired a central place as a driver of sustainable development. The World Bank Institute's (WBI) responsibility is to keep the focus sharp on capacity enhancement as a core feature of the Bank's development business.

As part of its mandate to evaluate the results of WBI's programs, the World Bank Institute Evaluation Group (IEG) commissioned a literature review to identify indicators used to operationalize and measure "capacity" in capacity enhancement programs supported by a number of organizations. The stark conclusion this review paper brings us to is that despite the importance accorded to the concept, little effort has gone into concretely defining what "capacity enhancement" means. To a large extent, the difficulty emerges from a vague understanding of the term "capacity," and even less clarity about the results to be expected from capacity enhancement efforts.

The paper suggests that the analytical framework and results orientation of capacity enhancement programs can be strengthened considerably by asking the questions: capacity for whom? and capacity for what? Although general agreement is emerging with respect to the levels at which capacity enhancement endeavors can be directed (individuals, organizations, and institutions), capacity-related outcomes will be amenable to measurement only if the expected outcomes are concretely defined. The author also encourages us to think of capacity building as a process and therefore to define interim benchmarks.

This paper was prepared by Yemile Mizrahi, under the immediate supervision of Nidhi Khattri. The conclusions are those of the author and do not necessarily reflect the views of the World Bank Institute.

Marlaine Lockheed, Manager
World Bank Institute Evaluation Group

Acknowledgements

This report was prepared for the World Bank Institute (WBI) under the overall guidance of Marlaine Lockheed, Manager, Evaluation Group. The team was led by Nidhi Khattri. This report benefited from comments by David Potten (WBIRC). Document production support was provided by Humberto Diaz.

WBI Evaluation Studies are produced by the WBI Evaluation Group (WBIEG) to report evaluation results for staff, client, and joint learning events. An objective of the studies is to get the findings out quickly, even if the presentations are less than fully polished. The papers carry the names of the authors and should be cited accordingly.
The findings, interpretations, and conclusions expressed in this paper are entirely those of the authors and do not necessarily represent the views of the World Bank Group.

WBI Evaluation Studies are available online at http://www.worldbank.org/wbi/evaluation/puball.htm.

Vice President, World Bank Institute: Ms. Frannie Léautier
Manager, Institute Evaluation Group: Ms. Marlaine Lockheed
Task Manager: Ms. Nidhi Khattri

Table of Contents

EXECUTIVE SUMMARY
1. INTRODUCTION
2. BACKGROUND
3. DIFFICULTIES IN MEASURING CAPACITY ENHANCEMENT
   CAPACITY VERSUS PERFORMANCE
   PRECONDITIONS FOR ENHANCING CAPACITY
4. CAPACITY ENHANCEMENT INDICATORS
5. DESIGNING CAPACITY ENHANCEMENT INDICATORS: AN EXAMPLE
ANNEXES
   ANNEX 1: INDICATORS TO ASSESS CAPACITY. UNDP. CAN WE DO BETTER? INSIGHTS FOR CAPACITY DEVELOPMENT
   ANNEX 2: INDICATORS TO MEASURE INSTITUTIONAL CAPACITY GAPS. ALAIN TOBELEM (1992)
   ANNEX 3: STATISTICAL CAPACITY BUILDING INDICATORS. PARIS21 TASK TEAM
   ANNEX 4: EXAMPLES OF CAPACITY ENHANCEMENT INDICATORS PROVIDED BY MORGAN, 1997

EXECUTIVE SUMMARY

During the past decade, capacity enhancement has become an increasingly important topic of discussion within the development community. Both academics and practitioners have recognized that developing countries need to enhance the capacity of their private and public institutions and organizations to meet the challenges of development in a sustainable manner. Virtually all donor agencies and multilateral organizations include capacity enhancement programs in their strategies. The topic has been thoroughly researched and a vast literature has been written on this subject. The purposes of this paper are to:

· Identify indicators of capacity and capacity enhancement in the development-related literature produced over the past ten years,
· Examine the difficulties and challenges of measuring capacity enhancement, and
· Suggest an analytical framework and format for designing capacity enhancement indicators.

Although a consensus exists about the importance of capacity enhancement, there is little agreement about how to identify and measure this concept. Yet, without adequate operationalization and measurement, it becomes extremely hard to assess capacity gaps and to evaluate whether the capacity of a given country, institution, or even an organization has been effectively enhanced as a result of capacity enhancement programs.
After a review of selected development literature on capacity enhancement, some conclusions emerge:

· Despite the prevalent conceptual and analytical vagueness of the term "capacity enhancement," most authors today agree that capacity enhancement involves something more than the strengthening of individual skills and abilities. Trained individuals need an appropriate environment, and the proper mix of opportunities and incentives, to use their acquired knowledge. Understanding capacity enhancement therefore requires a more comprehensive analytical framework that takes into account the individual, the organizational, and the institutional (or societal) levels of analysis. Reflecting on previous technical assistance projects, both practitioners and academics agree that institutional weakness constitutes a major constraint on development.

· Performance indicators cannot be substituted for capacity enhancement indicators. Although capacity and performance are related, they are not synonymous, and failure to distinguish between these concepts can lead to misleading conclusions.

· Capacity enhancement is a process and can therefore be measured in degrees. The latter requires the definition of benchmarks.

· While capacity enhancement can be measured in three analytic dimensions, indicators of capacity enhancement cannot be built in abstraction. Indicators only become operational when they are related to particular development objectives (capacity for what?) and make reference to the specific actors towards which capacity enhancement projects are directed (capacity for whom?). Indicators of capacity enhancement of a country's financial institutions, for example, will be different from indicators of the enhancement of an NGO's capacity to make government institutions more accountable by increasing its ability to understand and analyze financial policy.

· Capacity enhancement projects must entail local ownership if they are to succeed. Without the leaders' political will to introduce and sustain reforms geared at enhancing capacity, no project can succeed, whether in the public or in the private realm.

The following pages review the literature on capacity enhancement and identify some of the indicators that have been proposed to disaggregate, measure, and operationalize this concept. Using this literature and combining different analytical approaches, this document suggests a format for designing capacity enhancement indicators for a hypothetical project on statistical capacity enhancement. The annexes reproduce the indicators referred to in the text and are included to facilitate consultation.

1. INTRODUCTION

1.1 A consensus exists in the development community, both among practitioners and academics, that strengthening capacity is fundamental for development. The transfer of resources from rich to poor countries, although important, is not sufficient to improve the performance of public and private organizations in developing countries. It is equally critical to enhance the capacity of these organizations to use, manage, and deploy these resources so that they are able to accomplish their strategic objectives (Horton, 2001). The latter involves not only the ability to identify needs and acquire the missing resources, but also the ability to design adequate incentives and create the opportunities to use these resources effectively and efficiently.
This often requires the introduction of substantial institutional and organizational reforms and the ability to manage the changes that necessarily accompany this process. Poor institutional contexts, particularly weak bureaucracies and corruption, constitute a serious constraint on the ability of technical cooperation to contribute to capacity development (Browne, 2002, p. 20).

1.2 Although the question of capacity enhancement has received extensive attention in the development literature and the topic has been approached from different analytic perspectives, little agreement exists about how to define, operationalize, and measure capacity and capacity enhancement.1 This paper is an attempt to synthesize the literature to draw some conclusions about the measurement of capacity enhancement.

1.3 In the sections below, the paper:
· Provides a brief background to the concept of capacity enhancement,
· Highlights the difficulties in measuring capacity enhancement,
· Summarizes the capacity enhancement indicators used in development projects, and, finally,
· Suggests an analytic framework for designing capacity enhancement indicators.

1.4 The annexes provide specific examples of the instruments that have been used to measure capacity enhancement.

1 A large literature was reviewed for this document, but only the most relevant pieces are cited in the bibliography.

2. BACKGROUND

2.1 Definitions of the terms capacity and capacity enhancement abound in the literature. Cohen (1993, p. 2) believes the term capacity building (or enhancement) has been used too broadly and inconsistently, to the point where it has lost its analytic power and utility. He claims that capacity needs to be narrowly defined as "individual ability, competence to carry out a specific task" and that capacity enhancement must therefore focus on increasing the abilities of specific types of personnel within an organization (1993, p. 6). This narrow definition, however, does not "travel" well to the other analytic dimensions of capacity and capacity enhancement: the organizational and institutional levels.2 The UNDP (United Nations Development Programme) may provide the most analytically useful and least controversial definition. Capacity refers to the "ability to perform functions, solve problems and set and achieve objectives." This definition recognizes that national capacity is not just the sum total of individual capacities; the concept is richer and more complex, one that "weaves individual strengths into a stronger and more resilient fabric...If countries and societies want to develop capacities, they must do more than expand individual skills. They also have to create the opportunities and incentives for people to use and expand individual skills" (Fukuda-Parr, Lopes, and Malik, 2002, p. 9).

2 The European Center for Development Policy Management (ECDPM) moves a step forward in the analytic dimension and defines "organizational capacity" as "an organization's potential to perform: its ability to successfully apply its skills and resources toward the accomplishments of its goals and the satisfaction of its stakeholders' expectations" (ECDPM, 2003).

2.2 The term capacity enhancement adds a time dimension to this definition of capacity. Although conceptually related, capacity enhancement refers to the acquisition of these abilities within a certain period of time. Capacity enhancement projects thus refer to the resources and conditions that are needed to develop these abilities. Capacity enhancement is a process and is measured in degrees. Therefore, to become operational, capacity enhancement requires the establishment of benchmarks along a low-high continuum.

2.3 In addition to the difficulties of defining these terms operationally, most authors writing on capacity and capacity enhancement have avoided questions of measurement and the identification of indicators.
As Morgan correctly notes, "measures exist at the input and output end of the spectrum, and many indicators can be found which address service delivery and performance outcomes. There remains however a 'black box' in the middle of the indicator spectrum to do with capacity development which remains vague and unclear" (Morgan, 1997, p. 1).

2.4 This vagueness is puzzling because, despite the consensus about the importance of strengthening the capacity of developing countries' governments and societies, most analyses of capacity enhancement projects implemented during the 1980s concluded that the majority of these projects failed (Berg, 1993; Jaycox, 1993; David, 2001; Browne, 1999). In the 1990s some financial institutions became highly critical of these projects because "extensive investments apparently produced little in terms of the increased capacity of public sector officials or organizations to perform efficiently, effectively, and responsively" (Grindle, 1997, p. 9). Yet, without adequate instruments to measure, monitor, and evaluate capacity enhancement, it is difficult to understand specifically what aspects or elements of capacity enhancement projects failed, to identify partial successes, and to design better, more effective and feasible projects.

2.5 Reflecting on previous capacity enhancement projects, a large part of the current literature on capacity enhancement focuses on new ways to design and implement technical assistance projects. Critical of the previous experience, this literature develops a "new paradigm" for capacity enhancement (or technical cooperation) which stresses the importance of country ownership, shifts its focus from the transfer of knowledge to the acquisition of knowledge, and acknowledges the existence of local capacities (Browne, 2002; Morgan, 1997; Horton, 2001; Lopes and Theisohn, forthcoming).3 However, fewer efforts have been devoted to the particular question of measurement.

3 The motto of this new paradigm is "scan globally, reinvent locally" (Fukuda-Parr, Lopes, and Malik, 2002, p. 19).

3. DIFFICULTIES IN MEASURING CAPACITY ENHANCEMENT

3.1 In large part, the difficulty of measuring capacity enhancement is that, by definition, capacity enhancement is a process, rather than a final outcome or an output (the results of capacity), which are more easily identified and quantified. Moreover, enhancement may lead to different degrees of capacity.

3.2 Measurements of capacity enhancement may be qualitative in nature and involve a time frame, since capacity is strengthened over time. More importantly, capacity enhancement involves a complex process of learning, adaptation, and attitudinal change at the individual, organizational, and institutional levels. Benchmarks used to assess degrees or levels of capacity are often based on subjective evaluations and partial or incomplete information. Identifying indicators and measurement tools that grasp these complexities and address these different levels of analysis is much more challenging and difficult than identifying indicators that measure outputs or outcomes.
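Paragraphs 2.2 and 3.2 describe capacity enhancement as a process measured in degrees against benchmarks along a low-high continuum and over time. A minimal sketch of that idea in code follows; the indicator name, the 1-4 ordinal scale, and the assessment dates are hypothetical illustrations, not drawn from any framework reviewed in this paper.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class BenchmarkedIndicator:
    """An indicator assessed in degrees (1 = least favorable, 4 = optimal)
    at successive points in time, so that enhancement shows up as movement
    along a continuum rather than as a single pass/fail outcome."""
    name: str
    assessments: list = field(default_factory=list)  # (date, level) pairs

    def record(self, when: date, level: int) -> None:
        if not 1 <= level <= 4:
            raise ValueError("benchmark levels run from 1 (lowest) to 4 (optimal)")
        self.assessments.append((when, level))

    def progress(self):
        """Change in level between the first and latest assessments; a partial
        gain (e.g. level 1 to level 2) is still visible as progress."""
        if len(self.assessments) < 2:
            return None
        return self.assessments[-1][1] - self.assessments[0][1]

# Hypothetical usage: two assessments a year apart register partial progress.
clarity = BenchmarkedIndicator("clarity of staff roles and responsibilities")
clarity.record(date(2002, 6, 1), 1)
clarity.record(date(2003, 6, 1), 2)
print(clarity.progress())  # 1: enhanced, though not yet at the optimal level
```

Recording repeated assessments against the same benchmark scale is what allows partial progress to register as a result rather than as failure, a point developed in paragraph 3.7 below.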
3.3 Moreover, focusing on capacity enhancement is also less "glamorous" than focusing on results or outcomes (Morgan, 1997, p. 2). This also explains why fewer efforts have been devoted to measuring this process and the capacity that leads to performance. Of course, one could indirectly measure capacity enhancement by measuring performance outcomes. The logic is that if an organization or institution has greater capacity to perform its functions, performance will likely improve. However, capacity and performance are not synonymous, and failure to distinguish between these concepts can lead to misleading conclusions.

CAPACITY VERSUS PERFORMANCE

3.4 There are several reasons why performance indicators are not appropriate for measuring capacity:

3.5 First, while performance may be a good indicator of adequate or good capacity, it does not yield insights into which aspects of capacity are particularly good, or which may be weakening. The personnel within a particular organization, for example, may have adequate levels of skills and yet the organization may be failing in its performance. Analyzing declining levels of performance, however, cannot reveal much about capacity gaps, for it may be that the gap is not at the skill level, but at a higher level of management.

3.6 Furthermore, performance indicators do not reveal what aspect of capacity is responsible for a better or failed performance. Weak performance indicators tell us little about the origins or causes of these results. Capacity enhancement projects may not be successful in generating better performance indicators or more satisfactory outputs; yet without adequately disaggregating capacity and finding indicators and benchmarks to measure capacity enhancement through its different analytic dimensions, it is difficult to assess what aspects of the process are failing, where additional support is required, and whether capacity enhancement projects are even realistic or feasible. Weak performance can be attributed to the lack of skilled personnel, to the unclear definition of roles and responsibilities within an organization, to the lack of adequate financial support, to the weakness of the regulatory framework, or to a combination of all these factors. Understanding these different analytic dimensions and designing measurements to evaluate progress at each level is important for designing better and more effective capacity enhancement projects.

3.7 Second, as in many other development programs, capacity enhancement programs may be only partially successful. Yet partial success is difficult to recognize if the criteria for evaluating these programs are based solely on performance outcomes. Measuring the "process" of capacity enhancement and developing benchmarks is thus critical for allowing the analyst to recognize partial and incomplete results. The prevalent frustration with many capacity enhancement programs stems in large part from the failure to recognize partial success. Confronted with what was perceived as "total failure," many projects attempted to start from scratch every time a new project was introduced. Identifying partial successes leads not only to a more balanced judgment, but also to the adoption of more gradual, piecemeal, and realistic development strategies that take "existing local capacity" as a starting point. The latter has been identified by the UNDP as a critical element in the new "paradigm" of capacity development (Browne, 2002).
3.8 Third, an institution or organization can improve its performance indicators, but nothing guarantees that this level of performance can be sustained over time. Unlike performance indicators, indicators of capacity and capacity enhancement provide information about sustainability by revealing the extent of institutionalization or routinization of reforms introduced to enhance capacity. Technical assistance projects may have an initial positive impact on performance results, but as soon as the funding of these projects ends or foreign experts leave the country, performance indicators deteriorate. Unlike indicators of performance, indicators of capacity enhancement tell us something about the extent of "country ownership," a critical element for the sustainability of any capacity enhancement project. More on this in the sections below.

3.9 Finally, the relationship between capacity enhancement and performance is by no means direct and linear. The performance of governments, businesses, or civil society organizations is affected by a multiplicity of factors, above and beyond capacity enhancement. A severe economic crisis, for example, can have a substantial impact on the growth of poverty rates, regardless of the capacity of public officials to design and implement better poverty reduction strategies. Rapid economic growth, on the other hand, can have a greater impact on reducing poverty rates than the enhancement of government's long-term capacity to deal with macroeconomic stability. Similarly, low HIV rates may not accurately reveal the government's capacity to respond, should the problem emerge at a later stage. Finally, a business may be successful in a closed economy protected from competition, regardless of its capacity to produce quality products.

PRECONDITIONS FOR ENHANCING CAPACITY

3.10 In a recent publication that reflects on previous capacity enhancement projects, the UNDP recognizes that "successful and sustainable capacity development has remained an elusive goal" and that despite the training of thousands of people, "development undertakings have constantly faced lack of necessary skills and weak institutions" (Fukuda-Parr, Lopes, and Malik, 2002, p. 3). In large part, this is due to the lack of "ownership" of most capacity enhancement projects, the strong dependency on the donor community and foreign experts, and the tendency to concentrate on the training of individual skills without consideration of the larger organizational and institutional context.

3.11 It is clear that a necessary precondition for enhancing the capacity of governments, public or private organizations, or firms is the willingness and commitment of key actors within these institutions to introduce reforms geared at improving performance outputs. An organization may have the technical capacity to accomplish a particular task, yet without a strong commitment from the organization's leadership, it may lack the adequate resources and/or the appropriate regulatory framework to accomplish these tasks.

4. CAPACITY ENHANCEMENT INDICATORS

4.1 The literature on capacity and capacity enhancement is extensive but mostly vague when it comes to the operationalization of concepts, the identification of measurement tools, and the definition of indicators.
While a consensus exists that capacity involves something more than the sum total of individual capacities, and that capacity enhancement projects therefore need to consider the broader institutional and organizational framework in which individuals operate, there is very little agreement on how to assess, monitor, and measure capacity and capacity enhancement in the absence of specific developmental or sectoral objectives.

4.2 Furthermore, even when most authors acknowledge that capacity and capacity enhancement need to be approached within a three-dimensional analytical framework, authors usually tend to focus on one analytic dimension more than another. Thus, for example, in his study of building technical capacity in the public sector, Cohen (1993) focuses more on building functional capacities (skills) at the individual level. In contrast, Peterson's analysis of bureaucracies in Africa (1997) focuses on the organizational dimension, and Bolnick's study of fiscal discipline in Zambia (1997) focuses on the institutional level of analysis.

4.3 In a 2002 publication, the UNDP recognizes that despite years of training through technical assistance projects, weak institutions and poor skills remain unyielding constraints to development. While this document stresses that these projects failed due in large part to lack of "ownership," it devotes most of its attention to "finding new solutions to old problems" and offers little in terms of suggesting more concrete indicators to evaluate the effectiveness or progress of capacity enhancement projects. A more recent publication, however, grants greater attention to defining capacity and capacity enhancement in more operational terms.4

4.4 Building on the experiences and lessons of previous capacity enhancement projects in different countries, the main goal of this UNDP document is to offer practical and useful advice for practitioners and decision makers in developing countries and the international donor community. To disaggregate the term capacity, the document identifies "key capacities" that could be expected from "an empowered and capable individual, organization, or society in molding its own destiny" (Lopes and Theisohn, forthcoming, p. 19).5

4 In UNDP language, capacity enhancement is described as "capacity development." The UNDP prefers this term to "capacity building" because it acknowledges existing local capacities and connotes a long-term process that covers many crucial stages and ensures national ownership and sustainability. The concept of "capacity development" encompasses organizations and institutions that lie entirely outside the public sector, private enterprise and civil society organizations in particular. Capacity development involves human resource development, institution building, and capacities in the society as a whole (UNDP, 2002, Introduction).

5 Although this list is less vague than the terms "capacity enhancement" or "capacity development," the items on the list still need to be operationalized and linked to particular institutional and organizational contexts to be subjected to measurement.

UNDP's Core Capacities:
1. The capacity to set objectives
2. The capacity to develop strategies
3. The capacity to draw action plans
4. The capacity to develop and implement appropriate policies
5. The capacity to develop regulatory and legal frameworks
6. The capacity to build and manage partnerships
7. The capacity to foster an enabling environment for civil society
8. The capacity to mobilize and manage resources
9. The capacity to implement action plans
10. The capacity to monitor progress

Lopes and Theisohn, forthcoming, p. 18.

4.5 While stopping short of defining general capacity enhancement indicators, the book recommends some tools and techniques to conduct capacity assessments and to identify capacity gaps at the individual, organizational, and societal levels.

4.6 This is without a doubt a step forward in the definition of an analytic framework and the identification of measurement tools to analyze and assess capacity enhancement. However, the tools and techniques proposed to analyze capacity still remain abstract and vague. They serve better as general guiding principles for thinking about capacity enhancement than as useful measurement tools. For example, to assess the capacity of an organization to fulfill its functions, the book recommends that one should know if "the institutional processes such as planning, quality management, monitoring and evaluation, work effectively." Similarly, to assess the capacity of individuals, one should know if "the incentives are sufficient to promote excellence" (see Annex 1 for the full list). However, how should one evaluate whether planning works effectively? What are the benchmarks? How should we know if the incentives are sufficient or not to promote excellence? These elements need to be defined before we can build more concrete indicators.

4.7 A more ambitious attempt to analyze capacity enhancement in the public sector through its various levels of analysis is Merilee S. Grindle's edited volume, "Getting Good Government." This volume presents an analytic framework, or conceptual map, to analyze capacity and provides concrete case studies of capacity enhancement projects in various parts of the world. Contributors to this volume agree that "good government is advanced when skilled and professional public officials undertake to formulate and implement their policies, when bureaucratic units perform their assigned tasks effectively, and when fair and authoritative rules for economic and political interaction are regularly observed and enforced" (Grindle, 1997, p. 8). A central contribution of this volume is the recognition that enhancing the capacity of governments to perform efficiently, effectively, and responsibly requires addressing the different dimensions of governance: developing human resources, strengthening organizations, and reforming (or creating) institutions.

4.8 Hilderbrand and Grindle's chapter provides the most comprehensive and substantial analytic framework for assessing capacity and capacity enhancement (capacity building, in their words). By disaggregating capacity by different levels of analysis, this framework helps to identify capacity gaps and tools for designing more effective projects. Like most recent studies on capacity enhancement, these authors argue that training individuals and transferring technology are not sufficient for enhancing capacity, for individuals do not perform in a vacuum; their performance depends on the larger organizational and institutional framework. Moreover, unlike many authors who stress that civil servants improve their performance if they are adequately trained, well paid, and have well-defined responsibilities, Hilderbrand and Grindle persuasively argue that although these elements are important, good performance depends much more on improved management (Grindle, 1997, p. 33).
Taking a more systematic approach, they define five levels of analysis that affect capacity and should guide capacity building interventions:

Hilderbrand and Grindle's Analytic Framework
1. The action environment: the political, social, and economic context in which governments carry out their activities (rate of economic growth; degree of political conflict; human resource profile of the country).
2. The institutional context of the public sector, which includes such factors as the rules and procedures set for government operations and public officials, the financial resources the government has to carry out its activities, and the responsibilities that government assumes for development initiatives.
3. The task network: the organizations involved in accomplishing any given task; the degree of communication and coordination among these organizations; and the extent to which organizations are able to carry out their responsibilities effectively.
4. Organizations, the building blocks of the task network. Factors such as structure, processes, resources, and management styles affect how organizations define their goals, structure their work, provide incentive structures, and establish authority relations.
5. Human resources: the level of skills and the retention of skilled personnel within organizations.

Hilderbrand and Grindle, 1997, pp. 35-37.

4.9 Using this analytic framework, the cases reviewed in this volume provide concrete examples of successes and failures of various capacity enhancement projects. Moreover, some of the case studies identify concrete indicators to assess the outcomes of the particular project under review. However, aside from proposing general guiding principles, the volume does not develop specific indicators, benchmarks, or measurement tools that can be used across regions.

4.10 Taking a similar analytical perspective but devoting more attention to developing a methodology for carrying out systematic institutional capacity analysis, Alain Tobelem (1992) defines different indicators to assess capacity gaps. Tobelem's document is intended as an "operation manual" for conducting institutional capacity analysis. His methodological approach focuses more on disaggregating and operationalizing capacity than on building benchmarks for assessing capacity enhancement. Nevertheless, his framework is extremely useful for identifying indicators of capacity through its different analytic dimensions.

4.11 Although he does not use the term "ownership," Tobelem stresses that a government's commitment and political will to engage in a capacity enhancement project is a necessary precondition. In his words, "when a government, an administration, or an entity does not intend to change anything, when it does not want to do anything more or anything better, the related installed institutional capacity is by definition sufficient and therefore does not need to be assessed" (Tobelem, 1992, p. 3).

4.12 Once this level of commitment or "ownership" has been evaluated, Tobelem suggests five analytic perspectives to assess institutional capacity that closely resemble those proposed by Grindle (1997) in the volume referred to above. However, in addition to describing each of these analytic perspectives, Tobelem defines a set of indicators to measure capacity and assess capacity gaps.

Tobelem's Analytic Dimensions to Assess Capacity:
1. Rules of the game (institutional background), which includes governance, constitution, legislation, regulations, and rules.
2. Inter-institutional Relationships: the number of, and the extent of coordination among, the different institutional entities (or organizations) in charge of a particular function or task.
3. Internal Organization, which includes the roles, the mandates, the distribution of functions, the internal relationship flows, the management style, and the resources of an organization.
4. Personnel Policy and Reward System: the existing civil service regulations.
5. Skills, which includes the personnel's knowledge and skill levels to accomplish their functions.

Tobelem, 1992, pp. 23-51.

4.13 Annex 2 reproduces the full list of indicators Tobelem proposes for each analytic perspective. As is apparent from this list, many of these indicators remain vague unless they make explicit reference to concrete development objectives and particular regional contexts. In abstraction, they can serve as an overall framework which can then be used to operationalize concepts and identify indicators.

4.14 An example of capacity indicators that relate to a specific development objective is the work of the Paris21 Task Team on Statistical Capacity Building. Recognizing the centrality of statistical information for the formulation of development policies, particularly in the area of poverty reduction, this team developed a set of quantitative and qualitative indicators of statistical capacity to help identify capacity gaps and to track the progress of countries building statistical capacity (Laliberte, 2002). The process of defining these indicators took three years, from 1999 to 2002.6 These indicators are particularly applicable to countries "that are statistically challenged, that have major deficiencies in available statistics and require sizable statistical capacity building, including fundamental changes to improve statistical operations and that cannot develop their statistical capacity without external assistance" (Laliberte, 2002, p. 4).7

6 The Paris21 Consortium is a partnership of national, regional, and international statisticians, policymakers, development professionals, and other users of statistics. The Consortium was launched in 1999 and its purpose is to promote, influence, and facilitate capacity-building activities and the better use of statistics. Its founding organizers are the UN, OECD, World Bank, IMF, and EC.

7 Other sector-specific indicators probably exist, but nothing was found with reference to particular development projects.

4.15 The quantitative indicators measure the performance of data-producing agencies by providing information on the depth and breadth of statistical activities: the financing, staff, number of data sources, and diversity of statistical outputs.8 Quantitative indicators focus on the statistics produced and can be used to assess whether the statistical agency has attained the goal of delivering its products (Laliberte, 2002, p. 14). These indicators, however, do not provide information about whether the data are effectively used by governmental agencies or other users. Nor do they reveal whether the data are produced in an efficient manner. This information is supplied by the qualitative indicators, which are applied to the statistical data sets.9 Qualitative indicators reveal information on effectiveness and efficiency by taking into consideration the broader environment in which the statistical agency operates.
They show if the legal environment facilitates the production of statistics; if the culture is amenable to quality work; if integrity and professionalism are protected and transparency measures are in place; if the data produced follow international methodological standards; if measures are in place to maintain the relevancy of products; and if the characteristics of the statistics produced fit users' needs (Laliberte, 2002, p. 15).10 Qualitative indicators are structured according to six criteria that are relevant for statistical operations:

8 To limit the reporting burden, the Paris21 Task Team suggests that only three representative agencies be assessed (Laliberte, 2002, p. 10).

9 As in the case of quantitative indicators, the Paris21 Task Team suggests limiting these data sets to three representative statistical outputs (they suggest GDP, population, and household income/expenditure data sets, which represent the economic, demographic, and social domains respectively; Laliberte, 2002, p. 11).

10 The Paris21 and Tobelem approaches were combined to build the indicators of statistical capacity building. See Section 5 of this document.

Paris21 Statistical Capacity Building Criteria Used to Build Qualitative Indicators:
1. Institutional prerequisites
2. Integrity
3. Methodological soundness
4. Accuracy and reliability
5. Serviceability
6. Accessibility

4.16 In addition to describing the quantitative and qualitative concepts and developing indicators for each different aspect of the statistical operation process, the Paris21 Task Team defined benchmarks to assess these indicators according to a four-level range. Level 4 refers to optimal conditions for data production, and level 1 to the least favorable conditions. Developing benchmarks related to a particular objective (in this case, strengthening the statistical capacity of an agency) allows the analyst not only to assess capacity gaps, but also to analyze the process of capacity enhancement. A full list of these quantitative and qualitative indicators is provided in Annex 3.

Paris21 Benchmark Descriptions for Data-related Indicators (an example)
Indicator: Effective Coordination of Statistics

Level 4:
1. Legal or other formal arrangements/procedures clearly specify the responsibilities for coordination of statistical work and promotion of statistics standards, and this is implemented effectively through:
2. Development of a coordinated national program of statistical activities; identification of data gaps in meeting users' needs; elimination of duplication of statistical effort
3. Promotion of standard frameworks, concepts, classifications, and methodologies throughout the data-producing agencies

Level 3:
1. Legal or other formal arrangements/procedures allocate responsibility for coordination of statistical work, but this is not fully effective in practice
2. There are some (but not significant) data gaps and/or duplication of statistical effort
3. Standard frameworks, etc. are promoted, but there are some instances of non-compliance

Level 2:
1. Legal or other formal arrangements/procedures do not allocate responsibility for coordination of statistical work, and coordination does not occur
2. There are significant data gaps in certain areas and/or duplication of statistical effort (statistical outputs produced by different agencies may lack consistency and coherence)
3. Standard frameworks, etc. are not actively promoted and there is significant non-compliance

Level 1:
1. There are no legal or other formal arrangements/procedures that specify responsibility for coordination of statistical work
2. There are significant data gaps and duplication of statistical effort
3. Standard frameworks, etc. are not promoted and are generally not observed. Data-producing agencies may produce and use statistical outputs that are in conflict with those produced by others

Source: Laliberte (2002, p. 25).
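The four-level benchmark just reproduced is, in effect, a descriptive scoring rubric. The sketch below encodes that structure for the coordination indicator; the dictionary layout and the paraphrased level descriptions are illustrative assumptions, not part of the Paris21 instrument itself.

```python
# A sketch of the Paris21-style four-level benchmark as a scoring rubric,
# using the "Effective Coordination of Statistics" indicator reproduced
# above. Level 4 is optimal, level 1 least favorable (Laliberte, 2002).
# The condensed descriptions below are paraphrases for illustration only.
COORDINATION_BENCHMARKS = {
    4: ("Formal arrangements clearly assign responsibility for coordination "
        "and are implemented effectively: coordinated national program, data "
        "gaps identified, duplication eliminated, standards promoted."),
    3: ("Formal arrangements assign responsibility for coordination but are "
        "not fully effective; some gaps or duplication; some non-compliance."),
    2: ("Formal arrangements do not assign responsibility for coordination; "
        "significant gaps or duplication; standards not actively promoted."),
    1: ("No formal arrangements for coordination; significant gaps and "
        "duplication; standards generally not observed."),
}

# An assessor compares observed conditions against each description and
# assigns the best-matching level; the judgment itself remains qualitative.
for level in sorted(COORDINATION_BENCHMARKS, reverse=True):
    print(f"Level {level}: {COORDINATION_BENCHMARKS[level]}")
```

The design choice worth noting is that each level carries a concrete description of observable conditions, which is what lets different assessors converge on comparable scores.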
4.17 Like much of the recent literature on capacity enhancement, the Paris21 Team recognizes that capacity enhancement depends on something more than the development of skills and the acquisition of technical equipment. Although their indicators are not disaggregated by levels of analysis (individual, organizational, institutional), these different dimensions are implicit in their definition.

4.18 Another example of an attempt to develop indicators to assess particular capacity enhancement projects is Cohen and Wheeler's study on training and retention in public sector bureaucracies in Africa (Cohen and Wheeler, 1997). This study assesses the impact of six externally funded capacity building projects that focused on training public sector economists, planners, statisticians, and financial managers.

4.19 Cohen and Wheeler recognize that although many of these projects have been widely perceived as failures, there are few systematic studies to assess the successes and failures of training and staff retention programs. Although their analysis focuses on the skill (or human resources) level of analysis, they recognize that analyzing the capacity of the public sector requires a broader analytic perspective that takes into consideration management styles and other organizational aspects.

4.20 To assess the success of training projects aimed at building human resource capacity in the public sector, they develop these indicators:

Indicators of Capacity Enhancement in African Public Sectors
1. Retention rates of trained personnel in the targeted ministry
2. Retention of trained personnel in other government ministries or agencies
3. Attrition rates (how long trained personnel stay in the public sector)
4. Decline of expatriate experts
5. Profile of those who leave the public sector: whether the best and brightest stayed in or left the public sector

Cohen and Wheeler, 1997, pp. 125-140.
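The first three of these indicators are simple ratios computed over personnel records. A minimal sketch of how they might be computed is shown below; the record fields and sample figures are invented for illustration and do not come from Cohen and Wheeler's study.

```python
# Hypothetical personnel records for trainees of an externally funded
# program: where each person works now, and years spent in the public
# sector after training. Fields and data are illustrative only.
trainees = [
    {"name": "A", "current": "target_ministry", "years_in_public_sector": 6},
    {"name": "B", "current": "other_ministry",  "years_in_public_sector": 5},
    {"name": "C", "current": "private_sector",  "years_in_public_sector": 2},
    {"name": "D", "current": "target_ministry", "years_in_public_sector": 7},
]

def retention_rate(records, where):
    """Share of trained personnel still employed at `where`
    (indicator 1 for the targeted ministry, indicator 2 for
    other government ministries or agencies)."""
    return sum(r["current"] == where for r in records) / len(records)

def mean_tenure(records):
    """Average years trained personnel stayed in the public sector,
    a simple proxy for the attrition indicator (indicator 3)."""
    return sum(r["years_in_public_sector"] for r in records) / len(records)

print(retention_rate(trainees, "target_ministry"))  # 0.5
print(retention_rate(trainees, "other_ministry"))   # 0.25
print(mean_tenure(trainees))                        # 5.0
```

As the study's findings below illustrate, high values on these quantitative measures can coexist with weak organizational capacity, which is precisely why they cannot stand alone.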
4.21 Using these indicators, the authors found that retention rates were much higher than expected, even though there is a wide perception that public servants are badly paid, lack equipment, work in a demoralized environment, and suffer from poor management (Cohen and Wheeler, 1997, p. 140). The questions, then, are why so many trained individuals stayed in their jobs, and why, despite such high retention rates, bureaucracies in these African countries continue to perform poorly. The answer is that for many public officials, their government job represented only a minor component of their salary. The combination of low salaries and weak management allowed these people to keep their public sector employment while seeking jobs outside. Trained people stayed in the public sector, but they were under-performing. "The opportunity to use office hours and equipment to significantly augment official salaries through private-income earning activities provides a major incentive to remain in the civil service" (Cohen and Wheeler, 1997, p. 142). Paradoxically, weak management, lack of clarity of roles and responsibilities, duplication of functions, and absence of performance evaluations increased public officials' incentives to remain in the public service. Lack of accountability and motivation within the public sector allowed these officials to use their jobs as safety nets while devoting their time and talent to other, more profitable and rewarding occupations.

4.22 The conclusion of this study is that quantitative indicators such as skill retention rates and attrition rates are not sufficient to assess government capacity. More qualitative indicators of management styles and civil service rules are required in assessing the capacity of an organization. Although Cohen and Wheeler suggest different organizational aspects that need to be taken into consideration in assessing capacity (pay levels, team work, supportive supervision, development of a more attractive scheme of service, transparent and timely promotions, etc.), they do not define indicators to assess organizational capacity.

4.23 In an article entitled "The design and use of capacity development indicators," Peter Morgan (1997) offers a framework for thinking about capacity indicators and provides some operational guidelines for their design. He stresses the importance of understanding that measuring capacity is different from measuring performance or outcomes; capacity indicators reveal something about the efforts that are necessary to improve organizational performance.

4.24 Morgan contends that designing indicators is an activity that has to be "de-mystified." In his words, "indicator factories in funding agencies now produce lists and lists of indicators for many different sectors. These are then tacked on to development projects and inserted into approval documents and contracts with little empirical evidence of their benefit or impact...Yet at the same time, there needs to be more attention paid to the use and design of indicators as one part of the broader process of the strategic management of capacity development" (Morgan, 1997, p. 3). Rather than providing yet another list of indicators, Morgan suggests criteria for designing indicators that are more effective and realistic and that reveal information about the "process" of building or strengthening capacities.

4.25 Although Morgan recognizes that there are no generic indicators of organizational capacity development, and that indicators need to relate to specific development objectives (capacity for what?) and the actors at whom they are aimed (capacity for whom?), he still identifies some "boiler plate principles" that can be used to assess the capacity of any organization to fulfill its functions.

Morgan's Boiler Plate Principles of Organizational Capacity
1. The organization can learn and adapt to changing circumstances. It has a self-renewing capacity.
2. The organization can form productive relationships with outside groups or organizations as part of a broader effort to achieve its objectives.
3. The organization has an effective program for the recruitment, development, and retention of staff that can adequately perform its critical functions.
4. The organization has some ability to legitimize its existence.
5. The organization has a structure, technology, and set of procedures that enable the staff to carry out the critical functions.
6. The organization has a culture, a set of values, and an organizational motivation that values and rewards performance.
7. The organization has the ability, the resources, and the autonomy to focus on a manageable set of objectives over a reasonable period of time.

Morgan, 1997, p. 9.

4.26 As is evident from this list, the issues addressed here by Morgan refer to the organizational level or dimension discussed by other authors. In abstraction, however, the elements of this list are too vague to be used as indicators. They need to be adapted to particular development goals and to concrete institutional and organizational contexts. Morgan gives some examples of capacity development indicators that refer to particular development goals and that are targeted to particular agencies or institutions. Some of these examples are reproduced in Annex 4. Although Morgan's indicators are more specific about the goals and the agencies involved in particular development projects, they do not make explicit reference to the different analytic dimensions of capacity enhancement. This limits their use in assessing the capacity of an institution or an organization to fulfill its assigned functions in a sustainable manner.

4.27 In reviewing the literature on capacity enhancement, five conclusions emerge:

a) Consensus exists that analysis of capacity and capacity enhancement should be approached through three different levels of analysis: the individual (human skills), the organizational, and the institutional. The recognition of the relationship between these three dimensions or levels is fundamental in the emerging paradigm for capacity enhancement among the international donor community. Unlike in the past, when most capacity enhancement projects centered on strengthening human skills through training, today there is a recognition that the broader social, economic, and political context needs to be taken into account for any project to have a feasible possibility of success. Teachers and trainers can transfer information effectively, but trained individuals need a facilitating environment to apply their acquired knowledge. To have more analytic value, indicators of capacity enhancement have to be defined for these different analytical dimensions.

b) Capacity enhancement indicators acquire operational value when they refer to concrete development objectives and the actors towards which capacity enhancement projects are directed. In abstraction, indicators lose analytic utility. Thus, to build indicators it is essential to address two central questions: capacity for what? and capacity for whom? Indicators of capacity of a statistical agency, for example, will be different from indicators of organizational capacity of public bureaucracies.

c) Capacity enhancement is a dynamic process of learning and adaptation. To gauge this process, indicators require the definition of benchmarks or norms that allow the analyst to assess different levels of capacity along a continuum. Defining these benchmarks may be difficult because they are often based on subjective perceptions and are not always value-free. However, some minimum level of consensus among experts is needed for benchmarks to have any utility as measurement tools.

d) Capacity enhancement depends first and foremost on the existence of political will and commitment on the part of the recipients. A teacher's success depends highly on the will and motivation of the student to learn. Country ownership and motivation are therefore the single most important determinants of the effectiveness of capacity enhancement projects.
Evaluating the extent of political will and motivation may be difficult, but it is essential to assess whether this element exists before any project is launched. The definition of the indicators, therefore, will need to be sensitive to the country's sense of ownership and the leaders' capacity and will to change in that direction.

e) Finally, most authors agree that aside from political will (or country ownership), capacity enhancement projects require "champions" of reform. As in all reform processes, capacity enhancement projects generate winners and losers. It is essential not only to minimize the losers and maximize the winners; in most cases, the success of any reform depends on good leadership, or, as Stephen Peterson (1994) argues, on the existence of "saints" (government reformers willing to introduce reforms and confront potential opposition). Although leadership is difficult to measure, capacity enhancement indicators need to be sensitive to this element, for in many cases, regardless of the quality of technical assistance projects, the success or failure of capacity enhancement projects depends on good leadership.

5. DESIGNING CAPACITY ENHANCEMENT INDICATORS: AN EXAMPLE

5.1 To illustrate how the analytic framework can be used to design indicators of capacity enhancement, this document provides a hypothetical example of a project designed to enhance the capacity of a country's statistical agency to collect and analyze data. Strengthening this country's statistical capacity is regarded as a crucial step for designing and implementing better social policies, and particularly better anti-poverty policies. Measuring and monitoring poverty requires stronger methodological and analytical tools, which in turn rely strongly on the quality and coverage of poverty data. Reducing poverty and inequality is the overall goal of the project, but this requires good anti-poverty policies, which in turn depend on the existence of good analytical and measurement tools. See Table 1.

5.2 Taking the strengthening of the country's statistical agency as an immediate goal, Table 2 suggests a variety of indicators designed to measure capacity enhancement for measuring, monitoring, and analyzing poverty in this country. These indicators were developed by combining the approach of Tobelem, who disaggregates capacity by different analytical dimensions, with that of the Paris21 Task Team, which gives particular attention to elements related to methodological soundness, integrity, and reliability of statistical data sources.
Table 1. CAPACITY ENHANCEMENT FOR POVERTY REDUCTION STRATEGIES

Overall goal: poverty reduction
· Reduction of poverty levels (reduction of the number of poor)
· Improvement in the distribution of income
· Creation of jobs; compensation of workers who lose their jobs; retraining
· Improvement of the quality of basic services: health, education, nutrition
· Alleviation of extreme poverty; protection of the most vulnerable
· Strengthening the targeting of social assistance programs; assessing the short-term impact of enterprise restructuring
· Inclusion of excluded social sectors (regional and ethnic considerations)

Intermediate goal: improving poverty reduction policies
· Improving poverty assessments using more reliable and consistent statistical information and based on an improved methodology
· Generating consensus on the number and characteristics of the poor, the poverty line, and welfare indicators (definition of the poverty line, number of poor, identification of populations at risk)
· Introducing new methods and tools for monitoring and evaluating the poverty impact of policies
· Promoting a dialogue on poverty issues
· Improving understanding of the tradeoffs and costs of different policy instruments and policies

Immediate goal: improving the methodology for measuring and analyzing poverty; improving the quality and coverage of poverty statistics
· Improve the methodology of poverty measurement; improve the household budget survey (HBS) and a system of administrative statistics for enhanced poverty monitoring and policy-oriented analytic work
· Improve analytic skills to forecast the impact of social reform policies on poverty
· Improve targeting methods and delivery methods
· Improve the quality, consistency, reliability, and coverage (particularly regional) of statistics on the poor
· Publicize and disseminate data to other agencies within the government
· Guarantee open access to the data
Table 2. STATISTICAL CAPACITY BUILDING INDICATORS
(Columns: Dimensions; Sub-dimensions; Indicators; Capacity Building Operationalization)

Institutional dimension. Sub-dimensions: 1. Constitutional framework; 2. Regulatory framework; 3. Legitimacy; 4. Ownership. Indicators at this level concern, for example, the collection of information by the statistical agency being assigned by law, the preservation of confidentiality, the autonomy and integrity of the agency, and the government's commitment of resources to its statistical functions.

Inter-Institutional dimension. Sub-dimensions: 1. Effective coordination of statistics; 2. Agreement between formally or informally related entities.

Organizational dimension. Sub-dimensions: 1. Distribution of functions; 2. Management; 3. Adequate buildings and equipment; 4. Civil service rules; 5. Budgeting; 6. Methodological soundness; 7. Accuracy and reliability; 8. Serviceability; 9. Accessibility. Indicators at this level include, for example, salary levels and hiring based on competence, the implementation of international standards, the conduct and monitoring of surveys, and the timeliness and periodicity of statistical outputs.

Individual dimension. Sub-dimensions: 1. Staff adequacy and expertise; 2. Skill level; 3. Continuous training and retention.

For each sub-dimension, the table pairs specific indicators with the corresponding capacity building operationalization.
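A structure like Table 2 can also be made operational for gap assessment: store the dimensions and sub-dimensions as a nested mapping, score each sub-dimension on the Paris21-style four-point scale, and flag those falling below a chosen benchmark. The sketch below is a hypothetical illustration; the dimension and sub-dimension names follow Table 2, but the scores, the threshold, and the variable names are assumptions introduced here:

    # Hypothetical capacity-gap screen over Table 2's structure.
    # Scores use the Paris21 four-point scale
    # (1 = undeveloped ... 4 = highly developed).
    framework = {
        "Institutional": {
            "Constitutional framework": 3,
            "Regulatory framework": 2,
            "Legitimacy": 4,
            "Ownership": 2,
        },
        "Inter-Institutional": {
            "Effective coordination of statistics": 2,
            "Agreement between related entities": 1,
        },
        "Organizational": {
            "Distribution of functions": 3,
            "Methodological soundness": 2,
            "Accuracy and reliability": 2,
            "Serviceability": 3,
            "Accessibility": 1,
        },
        "Individual": {
            "Staff adequacy and expertise": 2,
            "Skill level": 2,
            "Continuous training and retention": 1,
        },
    }

    GAP_THRESHOLD = 3  # sub-dimensions scoring below this are flagged as gaps

    for dimension, subdims in framework.items():
        gaps = [name for name, score in subdims.items() if score < GAP_THRESHOLD]
        print(f"{dimension}: {len(gaps)} gap(s) -> {gaps}")

In practice the scores would come from assessments of the kind described in Annexes 2 and 3 rather than being assigned by hand.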
ANNEXES

ANNEX 1: INDICATORS TO ASSESS CAPACITY. UNDP. CAN WE DO BETTER? INSIGHTS FOR CAPACITY DEVELOPMENT

Societal level. Focuses on the overall policy framework in which individuals and organizations operate and interact with the external environment.
· Policy Framework: What are the strengths, weaknesses, opportunities, and threats according to the socio-political, government/public sector, economic/technological, and physical environment factors operating at the societal level? Is the overall policy environment favorable?
· Legal/Regulatory Framework: Is the appropriate legislation in place and are these laws effectively enforced?
· Management/Accountability Framework: Are institutional responsibilities clearly defined and are responsible institutions held publicly accountable?
· Economic Framework: Do markets function effectively and efficiently?
· Systems-level Framework: Are the required human, financial, and information resources available?
· Process and Relationships: Do the different institutions and processes interact and work together effectively?

Institutional level. Focuses on overall organizational performance and functioning capabilities, as well as the ability of an organization to adapt to change.
· Mission and Strategy: Do the institutions have clearly defined and understood missions and mandates?
· Culture/Structure/Competencies: Are institutions effectively structured and managed?
· Process: Do institutional processes, such as planning, quality management, and monitoring and evaluation, work effectively?
· Human Resources: Are the human resources adequate, sufficiently skilled, and appropriately deployed?
· Financial Resources: Are financial resources managed effectively and allocated appropriately to enable effective operation?
· Information Resources: Is required information available and effectively distributed and managed?
· Infrastructure: Are material requirements, such as buildings, offices, vehicles, and computers, allocated appropriately and managed effectively?

Individual level. Refers to the process of changing attitudes and behaviors: imparting knowledge and developing skills while maximizing the benefits of participation, knowledge exchange, and ownership.
· Job Requirements and Skill Levels: Are jobs correctly defined and are the required skills available?
· Training/Retraining: Is the appropriate learning taking place?
· Career Progression: Are individuals able to advance and develop professionally?
· Accountability/Ethics: Is responsibility effectively delegated and are individuals held accountable?
· Access to Information: Is there adequate access to needed information?
· Personal/Professional Networking: Are individuals in contact and exchanging knowledge with appropriate peers?
· Performance/Conduct: Is performance effectively measured?
· Incentives/Security: Are these sufficient to promote excellence?
· Values, Integrity, and Attitudes: Are these in place and maintained?
· Morale and Motivation: Are these adequately maintained?
· Work Redeployment and Job Sharing: Are there alternatives to the existing arrangements?
· Inter-relationships and Teamwork: Do individuals interact effectively and form functional teams?
· Interdependencies: Are there appropriate levels of interdependence?
· Communications Skills: Are these effective?
________________________________________________________________________________
Source: Carlos Lopes and Thomas Theisohn, Can We Do Better? Insights for Capacity Development. United Nations Development Program, Forthcoming. pp. 54-55.

ANNEX 2: INDICATORS TO MEASURE INSTITUTIONAL CAPACITY GAPS. ALAIN TOBELEM (1992)

Analytical Viewpoint: Rules of the Game
· Governance: Recognizable role of the state (by sector); legitimacy of decision-makers; transparency of procedures; separate judicial system
· State Ownership of National Wealth and National Resources: Flexibility in ownership definition
· Civil Service Flexibility: Role and impact of performance in civil service management
· Flexibility in the Definition of the Role of the State: Whether given public functions can easily be allowed to become private sector functions
· Adequate and Complete Legislation: Nothing in the general legislation prevents implementing any part of the proposed work program; general legislation provides definitions as required by the proposed program
· Relevant Sectoral Legislation: Sectoral legislation does not contradict general legislation; sectoral legislation provides definitions as required by the program at stake
· Relevant Regulations: Sectoral legislation has been translated into adequate regulations; relevant regulations are clear and comprehensive
· Cultural Patterns: Cultural patterns of implementers and beneficiaries not harmed in any way by the proposed work program

Analytical Viewpoint: Inter-Institutional Relationships
· Comprehensiveness of Assigned Functions for Project Implementation: All implementation functions required have been assigned to every entity involved, thus identifying the implementing institutional universe
· Relationships Network Well Defined: All IIR-related tasks to be implemented by entities with an understanding about respective roles
· Agreements Entered Into: Agreement exists between entities having to relate formally or informally
· Potential Opposition: Directly or indirectly, IIR tasks will not be opposed by non-decision makers
· Beneficiaries' Agreement: Intended beneficiaries consulted and their viewpoint considered; their role well defined and accepted

Analytical Viewpoint: Internal Organization
· Distribution of Functions: Comprehensiveness of existing functions as compared with requirements; relevance of existing functions as compared with requirements; duplication/overlap of existing functions when applied to requirements
· Internal Relationships Flow: Internal relationships ensured for development activity implementation as required; internal relationships based on clear agreements
· Rules of the Game and Manual: Administrative and organizational rules of the game well defined as required and included in an ad hoc manual
· Management Procedures and Style: Management procedures made explicit to all implementing staff and creating a conducive institutional environment; management style conducive to obtaining maximum efficiency from staff skills
· Interpretation of Civil Service Rules: Liberal and stimulating way of applying civil service rules; internal personnel management using attractive procedures and incentives to retain and motivate skilled staff
· Technical Manuals: Existence of technical procedures and systems well defined in didactic manuals; internal harmony and agreement on the technical manual's content and procedures
· Physical and Financial Capacity: Space available corresponds to requirements for new projects; equipment in line with implementation requirements; recurrent money adequate for implementation of new tasks
· Institutional Development Function: Existence of an internal institutional development function; absence of an institutional development function makes it difficult to implement the particular program at stake

Analytical Viewpoint: Personnel Policy and Reward System
· Civil Service Career: Career exists and helps secure and motivate personnel required; salary levels and grids commensurate with level of qualifications and with the private sector
· Compensation Packages: Includes social incentives; includes accident/health insurance; includes provisions to leave and reenter the service

Analytical Viewpoint: Skills
· Information: Information on the politics of the project (priority, national importance); information on the project itself (objectives, resources, institutional responsibilities); information on specific aspects of task implementation
· Knowledge: Every piece of knowledge from academic background; the same, but with professional experience
· Know-How: Practical skills necessary to implement certain tasks; intellectual skills such as writing, preparing reports, and the ability to speak in public; skills linked to personality: public relations, diplomacy, etc.

Source: Alain Tobelem, "Institutional Capacity Analysis and Development System (ICADS). Operational Manual." Public Sector Management Division. Technical Department. Latin America and the Caribbean Region. World Bank. Occasional Papers, Num. 9, July 14, 1992, pp. 23-51.

ANNEX 3: STATISTICAL CAPACITY BUILDING INDICATORS. PARIS21 TASK TEAM

Statistical Capacity Building Indicators measure the statistical condition in a country through a prism that captures representative elements of these conditions:
· Sixteen quantitative indicators cover resources (domestically and externally funded annual budget, staff, and equipment), inputs (survey and administrative data sources), and statistical products.
· Eighteen qualitative indicators focus on relevant aspects of the environment (institutional and organizational), of core statistical processes, and of statistical products.

Aside from defining these indicators, the Paris21 Task Team establishes a four-level assessment scale and provides benchmark descriptions: level 4 applies to highly developed statistical activities; level 3 to moderately well-developed activities; level 2 to activities that are developing but still have many deficiencies; and level 1 to activities that are undeveloped. Ratings of 3 and 4 refer to activities that do not require external support.
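The Task Team's four-level scale embeds a simple decision rule: activities rated 1 or 2 are candidates for external support, while activities rated 3 or 4 are not. A minimal sketch of that rule (the level labels follow the description above; the function and variable names are assumptions introduced here):

    # Paris21-style benchmark levels, as described above.
    LEVELS = {
        4: "highly developed",
        3: "moderately well-developed",
        2: "developing, with many deficiencies",
        1: "undeveloped",
    }

    def needs_external_support(rating: int) -> bool:
        # Apply the Task Team's rule: ratings of 3 and 4
        # do not require external support.
        if rating not in LEVELS:
            raise ValueError(f"rating must be 1-4, got {rating}")
        return rating <= 2

    for rating in sorted(LEVELS, reverse=True):
        flag = "needs support" if needs_external_support(rating) else "self-sustaining"
        print(f"level {rating} ({LEVELS[rating]}): {flag}")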
Quantitative (Agency-related) Indicators. Each indicator is reported separately for the agency producing statistics on GDP, population, and household income/expenditure:
· Name of agency producing the statistics
· Government funding: current; capital
· Donor funding: funds (amount); TA expert working days; donor agency (name)
· Statistical staff: number; turnover (%)
· ICT equipment: mainframe (yes/no); internal network (yes/no); Internet dissemination (yes/no); PCs in use (number); website address
· Source of data used: household surveys/census; other surveys/census; administrative sources
· Data releases: publications/yearbooks; other releases

ANNEX 3 (CONTINUED): STATISTICAL CAPACITY BUILDING INDICATORS, PARIS21 TASK TEAM

Qualitative (Data-related) Indicators. Each indicator is rated separately for statistics on GDP, population, and household income/expenditure. Rating scale: 4 = highly developed; 3 = developed; 2 = largely undeveloped; 1 = undeveloped.

Prerequisites
1. Collection of information and preservation of confidentiality guaranteed by law and effective
2. Effective coordination of statistics
3. Staff level and expertise adequacy
4. Building and equipment adequacy
5. Planning, monitoring, and evaluation measures implemented
6. Organizational focus on quality

Integrity
1. Independence of statistical operations
2. Culture of professional and ethical standards

Methodological Soundness
1. International/regional standards implemented

Accuracy and Reliability
1. Source of data adequacy
2. Response monitoring
3. Validation of administrative data
4. Validation of intermediate and final outputs

Serviceability
1. User consultation
2. Timeliness of statistical outputs
3. Periodicity of statistical outputs

Accessibility
1. Effectiveness of dissemination
2. Updated metadata

Source: Lucie Laliberte, "Statistical Capacity Building Indicators. Final Report." Paris21 Task Team on Statistical Capacity Indicators. September, 2002.

ANNEX 4: EXAMPLES OF CAPACITY ENHANCEMENT INDICATORS PROVIDED BY MORGAN, 1997

· Whose capacity? Community water management committees. Critical function (capacity to do what?): Water pump maintenance in rural areas that cannot be properly serviced by regional authorities. Capacity indicator: a functioning Pump Management Committee that meets at least once per month and keeps the pump functioning 90% of the time in normal circumstances.
· Whose capacity? Research staff of government departments. Critical function: Need for government departments to carry out joint surveys of client farmers in the delta area of the cotton region. Capacity indicator: acceptance of survey methods as an effective tool by senior research officers and their incorporation into the work program of the agencies.
· Whose capacity? Regional managers and politicians. Critical function: Need for regional authorities to upgrade transportation facilities in the eastern part of the region. Capacity indicator: ability of the regional authorities to mobilize political support and local resources to support its position within central authorities.
· Whose capacity? Systemic capacity to manage the national park system in a small African country. Critical function: Need to improve interactions between national parks staff and local communities. Capacity indicator: increased use of the survey data in park planning parameters.

Source: Peter Morgan, "The Design and Use of Capacity Development Indicators." Paper prepared for the Policy Branch of CIDA. December, 1997.

Bibliography

Bahjat Achibache, Misha Belindas, Mustafa Dinc, Graham Eele, and Eric Swanson. "Strengthening Statistical Systems." Chapter 5. Poverty Reduction Strategy Papers Sourcebook. World Bank. 2002.

Eliot J. Berg, Rethinking Technical Cooperation: Reforms for Capacity Building in Africa. Washington DC: United Nations Development Program/Development Alternatives Inc., 1993.

Bruce R. Bolnick, "Establishing Fiscal Discipline: The Cash Budget in Zambia," in Merilee S. Grindle, ed., Getting Good Government. Harvard University Press, 1997.
Stephen Browne, Beyond Aid: From Patronage to Partnership. Aldershot: Ashgate, 1999.

Stephen Browne, ed., Developing Capacity Through Technical Cooperation: Country Experiences. UNDP. Earthscan Publications, 2002.

John M. Cohen and John R. Wheeler, "Training and Retention in African Public Sectors," in Merilee S. Grindle, ed., Getting Good Government. Harvard University Press, 1997.

John M. Cohen, "Building Sustainable Public Sector Managerial, Professional, and Technical Capacity: A Framework for Analysis and Intervention," Harvard Institute for International Development, Harvard University. Development Discussion Papers, n. 473. October, 1993.

Isidoro P. David, "Why Statistical Capacity Building Technical Assistance Projects Fail?" International Association for Official Statistics. Manuscript. 2001.

ECDPM, Evaluating Capacity Development. Issue 17, April, 2003. www.capacity.org

Sakiko Fukuda-Parr, Carlos Lopes, and Khalid Malik, Capacity for Development: New Solutions to Old Problems. United Nations Development Program and Earthscan Publications, 2002.

Merilee S. Grindle, ed., Getting Good Government: Capacity Building in the Public Sectors of Developing Countries. Harvard Institute for International Development. Harvard University Press, 1997.

Merilee S. Grindle and Mary E. Hilderbrand, "Building Sustainable Capacity in the Public Sector: What Can Be Done?" Public Administration and Development, Vol. 15, pp. 441-463, 1995.

Douglas Horton, Learning About Capacity Development Through Evaluation: Perspectives and Observations from a Collaborative Network of National and International Organizations and Donor Agencies. The Hague: International Service for National Agricultural Research, 2001.

Douglas Horton, "Planning, Implementing, and Evaluating Capacity Development," International Service for National Agricultural Research. Briefing Paper, No. 50, July 2002.

Edward V. K. Jaycox, "Capacity Building: The Missing Link in African Development," transcript of address to the African-American Institute Conference, African Capacity Building: Effective and Enduring Partnerships, Reston, VA, 1993.

Lucie Laliberte, "Statistical Capacity Building Indicators. Final Report." Paris21 Task Team on Statistical Capacity Indicators. September, 2002.

Carlos Lopes and Thomas Theisohn, "Can We Do Better? Insights for Capacity Development." Unpublished manuscript. UNDP. Forthcoming.

Peter Morgan, "The Design and Use of Capacity Development Indicators." Paper prepared for the Policy Branch of CIDA. December, 1997.

Peter Morgan, "Technical Cooperation: Success and Failure. An Overview," unpublished paper submitted to the UNDP. October, 2001.

Stephen B. Peterson, "Saints, Demons, Wizards and Systems: Why Information Technology Reforms Fail or Underperform in Public Bureaucracies in Africa." Harvard Institute for International Development, Harvard University. Development Discussion Paper, n. 48. May, 1994.

Stephen B. Peterson, "Hierarchy versus Networks: Alternative Strategies for Building Organizational Capacity in Public Bureaucracies in Africa," in Merilee S. Grindle, ed., Getting Good Government. Harvard University Press, 1997.

Alain Tobelem, "Institutional Capacity Analysis and Development System (ICADS). Operational Manual." World Bank. Public Sector Management Division. Technical Department. Latin America and the Caribbean Region. Occasional Papers, Num. 9, July 14, 1992.
Links on capacity enhancement:
www.capacity.org
www-wbweb.worldbank.org/prem/pas/capacity
www.undp.org/governance
www1.oecd.org/dac/tcnet
www.usaid.gov/democracy

Mission of World Bank Institute

The mission of WBI is to help World Bank clients and staff acquire new development knowledge and skills through a variety of courses, seminars, and other learning events. It designs programs on topics related to economic and social development for governments, nongovernmental organizations, and other stakeholders. The Institute produces and disseminates publications and electronic information products that support these objectives.

For information on WBI publications write to:
Publications
WBI
The World Bank
1818 H Street, NW
Washington, DC 20433
Tel: 202-473-6349
Fax: 202-522-0401

Visit us on the World Wide Web at: http://www.worldbank.org/wbi