84381

Managing Knowledge Results: An Exploration of the Systems and Practices of Development Agencies

Dawn Roberts
World Bank Institute, Capacity Development and Results
October 7, 2013

Acknowledgements

This paper was authored by Dawn Roberts, with Cristina Ling as the task team leader and under the guidance of Samuel Otoo, Manager, Capacity Development and Results. Many thanks to Violaine Le Rouzic for providing feedback.

Introduction

The World Bank Institute (WBI) is a global connector of knowledge, learning, and innovation for poverty reduction. To best leverage knowledge for development effectiveness, WBI has worked to foster a results culture among its staff and to establish a results infrastructure. Institutionalizing a comprehensive and integrated approach to results management is an instrumental step toward building an understanding of what works, and in what context, for development activities with knowledge components. The ongoing shift to a results focus at WBI is consistent with the growing concern across the broader development community that the outcomes of capacity development and knowledge initiatives have not been sufficiently defined or documented. Donors agree that knowledge services should be demand-driven, country-owned, and built on existing capacity, and there is an emerging consensus on the elements that characterize capacity development. Critical factors include not only the technical dimensions of organizations but also local political and governance-related aspects of the development context. Knowledge results thus extend beyond the knowledge and skills gained at the individual level to include organizations, institutions, networks, and other systems, depending on the approach of the specific development agency.
Given this reality, development agencies and knowledge organizations have individually and collectively grappled with how to measure and manage knowledge results to promote evidence-based decision-making and increase aid effectiveness. In the spring of 2013, WBI’s Capacity Development and Results practice (WBICR) conducted an exploratory study to examine the range of practices and systems in place at knowledge organizations to plan for and manage results. The study team collected information from 13 development agencies through a desk review of reports and other artifacts and through interviews with 20 individuals familiar with their organizations’ management of capacity development results. This report spotlights interesting approaches and methods used by these organizations to further the dialogue on how to plan for and strengthen the outcomes of knowledge activities going forward. The list of organizations and individuals interviewed, along with the information sources they provided, is in the annex.

Conceptual Approach

Knowledge organizations embrace a broad range of conceptual frameworks and methods to guide and assess how the capacities of individuals, organizations, policy frameworks, and societies are being enhanced to advance development objectives. A few development agencies do not stipulate any standard approach for identifying knowledge results and instead rely on a decentralized structure in which local teams decide how to define their own theories of change for capacity development and how to measure outputs and outcomes. Most of those interviewed, however, have adopted a standard framework or set of principles that informs or even codifies the achievement of capacity development results across their organizations.

WBI’s Capacity Development Results Framework

At the center of WBI's approach to knowledge results management is the Capacity Development Results Framework (CDRF), which underpins WBI's overall strategy, business processes, and reporting.
The framework treats capacity development as a process of empowering local agents to change the constraining or enabling characteristics of institutional conditions and so advance the achievement of development goals (Figure 1). This common, systematic approach to the identification, design, and monitoring and evaluation of learning for capacity development offers several valuable functions for practitioners:

- Guiding local stakeholders through the process of building their own “theory of change”
- Defining a change process logic to facilitate the assignment of measurable results indicators
- Prescribing sets of intermediate and final outcome indicators that can be flexibly applied across sectors and countries
- Allowing for adaptability by signaling needed program adjustments during implementation

The focus on change, and the definition of capacity development as the process whereby change is enabled, allows practitioners to apply specialized knowledge to capacity development initiatives from across the broad spectrum of governance, political economy, psychology, social accountability, and organizational and institutional development.

Figure 1. WBI’s Capacity Development Results Framework

The ability to measure outcomes at two levels in relation to a development goal offers particular value for WBI and its development partners. The path to desired longer-term institutional changes involves improving the disposition or abilities of key stakeholders who can initiate or manage needed changes. These shorter-term, observable intermediate capacity outcomes can include raised awareness, enhanced knowledge and skills, improved consensus and teamwork, strengthened coalitions, enhanced networks, and/or new implementation know-how. The identification of two levels of outcomes supports WBI’s results-focused approach for advancing toward a development goal. Examples of these longer-term and intermediate capacity changes are shown in Table 1.

Table 1.
Examples of WBI’s Capacity Development Results

Longer-Term Capacity Changes
Description: Institutional changes are targeted in three broad areas: strength of stakeholder ownership, efficiency of policy instruments, and effectiveness of organizational arrangements. 19 standard characteristics for assessment are identified across these capacity areas.
Results Examples:
- Stakeholder ownership / Transparency of information: Kenya made country-level public expenditures data available for the first time via a new portal (a web site with data previously not published).
- Policy instruments / Ease of administration: Liberia simplified business regulations and requirements in the investment code (ministry data on license processing times).
- Organizational arrangements / Operational efficiency: The operational cost ratio of the water and sewerage authority improved from 21% to 50% (audited financial statements).

Intermediate Results (Outcomes or Outputs)
Description: Stakeholders are empowered to manage or initiate needed changes. Evidence of an improvement in the client’s disposition or ability to effect change can reflect six types of intermediate capacity outcomes: raised awareness; enhanced knowledge and skills; improved consensus and teamwork; strengthened coalitions; enhanced networks; and new implementation know-how.
Results Examples:
- Raised awareness: Reformers within government were inspired by the Bank’s Open Data, Open Government initiatives to launch one of the first and most comprehensive Open Data portals in Sub-Saharan Africa.
- Strengthened coalitions: A multi-stakeholder coalition emerged during a national visioning workshop in Liberia and successfully pushed for administrative reforms to improve the business environment.
- New implementation know-how: A rapid results initiative deployed in the context of a Bank investment project identified key opportunities for improved efficiency of a national water and sewerage company.

The UNDP Capacity Measurement Framework

The United Nations Development Programme (UNDP) supports programmatic responses that address the enabling environment (policies, laws, and regulations) as well as the organizational (business processes, management systems) and individual (training) levels. Its approach to measuring capacity therefore focuses on results at three linked levels along a results chain:

- Impact—changes in people’s well-being;
- Outcomes—changes in institutional performance, stability, and adaptability; and
- Outputs—products produced or services provided based on changes in institutional arrangements, leadership, knowledge, and accountability.

UNDP has developed a framework for defining and measuring these three types of results, recognizing that all capacity development builds on a foundation of existing competencies and resources. For each initiative, the measurement of changes in capacity focuses on one or more aspects of the overall framework shown in Figure 2.

Figure 2. UNDP’s Framework for Measuring Capacity
Source: UNDP 2010. Measuring Capacity.

While the measurement of progress against national development goals has generally been well articulated, UNDP’s framework plays an instrumental role in clarifying that the key to this progress is continuous improvement in the performance, stability, and adaptability of the national institutions responsible for development. These improvements are reflected by changes in an institution’s ability to convert inputs to productive use (performance), seek resolution to problems and remove barriers (stability), and adapt to changing realities and demands (adaptability). These changes in national institutions indicate outcomes in the enabling environment and at the organizational level. This desired strengthening of institutions responsible for development is contingent on the establishment of the needed policies, systems, processes, and mechanisms.
The “levers of change” are assessed by measuring outputs—the products produced and services provided based on the capacity development core issues. UNDP therefore targets investment and outputs related to institutional arrangements, leadership, knowledge, and accountability. Examples of both levels of capacity development results for UNDP are in Table 2.

Table 2. Examples of UNDP’s Capacity Development Results

Longer-Term Capacity Changes
Description: Longer-term capacity development outcomes are sustained beyond the project term and reflect 3 types of changes in institutions: performance (effectiveness and efficiency); stability (institutionalization and risk mitigation); and adaptability (investment for innovation and continuous improvement).
Results Examples:
- Performance (effectiveness): Department of Forestry protects area of land covered by forest, as measured by % of forestland with adequate fire safety measures as specified by the forest protection policy.
- Stability (institutionalization): Local governments use standard operating procedures, developed by the Ministry of Local Government, as measured by the rate of compliance with standard operating procedures.
- Adaptability (investment for innovation): MoH improves distribution mechanisms of medicine to rural areas, as measured by coverage of rural areas with distribution mechanisms.

Intermediate Results (Outcomes or Outputs)
Description: Shorter-term outputs resulting directly from capacity development activities reflect changes in the products or services provided by an organization. There are 4 areas of capacity outputs: institutional arrangements (institutional reform and incentive mechanisms); leadership; knowledge (education, training, and learning); and accountability (accountability and voice mechanisms).
Results Examples:
- Institutional arrangements: Business process maps developed—% of critical processes with clearly documented requirements for output quality, an information flow map, a workflow map, and realistic and ambitious performance improvement targets.
- Leadership: Clear vision defined—% of stakeholders who understand the vision and believe the organization has clear goals for the medium term.
- Knowledge: Education reform strategy for professional learning development implemented—approval of policies that directly support targeted professional learning opportunities in sectors most in need of improvement.
- Accountability: Integrated M&E framework implemented—existence of nationally recognized M&E standards and a certification system.

UNDP’s focus on changes in the enabling environment and at the organizational and individual levels reflects an approach widely used among the donors interviewed. Variations on this interpretation, and the importance of measuring outputs as a capacity result (products produced and services provided), are described in the following spotlighted approaches of other knowledge organizations.
Open Systems Approach

The interviews and materials collected from development agencies during this study highlighted an emerging consensus that capacity development is fundamentally about facilitating change processes that result in organizational and system-wide reform, shifting away from the narrower focus on training and human resource development. At the heart of this evolution is the interpretation of one or more organizations functioning as elements in a wider system, a conceptual approach that has continued to garner attention and foster discussion since 2005.(1) This framework, shown in Figure 3, was described by several organizations in this study as instrumental in shaping their approach to capacity development results management, namely the European Commission (EC), the Asian Development Bank (ADB), the Danish International Development Agency (DANIDA), SNV, the Organisation for Economic Co-operation and Development (OECD), the Japan International Cooperation Agency (JICA), and the European Centre for Development Policy Management (ECDPM).

Figure 3. Analytical Framework—Organizations as Open Systems
Source: N. Boesen. 2010. “Chapter 6. Institutions, Power, and Politics—Looking for Change Beyond the Boundaries, the Formal and the Functional.” In J. Ubels et al., Capacity Development in Practice. London, UK: Earthscan Ltd. (Reproduced and used broadly in donors’ guides to capacity development.)

The framework is based on a series of key assumptions that articulate a capacity development results chain. First, organizations operate within a context. Second, their performance leads to outputs. These outputs produced by the organizations are what lead to outcomes and impact. In short, the chain of causality from capacity to impact is subject to a broad array of influences, and the organizational outputs, as an immediate step in the results chain, are viewed as a useful proxy indicator for capacity.

(1) The earliest uses of this framework identified during this review were in DANIDA’s Results-Oriented Approach to Capacity Change (ROACH) methodology.

ADB’s Practical Guide to Capacity Development in a Sector Context (2011), developed jointly with the European Commission, seeks to clarify how outputs from the organization—in terms of services, products, and regulations—can be identified as the specific outcomes of the CD process. In fact, two logical chains overlap: that of the sector organization(s) and that of the CD process. The alignment of these two logics and their usefulness for defining and measuring capacity development results is shown in Table 3.

Table 3. Capacity Development Intervention Planning—Combining Sector and Capacity Development Logics
Source: Asian Development Bank 2011. A Practical Guide to Capacity Development in a Sector Context.

Based on this interpretation, capacity development outcomes are reflected by sector outputs. Longer-term capacity changes are therefore those improvements in organizational performance and capabilities that are sustainable beyond the project term, whereas the intermediate results are the direct changes in structures and competencies that result from the learning process at the individual and institutional levels. An illustration of how this logic can be tracked and the results measured is in Table 4. This methodological approach has also been translated into an operational framework that is consistent with the traditional logical framework matrix and considers the "capacity" level within the results chain.(2)

Table 4.
Examples of the EC’s Capacity Development Results

Longer-Term Capacity Changes
Description: Capacity outcomes reflect longer-term changes in organizational performance and capabilities that are sustainable beyond the project term.
Results Examples (irrigation example):
- Effective maintenance, rehabilitation, and upgrading of services to users: performance monitoring shows 90% of scheduled maintenance tasks completed on time.
- Water users’ associations (WUAs) serviced with training and advisory support by the Water Resources Department (WRD): annual client survey confirms that 80% of WUAs are satisfied with the advisory support provided by WRD.

Intermediate Results (Outcomes or Outputs)
Description: Shorter-term capacity development outputs achieved during the project term reflect changes in organizational structures and/or internal competencies and skills that result from the learning process at the individual and/or institutional levels.
Results Examples (irrigation example):
- WRD has functioning units for participatory irrigation management, social development, and dam design and safety: functional units confirm basic capacity (staff, systems, business processes, management, and coordination with other units) in internal self-assessment after 2 years, and further increased capacity over years 2–5.
- Results-oriented and participatory management and leadership exist in WRD: sampled WRD staff assessed management and leadership to be more results-focused and participatory.

One of the imperatives that has emerged from the growing attention to systems thinking is the need to learn by doing. Many practitioners argue that, given the multi-layered transformative process of capacity development, interventions cannot be defined too precisely in advance. Instead, an incremental design process will succeed better given the constantly changing external factors.
This recognition has contributed to a growing tension between standard results-based management and complex adaptive systems thinking, in which results and interventions are defined and redefined during implementation. This perspective has intensified the focus among many donors on defining the targeted capacity—for what, for whom, and in what context. By extension, any indicators to monitor capacity changes need to be designed within the specific context.

The European Commission’s Rapid Assessment of Capacity provides a useful example of how an M&E framework is designed within an open systems approach. As shown in Figure 4, guidance for a standard intervention logic provides a CD roadmap that then needs to be adapted in each case for use under local conditions. This graphic format highlights only the critical levels (enabling factors and inputs/outputs/outcomes) to allow for the basic identification of a chain of effects linked to a context. These levels might be complemented by other intermediate or longer-term levels as appropriate for the evaluation.

(2) See Tool 8: Logical Design of CD Processes and Support to CD, in EuropeAid 2010. Toolkit for Capacity Development. Tools and Methods Series, Reference Document No. 6.

Figure 4. The EC’s Standard Intervention Logic for the Evaluation of Capacity Development Support
Source: European Commission 2012. Evaluation Methodology & Baseline Study of European Commission Technical Cooperation Support.

The 5C Approach

Given the shortcomings of formal planning models for capacity development, ECDPM has worked to apply complex adaptive systems thinking in its conceptual approach to managing knowledge results. ECDPM’s framework for capacity describes “soft” abilities and attributes that actors must have to deliver the mandates of organizations. Individual competencies—skills, abilities, and motivations—lead to collective capabilities.
As shown in Box 1, the skills and abilities of a group or organization to achieve objectives and sustain itself can be categorized in terms of five core capabilities, which frame the “5C Approach.” These capabilities, in turn, contribute to enhanced system capacity—the overall ability of a system to make a contribution.

Box 1. Five Core Capabilities Defined by ECDPM
- To commit and engage: volition, empowerment, motivation, attitude, confidence
- To carry out technical, service delivery, and logistical tasks: core functions directed at the implementation of mandated goals
- To relate and attract resources and support: managing relationships, resource mobilization, networking, legitimacy building, protecting space
- To adapt and self-renew: learning, strategizing, adaptation, repositioning, managing change
- To balance coherence and diversity: encouraging innovation and stability, controlling fragmentation, managing complexity, balancing the capability mix
Source: ECDPM 2008. Capacity Change and Performance: Insights and Implications for Development Cooperation.

The 5C Approach is applied in various donors’ approaches to monitoring and evaluating capacity development initiatives. Projects and programs supported by funding from the Netherlands, in particular, are required to incorporate indicators for monitoring changes in the five core capabilities. For example, SNV distinguishes three interconnected outcome types: first the capacities developed, which are then followed by improved performance and an improved enabling environment. In this case, the intermediate results directly under project control are changes in the capacities of clients. These changes are assessed in terms of the five core capabilities, as shown in Table 5.

Table 5. Examples of SNV’s Capacity Development Results

Longer-Term Outcomes (“improved performance” and “improved enabling environment”)
Description: SNV defines three types of outcomes that reflect capacity changes. Two of these are longer term and outside the direct span of SNV’s control, but they reflect important capacity changes that should be planned for and monitored: improved performance and an improved enabling environment.
Results Examples:
- Improved performance: improved productivity (quality, quantity) of farms and firms, as measured by the number of client groups that have improved productivity of the targeted farms and enterprises/firms (specifying whether in staple food crops, cash crops, or meat and dairy).
- Improved enabling environment: enforcement by (national/local) governments of inclusive policies and legal frameworks, as measured by the number of clients whose members or target groups benefit from inclusive food security policies, rules, and regulations.

Short-Term Outcomes (“capacities developed”)
Description: By developing the capacities of client groups, SNV supports them in improving their performance, which contributes to impact. “Capacities developed” is the first (intermediate) development result for SNV, and the only type of outcome directly within SNV’s control.
Results Examples: Improved capacity of clients/groups—the number of client groups whose capacity improved, as follows:
- number with improved capability to relate
- number with improved capability to act and commit
- number with improved capability to adapt and renew
- number with improved capability to balance coherence and flexibility
- number with improved capability to deliver development results

Causally Interdependent Capacity Changes

The evolving thinking about open systems and the multidimensional aspects of capacity development has led some knowledge organizations to adopt a less linear approach to articulating results.
For example, GIZ has developed an integrated results model to simplify the representation of a progressive sequence of causally interdependent positive changes. The specified multidimensional chain of outcomes is what GIZ envisions achieving together with development partners. The results model, shown in Figure 5, reflects the change processes within a given sector that GIZ and its partners want to contribute to through their interventions.

Figure 5. The GIZ Results Model

A key feature of this results model is that the development partners must identify their sphere of responsibility, within which they can be expected to influence capacity changes. Alternative options for action are possible at each step, and therefore strategic options should be negotiated with partners and communicated to commissioning parties and clients throughout the development process. The results model is designed to be compatible with the results logic of other development agencies while remaining flexible enough for use across all of GIZ’s business areas and instruments.

The management model Capacity WORKS (Figure 6) complements the GIZ results model by providing a structure for planning, designing, and adapting a project’s intervention architecture. As the interdependencies between various stakeholders increase, the pressure on joint steering grows, since negotiations have to produce decisions that all sides can uphold. Capacity WORKS is designed to facilitate that kind of cooperation. It works well in any multi-organizational context where objectives can only be achieved if stakeholders understand their interdependency in the planning, implementation, and evaluation of projects. The model provides a management toolbox focused on five success factors: strategy, cooperation, steering structure, processes, and learning and innovation.
The starting point for project management is always the assessment of the political and societal context and actors, given that the connectivity of projects to the existing political culture and societal dynamics is key to achieving results. Capacity WORKS is an adaptive systems thinking approach in which results and interventions are revised and redefined during implementation: the project’s architecture is reviewed, (re-)designed, monitored, and corrected on an iterative basis, always in relation to the five success factors.

Figure 6. The Management Model Capacity WORKS

The flexibility of this approach for managing results means that GIZ does not necessarily specify whether targeted results are shorter-term outcomes or outputs (in terms of improved products or services of organizations) or longer-term capacity changes (in terms of sustained changes in abilities and performance). Instead, results are mapped along a logical change sequence, with new strategic options for influencing the change continually introduced along the way. Once the outcomes have been articulated with the results model, GIZ assigns indicators, as shown in the examples in Table 6. These are the basis of the results-based monitoring system, informing the mutual steering of project implementation with partners.

Table 6. Examples of GIZ’s Capacity Development Results

Longer-Term Capacity Changes
Description: No clear distinction is set for CD indicators, and the results model is flexible. Indicators are formulated not only for objectives but also for the steps (results) leading toward objectives, since CD can occur below the objective level. The lifespan of a German development program is 12–15 years, so longer-term results would be those that require more than one project phase (up to 3 years) to complete.
Results Examples (organic agriculture in Serbia):
- Improved quality of OA products (e.g., number of Serbian OA products that get EU certification through Serbian certification bodies)
- Productivity of OA increases
- More farmers convert to OA
- Demand for OA products in Serbia increases
- Marketing channels for OA products are improved

Intermediate Results (Outcomes or Outputs)
Description: The GIZ results model is flexible and non-linear. Typically, the shorter-term results relate to those CD results that can be achieved during an individual project term (3 years or less).
Results Examples (organic agriculture in Serbia):
- Extension services, schools, and universities integrate OA in their programmes
- Ministry of Agriculture improves the policy, legal, and regulatory framework for OA in line with EU standards
- Serbian certification and control system for OA products is established
- Joint on-farm research and development projects between farmers and researchers exist
- Offer of relevant services for OA established by BMOs and NGOs

The adoption of a multidimensional approach to defining results and the openness to reviewing strategic options throughout implementation are increasingly common across the development community as knowledge organizations try to understand what works in capacity development and scale up successful innovation. For example, those interviewed in SNV’s Planning, Monitoring and Evaluation Unit described a similar process of mapping results in terms of a “cloud of outcomes” to demonstrate the pathways to sustainable institutional change.

The RAPID Outcome Mapping Approach

The Research and Policy in Development (RAPID) program at the Overseas Development Institute (ODI) has supported capacity development to improve the use of research in informing policies and practices.
This focus on developing the capacities of think tanks, networks, policy makers, and others to base development decisions on evidence has been pursued largely within a complex adaptive systems paradigm—testing approaches and continually refining and revisiting them to learn what works. RAPID’s capacity development initiatives have focused on three common levels of capacity: individual skills and abilities; institutional structures, processes, and resources; and systems, such as coherent policies or coordination across sectors, among others.

ODI has explored and tested various tools to plan for and identify the results of the RAPID program. One promising method has been the development of the RAPID Outcome Mapping Approach (ROMA), which built on the work of the Outcome Mapping Learning Community funded by the International Development Research Centre. The mapping process helps a project, team, or program define targeted actors, desired changes, and appropriate strategies to achieve those changes. As shown in Figure 7, the process is an iterative one that allows a team to map how research can be used to change the behaviors of key stakeholders. The application of this approach has served as a useful stepping-off point for ODI to develop and test a range of tools for continuing to explore how best to manage knowledge results within complex systems.

Figure 7. ODI’s RAPID Outcome Mapping Approach
Source: Mendizabal, E., A. Datta, and J. Young. 2011. “Developing Capacities for Better Research Uptake: The Experience of ODI’s Research and Policy in Development Programme.” Overseas Development Institute Background Note.

Monitoring and Reporting Systems

The interviews with the knowledge organizations also explored the various aspects of results management, starting with the design phase and moving through monitoring, evaluation, and reporting.
Additional questions focused on how monitoring data are stored and analyzed and whether emerging outcome information during implementation is used for adaptive management. A prominent theme of the discussions centered on the degree to which the approach to results management was comprehensive and integrated into all phases of the program or project cycle. Integrated Results Management during the Project or Program Cycle WBI has worked to establish and refine a comprehensive results infrastructure to guide and support an integrated approach to managing knowledge results. The Capacity Development Results Framework underpins WBI's overall strategy, business processes, and reporting. As shown in Figure 8, the CDRF supports WBI's results management from the design stage throughout the results cycle and at the portfolio level through the aggregation and analysis of standard types of outcomes. In theory, the consistent documentation of results information before and during implementation and through client feedback also facilitates the work of independent evaluators. Figure 8. WBI’s Results Management Cycle 20 Across the World Bank Group, WBI is the standards setter for all capacity development interventions in the TE product line (formerly for “external training”). The Capacity Development Results Framework is embedded in WBI’s systems and processes for TE initiatives, so that teams plan for, code, and report on capacity development outcomes using a consistent approach. A central data system is the repository for results data from individual projects throughout the project cycle entered by project teams using standard reporting forms and templates:  The task team leader (TTL) first creates an Activity Initiation Summary (AIS) in which the specific development objective is stated. The AIS must be approved by the practice manager and then a project identification code is assigned that serves as a unique identifier throughout the entire cycle of the initiative. 
• A full Concept Note (CN) is developed next, describing how the team plans to achieve and demonstrate targeted results. The standard CN template calls for a description of the entire results chain, starting with the higher-level development objective. Questions elicit narrative descriptions of the envisioned change process, the content and design, the proposed indicators, and the planned evidence of results. An important function of the CN template is not only to collect all of this qualitative information on how and why the capacity development support will be implemented but also to classify the targeted outcomes at two levels in a standardized format, as shown in Box 2. The use of closed-ended choices to identify targeted institutional changes and intermediate capacity outcomes helps teams define desired changes and allows for the aggregation and analysis of results across WBI initiatives.

• During implementation, the TTL is required to report at least every six months on progress related to deliverables and results in an Activity Update Summary. Throughout the initiative, the team collects simple planned and opportunistic evidence of results (including before-and-after data) and files this evidence in the World Bank's official electronic archival system. The interim results update form can then be updated with milestones, a description of progress, and the archival system's link to the evidence.

• Within six months of completion, the TTL is required to fill out the Activity Completion Summary for management approval. Results data can be added later if data collection is still underway. Client feedback, collected during implementation and after the final delivery, is included in the results documentation where possible.

Box 2. Identifying Two Levels of Results in WBI's Concept Note Template
1. Development Objective
1a. What is the Development Objective that this activity is seeking to achieve?
1b.
Select below the main area(s) of institutional change targeted.
☐ Inclusiveness of stakeholder ownership strengthened
☐ Efficiency of policy instrument(s) increased
☐ Effectiveness of organizational arrangements improved
☐ Other, specify:
1c. Select below the main type(s) of intermediate capacity outcome targeted.
☐ Awareness raised
☐ Knowledge or skills enhanced
☐ Consensus and teamwork improved
☐ Coalition(s) strengthened
☐ Network(s) enhanced
☐ New implementation know-how
☐ Other, specify:

The other development agencies included in this exploratory study were at various stages of implementing a more systematic approach to managing their capacity development results, and the interviews highlighted several interesting practices aimed at better institutionalizing a results focus across each organization's capacity development initiatives. One notable example comes from SNV, where a new comprehensive planning, monitoring, and evaluation (PME) tool was launched in 2013. SNV's PME tool is designed to manage capacity development intervention information efficiently and to operationalize the Managing for Results standards. The tool is web-based, and all SNV project teams are expected to enter data throughout the project cycle, as described in Table 7. Similar to WBI's approach, all teams plan and identify their results at the start of each initiative in relation to a higher-level development objective.

Table 7. Brief Overview of the Format and Components in SNV's PME Tool
Source: Roefs, M. 2011. Managing for Capacity Results in SNV. Presentation at the INTRAC Conference, June 2011.

Along with the PME management tool, SNV has formulated corporately harmonized impacts and outcomes with indicators and other tools to strengthen the results focus of its initiatives:

• Indicators.
Knowledge network leaders, senior sector advisors, and the PME unit at headquarters worked together to formulate standard impacts and outcomes and harmonized indicators for agriculture, water and sanitation, and renewable energy, allowing for comparison and learning among projects as well as for aggregating results at a higher level. Each project team is required to select at least one of the impacts and one of the outcomes with indicators to plan against, set a baseline for, monitor, and report on. Teams may also use project-level results and indicators beyond the standard ones assigned for their sector.

• Tools. Project teams receive guidance from headquarters to strengthen their results orientation but operate with local flexibility to design their own results chains and adapt tools as appropriate. The guidance includes various tools to strengthen all aspects of the planning, monitoring, and reporting process and improve the quality of the data entered into the PME forms. Two tools in particular were highlighted during the interview:
  o A guide to capacity assessment based on the 5C approach developed by ECDPM; and
  o A set of guidelines on how to organize a review session with stakeholders as one recommended method for monitoring project outcomes.

Whereas WBI and SNV share a central identity as capacity development organizations, other development agencies have a broader set of functions spanning both knowledge services and infrastructure investments. This mixed mandate makes the management of knowledge results more complicated, particularly because this focus has not traditionally been mainstreamed as a priority across the organization.
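The practical payoff of closed-ended outcome classification, whether through WBI's Concept Note choices or SNV's harmonized indicators, is that portfolio-level aggregation becomes a simple tally. The sketch below is purely illustrative: the project records, field names, and outcome codes are hypothetical and are not drawn from WBI's or SNV's actual systems.

```python
from collections import Counter

# Hypothetical project records using standardized, closed-ended outcome
# codes (loosely modeled on the intermediate capacity outcomes in Box 2).
projects = [
    {"id": "P001", "targeted": ["knowledge_skills", "network"],
     "achieved": ["knowledge_skills"]},
    {"id": "P002", "targeted": ["consensus_teamwork"],
     "achieved": ["consensus_teamwork"]},
    {"id": "P003", "targeted": ["knowledge_skills", "coalition"],
     "achieved": ["knowledge_skills", "coalition"]},
]

# Because every team uses the same codes, a portfolio unit can count
# targeted versus achieved outcomes across all initiatives directly.
targeted = Counter(code for p in projects for code in p["targeted"])
achieved = Counter(code for p in projects for code in p["achieved"])

for code in sorted(targeted):
    print(f"{code}: achieved in {achieved[code]} of {targeted[code]} projects")
```

Free-text outcome descriptions would need to be manually coded before any such tally were possible, which is one reason both organizations invested in harmonized categories up front.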
In this context, the representatives interviewed spotlighted useful systems and tools and noted ongoing areas for improvement, as in the following examples:

• UNDP's online CD Tracker is used to assess the level of CD integration in the planning of development projects, with ratings (or NA) on four dimensions: national partner-led process, sound diagnosis, comprehensive response, and clear results. The CD Tracker is linked to other management and reporting systems (e.g., ROAR) where country- and project-level indicators are entered and monitored. UNDP prioritizes a strategic planning process that involves consultation with stakeholders to assess current capacity and the changes needed to achieve development objectives. UNDP also has a capacity assessment tool that provides a methodology to be adapted to the local context; local ownership of and engagement in the capacity assessment is critical.

• The European Commission does not capture information on capacity outputs and outcomes in any central database, but a new quality assurance process requires that a yearly sample of 700 to 1,000 programs report on a standard CD indicator related to aid effectiveness criteria (i.e., local ownership, harmonization, alignment, and appropriate program implementation arrangements). Tools for effective knowledge sharing and CD are in use through the platform Capacity 4 Development (http://capacity4dev.ec.europa.eu). Tools and resources developed by the EC to date to support capacity assessment include:
  o A checklist for organizational assessment (helping teams track what an assessment has covered across five areas)
  o A guide for preparing terms of reference for capacity assessments
  o The Rapid Assessment of Capacity, a proposed evaluation methodology for technical cooperation support

• ADB developed a Capacity Development Framework and Action Plan and now tracks all data via an internal E-Ops system.
The system is designed to be comprehensive, with a corporate results framework at the highest level that cascades down to the results framework in each country partnership strategy and then to the project results frameworks themselves. However, the CD aspects of the system are still being developed, and there is no place for CD analysis in the templates for country strategies. CD assessment by project teams is encouraged by ADB but not required; CD diagnostics are considered discretionary, whereas certain other diagnostics are mandatory. A sample terms of reference for conducting a capacity assessment is available to support sector colleagues. Typically, project design includes a workshop to get stakeholders on board, during which a monitoring framework should be developed and entered into the E-Ops system.

• GIZ aims to mainstream capacity development into all projects and teams. Teams are encouraged to apply the management model Capacity WORKS for project management, and five basic tools that foster a capacity development approach have been mandatory since September 2013. Projects also have access to a suite of tools to develop, jointly with partner organizations, a capacity development strategy that guides their activities in a sector approach. No standard indicators are specified for capacity development, but GIZ is currently working to strengthen M&E systems and exploring the possibility of developing an indicator database. The standard GIZ results model described previously is used by all teams to plan for and monitor results.

• Oxfam GB plans for and assesses capacity development in terms of the development of partner organizations. The Oxfam program accountability and learning system (OPAL) is designed to capture results data, but capacity development indicators are not prescribed, well understood, or required.
In theory, the structure of the data system easily allows for the aggregation and analysis of outputs and outcomes, but an education process is needed for teams to understand the value of the data and to enter it consistently. Oxfam is currently setting up a global performance framework with a set of priorities and indicators articulated across six areas, and Oxfam GB will be transitioning to a new global data system aligned to this framework.

All of the organizations included in the study also noted challenges related to the management of knowledge results. Comments reflected interrelated, recurring themes about the obstacles encountered:

• Tools and standard practices developed by a central unit are not consistently applied in the field. While capacity needs assessment tools, monitoring methods, and various guidelines are available to project teams, practitioners choose which approaches to use based on their perceived value and the time, resources, and expertise available. For example, a JICA representative reflected that applying the tools in JICA's capacity assessment handbook requires a customized process, so that tools can be selected and combined in accordance with the economic and social conditions in partner countries.

• In many cases, a culture shift is needed to increase the focus on knowledge results. One ADB representative noted that CD diagnostics are considered discretionary compared to other, traditional diagnostics that are mandatory, and a DANIDA counterpart emphasized that tracking the "soft" outcomes of capacity development has been a lower priority than the more concrete outcomes of infrastructure investments.

• Using data for adaptive management and applying lessons learned are good ideas in theory, but it is not clear how much this happens in practice.
All of the agencies reported that their capacity development initiatives are monitored at least once a year, but they had only limited examples of how monitoring data were used during implementation or in future planning to improve project results. Some described the difficulty of getting timely approval to adjust a project's design midstream, while others noted that limited documentation in the system simply prevented them from knowing how often such adjustments were being made. Three organizations described a formal process in place to help ensure that project teams use monitoring data and lessons learned related to capacity development to maximize results:
  o GIZ has a two-level management response system in which lessons learned from evaluations are discussed and entered into a management response matrix. The team's proposed response to each recommendation must be approved by the country director (for project evaluations) or a higher-level steering committee (for independent sectoral evaluations), with a follow-up a year later to check whether the responses have been implemented.
  o ECDPM's regular internal evaluation and reporting process includes capturing stories about the contribution to and impact on policy change by individuals and teams on three occasions: presentations and discussions at the annual internal Centre Seminars, submissions as part of annual reporting, and an elaborate center-wide self-assessment once every five years. ECDPM also has a blog (Talking Points), which is its principal instrument for sharing learning and reflections on core issues with a wider community of practice.
  o The European Commission has tried to have teams summarize lessons learned in a brief to capture knowledge about what has worked. The idea is to support knowledge sharing via communities of practice, but the process has been challenging in practice.
Aggregate Reporting

Within WBI, individual initiative teams are responsible for observing and reporting on the changes to which their initiative contributed, and WBICR is responsible for measuring capacity development outcomes at the portfolio level. The systematic identification of intermediate capacity outcomes and institutional changes, from the concept note stage through to the Activity Completion Summary, allows WBICR to track the two levels of intermediate outcomes that contribute to the development objectives of WBI's initiatives. WBICR aggregates the results of all WBI's TE initiatives and issues reports to WBI's Senior Management Team and other stakeholders.

In September 2012, WBI signed its first Memorandum of Understanding (MOU) with the Managing Directors after an extensive participatory process to build its performance indicators around key strategic objectives. As shown in Table 8, the indicators reported for the MOU track the whole results chain, from inputs to outcomes and results. The MOU has been cascaded down to department units to ensure full alignment with the priorities of each vice presidential unit (VPU). WBI uses the MOU as the basis for its regular portfolio reviews and monthly monitoring reports to assess progress toward departmental and VPU-level targets.

WBI has actively worked with other units within the World Bank to facilitate management attention to quality at entry and results and to increase compliance with the MOU reporting requirements. Notable efforts have included the introduction of a Portfolio Dashboard enabling management to focus on concept note compliance, which has significantly improved the practice of ex-ante reviews. In addition, the system-based tool for TTLs to report on their results achieved a high level of compliance in FY13 (89 percent) as part of an intense effort in WBI referred to as "the surge for results."

Table 8.
Enhancing Accountability: From Inputs to Outcomes. Selected Indicators from WBI's Memorandum of Understanding (FY12 | FY13 Targets)

KNOWLEDGE & OUTCOMES
• Share of core knowledge services funded by Trust Funds: 50% | Monitored
• Percentage of TE/TA products completed with objectives accomplished: 75% | 80%
• Percentage of TE/TA completed in the last 12 months, or in the pipeline with more than $50,000 spent, that have NOT had a concept note review: 47% | 20%
• Percentage of ongoing TE initiatives with an approved AUS in the last 6 months: 89% | 85%
• Count of completed initiatives (TE + TA): 72 | 70

PRACTITIONER LEARNING
• No. of e-learning courses delivered to scale up learning through technology: 60 | 60
• No. of regional partner institutions supported in wholesaling WBI e-learning courses: 6 | 12
• No. of practitioner networks supported: 26 | 29
• No. of peer-to-peer knowledge exchanges supported: 189 | Monitored

INNOVATIVE SOLUTIONS
• No. of competitions and challenges conducted to surface/advance innovations: 5 | 10
• No. of social enterprises supported through the Development Marketplace: 32 | 40

COLLABORATIVE GOVERNANCE
• No. of leadership teams and multi-stakeholder coalitions supported to strengthen collaborative action: 109 | 68
• No. of countries supported in open budgeting & contracting to strengthen transparency in public expenditures: 14 | 22
• No. of countries supported in strengthening social accountability: 25 | 29

INTEGRATION WITH WBG OPERATIONS
• No. of WBG operations (projects, AAA & policy dialogues) supported: 114 | Monitored
• No. of Bank country strategies (CAS/CPS/ISN) with WBI contributions: 31 | 27

OUTCOMES / RESULTS
• Percentage of client respondents rating "overall usefulness" of WBI deliverables as high: 89% | 85%
• Percentage of completed TE that contributed to strengthened institutions and improved development actions: 73% | Monitored
• Percentage of ongoing TE that have contributed to institutional change or improved development action: 65% | Monitored
• Percentage of ongoing TE that have achieved intermediate capacity outcomes: 84% | Monitored

The articulation of each WBI initiative's change logic based on the CDRF, the documentation of results using WBI's system-based tool, and the role of WBICR in further analyzing and aggregating results combine to enable WBI to communicate how capacity development support contributes to the achievement of higher-level development goals. The added value of WBI's portfolio for regions, country partners, WBG networks, and the International Finance Corporation can now be defined more accurately. One approach WBI uses to communicate this value is to spotlight major results of the WBI portfolio and show how these support regional priorities (Figure 9).

Figure 9. Spotlighting How WBI Contributes to Results in the Regions

Other development agencies are also working to develop effective ways of reporting on their knowledge results, and a range of supporting systems, methods, and practices were highlighted in the interviews:

• UNDP launched the CD Tracker system in 2011 to improve project quality by rating various aspects of capacity development integration in the planning of new development projects. The CD Tracker itself is designed for quality assurance rather than results management, but it is linked to the broader data system (e.g., ROAR) in which project data can be aggregated to analyze aid effectiveness at the country level.
If the focus on knowledge results and better measurement at the project level grows, the CD Tracker can be used to flag projects with a high level of CD integration to guide the analysis of available results data in the broader system.

• SNV expects to be able to report more effectively on knowledge results at the aggregate level once the PME tool has been fully implemented. One challenge encountered in the tool's first year of operation is that project teams enter data less consistently after the planning stage.

• The EC has established a quality assessment identification system that translates the aid effectiveness principles into quality criteria to be mainstreamed into the EC project cycle. A sample of 700 to 1,000 programs is monitored and reported on each year. This system provides a useful marker of the extent and quality of the focus on capacity development, but there is not yet an effective central system for documenting and aggregating knowledge results.

• ADB reports on results by sector as captured in its E-Ops system, but there is not yet any systematic aggregation and reporting of capacity development results at the three designated levels (institutional, organizational, and network). Instead, progress reports on the implementation of the framework and action plan show how CD has been increasingly mainstreamed within ADB's portfolio by examining the number and quality-at-entry of projects identified as capacity development in accordance with set criteria.

• GIZ is currently developing indicators for aggregate results reporting in selected sectors, and some of these aggregation indicators measure CD. The plan is for data to be collected biannually.

• While not an implementing agency, the OECD provided an interesting snapshot of how to track and aggregate knowledge results related to public financial management by helping to institute standard practices and methodologies within country systems.
For example, budget database structures and procedures first adopted in OECD countries are now being used broadly by other countries to allow for standardized self-assessments.

Organizations face common obstacles in trying to report on their knowledge results. For example, an AusAID representative explained that documenting and analyzing standard types of capacity development outcomes at any aggregate level is not possible given the agency's strongly decentralized and "generational" approach, wherein teams have autonomy to define the outcomes and results desired from each individual investment. ODI works with projects on a contractual basis and therefore has limited opportunities to invest in or implement any centralized results infrastructure. Oxfam has guidelines for mandatory data entry but experiences low compliance among its teams.

Conclusion

Overall, the review of current systems and practices for managing the knowledge results of development agencies indicated that WBI's comprehensive, integrated approach is unique. WBI has undertaken extensive efforts to develop a typology of institutional changes and intermediate capacity outcomes and has invested in an extensive results infrastructure to guide and support the systematic reporting of knowledge results. Other knowledge organizations have developed, or are in the process of developing, valuable tools and processes that further refine this approach across the full range of knowledge initiatives to build an evidence base for understanding what works and what does not in capacity development.

The interviews with practitioners and the review of agencies' tools and reports highlighted two areas in particular where ongoing attention is warranted across the development community:

• There is a continuing tension between the traditional results-based management (RBM) approach and a complex adaptive systems (CAS) approach.
At the RBM end of the spectrum, project teams specify targeted outcomes and plan interventions in detail during the design stage and face notable obstacles in trying to adjust indicators or deliverables during implementation. At the CAS end of the spectrum, teams recognize that assigning targeted outcomes or planning the specific mix of interventions up front is difficult, and potentially even damaging for development objectives, given the constantly changing context and interaction among stakeholder groups. In fact, there is a growing convergence among donors affirming that both approaches add value to the management of knowledge results: identifying the kinds of outcomes needed for sustainable institutional change is best coupled with flexibility and careful monitoring during implementation, so that implementation plans and/or expected results can be adjusted to maximize aid effectiveness.

• A culture shift is needed to strengthen the mainstreaming of capacity development, the sharing and application of tools and methods, and the quality of data. Agencies presented numerous examples of how the "soft" outcomes of capacity development are typically not valued in the same way as the "hard" outcomes of infrastructure investments, and of how establishing guidelines and requirements for data entry does not guarantee compliance, given the range of pressures on teams to deliver development projects. Knowledge organizations universally struggle with how best to facilitate a paradigm shift that prioritizes capacity development results management.

Most importantly, this study highlighted a strong collaborative spirit across knowledge organizations. In many cases, the methods, frameworks, and tools described had been developed jointly with other agencies and/or had been shared and adapted for use.
Specialists who focus on managing knowledge results were eager to exchange information and viewed their participation in this review as part of an ongoing discussion. The comments, reports, and tools provided were in no instance designed to contradict the practices of others; instead, they reflected an ongoing effort to build a common understanding of how development aid can contribute to sustainable institutional changes in a local context.

ANNEX: Information Sources, by Organization

ADB
• Interview with Sandra Nicoll, Claudia Buentjen, and Liz Fischelis on April 23, 2013
• ADB (2011). Practical Guide to Capacity Development in a Sector Context. http://www.adb.org/documents/practical-guide-capacity-development-sector-context?ref
• ADB (2007). Guidelines for Preparing a Design and Monitoring Framework. http://www.adb.org/sites/default/files/pub/2007/guidelines-preparing-dmf.pdf
• ADB (2007; 2010 update). Integrating Capacity Development into Country Programs and Operations: Medium-Term Framework and Action Plan. http://www.adb.org/documents/integrating-capacity-development-country-programs-and-operations?ref=themes/capacity-development/publications
• ADB (2010). Capacity Development Action Plan: Annual Progress Report. http://www.adb.org/documents/capacity-development-action-plan-annual-progress-report-2010

AusAID
• Interview with Natashia Allitt on May 16, 2013

DANIDA
• Interview with Henning Nohr on May 16, 2013
• DANIDA (2011). Addressing Capacity Development in Danish Development Cooperation: DANIDA Guiding Principles and Operational Steps.
• Buhl-Nielsen, E., D. Dietvorst, J. Opio, J. Epitu, and F. Behnsen (2012). Capacity Development in the Water and Environment Sector in Uganda: 7-10 February 2012.

EC
• Interview with Paul Riembault, Maria Sancho-Hidalga, and Milena Reinfeldt
• European Commission (2010). Toolkit for Capacity Development: Tools and Methods Series, Reference Document No. 6. http://ec.europa.eu/europeaid/how/ensure-aid-effectiveness/documents/toolkit_cd_en_web_en.pdf
• European Commission (2012). New Project and Program Cycle Management Guidance. http://www.capacity.org/capacity/opencms/en/topics/monitoring-and-evaluation/guidance-and-evaluation-of-capacity-development-a-new-approach.html
• European Commission (2012). Evaluation Methodology & Baseline Study of European Commission Technical Cooperation Support. http://capacity4dev.ec.europa.eu/public-cd-tc/minisite/rapid-assessment-tool-capacity-development-rac

ECDPM
• Interview with Volker Hauck and Eunike Spierings on April 16, 2013
• ECDPM (2011). ECDPM Strategy 2012-2016. Maastricht. http://www.ecdpm.org/
• ECDPM (2011). ECDPM Strategy 2012-2016: Extended Results Framework. Maastricht.

GIZ / BMZ
• Interview with Andreas Schaumayer at the BMZ on May 6, 2013
• Interview with Godje Bialluch, Annika Schoenfeld, and Sabine Dinges on May 21, 2013
• GIZ (2013). Guidelines for Developing a Results-based Monitoring System (provided by the M&E unit)
• GIZ (2012). Developing the Results Model (provided by the M&E unit)
• GIZ (2012). Presentation on the Results Model with Examples
• Former GTZ (2009). Capacity WORKS: The Management Model for Sustainable Development. http://www.giz.de/de/leistungen/1544.html

JICA
• Email correspondence with Noriharu Masugi in May 2013
• Kharas, H., K. Makino, and W. Jung, eds. (2011). Catalyzing Development: A New Vision for Aid. Brookings Institution Press.
• JICA (2008). Capacity Assessment Handbook: Project Management for Realizing Capacity Development. Tokyo. http://jica-ri.jica.go.jp/IFIC_and_JBICI-Studies/english/publications/reports/study/capacity/200809/index.html

ODI
• Interview with John Young on May 30, 2013
• Mendizabal, E., A. Datta, and J. Young (2011). Developing Capacities for Better Research Uptake: The Experience of ODI's Research and Policy in Development Programme. ODI Background Note.

OECD
• Interview with Sara Fyson on May 13, 2013
• OECD (2011). Supporting Capacity Development in PFM: A Practitioner's Guide, Volume I.
• OECD (2010). Inventory of Donor Approaches in Capacity Development: What We Are Learning.

Oxfam GB
• Interview with Jennie Richmond on May 16, 2013

SNV
• Interview with Anita Van der Laan and Margriet Poel on April 24, 2013
• Ubels, J., N. Acquaye-Baddoo, and A. Fowler, eds. (2010). Capacity Development in Practice. http://www.snvworld.org/sites/www.snvworld.org/files/publications/capacity_development_in_practice_-_complete_publication.pdf
• Roefs, M., and S. Ooms (2011). Managing for Capacity Development Results. http://www.intrac.org/data/files/ME_conference_papers_2011/Working_groups_papers/Working_group_3/Managing_for_Capacity_Results_in_SNV_13_may_2011.pdf

UNDP
• Interview with Dipa Bagai on May 21, 2013
• UNDP (2008). Capacity Assessment Practice Note. http://www.undp.org/content/undp/en/home/librarypage/capacity-building/capacity-assessment-practice-note/
• UNDP (2010). Measuring Capacity. http://www.undp.org/content/undp/en/home/librarypage/capacity-building/undp-paper-on-measuring-capacity/
• Mericourt, B., and D. Bagai (2012). Review of Developing Capacities for Effective Aid Management and Coordination Project (DCEAMC). http://www.undp.org/content/dam/nepal/docs/reports/projects/UNDP_NP_Review%20of%20DCEAMC%20Project%20July%202012.pdf
• UNDP (2013). Guidance Note on the CD Tracker: Tracking the Integration of Capacity Development in UNDP Project Planning.

WBI
• WBI (2011). Overview of CDRF. http://wbi.worldbank.org/wbi/document/wbi-capacity-development-and-results-framework
• WBI (2011). Institutional Capacities and their Contributing Characteristics. http://wbi.worldbank.org/wbi/document/institutional-capacities-and-their-contributing-characteristics-institutional-diagnosticspr
• WBI (2011). Intermediate Capacity Outcomes. http://wbi.worldbank.org/wbi/document/intermediate-capacity-outcomes