ASSESSING FOREST GOVERNANCE
A Practical Guide to Data Collection, Analysis, and Use

This guide was prepared with the support of: www.recoftc.org, www.bmz.de, www.giz.de, www.fern.org, www.un-redd.org, www.maderaverde.org, www.forest-trends.org, www.efi.int, and www.fao.org/forestry/eu-flegt/.

Authors: Phil Cowling, Kristin DeValue, and Kenneth Rosenbaum

Disclaimer
All omissions and inaccuracies in this document are the responsibility of the authors. The views expressed in this guide do not necessarily represent those of the institutions involved, nor do they necessarily represent official policies of these institutions.

Suggested Citation
Cowling, Phil, Kristin DeValue, and Kenneth Rosenbaum. 2014. Assessing Forest Governance: A Practical Guide to Data Collection, Analysis, and Use. PROFOR and FAO. Washington, DC.

Photo Credits
All inside photos: Flore de Préneuf

Published in June 2014. Material in this publication can be copied and quoted freely provided acknowledgment is given.

TABLE OF CONTENTS

FOREWORD
ACKNOWLEDGMENTS
ACRONYMS AND ABBREVIATIONS
INTRODUCTION
OVERVIEW OF THE GUIDE
  Section I: Planning Your Assessment
  Section II: Implementing Your Assessment
  Section III: Using Your Assessment
  Postscript
  Annexes

SECTION I: PLANNING YOUR ASSESSMENT
  Overview
  Chapter 1: Setting the Objectives
    Step 1: Define the Why
    Step 2: Consider the Context (The Political and Institutional Context; The Economic Context; The Social Context; The Technical/Operational Context; The Environmental Context)
    Step 3: Set the Objectives (Examples of Objective Setting)
    Points on Process: Developing Shared Objectives
  Chapter 2: Developing a Work Plan
    Step 1: Identify Your Scope—What to Measure
    Step 2: Identify Your Approach—How Will You Get Your Information?
    Step 3: Who Will Do the Assessment?
    Step 4: When Will It Be Done, How Often, and for How Long?
    Step 5: How Much Will It Cost?
    Step 6: Write the Work Plan
    Points on Process: Communicating and Managing the Process
  Chapter 3: Planning for Data Collection
    Step 1: Decide What Aspects of Governance to Assess
    Step 2: Identify Potential Sources of Information (Written Materials; People; Physical Evidence)
    Step 3: Select Data Collection Methods (Finding Information on Background and History; on Governance Design; on Planning and Decision-Making Processes; on Implementation)
    Step 4: Develop Tools for Each Method (Desk Reviews; Expert Analysis; Key Informants; Focus Group Discussions; Workshops; Surveys)
    Step 5: Finalize Your Work Plan and Develop a Data Collection Manual
    Points on Process: Vetting the Methods

SECTION II: IMPLEMENTING YOUR ASSESSMENT
  Overview
  Chapter 4: Data Collection
    Step 1: Assemble and Train a Data Collection Team (Team Composition; Training Needs)
    Step 2: Collect the Data (Interview Techniques; Facilitation Techniques; Survey Administration; Coding)
    Step 3: Quality Assurance (Editing; Cleaning; Verification; Triangulation)
    Points on Process: Practical and Ethical Data Collection
  Chapter 5: Interpretation and Analysis
    Step 1: Process the Data (Data Organization; Data Summary; Visualizing Results)
    Step 2: Do Analysis
    Step 3: Make Recommendations (Priorities; Actions)
    Points on Process: Vetting and Validation of Analysis

SECTION III: USING YOUR ASSESSMENT
  Overview
  Chapter 6: Application of the Results
    Step 1: Decide on a Dissemination Strategy (Kinds of Outputs; Making Outputs Available; Drawing Attention to the Findings)
    Step 2: Implement the Strategy (Create a Draft of Your Main Output; Vet the Draft and Revise; Create Supplemental Versions or Outputs; Publish Your Outputs)
    Step 3: Institutionalize Further Assessment (An Institutional Home; A Base of Support; Leadership)
    Points on Process: Facilitate Use of Your Findings
  Chapter 7: Learning and Improvement
    Step 1: Begin Evaluation During the Assessment
    Step 2: Hold an Evaluation After the Assessment
    Step 3: Make the Evaluation Results Available
    Step 4: Keep the Door Open to Ongoing Feedback
    Points on Process: Conducting a Team Self-Evaluation

POSTSCRIPT
ANNEX I: CASE STUDIES (Indonesia; Tanzania; Ecuador; Liberia; Uganda)
ANNEX II: METHODS, TOOLS, GUIDANCE, AND REFERENCES
ANNEX III: CREATING BUDGETS
ANNEX IV: SAMPLE WORK PLAN OUTLINE
ANNEX V: CONCEPTS TO HELP IN DEVELOPING INDICATORS
ANNEX VI: GLOSSARY
BIBLIOGRAPHY

Boxes
Box 1: Some Key Terms
Box 2: Using the Same Terms—Forest Governance
Box 3: Assessing Context—Tools and Activities
Box 4: Staying SMART—Keeping Your Objectives Focused
Box 5: Gauge Your Resources
Box 6: Balancing Breadth and Depth of Assessment
Box 7: Remember the Context—Increase Impacts and Reduce Risk
Box 8: Methods, Approaches, and Types of Data
Box 9: Remember Your Participants and Your Audience—The Human Capacity of Those Who Provide and Use Your Information
Box 10: Tailoring Your Approach to Match Objectives—Mixing Methods
Box 11: Do You Need to Be Different? Designing a Completely New Approach vs. Working with Existing Forest Governance Assessment Approaches
Box 12: Engaging Stakeholders—The Benefits of Partnerships
Box 13: How Much Will It Cost?
Box 14: Review Your Work Plan
Box 15: Be Clear on What You Are Assessing
Box 16: Example of Narrative Description of What to Address
Box 17: Three Examples of Indicators
Box 18: Setting the Foundations
Box 19: Using Existing Methods and Tools
Box 20: Inputs, Process, Outputs, and Outcomes
Box 21: Searching for Data
Box 22: Using Data in Different Ways
Box 23: Examples of Finding Data in Written Materials
Box 24: Examples of Finding People Who Can Provide Data
Box 25: Less Frequently Used Data Collection Methods
Box 26: Content Analysis
Box 27: Using Technology in Data Gathering and Management
Box 28: The Delphi Method—A Specialized Way to Use Experts
Box 29: Rules of Thumb for Designing Questions
Box 30: Open and Closed Questions
Box 31: Interview Protocols and Structure
Box 32: Coding of Interview and Survey Responses
Box 33: Factoring in Sampling and Stratification
Box 34: Designing Data Collection Forms
Box 35: Piloting Surveys
Box 36: Identifying Good Team Members
Box 37: Practical Tips for Interviewers
Box 38: Data Collection and Entry
Box 39: Providing Compensation
Box 40: Ethical Rules of Thumb
Box 41: Drawing on Interpretive Techniques from Outside the Forest Sector
Box 42: Assuring Quality in Data Entry
Box 43: Searching for Software
Box 44: Archiving
Box 45: Coding Written Materials
Box 46: Three Ways to Report Averages—Means, Medians, and Modes
Box 47: Vetting and Validation
Box 48: Thinking Beyond the Report
Box 49: Rethink and Revise
Box 50: Some Output Formats
Box 51: Possible Ways to Distribute the Assessment Findings
Box 52: Possible Ways to Draw Attention to the Assessment Findings
Box 53: Examples of Reports
Box 54: Protect Your Sources
Box 55: Trust and Impact
Box 56: Sample Event Evaluation Questions
Box 57: Key Questions for a Project Evaluation
Box 58: Some Evaluation Exercises and Tools

Figures
Figure 1: Stakeholder Engagement Continuum
Figure 2: Using a Framework to Identify Technical Scope and Focus of Assessment
Figure 3: Graphic Conveying the Difference Between Ideal Scoring and Actual Scoring of Forest Governance Indicators in Russia
Figure 4: Example of a Table Using Shading to Convey the Relative Quality of Scores
Figure 5: Using Color in a Bar Graph to Display Indicator Scores from Liberia
Figure 6: Results from Scoring Indicators in Uganda Presented in a Radar Graph
Figure 7: How Word Clouds Could Be Used to Compare Concerns Raised by Government Officials (top) and NGO Officials (bottom)

Tables
Table 1: Organization of the Guide
Table 2: Balancing Breadth of Assessment Scope with Depth of Assessment
Table 3: Types of Method and Data
Table 4: Six Basic Data-Gathering Methods Frequently Used in Governance Assessments
Table 5: Common Methods of Data Collection
Table 6: Planning and Design Choices for the Use of Data Collection Tools
Table 7: Key Assessment Team Members
Table 8: Who Will Conduct the Evaluation?
FOREWORD

In the last twenty years, practitioners have come to appreciate that governance is often the weak link in addressing unsustainable use of forests and trees. Technical knowledge alone is insufficient, and no natural forest management, protected area, plantation, or agro-forestry project will succeed if the resources are poorly governed.

The concept of "forest governance" is often difficult to grasp because many laws, rules, policies, actions, and interactions shape forests. This also makes it difficult to be clear about what the major governance impediments are and what to do about them. Thus, an essential first step towards improving forest governance is to define its most relevant core elements in a coherent framework.

In 2009, several organizations working on forest governance initiated a series of discussions on forest governance monitoring and indicator development. This partnership led to the production of a document, "Framework for Assessing and Monitoring Forest Governance," published by FAO and PROFOR in 2011. Since then, the framework has been used for forest governance assessments by several organizations, in many different countries, and is seen as an increasingly useful basis and point of departure for forest governance work.

The framework facilitates systematic thinking about forest governance issues but leaves open the broad question of how to collect and analyze the empirical data. Thus, as a follow-up, FAO and PROFOR took the lead in producing this guide on data collection and analysis, in collaboration with other organizations. The guide is the outcome of a remarkable collaboration of experts from organizations with different views and roles on governance issues who nonetheless united to direct the compilation of a common set of good assessment practices.

This guide presents a step-by-step approach to planning forest governance assessment or monitoring, collecting data, analyzing it, and making the results available to decision makers and other stakeholders. It also presents five case studies to illustrate how assessment or monitoring initiatives have applied the steps in practice, and it includes references and links to dozens of sources of further information.

The remedy to poor governance starts with understanding where governance is weak. If we can measure forest governance, we can diagnose problems, advance reforms, and monitor their impacts. Governance data collection and assessment provides a necessary foundation for systematic improvement. This guide is a handbook for those seeking to better understand the issues, status, and trends of forest governance through assessment and analysis.

The guide will complement the efforts of both FAO and PROFOR to support sustainable forest management by improving the information base and understanding of governance. FAO and PROFOR are proud to have partnered in the production of this guide. We hope that it will prove valuable to people around the world whose lives are linked to forests.
Eva Muller
Director, Forest Economics, Policy and Products Division
Forestry Department
Food and Agriculture Organization, Rome

Diji Chandrasekharan Behr
Program Manager, Program on Forests (PROFOR)
The World Bank, Washington, DC

ACKNOWLEDGMENTS

Funding for development of this guide has come from several sources, including the FAO-Finland Programme, the UN-REDD Programme, PROFOR, and the European Union-FAO Forest Law Enforcement, Governance, and Trade Programme (EU FAO FLEGT Programme).

The impetus for the guide came from FAO (Eva Muller, Ewald Rametsteiner, Emelyne Cheney, and Robert Simpson) and PROFOR (Nalin Kishor). A 2012 workshop at FAO's headquarters in Rome confirmed the need for the guide and produced its rough outline. The workshop participants included Crystal Davis, Alice Thuault, Cecile Njebet, Giorgio Budi Indrarto, Sam Lawson, David Young, Samuel Nguiffo, Phil Franks, Saraswati Rodriguez, Tina Sølvberg, Abdul Wahib Situmorang, Joachim Nahem, Evgeny Kuzmichev, Vladislava Nemova, Marina Smetanina, Stephano Kingazi, Nguyen Phu Hung, Doan Diem, Beatrice Lukama, Orleans Mfune, Sam Nketiah, Chris Beeko, Jo Van Brusselen, An Bollen, Herman Savenije, Tapani Oksanen, Tek Narayan Maraseni, Laura Secco, Eva Muller, Bob Simpson, Marjo Maidell, Ewald Rametsteiner, Emelyne Cheney, and Nalin Kishor. Kenneth Rosenbaum facilitated.

The 2012 workshop prompted creation of a core group of experts to oversee creation of the guide. This group included Robert Simpson, Ewald Rametsteiner, Nalin Kishor, Jo van Brusselen, Tina Sølvberg, Emelyne Cheney, Crystal Davis, Saskia Ozinga, Filippo del Gatto, Boris Romaniuk, Steve Nsita, Nguyen Quang Tan, and Ragna John. Kenneth Rosenbaum and Guido Broekhoven facilitated the group meetings. The European Forest Institute (EFI) hosted the first meeting of the group in November 2012; the World Resources Institute (WRI) hosted the second in June 2013. Also participating in parts of the meetings were Rudi Kohnert, Tuukka Castrén, Dan Miller, Lauren Goers Williams, Florence Daviet, Flore de Préneuf, and Phil Cowling.

As some members of the core group left to take on new assignments, Melissa Othman and Free de Koning filled in to give feedback on drafts. Additional feedback came from outside reviewers Nina Rinnerberger, Judith Walcott, and Lera Miles.

Many people provided logistical and operational support, including Tania Abdirizzak, Michela Mancurti, Anne Ricchiuti-Romanelli, Elena Bernardini, Janice Saich, Helene Godeaux, Sujatha Venkat Ganeshan, and Veronica Jarrin. People offering general advice, information, and assistance included Benjamin Cashore, Tini Gumartini, Daniel Meireles Tristao, Rosalind Reeve, James Mayers, Francesca Felicani Robles, and Stephanie Ratte. Besides the core group of experts, people who provided information about case studies included Louise Riley, Christine Holding, Nuru Chamuya, Søren Dalsgaard, Abdul Wahib Situmorang, Mireya Villacís, and Sigrid Vásconez.

Phil Cowling and Kenneth Rosenbaum wrote the body of the guide with the valuable assistance of Kristin DeValue. Kristin DeValue led the assembly of the case study, tools, and glossary annexes. The guide was edited by Daria Steigman and designed by Studio Grafik. James Cantrell provided editorial and publication support.
ACRONYMS AND ABBREVIATIONS

CS-IFM  Civil Society-Independent Forest Monitors
DFID  Department for International Development (UK)
EFI  European Forest Institute
ENPI  European Neighbourhood and Partnership Instrument
EU  European Union
FAO  Food and Agriculture Organization
FARO  Fundación para el Avance de las Reformas y las Oportunidades (Foundation for the Advance of Reforms and Opportunities)
FLEG  Forest Law Enforcement and Governance
FLEGT  Forest Law Enforcement, Governance, and Trade
GIZ  Gesellschaft für Internationale Zusammenarbeit
GFI  Governance of Forests Initiative
IAP2  International Association for Public Participation
ICA  Institutional and Context Analysis
ICT  Information and Communication Technology
IFAD  International Fund for Agricultural Development
ITTO  International Tropical Timber Organization
IUCN  International Union for Conservation of Nature
MOU  Memorandum of Understanding
NAFORMA  National Forest Resources Monitoring and Assessment
NFP  National Forest Programme
NGO  Nongovernmental Organization
ODI  Overseas Development Institute
PEA  Political Economy Analysis
PGA  Participatory Governance Assessment
PROFOR  Program on Forests
REDD  Reducing Emissions from Deforestation and Forest Degradation
REDD+  REDD plus fostering conservation, sustainable management of forests, and enhancement of forest carbon stocks
SCAPES  Sustainable Conservation Approaches in Priority Ecosystems
SDI  Sustainable Development Institute
SPSS  Statistical Package for the Social Sciences
SWOT  Strengths, Weaknesses, Opportunities, and Threats
UK  United Kingdom
UN  United Nations
UNDP  UN Development Programme
UNFCCC  UN Framework Convention on Climate Change
US  United States
USAID  U.S. Agency for International Development
VPA  Voluntary Partnership Agreement
WRI  World Resources Institute

INTRODUCTION

This is a guide to measuring or assessing forest governance. Forest governance comprises all the social and economic systems that affect how people interact with forests, including bureaucracies, laws, policies, traditional norms and culture, patterns of land tenure, and markets.1

1. For a good, detailed explanation of what makes up forest governance, see the framework presented in PROFOR & FAO (2011). That publication breaks forest governance down into pillars, components, and subcomponents. Davis et al. (2013), World Bank (2009), Situmorang et al. (2013), IIED (2005b), and USAID (2013) offer alternative frameworks.

People assess forest governance for many reasons. Assessments tied to reducing emissions from deforestation and forest degradation, plus fostering conservation, sustainable management of forests, and enhancement of forest carbon stocks (REDD+) have aimed to fulfill international obligations, diagnose problems, and establish a baseline for future monitoring. Assessments under the World Bank Forest Investment Program have helped set the agendas for donor funding. Assessments by nongovernmental organizations (NGOs) have held officials more accountable and have been the basis for advocacy for reform or better implementation of forest laws.

People assess forest governance on many scales. The assessment in Liberia highlighted in Annex I focused on a few concessions; the cases from Tanzania, Indonesia, and Uganda covered whole nations; and the one on Ecuador was part of a larger study intended to compare performance in several countries. The Ecuador assessment looked specifically at transparency, while the three national assessments attempted to cover forest governance more broadly.

People assess forest governance in many ways. The Uganda case study gathered most of its data in a two-day stakeholder workshop. The Tanzania case surveyed thousands of households. The Indonesia case study involved many weeks of interviews, library research, consultations, and surveys.

In every assessment, people take on varied roles. Some plan the assessment. Some manage its implementation. Some carry out the data collection, analyze the data, or communicate the results. Some participate as information resources, constructive critics, or advisors.

This guide aims to be useful to everyone involved in a forest governance assessment. Some readers will want to go through the whole guide to have a full picture of the process. Others will find what they want in particular chapters on planning, data collection, analysis, and use of data.

Readers will find that this guide bases its approach on a few key foundations. These are that a good assessment:

• Requires good planning. For that reason, this guide goes into detail on planning.
• Is transparent and includes stakeholder involvement and outside review. For that reason, this guide stresses participatory approaches.
• Uses data collection methods that are technically sound. For that reason, the data collection chapters of this guide introduce technical topics.
• Does not stop at data collection and analysis; rather, it disseminates results in ways that encourage use of the assessment. For that reason, this guide talks about dissemination strategies and ways to build upon assessments.
• Is open to learning. It evaluates itself and seeks to improve. For that reason, this guide talks about piloting, adaptive changes in planning, and self-evaluation at the end of the process.

The guide presents approaches to assessment consistent with these premises. If you are conducting a large, detailed, and unique assessment, you will be interested in following most of the steps in the guide. If your assessment is smaller and less complex, you may decide that you do not need to follow every step. For example, you may not need to write a work plan or a data collection manual if the assessment is built around a single day's workshop, or to design a new method of data collection if the project wants to use a method designed for a previous monitoring process.

This guide does not set standards. There is no single best way to conduct an assessment. This guide builds upon what others have done in this rapidly developing field and points to some useful practices and resources.

The potential number of steps can seem daunting, but doing a governance assessment is not necessarily harder than doing other kinds of inventories or monitoring. The case studies in Annex I show how assessments of different sizes and complexity have approached the task and succeeded.

To decide what parts of this guide will be useful to you, read the overview of chapters presented below and consult Table 1. You may discover that you want to use the whole guide, or you may end up using the guide selectively to improve your planning and to learn new ways to collect data or increase the impact of your findings.

LANGUAGE CHECK
BOX 1: SOME KEY TERMS

Different sources use terms like "assessment" and "evaluation" differently. This guide gives these terms broad definitions:

Assessment means "appraisal based on careful analytical evaluation" (PROFOR & FAO 2011, p. 31).
Data collection means the systematic gathering of information.
Evaluation means study or measurement, often with an aim to compare the current situation with a past situation or a desired goal.
Measuring means finding the size, amount, extent, status, or degree of something. As used in this guide, it can apply to both quantitative and qualitative data collection.
Monitoring means "systematic tracking or scrutiny for the purpose of collecting specified data or information" (PROFOR & FAO 2011, p. 31).

Refer to Annex VI for definitions of other useful terms.

OVERVIEW OF THE GUIDE

This guide comprises three sections. The first section is about planning, the second is about data collection and analysis, and the third is about using your assessment.

Section I: Planning Your Assessment

Chapter 1 of the guide deals with setting objectives. It includes:

• Identifying why you are doing an assessment. Your answer to "why" forms the foundation of your remaining work.
• Assessing the context in which you are working. Context can affect when you decide to do an assessment and what approach you take.
• Setting out your objectives in consideration of what is practical to achieve.

This chapter will be most useful to people initiating, funding, or overseeing assessments.

Chapter 2 of the guide gets into the details of planning. The steps in this chapter lead up to writing an assessment work plan. Not every assessment needs a formal written work plan, but every assessment needs to make some basic planning decisions. These include:

• Setting the technical scope (the specific elements of forest governance you are interested in), geographical scope (e.g., whether local, national, or international), and social scope (the specific social groups or institutions you want to gain information on).
• Choosing the general methods you will use to gather data. For example, will you use household surveys? Expert opinion? Stakeholder workshops? Document reviews?
• Deciding who will conduct the assessment. Who will fund it, who will do the field work, and who will provide institutional support?
• Making a budget.
• Drawing a timeline or setting a time schedule.

This chapter will be most useful to people carrying out the high-level planning, but it will also be of interest to funders, stakeholders, and others who may have a say in the planning decisions.

Chapter 3 deals with planning for data collection. This planning requires some technical knowledge of data collection methods, and the chapter offers an introduction to these with pointers to further information. Some assessments will come to this point with definite ideas of how to collect data. For example, they may be part of an established monitoring process and be bound to use more or less the same methods as the previous round of monitoring. Other assessments will be creating new methods and will need to carefully go through all the steps in the chapter. These steps are:

• Deciding what aspects of governance to measure. This step builds on the scope-setting in Chapter 2, but takes it to a new level of detail. The step may entail creation of indicator sets.
• Identifying data sources. This means understanding where to find relevant information and who to involve in the search.
• Selecting data collection methods. Again, this builds on decisions made in earlier planning. However, the general plans for methods need to become more concrete guidance for data collection.
• Developing data collection tools. These may include interview protocols, surveys, workshop agendas, and so forth.
• Writing a data collection plan. Complex projects involving many data collectors will also want to write a field manual.

This chapter will be of greatest interest to the managers responsible for implementing the assessment, as the task of designing data collection usually falls to them.

Section II: Implementing Your Assessment

Chapter 4 covers the basics of collecting data for the assessment. The details of the steps will vary with the methods that the assessment uses. You will need more effort and staff to gather data through household surveys than you will by collecting the opinions of a few experts.

The basic steps covered in this chapter are common to most data gathering efforts:

• Recruiting and training staff.
• Collecting the data.
• Assuring the quality of the data collected.

The information in this chapter will be of use to high-level managers, data collection managers, data collection staff, and people outside the assessment who want to understand and critique data collection practices.

Chapter 5 covers interpretation and analysis:

• Processing the data, which may include entering it into digital form, summarizing it, or producing visual representations of it.
• Analyzing the data, which means interpreting the data in terms of the local context. This may include scoring indicators, identifying patterns in the data and shedding light on their causes and effects, or explaining the data in terms of social or economic theory.
• Making recommendations, which typically take the form of suggested priorities or actions.

As with Chapter 4, this chapter will be of interest to people doing the work of analysis as well as to people who want to understand and critique the analysis of others.

Section III: Using Your Assessment

Chapter 6 covers dissemination of results. Too many assessments simply publish a report that gets read a few times and then filed away. Chapter 6 discusses:

• How to develop a dissemination strategy that will bring your findings and recommendations to the attention of decision makers and stakeholders who can put the information to good use.
• How to implement the strategy.
• How to institutionalize the assessment process, or at least make it more likely that the next assessment will have your records and methods available to build upon.

This chapter should be of interest to people who are interested in seeing the investment of time and energy in assessments lead to real change: assessment initiators and funders, managers, data collectors, analysts, and stakeholders.

Chapter 7 discusses learning and improvement to make the first and future assessments better. It covers:

• Ongoing self-evaluation of the assessment process during implementation.
• Evaluation of the process after the assessment is complete.
• Capturing and sharing lessons learned. Assessment is an evolving practice, and we can all learn from each other's experiences.
• Finding ways to keep gathering feedback after the assessment is over and the staff have moved on to other projects. Some lessons will emerge only after events play out over months or years.

This chapter should be of interest to all who hope to improve the quality of assessments.

Postscript

The postscript briefly notes the rapid evolution of forest governance assessment and encourages practitioners to contribute to the growth of the field.

Annexes

This guide includes several annexes that should provide as much practical guidance as the text itself.

Annex I includes five case studies:

• A broad, indicator-based, countrywide assessment in Indonesia that uses several methods to gather data (with regional and local components).
• A national survey-based assessment in Tanzania that was part of a larger effort to collect biophysical and social data about forests.
• A national assessment in Ecuador focusing on evaluating transparency and designed to be part of an international survey of several developing countries.
• An assessment in Liberia of the governance of seven forest concessions.
• An assessment in Uganda using a national stakeholder workshop for rapid scoring of a large set of indicators.

Each case is outlined using the steps presented in the main text of the guide.

Annex II presents a set of references and tools linked to the chapters of the guide. For example, if you are interested in learning more about political or economic assessments (discussed in Chapter 1), creation of survey instruments (Chapter 3), or data visualization (Chapter 5), you will find links to resources on those topics in this annex.

Annexes III and IV present guidance on two planning tasks covered in Chapter 2. Annex III has advice on creating budgets. Annex IV has a sample outline for a work plan.

Annex V contains information for people interested in developing or refining their own indicators of forest governance.

Annex VI is a glossary of terms used in the guide.

This guide is part of the growing exchange of ideas among practitioners of forest governance assessment. Please join that conversation by sharing your feedback and experiences with the sponsors of this guide. Send email to assessment@forestgov.info.
TABLE 1: ORGANIZATION OF THE GUIDE

Section I: Planning Your Assessment
• Setting the Objectives — Technical elements: define the "why"; consider the context; set the objectives. Points on process: developing shared objectives.
• Developing a Work Plan — Technical elements: identify your scope; identify your approach; decide who will conduct the assessment; figure timing; figure cost; write the work plan. Points on process: communicating and managing the process.
• Planning for Data Collection — Technical elements: decide what aspects of governance to address; identify potential sources of information; select data collection methods; develop tools for each method; finalize your work plan and develop a data collection manual. Points on process: vetting the methods.

Section II: Implementing Your Assessment
• Data Collection — Technical elements: assemble and train a data collection team; collect data; assure data quality. Points on process: practical and ethical data collection.
• Interpretation and Analysis — Technical elements: process the data; do the analysis; make recommendations. Points on process: vetting and validation of analysis.

Section III: Using Your Assessment
• Application of Results — Technical elements: decide on a dissemination strategy; implement your strategy; institutionalize further assessment. Points on process: facilitate use of your findings.
• Learning and Improvement — Technical elements: begin self-evaluation during the assessment; hold an evaluation after the assessment; make the evaluation results available; keep the door open for further feedback. Points on process: conducting a team self-evaluation.

SECTION I: PLANNING YOUR ASSESSMENT

OVERVIEW

• Section I provides an overview of how to plan your assessment. It is divided into three chapters: Setting the Objectives, Developing a Work Plan, and Planning for Data Collection.
• Chapter 1, Setting the Objectives, helps readers to assess why they are doing the assessment and what contextual factors could affect its design as part of an objective-setting process.
• Chapter 1 also provides an introduction to stakeholder engagement within the assessment process to help readers consider how they will engage different groups within their assessment.
• Chapter 2, Developing a Work Plan, helps readers to develop their assessment's approach by considering what they want to include within their assessment (the geographical, technical, and social scope), what methods they want to use (quantitative, qualitative), and who will be involved in conducting the assessment. It then provides further guidance in considering the practical elements of developing a work plan for the assessment, including identifying when it will be done, how long it will take, how much it will cost, and whether it will be repeated.
• Chapter 2 also provides guidance on effectively communicating with the different stakeholder groups engaged within the development and planning of your assessment.
• Chapter 3, Planning for Data Collection, helps readers refine their plans to produce practical tools for collecting needed information. It has readers set concrete measurement aims, identify potential data sources, select data collection methods, develop specific data collection tools, and capture everything in a data collection manual.
• Chapter 3 also discusses going to stakeholders or peers to get feedback on proposed methods.
• While this section is presented in a sequential order, planners will need to consider many elements at the same time, with decisions on finance, human resources, and intended outcomes all influencing the potential scope and approaches to be used.
• You should thus consider Section I to be a general guide to developing the approach to your assessment. The steps can be useful even if you do not have a full commitment to go ahead with an assessment. You can undertake some or all of this planning to attract funding, to cost out an already-agreed-upon assessment, or to identify how you can achieve an assessment within your budget.

CHAPTER 1: SETTING THE OBJECTIVES

Setting objectives is the first step in the development of an assessment. It will define what you are trying to achieve and help you to communicate this to others. Even if you are planning to use an assessment tool used many times before, having clear objectives will help guide the decisions you make as you apply the tool to the circumstances at hand. This chapter provides an overview of the objective-setting process. That process has three main steps.

STEP 1: DEFINE THE "WHY"
Begin with a clear understanding of why you are conducting the assessment. This will help you refine your objectives and explain them to others. Understanding the "why" requires thinking about your background motives and the intended achievements or outcomes from the assessment.

STEP 2: CONSIDER THE CONTEXT
The broad social, political, and environmental context, which the forest sector is part of and in which your assessment will take place, will affect what you want to and can achieve. Analysis of this context may help you to identify opportunities, risks, and obstacles. The process can also help you identify which groups should be the key audiences of the assessment and which groups should be engaged in its development. This knowledge can lead you to revise your timing or anticipated outcomes and shape your objectives to ensure that the assessment is as relevant as possible.

STEP 3: SET THE OBJECTIVES
Having considered why you are doing the assessment and the context in which it is taking place, you can now set your objectives. These can be divided into three levels: a high-level goal related to the overall impact you want the assessment to achieve; a small number of outcomes that you think will help achieve the overall goal; and a number of more direct outputs that will contribute to achieving your desired outcomes.

POINTS ON PROCESS: DEVELOPING SHARED OBJECTIVES
Many different stakeholders are engaged in the forest sector, and they often have different interests in and views on the sector. Talking with these different groups at an early stage can help you develop objectives that are relevant to many of them. This can increase support for conducting the assessment and make it more likely that the stakeholders will accept the assessment's results.

Step 1: Define the Why

What are your motivations? What is the purpose of the assessment? Do you need to do an assessment?

There are many reasons to undertake an assessment, and your specific motivation will depend on your position within the sector, the organization you work for, and the existing status of governance within your forest sector. Some of the more common motivations include:

• Diagnosing forest governance challenges related to elements of the forest sector, usually as part of a planning process related to the development of policies and measures linked to international agreements, such as Voluntary Partnership Agreements (VPAs) (linked to European Union (EU) market access) or REDD+ Readiness processes linked to REDD+ developments under the UNFCCC. (See the Uganda and Indonesia cases in Annex I.)
• Raising awareness of a perceived issue or problem, or a number of issues within the sector, that action should be taken on. (See the Ecuador case in Annex I.)
• Monitoring the impact or performance of a specific policy, program, or legal or administrative process (e.g., forest law reform) over time. (See the Liberia and Ecuador cases in Annex I.)
• Setting a baseline for future monitoring. (See the Indonesia and Liberia cases in Annex I.)
• Establishing or strengthening a system of forest sector monitoring to include forest governance. (See the Tanzania case in Annex I.)

Your motivation may be one of these, something else, or a mix. Having multiple motivations is not a problem and can improve the relevance of the assessment. Being clear on your motivations will help you to decide whether you need to conduct an assessment and to communicate to others why an assessment is or is not needed. It will also help you to ensure that your own motivations are effectively captured within proposed objectives and approaches, including within the geographical scope (where), the technical scope (what), and the social scope (who) of the assessment (discussed in more detail in Chapter 2).

FROM THE TOOL BOX
BOX 2: USING THE SAME TERMS—FOREST GOVERNANCE

Forest governance is a broad topic, and actors will have different understandings of the key concepts and the language used to describe them. Many will also have an incomplete knowledge of the forest sector and forest governance, with their knowledge shaped by their own experiences and interests. It is thus good at the beginning of any process to establish some common understanding. This can be achieved by presenting a list of key terms and concepts and coming to agreement on what they mean for the purpose of your discussions.

This may also be achieved or helped by using a basic framework around which discussions on forest governance can be structured. This approach can help to bring the ideas of diverse stakeholders together by showing how their different experiences in the sector may be linked by a common governance issue or form part of a chain of governance that you want to get more information on.

Assessments have used a number of different forest governance frameworks. The one shown below, developed by FAO, PROFOR, and others in 2011 as a common framework, is one example. The three main pillars are presented as the core elements of forest governance, while the cross-cutting (horizontal) principles are seen as the generally accepted principles of good governance. Use of a framework such as this can allow you to focus in on a key problem area that stakeholders may be interested in (e.g., transparency in the development of forest laws).

While not essential at the objective-setting phase, introducing the framework at an early point in the process of developing your assessment may help structure discussions throughout the process, moving from discussion on why you are doing the assessment to how and what you are going to assess.
[Framework diagram: three pillars—Policy, Legal, Institutional, and Regulatory Frameworks; Planning and Decision-making Processes; and Implementation, Enforcement, and Compliance—crossed by the cross-cutting principles of Accountability, Effectiveness, Efficiency, Fairness/Equity, Participation, and Transparency.]

Step 2: Consider the Context

Both your motivations and intended outcomes are likely to be shaped by the context in which you are working. This requires you to step back from your immediate motivations and look at the wider environment in which the assessment will be taking place. Such analysis will help you identify:

• Windows of opportunity. These are current or upcoming events or situations that would increase the relevance or impact of the assessment (e.g., changes in leadership of a key institution, an election, or a shift in the domestic or international economic situation).
• Risks. An assessment may pose risks associated with its physical implementation or use of its findings. These include safety risks to people participating in the assessment, risks of results being misapplied to justify poor decisions, and risks to the reputations of people conducting the assessment. Considering these within the broader context will help you mitigate them from an early stage.
• Other initiatives. Awareness of past or ongoing assessments or other programs working on governance can help you identify how to work with them and prevent duplication of effort.
• Key problems. The forest sector is complex. Assessment may ultimately identify different problems, different perspectives on problems, and different underlying drivers of these problems than the ones you expected to find at the start. Gaining an early perspective on what problems might exist and whether they come from within the forest sector or outside will help you to make your assessment relevant and increase its potential impact.
• Ownership and power dynamics. There are many different stakeholders within the sector with differing levels of power and influence and differing relationships. Clear understanding of these will help you focus your assessment and develop approaches and methods that take such imbalances of power into consideration.

An analysis of the context in which you are working can be strengthened by engagement with key stakeholders (see the process note on stakeholder engagement at the end of this chapter). The exact focus areas will depend on your own areas of interest, but some common areas for analysis are discussed below.

The Political and Institutional Context
• What are the main formal and informal institutions affecting or affected by the forest sector?
• Who are the key decision makers and what areas of information are of interest to them? What would make the assessment more compelling for them? What elements of an assessment could they be opposed to?
• What local, national, or international events are coming up or ongoing that could influence the assessment and its outcomes? Upcoming events might include elections, planned reforms, or planned investments from development partners or the private sector. Ongoing events might include notable public failures of governance (e.g., a weak response to a disaster, exposure of corruption). Be particularly aware of other forest governance programs, like Voluntary Partnership Agreements (VPAs) or Forest Law Enforcement, Governance, and Trade (FLEGT) programs, that might be supportive of assessments.
FROM THE TOOL BOX
BOX 3: ASSESSING CONTEXT—TOOLS AND ACTIVITIES

Governments, NGOs, development partners, and the private sector have used many tools and activities to assess the context in which they are working. These can produce a range of information, from detailed analyses to quick snapshots of "headline" issues. Below are examples of some analysis tools and activities that could be of use. Annex II has pointers to resources for many of these tools.

• Stakeholder workshops. A simple workshop to bring stakeholders together to discuss the existing status of the sector and potential opportunities for change provides a forum for analysis of the current context. It also provides an opportunity to learn about other programs and initiatives that may be ongoing. Such a workshop could be part of a multi-stakeholder planning process for the assessment.
• Timeline development. A basic timeline showing key events and cycles occurring in the sector and in the national/local government may help you identify the optimal time to issue your report (e.g., before a budget cycle begins) or when to avoid fieldwork (e.g., during the winter or the rainy season).
• Stakeholder analysis/mapping. Mapping stakeholders within the sector can identify which groups you need to engage and which groups are important target audiences. Discussions with stakeholders during the mapping process can identify their interests and could point to possible conflicts. Mapping can also be linked to a power analysis, looking at the power different stakeholders have within the sector. (A simple sketch of one way to record such a mapping follows this box.)
• Development of a background document. It may be possible to recruit a consultant or other personnel to prepare a background report on the sector, including assessment of key stakeholders, status of the forests and land-cover change, and key political and social issues. PROFOR's Users Guide to Assessing and Monitoring Forest Governance (Kishor & Rosenbaum 2012) provides a sample outline of such a document for a forest governance assessment, but you could design and develop one based on your own areas of interest and the resources you have available.
• Political economy analysis (PEA) or institutional and context analysis (ICA). These processes are more in-depth analyses and will include many of the above tools and activities. Analysis of this type often takes months to undertake in any detail and will provide key recommendations on potential drivers of future change in the sector that an assessment or future programme could capitalize on. UNDP provides a guidance manual on ICA, while DFID has developed a guidance note on PEA.
• Poverty or livelihood impact assessment. This process uses a number of different methods to identify what social impacts proposed or recently implemented policies or activities will have or have had. The tool has been developed by the NGO Forest Trends and has been used most frequently in association with voluntary partnership agreements linked to the EU's Forest Law Enforcement, Governance, and Trade (FLEGT) Initiative. Very detailed analysis may be beyond the capacity of your assessment, but it will be valuable to see if such assessments have been conducted and whether they can help inform your objective-setting and planning process.
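To make the stakeholder analysis/mapping idea concrete, here is a minimal sketch of how a team might record stakeholders and sort them using a conventional power/interest grid. It is illustrative only: the stakeholder names, the 1-5 scales, and the engagement labels are assumptions of this example, not part of any method prescribed by this guide.

```python
# Hypothetical sketch of a stakeholder power/interest mapping, one of the
# context-analysis tools described in Box 3. Names and scores are invented.
from dataclasses import dataclass

@dataclass
class Stakeholder:
    name: str
    interest: int   # 1 (low) to 5 (high): stake in forest governance outcomes
    influence: int  # 1 (low) to 5 (high): power to affect the sector

def engagement_priority(s: Stakeholder) -> str:
    """Classic power/interest grid: suggests how closely to engage each group."""
    if s.influence >= 3 and s.interest >= 3:
        return "engage closely (e.g., steering committee)"
    if s.influence >= 3:
        return "keep satisfied (e.g., targeted briefings)"
    if s.interest >= 3:
        return "keep informed (e.g., consultation workshops)"
    return "monitor (e.g., general information sharing)"

stakeholders = [
    Stakeholder("Forest ministry", interest=5, influence=5),
    Stakeholder("Community forest association", interest=5, influence=2),
    Stakeholder("Timber exporters", interest=3, influence=4),
    Stakeholder("Urban media", interest=2, influence=2),
]

for s in stakeholders:
    print(f"{s.name}: {engagement_priority(s)}")
```

Running the sketch prints a suggested engagement level for each group. In practice the scores would come from the discussions and power analysis described above, and the resulting categories could feed into the engagement continuum shown later in Figure 1.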
The Economic Context
• How significant are the formal (e.g., regulated timber industry, ecotourism) and informal (e.g., firewood collection, non-timber forest products used in communities) elements of the forest sector to the economy?
• What other economic activities are important and how do they influence the forest sector?
• Who are the key economic stakeholders and what influence do they have?
• Are there potential changes in the economic landscape, such as changes in commodity prices, access to markets, or development of new natural resources?

The Social Context
• What are the different social and cultural uses of forest areas? Who are the key stakeholders from a social and cultural standpoint?
• What is the tenure situation? Do local communities have respected tenure rights? Are tenure rights disputed (between communities, between communities and the state, and/or between official state-issued allocations)? Are there overlapping resource concessions?
• What are the existing relationships between different stakeholders within the forest sector?
• Which stakeholders have power within the sector and which stakeholders are more excluded (e.g., rural women and other subgroups)?
• Which groups are interested in forest governance and see the need to support assessment and monitoring or push for change?

The Technical/Operational Context
• What resources, people, and organizations are available to help in the assessment?
• What other assessments or analyses have been conducted or are ongoing that could either strengthen the assessment or conflict with it? Again, look for programs like FLEG or FLEGT.

The Environmental Context
• What are the key environmental issues being faced at local and national levels?
• Are nationally or internationally significant ecosystems/species affected by current forest practices?
• Has any analysis of the costs of environmental degradation or the value of ecosystem services been done?

Gaining this knowledge will help you refine the outcomes you think are both possible and most important. This knowledge will also help you address such key points as the audience for the assessment, the timing of implementation and delivery (which can be critical in terms of identifying opportunities to increase impact), and the potential resources available for conducting your assessment. These considerations are not just relevant at the objective-setting stage; they will be important throughout the planning and development process. The more context that can be woven into the planning stage, the more you will be able to consider how contextual factors will influence your assessment—and identify opportunities to use these contextual factors to strengthen the assessment and avoid potential pitfalls.

Step 3: Set the Objectives

Having considered both your initial motivations and the broader context, you can move to identifying the overall goal, outcomes, and outputs that you want your assessment to achieve.

These represent the top level of your "hierarchy of objectives," which will eventually provide a link between the activities you are doing and your overall objectives. Activities contribute to the delivery of direct outputs (such as reports or a workshop), which then support broader outcomes (such as increased awareness of forest governance amongst forest-dependent communities or increased capacity to monitor forest governance within government agencies), which will help support achievement of your goals (such as increased demand for good forest governance amongst forest-dependent communities or improved access to information on forest governance within a country).

Working out the links between these different levels is often called developing a theory of change. A theory of change provides a plausible path from the activities and outputs you will work on to the overall goal you may have (i.e., how all the different activities add up to several medium-sized changes (outputs) that then lead to bigger changes (outcomes) and contribute to one big change (the goal)). Developing this will help you think through exactly what it is you want to achieve and how you will achieve it. As you develop your plan, the theory of change will also help you to identify the scope of your assessment, the target audience of your outputs, and what activities you want to undertake. These decisions will be crucial in shaping the type of outputs you produce and how these contribute to achieving your outcomes and goal.

The hierarchy runs from goal down to activities:

Goal. The overarching objective to which the assessment will contribute and that will be supported by achieving the stated outcomes (e.g., enhanced engagement of indigenous peoples in forest governance decision making).

Outcomes. Key developments that will help achieve the goal and will be supported by outputs. Normally limited to two to four things (e.g., increased understanding of the role of indigenous communities in forest management).

Outputs. A number of outputs that will contribute to delivering outcomes. These are normally more tangible things that the project will definitely deliver (e.g., a report on the role of indigenous communities in forest management).

Activities. Specific activities that will be undertaken in order to achieve the outputs (e.g., workshops, assessment work, meetings).

In short, a theory of change describes how different activities will lead to outputs, outputs to outcomes, and outcomes to goals. (A simple sketch of this hierarchy in code form follows.)
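One way to keep a theory of change concrete is to write the hierarchy down in a structured form and check that every activity traces up through an output and an outcome to the goal. The sketch below is a hypothetical illustration, reusing the indigenous-peoples example from the text above; the data structures and field names are assumptions of this example rather than a format the guide prescribes.

```python
# A minimal sketch of a "hierarchy of objectives" captured as data, so a
# planning team can confirm that every activity sits on a path to the goal.
from dataclasses import dataclass, field

@dataclass
class Output:
    description: str                      # tangible deliverable, e.g., a report
    activities: list = field(default_factory=list)

@dataclass
class Outcome:
    description: str                      # medium-sized change (normally 2-4 per goal)
    outputs: list = field(default_factory=list)

@dataclass
class TheoryOfChange:
    goal: str                             # one overarching change
    outcomes: list = field(default_factory=list)

toc = TheoryOfChange(
    goal="Enhanced engagement of indigenous peoples in forest governance decision making",
    outcomes=[
        Outcome(
            "Increased understanding of the role of indigenous communities in forest management",
            outputs=[
                Output(
                    "A report on the role of indigenous communities in forest management",
                    activities=["stakeholder workshops", "assessment fieldwork", "review meetings"],
                )
            ],
        )
    ],
)

# Walk the hierarchy from goal down to activities.
print("Goal:", toc.goal)
for outcome in toc.outcomes:
    print(" Outcome:", outcome.description)
    for output in outcome.outputs:
        print("  Output:", output.description)
        for activity in output.activities:
            print("   Activity:", activity)
```

A team could keep such a record alongside the work plan and re-run the walk whenever objectives change, making orphaned activities (ones that no longer serve any output) easy to spot.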
PRACTICE TIP
BOX 4: STAYING SMART—KEEPING YOUR OBJECTIVES FOCUSED

Each objective (from goal to output) should be a clear statement of what the assessment wants to achieve. Each may focus on a specific change occurring or the process by which the assessment is conducted, but as a group they should conform to a number of key guidelines:

• Be Specific. Objectives should be well-defined and unambiguous, focusing on a clear result.
• Be Measurable. Objectives should define success in a way that can be measured.
• Be Achievable. Objectives should be achievable considering your resources. You will need to consider your time and financial and human resources when assessing this.
• Be Realistic. Objectives should be realistic considering your context (the social, political, economic, and environmental situation).
• Be Time-bound. Objectives, particularly for outputs, should be achieved within a certain time frame or by a certain deadline.

Examples of Objective Setting

Here are examples of setting objectives from two different assessments.
EXAMPLE 1: INDEPENDENT FOREST MONITORING IN LIBERIA

Motivations: The purpose was to assess whether logging concessions were helping to meet the national forest policy's objectives of economic development, equitable forest access, and stakeholder participation.

Contextual Factors:
• The Liberia-EU voluntary partnership agreement includes provisions for civil society monitoring. It was through this provision that the study was funded.
• Timber exploitation remains a highly political issue in Liberia. Increased public information on the public benefits of commercial logging will provide an important basis for decision making on future activities.

Theory of Change: Better information on concession performance and governance will promote compliance with forest and revenue laws, empower local people, and persuade officials of needed enforcement or reforms.

Goal: To ensure that concessions help meet the objectives of the National Forest Policy regarding economic development, equitable forest access, and stakeholder participation.

Outcomes:
• Provision of a baseline against which the social impacts of implementing the EU-Liberia VPA can be judged.
• Identification of areas of government policy that will require new regulations or modification.
• Increased awareness among communities of forest governance developments.

Outputs:
• An assessment methodology that could be repeated in coming years.
• A report on the existing benefits of logging concessions, identifying areas of policy and legislation that could be strengthened or modified.

EXAMPLE 2 (CASE STUDY FROM ANNEX I): INDONESIA PGA FOR REDD+

Motivations: Indonesia's national policy making and international REDD+ commitments demanded robust and credible baseline data on forest, land, and REDD+ governance as a first step toward improving forest governance.

Contextual Factors:
• In 2009, Indonesia's president committed to reducing the country's greenhouse gas emissions by 26 percent by 2020. Indonesia had received significant external support from UN agencies and foreign governments to advance Indonesia's REDD+ efforts, including a national climate and forest strategy.
• The country signed a Letter of Intent with the Government of Norway to undertake actions to reduce emissions from deforestation and forest degradation.

Theory of Change: Assessment will increase awareness of problems, and broad participation will lead to broad acceptance of findings. Forest-dependent people, officials, donors, and others will then be closer to agreement about areas needing reform; this will make achieving reform more likely.

Goal: To improve information on and awareness of forest governance to inform future domestic reforms and boost international support.

Outcomes:
• Increased capacity among key stakeholder groups to undertake forest governance assessments.
• Awareness of existing levels of capacity to address forest governance.
• A baseline for Indonesia's REDD+ safeguards information system.

Outputs:
• A clear and repeatable method for assessing forest governance that conforms to domestic legislation and international best practice and engages indigenous peoples.
• A forest governance report that is easily accessible to international and domestic stakeholders and available within 12 months.

PRACTICE TIP
BOX 5: GAUGE YOUR RESOURCES

Any assessment will be defined to an extent by the resources available. These include time, money, and people. While these are discussed in more detail in Chapters 2 and 3, they must be considered even at the stage of objective setting. It is possible to assess this by asking a number of questions:

• Who could support the achievement of this objective?
• What capacity do the people available to support the assessment have in terms of time and technical skills?
• What level of finance is available to undertake the assessment? Are there any potential additional sources of finance?
• Are there constraints on how long the assessment should take?
• Is this a one-off assessment, repetition of an existing assessment, or the development of a baseline on which future assessments will be based?

In some cases—for example, monitoring processes for which a government office is responsible and a budget has been allocated—answers to some of these questions may be predetermined as part of the assessment structure or history. In most cases, however, further consideration of the context can help assess what opportunities or constraints exist within these areas (e.g., the best timing for an assessment might be prior to an election, or additional funding might be available from a development partner).

Points on Process
Developing Shared Objectives

A large number of stakeholders are engaged within the forest sector, and they have different perspectives on forest governance, different motivations for engaging in an assessment, and different desired outcomes. Engagement with these stakeholders occurs along a continuum from simple awareness-raising through consultation to joint decision making and eventually to empowerment. You should consider as soon as possible at what level you want engagement to occur, as seeking true engagement will require a commitment to allowing different groups to have a role in the design and ongoing decision-making processes for the assessment. While this may seem daunting, increased engagement can bring a number of benefits:

• Increased relevance. By engaging different stakeholders you are able to access different ideas and information on the forest sector. This can bring insight into key areas that the assessment should cover as well as providing information on other programs and activities that may help in the design process. This can help increase the relevance of the assessment above and beyond your original ideas and increase the number of stakeholders interested in the outcomes.

• Increased ownership and support. Engaging stakeholders helps increase their understanding of the process. If they see that an assessment can benefit them, that increases their interest in its success and could also increase their willingness to support a reform based on the assessment. Understanding can thus increase cooperation, ranging from willingness to answer questions to provision of ongoing financial, logistical, and technical support. The resulting participation creates a sense of ownership.

• Increased legitimacy. Engagement of stakeholders at an early stage allows stakeholders to see that the process is being developed in a transparent and open way and to understand the motivations behind it. This, combined with having an increased sense of ownership through being engaged, can help to increase the perceived legitimacy of results and interest in them—helping to improve the impact of the assessment and its overall value. This is particularly relevant if a key target group for the assessment may not welcome assessments of sector governance.

It is also important to consider the challenges that might come with increased stakeholder engagement. These can include increased time required to allow for effective discussion of approaches, increased costs due to broader engagement and consultation, and a broader scope of issues requiring assessment due to a wider range of interests being represented. While these challenges are important, they are generally not considered to outweigh the benefits and increased impact of strong stakeholder engagement in almost all forms of governance assessment.

Engaging Stakeholders in Planning, Development, and Implementation

There are a wide range of methods for engaging stakeholders that support different levels of engagement.
Figure 1 provides an example of some of these. Engagement of stakeholders in developing and discussing the initial context analysis can provide an excellent starting point to discuss the approaches as well as to determine which stakeholders should be engaged at what points in the process. Further information on who to engage in implementing the assessment is provided in Chapter 2, and further information on specific methods and tools is provided in Annex II.

FIGURE 1: STAKEHOLDER ENGAGEMENT CONTINUUM (from highest to lowest level of engagement)

Empowerment
Description: Transfer of control of decision making.
Example in an assessment: A specific stakeholder group could take responsibility for undertaking and leading the assessment. This may be a situation that is aimed at, for example, development in year one of an assessment approach and method that can be led and undertaken by civil society groups in subsequent years.

Joint Decision Making
Description: Collaboration where there is shared control of decision making.
Example in an assessment: Multi-stakeholder steering committee established to provide oversight of the assessment and make joint decisions on objectives, approach, methodology, and use of results.

Collaboration
Description: Joint activities, with stakeholders engaged in problem solving and the development of proposals.
Example in an assessment: Multi-stakeholder team brought together to implement an assessment and refine the approach, with the key decision-making role still held by one or a group of lead agencies.

Consultation
Description: Two-way flow of information to gain feedback on views and respond to feedback.
Example in an assessment: Individual meetings or consultation workshops held with stakeholders to gain feedback on the assessment (with comments being collected and responded to).

Information
Description: Information provided to stakeholders.
Example in an assessment: Briefings provided on the assessment, including events (e.g., awareness-raising meetings) and information sharing (e.g., press releases) about the assessment but with limited opportunity to comment or contribute to its design or implementation.

Source: Adapted from International Association for Public Participation spectrum: www.iap2.org/associations/4748/files/spectrum.pdf.

It will be up to you to decide the approach best suited to the nature of the assessment you are planning, your country, and the organizational context, as well as to the potential relationships and power dynamics that may exist between stakeholder groups. Your resources and deadlines may also be factors.

A good place to start can be to conduct an initial draft objective-setting and planning process within your organization or with existing partners. Framing your own objectives first and considering what resources you have will allow you to outline more clearly what you see as possible, will help increase the productivity of any stakeholder engagement, and will help you be alert to unwarranted expectations from stakeholders of what the assessment will be able to achieve.
Sharing this draft with stakeholders will also help increase the transparency of the process and help stakeholders understand your motivations.

2 DEVELOPING A WORK PLAN

Chapter 1 developed the idea of why you are doing the assessment. Building on your thinking about why, this chapter starts the decision making on what to cover, how and when to do it, and how much it will cost. Working through the steps, you should develop a high-level work plan that you can use to aid further planning, guide implementation, and explain your work to others.

Each assessment is unique, and you may begin your work with some matters already decided (for example, the budget or the technical scope). If so, first identify what parameters are fixed and then use these to help guide your decision making within the other steps.

STEP 1: IDENTIFY YOUR SCOPE—WHAT TO MEASURE
The scope of your assessment provides the basic parameters around what information you are interested in assessing. It can be divided into three areas: technical scope (the specific elements of forest governance you are interested in); geographical scope; and social scope (the specific social groups you want to gain information on).

STEP 2: IDENTIFY YOUR APPROACH—HOW WILL YOU GET YOUR INFORMATION?
How you conduct the assessment is influenced by what type of outputs you want, what types of information your target audience is interested in, what methods for data collection your target audience sees as acceptable, and what capacity and resources you have. You will need to consider how you might link different methods (such as desk reviews, expert analyses, key informant interviews, focus groups, surveys, and workshops) together and whether your approach should focus on quantitative or qualitative information.

STEP 3: WHO WILL CONDUCT THE ASSESSMENT?
Any assessment requires contributions from a range of actors, including those who fund it, provide institutional support to it, and implement it. Identifying who will fill these roles, what capacity they have, and in what ways they will engage in the development and implementation of your assessment will help you clarify how the assessment will be conducted.

STEP 4: WHEN WILL IT BE DONE, HOW OFTEN, AND FOR HOW LONG?
Time is often left out of the planning process but is a critical element. Consideration must be given not only to how long the assessment will take but also to how often it might be repeated.

STEP 5: HOW MUCH WILL IT COST?
The cost of an assessment is often a critical element. Working through the pricing of different methods, covering different scopes, can help to provide a clearer analysis of costs and benefits and clarify what decisions and compromises need to be made.

STEP 6: WRITE THE WORK PLAN
From the steps above you can start to develop an outline for your work plan. This will provide a structure around which further planning and communication can occur.

POINTS ON PROCESS: COMMUNICATING AND MANAGING THE PROCESS
Many stakeholders will be interested in being engaged in an assessment of forest governance. It will be impossible to meet all of their interests and expectations and still have the assessment done in a reasonable time frame and on budget.
It is thus important to communicate clearly with different groups to ensure they understand what the assessment will focus on and why, and to be transparent about how these decisions have been taken. Clarity at this stage in the development of the assessment will help prevent confusion and conflicting expectations later on and will also facilitate more detailed planning, with all groups sharing the same understanding of why the assessment is taking place.

Step 1: Identify Your Scope—What to Measure

Identifying what you want to measure will set the scope for your assessment. There are three main aspects of this scope: the technical scope, the geographical scope, and the social scope. At this point in your planning, all that you need to do is set the general scope of what you want to measure. Chapter 3 discusses in detail how to narrow down what you want to measure and specific ways of measuring.

The technical scope of the assessment may be simply to identify if you are complying with an international reporting format, are following a past assessment, or have a very specific issue you are interested in gaining information on (e.g., existing legislation within the forest sector). If your objectives are broader, or you are developing an assessment for the first time, however, you may need to consider a larger number of elements cutting across forest governance. This process can be challenging, particularly if you are working with others who do not have an overall picture of the different elements of forest governance. If this is the case, using an existing governance framework to structure discussions and link different perspectives on the technical scope may prove helpful.

Several existing frameworks look at forest governance or governance more broadly.2 Each provides a structured simplification of governance arrangements. The Forest Governance Framework (PROFOR & FAO 2011) provides one such example. The three pillars presented (see Box 2 in Chapter 1, above, or Figure 2, below) can help you broadly define which core part of governance you are interested in and, within that, what principles of 'good governance' you may be most concerned with. For example, you may be most interested in the planning and decision-making processes (the pillar) within the forest sector and how transparent and accountable (the principles) these are. Equally, it may be one of these principles of 'good governance' that is of most interest to you (for example, transparency), and you may want to see how it is implemented across existing policy, institutional, and regulatory frameworks, the planning and decision-making processes that create them, and their implementation, enforcement, and compliance (see Figure 2 for an overview example). The full framework further divides the pillars into components and subcomponents, which can allow you to identify your scope at a more detailed level and to relate the specific interests individuals or groups may have back to the broader area of governance that you have identified as your scope. Linking these to specific criteria for measurement is covered in more depth in Chapter 3.

2. Some of these frameworks are listed in Annex II. These are not exhaustive lists but should provide an indication of some of the main frameworks available.
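One way to make this pillar-and-principle framing concrete before turning to Figure 2 is to treat the framework as a simple grid and mark where your interests fall. The sketch below is a hypothetical illustration, not part of the PROFOR & FAO framework itself; the pillar and principle lists follow the framework, but the chosen areas of interest are invented.

```python
# Hypothetical sketch: the framework as a grid of pillars and principles,
# with an invented set of interests marked to find candidate focus areas.
pillars = [
    "Policy, Legal, Institutional, and Regulatory Frameworks",
    "Planning and Decision-making Processes",
    "Implementation, Enforcement, and Compliance",
]
principles = ["Accountability", "Effectiveness", "Efficiency",
              "Fairness/Equity", "Participation", "Transparency"]

# Invented example: a team most interested in one pillar and two principles.
pillars_of_interest = {"Planning and Decision-making Processes"}
principles_of_interest = {"Transparency", "Accountability"}

# Cells where the two sets of interests overlap are the key focus areas,
# as in the solid boxes of Figure 2.
for pillar in pillars:
    for principle in principles:
        if pillar in pillars_of_interest and principle in principles_of_interest:
            print(f"Key focus area: {principle} within {pillar}")
```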
FIGURE 2: USING A FRAMEWORK TO IDENTIFY TECHNICAL SCOPE AND FOCUS OF ASSESSMENT
[The figure crosses the framework's three pillars (Policy, Legal, Institutional, and Regulatory Frameworks; Planning and Decision-making Processes; Implementation, Enforcement, and Compliance) with the principles of Accountability, Effectiveness, Efficiency, Fairness/Equity, Participation, and Transparency. Grey boxes added show areas of interest; where they overlap (solid red boxes) are the key focus areas.]

LOOKING DEEPER
BOX 6: BALANCING BREADTH AND DEPTH OF ASSESSMENT
Agreeing on the scope of your assessment will require you to balance a temptation to assess everything, everywhere with what is possible from a practical point of view and what will provide you with enough depth of analysis to achieve your desired objectives. The broader your assessment, the less depth you are likely to be able to achieve given the same level of resources. In most situations you have to choose between a very detailed assessment within a focused area (be that technical, geographical, or social) or a less in-depth assessment across a broader area. Table 2 provides an overview of these choices. Note that you can try to address some of these challenges by being strategic in the way you collect information (your approach to sampling) and using methods that may be lower cost but provide effective representations of the situation. More information on these options is provided in Chapter 3.

TABLE 2: BALANCING BREADTH OF ASSESSMENT SCOPE WITH DEPTH OF ASSESSMENT
The table crosses two levels of completeness (addressing only a subset of forest governance aspects considered most important, perhaps acting as proxies for other aspects, versus complete, addressing all aspects of forest governance in detail) with two levels of rigor.

Less rigorous measurement, building on existing methods but with some adaptation based on input from country stakeholders:
• OPTION 1 (subset of aspects): The worst option, though still better than nothing and better than aiming for Option 4 and not achieving it.
• OPTION 2 (complete): Sacrifice rigor for completeness. Typical uses: diagnosis of problems; surveillance for emerging issues.

Highly rigorous measurement, using new methods designed for the specific measurement with full multi-stakeholder engagement:
• OPTION 3 (subset of aspects): Sacrifice completeness for rigor. Typical uses: tracking impact of a specific reform; monitoring of priority concerns.
• OPTION 4 (complete): Very expensive, unlikely ever to be funded, certainly never likely to be funded repeatedly over time so that improvements can be tracked.

Source: Adapted from Lawson (2012).

The geographical scope of the assessment can be shaped by both technical and logistical considerations. First, identify what geographical scale and locations are relevant to the technical areas you are interested in; next, consider whether it is relevant to include other areas for comparison or if you can compare between different areas of interest. For example, it may be appropriate to assess the application of forest laws between provinces if all areas have similar forest coverage. If, however, only one province has significant forest cover, such an assessment would be less valuable. In such a context it may be appropriate for data collection to focus only on that province, saving time and resources on data collection from other areas. If you are considering doing an assessment between different countries or regions, you may need to first think about what similarities there are in terms of technical areas of interest to help focus the assessment and to keep findings relevant to each location.

Almost every assessment will rely to some degree on sampling: measuring a random or representative sample of an attribute rather than every manifestation of that attribute. Geographically, if three provinces are believed to be similar, you might collect data in one province rather than all three. If you are interested in local governance in a collection of 1,000 villages, for example, you might select ten randomly for study. Chapters 3 and 4 further discuss sampling; you should be aware, however, that good use of sampling can reduce the costs of assessment.
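As a concrete illustration of the village example, the sketch below draws a simple random sample with Python's standard library. The village names are placeholders, and the fixed seed is an assumption made so that the draw can be documented and repeated.

```python
import random

# Placeholder names for 1,000 hypothetical villages.
villages = [f"village_{i:04d}" for i in range(1, 1001)]

# A fixed seed lets the team record and reproduce exactly which
# villages were selected for study.
rng = random.Random(2014)
sample = rng.sample(villages, k=10)  # simple random sample, no replacement
print(sample)
```

If villages differ systematically (for example, by province or forest cover), a stratified draw that samples within each group would be the safer choice; Chapters 3 and 4 return to these questions.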
LOOKING DEEPER
BOX 7: REMEMBER THE CONTEXT—INCREASE IMPACTS AND REDUCE RISK
Your approach must be firmly rooted in the realities of your context. Weak links between your approach and the context can reduce its relevance and impact, and even cause political and social difficulties. For example, when defining technical scope it may be difficult, and even dangerous, to explicitly focus on corruption or illegality. Information that directly challenges a social group may also be controversial and result in lost opportunities for dialogue with the impugned groups. You may, however, be able to find an acceptable indirect way to address these problems. For example, you might frame corruption as an impediment to trade and marketing, and plan to measure it in terms of lost reputation.

Defining the geographical scope presents similar challenges. Strong comparisons between regions of the country may promote change through exposure of provincial diversity—but they may also be used by others to aggravate social or ethnic tensions that go beyond the intentions of your assessment. In that case, structuring the output around geographical comparisons could lead key decision makers to try to distance themselves from the assessment's findings.

Thus, it is important to remember and revisit your context analysis when developing your approach, discuss it with other key stakeholders, and, if necessary, consider ways of mitigating impacts or risks through the methods and outputs you choose.

The social scope of your assessment will consider who are the social groups and institutions that you want to gain information on. Many assessments will be interested in understanding the role of different social groups in forest governance and how they are impacted by changes; some assessments, however, will focus on specific social groups (e.g., indigenous peoples, rural communities) or be interested in looking at their situation relative to that of other groups. In considering who should be included, it may be important to identify frequently overlooked groups whose views may be more difficult to capture or who are often left out of other forms of assessment (e.g., women, landless people). Some assessments will look at all institutions affecting forest governance, but some will limit their scrutiny to government agencies as opposed to traditional community structures, markets, or civil society institutions. In every case, who to cover within the assessment will be influenced by your objectives and technical scope, and in turn will influence how you collect your data.
Step 2: Identify Your Approach—How Will You Get Your Information?

You can use a large number of different methods to collect information for your assessment. Each of these has different strengths and weaknesses and, in most assessments, you will need to combine a number of these in order to collect all the information you need. The combination of these different methods can be referred to as your approach. A number of different methods for different types of data are shown in Table 3; Box 8 provides more information on the different types of data and methods.

TABLE 3: TYPES OF METHOD AND DATA
• Secondary, quantitative: existing censuses, assessments, budgets, etc.
• Secondary, qualitative: prior assessment reports, plans, etc.
• Primary, quantitative: surveys (e.g., opinion polls, household surveys, assessments of forest cover, etc.).
• Primary, qualitative: questionnaires for experts, structured/semi-structured interviews, focus group discussions, workshops, etc.

LANGUAGE CHECK
BOX 8: METHODS, APPROACHES, AND TYPES OF DATA
Assessments and guides to assessment don't all use words in the same way. Some key terms used in this guide are defined below (and more information on several of them is provided in Chapter 3). Within your own work, try to be clear and consistent in your use of terms. This helps ensure that all stakeholders understand what is happening and that your team is clear about what it can and cannot achieve.
• Methods. Within this guide, methods are identified as ways of undertaking an activity (e.g., data collection or stakeholder engagement). They lay out a specific set of actions to take to guide you in how to undertake them.
• Approach. Within this guide, approach refers to the way different methods are brought together to complete the assessment; this will include methods relevant to assessment development, implementation, and application of results.
• Primary data are new data that the assessment generates.
• Secondary data are existing data (e.g., from prior assessments, censuses, scholarly studies) that the assessment can use.
• Quantitative data are data expressed in hard numbers (e.g., income levels, percentages, budget numbers).
• Qualitative data are data not generally measured in numbers (e.g., expert opinions, focus group preferences, workshop findings, and anecdotal information such as individual stories, examples, or cases that illustrate a point).*
• Participatory approaches engage different stakeholders throughout the development, implementation, and evaluation of an assessment. These can increase ownership of results among target groups. Tools such as workshops, focus groups, and advisory groups can help strengthen participation. (See also Chapter 1, Points on Process.)

*Boundaries between these data types are not absolute. If you gather enough qualitative opinions in a public opinion poll, you may be able to produce quantitative data, e.g., 70 percent of people believe X or 20 percent of people believe Y.

For any assessment, there is usually more than one possible approach. To take a narrow example, say that you wanted to measure the level of corruption in the forest agency. You could:
• Rely largely on secondary data. Look for existing public opinion polls on the subject, court cases, and news reports of corruption.
• Aim for primary quantitative data. Conduct a new public opinion poll on the reputation of the forest agency.
• Aim for primary qualitative data representing broad sampling. Convene focus groups or workshops to score a "citizens' report card."
• Aim for primary qualitative data representing narrow sampling (i.e., anecdotal data). Conduct confidential interviews with people and look for whistleblowers.
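If you did field a new opinion poll, converting its qualitative answers into the quantitative shares that Box 8 mentions is a matter of simple tallying. The sketch below is illustrative only; the response categories and answers are invented.

```python
from collections import Counter

# Invented responses from a hypothetical poll on the forest agency's reputation.
responses = ["corrupt", "honest", "corrupt", "no opinion", "corrupt",
             "honest", "corrupt", "no opinion", "corrupt", "corrupt"]

counts = Counter(responses)
total = len(responses)
for view, n in counts.most_common():
    print(f"{view}: {100 * n / total:.0f}% of respondents")
```

Run on the invented answers above, this reports that 60 percent of respondents called the agency corrupt, the kind of headline figure a quantitative audience looks for.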
PRACTICE TIP
BOX 9: REMEMBER YOUR PARTICIPANTS AND YOUR AUDIENCE—THE HUMAN CAPACITY OF THOSE WHO PROVIDE AND USE YOUR INFORMATION
Human capacity is also a context issue. The capacity of the intended participants and final audience for the assessment should factor into the design of your approach. How much time, knowledge, and skill does this audience have to contribute to the assessment and interpret its results? For example, a written survey may have limited success if respondents are not able to read; a long and detailed report analyzing every technical element of forest governance may likewise have limited impact on a target audience of busy government officials with limited time to read and digest the report. Assessment of the capacity of the target audience will also help in deciding what types of outputs to develop.

Each of these specific methods would deliver some of the results you need, but in reality assessment approaches will draw on more than one method. For example, you might do an initial review of secondary data sources, have experts review those findings and fill in gaps with expert opinion, and then vet the findings of the experts in a stakeholder workshop. A mix of methods is often best to capture a range of viewpoints, provide a diversity of information, and improve the reliability of the findings.

Deciding on which methods to use must also be shaped by who your target audience is, how long you have available, and how much it will cost. For example, some people in the intended audience may prefer quantitative findings on specific points while others may be looking for a broader, more qualitative picture. Remember, too, that some methods have side benefits, such as informing or building capacity among stakeholders, which may serve the assessment's goals; other methods have challenges related to the existing levels of capacity in the country.

These decisions will be a fundamental part of your planning and will affect the design and implementation choices that you make later. New information gained as implementation proceeds may require you to adjust your methods, so be prepared to remain flexible. Nevertheless, careful consideration of key points early in the process will make later changes less likely. Those key points include:

• Is your approach practical given your likely capacities, resources, time frame, and context? Do you have the budget to do a large public survey with face-to-face interviews? Do you have the resources to go into the field and conduct focus groups all over the country in local languages? Do you have the time before the rainy season starts to conduct five regional workshops? Will respondents be interested enough to sit through a long survey and candid enough to give you honest answers?

• Will the approach provide the data that you need to answer basic questions within the scope of your assessment? For example, you may want to rely entirely on secondary data—but if no one has collected data to answer your questions on how the forest agency uses public input in its decision making, you may need to collect the data yourself.
• Will your approach provide data that are convincing to your target audience? You may have agency experts who only value quantitative data. You may have rural residents who you can reach best through anecdotes. You may also need to consider what types of outputs you want to deliver at the end of the process—do you want narrative reports, statistics, short publications for broad public reading, or a mixture of these?3

• Will your approach be acceptable to your target audience? You may have key stakeholders who will only value data that have been collected through an approach they were engaged in developing or implementing. Or you may have a target audience who will only accept data generated by high-level academics or other independent experts.

LOOKING DEEPER
BOX 10: TAILORING YOUR APPROACH TO MATCH OBJECTIVES—MIXING METHODS
A large number of factors will influence the methods chosen for an assessment. The most critical, however, is the ability of these methods to help deliver the goal and desired outcomes of the assessment. The PROFOR Assessing and Monitoring Forest Governance tool (Kishor & Rosenbaum 2012) is a complete forest governance assessment tool, which has been used in a number of countries. Even when using an existing approach, however, there is the opportunity to mix the combination of methods to best suit the goals of an assessment:

• In Uganda the tool was piloted as a way to diagnose problems in governance and promote reform. An expert developed a background paper and customized the tool's indicators for the country. A multi-stakeholder workshop scored the indicators, providing a forum for discussion and a means to increase awareness and acceptance of the assessment. Key stakeholder interviews were then used to vet results and further strengthen understanding of and support for the process. In other words, the methods chosen were focused on building support for the results among key stakeholders, a critical element when the assessment's goal was to foster reform.

• In Russia the assessment sought to diagnose problems and promote reform, but it was considered that acceptance of the results would be highest if they were supported by the opinions of "experts." As such, the assessment used two independent methods to score the same indicators: expert analysis and stakeholder workshops.

The case studies presented in Annex I also present a range of different approaches to achieving their objectives. In Ecuador, Grupo FARO chose to limit the resources invested in data collection, relying on expert analysis from their in-house team and key informant interviews. They invested more, however, in raising awareness of the results of their assessment through workshops and events. This was also appropriate for an assessment that had to be conducted on an annual basis—a "fixed parameter." Conversely, in Tanzania the objective was to develop a comprehensive assessment linked to biophysical information collected through household surveys; the data presented a number of fixed parameters. In response, a combined approach was developed that linked governance information to these planned surveys, with the most relevant information subsequently being supported by key stakeholder interviews. The data this produced was used in a review of the National Forest Programme (NFP) 2001–2010. The revised NFP for 2015–2024 will thus be based on stronger evidence of biophysical, socioeconomic, and governance factors and projections of future trends.
Table 4 provides information on six common methods. Further information on specific methods is included in Chapter 3.

PRACTICE TIP
BOX 11: DO YOU NEED TO BE DIFFERENT? DESIGNING A COMPLETELY NEW APPROACH VS. WORKING WITH EXISTING FOREST GOVERNANCE ASSESSMENT APPROACHES
Forest governance has attracted increasing attention over the past ten to fifteen years, and many groups have spent a long time working out how to measure and monitor its status and changes in it. Many of these approaches have been developed into formal manuals or tools that can be adapted to specific country contexts. This guide refers to several of them, and a more comprehensive list is provided in Annex II. Working with existing approaches may reduce the time needed to develop your approach, help you to gain technical support from institutions or countries that have already used them, and provide an opportunity for comparisons with other countries or areas. This should only be done, of course, if you can find an approach that meets your specific requirements and will deliver the outputs you need.

3. Chapter 6 provides more information on the different types of outputs and the information needed for them. It is advisable to read that chapter prior to finalizing an approach to ensure that you will be able to develop the outputs you want from the approach you choose.

TABLE 4: SIX BASIC DATA-GATHERING METHODS FREQUENTLY USED IN GOVERNANCE ASSESSMENTS
For each method, the table gives a description, strengths, and limitations.

Desk Review
Description: Assessment based on existing and available information (some effort may have to be put into collecting reports and documents from different sources). This provides access to secondary data, which can be both qualitative (e.g., narrative reports) and quantitative (e.g., trade statistics). This often forms part of any assessment, providing a baseline from which all further data collection and discussion is undertaken. Desk reviews are often conducted by an expert who is able to put the information gathered into context and so provide a more useful output than could be achieved by simply reproducing information.
Strengths: Low cost. This method requires limited resources and can be done by an individual or a small team. Limited logistics. The assessment can be done remotely with limited travel required and can cover areas that are difficult to visit. Consolidation of "accepted knowledge." Using official or accepted data and bringing this together can gain traction with stakeholders as they see their own information being used and thus become more interested in the outcomes.
Limitations: Limited accuracy/consistency. In many countries the information easily available may not be up to date or may have limits in terms of accuracy. In relying on others' data sources, you are relying on the quality of their data collection and analysis (which may not be at a standard you want). Lack of new information. While consolidation of available information into one place may be useful and present a clearer picture of the current status of forest governance, it may not capture key underlying issues and may not be accepted by all stakeholders (potentially adding bias to any assessment).
Surveys
Description: The term survey encompasses a range of different tools (e.g., structured and semi-structured questionnaires, field-based observations), all of which can be administered at a range of scales (i.e., large or small numbers of respondents across a large or small geographical area). Structured surveys can collect primary data, which can be a mixture of quantitative and qualitative information. Semi-structured surveys collect primarily qualitative information in a more narrative form.
Strengths: Large volumes of primary data. Surveys can collect large volumes of primary quantitative and qualitative data. Collection of primary data beyond what already exists may strengthen the position of an assessment. Structure. Developing a structured approach to what information to collect and questions to ask will help ensure consistency and improve accuracy of information.
Limitations: Limited understanding of roots of problems/opportunities. Overly structured surveys may limit the opportunity to gain in-depth information on why a problem or issue is occurring (i.e., if that falls outside of the existing question format). Expense. It can be expensive to conduct large-scale surveys when personnel are required to travel and administer the process. Bias. If a survey is not conducted at random and not effectively supported, a bias may appear in the data. This can be affected by who has time to respond, who has capacity (e.g., language/ability to write) to respond, and who has issues they feel strongly about.

Expert Analysis
Description: Use of one or a number of individual experts in forest governance or related areas can provide a basis for developing an assessment. Experts can provide analysis based on their own experience in the sector and carry out some desk-based analysis. They may also be able to add additional depth to information that you collect through other methods. Experts on forest governance may be part of your implementation team or they may support the process through various structures (e.g., advisory groups, steering committees, or expert respondents, using approaches such as the Delphi method4).
Strengths: Depth of analysis. Expert knowledge will help throughout the assessment process in planning and collecting and analyzing information—and experts may be able to identify links between key issues that are not immediately obvious.
Limitations: Bias. Experts may have a specific opinion or area of interest related to forest governance; this may result in excess focus on this area or strong views being expressed to the detriment of other opinions. Legitimacy. A single expert may not have legitimacy with all groups, and an assessment developed by a single expert in isolation from others may not be seen as legitimate by all stakeholders. Linking expert analysis with other methods can address this.
Key Informant Interviews
Description: There will be a number of key individuals who know a lot about the specific areas of forest governance you are interested in. These may be academics, government officials, private sector operators, or local community members—all of whom bring different perspectives. Interviewing these stakeholders will provide information on the sector and where to get further information. Using a structured or semi-structured questionnaire format can help ensure that you gain the information you want from the discussion and that there is consistency across interviews.
Strengths: Depth of opinion. Ability to gain information from key stakeholders with significant knowledge of the sector. Ability to speak freely. Allows informants to speak freely as there are no other stakeholders present and information can be treated confidentially (this will depend on who is conducting the interview). Low cost. Limited logistics and time may be required for this if key stakeholders are based in one place.
Limitations: Bias. Each informant will have a very specific view, which may be highly subjective and based on personal experience as opposed to the broader context. It is also difficult to identify which opinion should be given most prominence in a subsequent compilation of interviews. Replicability. Key informants will change, as will their views, making accurate replication difficult. This can be mitigated by ensuring that the same specific questions are asked each time.

Focus Groups
Description: Focus groups bring together key stakeholders to discuss specific issues. These can be experts or a sample of the specific social groups you are interested in. Focus group discussions provide an opportunity to talk about positions and validate findings from other forms of assessment.
Strengths: Broader perspective. By bringing together a range of people, you gain a broader view (i.e., one that is less specific to an individual). Increased participation. Focus groups can provide a cost-effective way to engage with a larger number of people (versus one-to-one interviews).
Limitations: Harmonized views. A group may have varied experiences of the forest sector, but individuals, particularly the most vulnerable, may find it difficult to speak out in a group setting. This can be addressed to some degree by stratifying your focus groups to include respondents of similar social, economic, and geographic position.

Workshops
Description: Workshops bring together a broad range of stakeholders to share information and discuss key issues. Workshops offer a good opportunity to provide information to a range of stakeholders.
Strengths: Broad participation. Broad participation and the potential for discussion of key issues. Time efficient. By bringing all stakeholders together it may be easier to gain a broad range of viewpoints than through multiple individual interviews. Increased consensus. Workshops may help to deliver a broad consensus on issues.
Limitations: Expensive. Depending on logistics, it can be expensive to bring groups together. Balance of stakeholders. All stakeholders may not be willing to talk openly within a workshop format. Participation vs. information sharing. Workshops require careful planning to promote active participation as opposed to being just a forum for information sharing.

4. See Box 28 in Chapter 3 for further information on the Delphi method.

Step 3: Who Will Do the Assessment?

In any assessment there are a range of different actors who will be engaged in the process. They will have different roles and responsibilities, which are partially defined by the approach and methods you choose but should also be clarified during the planning process to help improve efficiency and avoid confusion.
Three key roles in almost all assessments are the funder, the political/institutional sponsor, and the implementer.

• The Funders are responsible for providing financial support to the process. They may be a development partner, a central government, another institution, or a combination of a number of different groups. They may play a role in defining the overall objectives and scope of the assessment, either directly through discussions with implementers or more indirectly through guidance provided with funding.

• The Political/Institutional Sponsors are responsible for providing their political and institutional support to the process. In many cases this will be a government institution, a well-respected NGO, or an international organization or group of such organizations. In some cases funders and sponsors may be the same. They will likely play a role in defining objectives and scope and may also be engaged in defining the approaches taken.

• The Implementers are responsible for actually conducting the assessment. They may be an office within a government institution, an NGO, a consultant, a community group, or a mixture of these organizations. They will have a primary role in developing the details of how the assessment will be achieved, including the approach, methods, and physical implementation. There should always be, however, a central implementer and a focal person or persons to whom communications can be directed and who takes responsibility for delivering the assessment.

Establishing clear roles and responsibilities for each of these groups will help ensure effective working relationships as well as good external perceptions of the assessment. Assign roles and responsibilities based on the comparative strengths of the different groups, but also consider vested interests, reputations, and potential appearances of undue influence. For example, a logging company may provide some financing for the assessment; it would be inappropriate, however, for that company to have too strong a position in deciding on the assessment's design and approach. Similarly, a government body may have trouble getting honest evaluations from stakeholders who rely on it for permissions and licenses and are anxious not to offend it, and junior government officers may be reluctant to publicly present an assessment that is critical of their superiors.
High levels of authority within the immediate assessment (e.g., building the sector may be good for a sponsor who capacity for future assessments or building can help bring stakeholders together. It may capacity to engage in the forest sector more not be good for an implementer who may effectively). These benefits may outweigh a struggle to gain an unbiased opinion from desire to immediately select organizations stakeholders who are wary of the power the with existing capacity. organization or individual holds. • How legitimate will stakeholders perceive Further detail on how to develop your imple- them to be? The more legitimate all groups menting team and the range of structures that engaged in the process are the more legiti- can be used to strengthen both the team and mate the results will be perceived to be. engagement with other stakeholders is provided • What vested interests do they have? This in Chapters 3 and 4. may also affect their legitimacy with other stakeholders and should be considered Developing a Work Plan 47 LOOKING DEEPER BOX 12: ENGAGING STAKEHOLDERS—THE BENEFITS OF PARTNERSHIPS Assessments can be expensive and require time, logistical, technical, and financial capacity. They also require a level of social, political, and technical legitimacy to ensure that they are accepted at international, national, and local levels. Your organization may be able to bring some of these elements to the table, but it is likely that there will be limitations in some areas (e.g., the already busy schedule of your staff, your organization’s legitimacy or profile with a certain stakeholder group, or your ability to reach different areas of the country). Given these constraints, engaging different stakeholder groups can provide many benefits. As mentioned in Chapter 1, this can occur along a continuum from keeping them informed of the assessment process to fully empowering them to take leadership of it. Engaging stakeholders in a more comprehensive way in decision making and implementation can not only increase the legitimacy of the assessment but also share the burden of resources by sharing operational costs and increasing capacity (e.g., by bringing in new staff and ideas). These benefits must be weighed against the challenges of linking your objectives with partners, completing work within the assessment time frame, and ensuring effective coordination and standardization of methods throughout the process; it can lead, however, to a more comprehensive assessment that is able to draw support, skills, and information from a wider pool. You can strengthen collaborations by using a number of different tools: • A steering committee. This can include highly respected individuals who will provide oversight of the assessment, something that may increase its legitimacy. Although not a governance assessment program, the European Neighbourhood and Partnership Instrument (ENPI) FLEG Program has reported success using a regional operational committee representing donors, participating countries, and implementing agencies to steer the program, and national program advisory committees, representing government and nongovernmental stakeholders in each participating country, to bring in country-level oversight. (ENPI FLEG 2013). • A technical working group. This can include a range of technical specialists to help review and improve methodologies and can bring a range of skills and experience to the table. 
The Indonesia PGA (Annex I) used a multi-stakeholder expert panel composed of government, civil society, academic, and private sector representatives.

• A memorandum of understanding (MOU). A written agreement to share logistical costs, responsibilities, or control between or among government agencies, NGOs, and/or other stakeholders, this may help increase the potential geographical coverage of the assessment. Global Witness (2005) notes that an MOU with the government is especially useful when independent agents are collecting data and need cooperation from authorities.

Step 4: When Will It Be Done, How Often, and for How Long?

Time is a critical and often overlooked element of any planning process and can be particularly important in an assessment that may involve multiple stakeholders, cover a large geographical area, or be conducted on a regular basis. The approach and methods you choose must be practical within the time frame identified during the objective-setting process, and they should take into account contextual factors ranging from the practical (e.g., national holidays) to the environmental (e.g., impassable roads during rainy seasons or winters), to the strategic (e.g., timing the production of the report to coincide with the development of a forest sector strategy or legislation). (See Chapter 1 for further information on these considerations.)

You should also consider the long-term implications of your decisions, including how often the assessment will need to be repeated and what the likelihood is that the same inputs (finance, human capacity, and time) will be available at future points in time. In this way, the potential replicability of inputs as well as actual implementation may influence the approach you choose. For example, if you design an assessment that requires international experts to conduct it, the technical aspects may in theory be easily replicable by recruiting another expert—but the funds available to hire the expert may not be so easy to find on an ongoing basis.

Step 5: How Much Will It Cost?

The cost of an assessment varies significantly based on the approach you use (the geographical and technical scope, the methods, and the levels of stakeholder engagement) and the country in which you are doing it. As such, it is impossible to provide specific and universal guidance on the levels of finance required. For some, the financial resources will be the first point of consideration for an assessment, with many working within a predefined budget. Within this context you will need to adjust your approach to fit this budget and develop a clear outline of options and costs of different activities to help you achieve your goal. For others, it will be a case of developing a proposal for the assessment and trying to gain full or partial funding from different sources. A clear assessment of costs—and options for reducing the budget—will be a useful tool within this context.

Having a budget to cover the entire assessment will also help ensure effective implementation, allowing you to focus on implementation rather than fundraising and ensuring that you can deliver the whole process without delays caused by lack of funds.

LOOKING DEEPER
BOX 13: HOW MUCH WILL IT COST?
Some outline budgets are provided below for significant assessments recently undertaken.
While even the base costs for these are significant, the variety of approaches available provides examples of how to develop, implement, and apply an assessment within almost any budget.

Global Witness's Forest Transparency Report Card: Linking Assessment and Advocacy in Ecuador
Grupo FARO's approach utilized a budget of approximately $100,000 per annum. Half was spent on maintaining a core team, which worked on development of the report card (reviewing secondary data and conducting key stakeholder interviews) and managing a small grants program to provide grants to other organizations taking action on forest transparency (the goal) which could link with their assessment work. The actual assessment work (the report card) represented only a relatively small portion of the budget; running events and supporting other organizations to increase levels of awareness of forest transparency absorbed a higher portion.

Indonesia PGA for REDD+: Building Capacity, Informing Policy, and Setting a Baseline
The assessment was supported by a large number of consultants who conducted field work and facilitated engagement of other stakeholders. The cost of data collection has been estimated at $130,000 over two years, covering both salaries and the logistical costs to travel to different areas within the country.

PROFOR's Diagnostic Tool for Assessing Forest Governance in Uganda: Developing a High-Level Assessment
Uganda used the simplest approach outlined in the PROFOR diagnostic tool, linking expert review with a series of participatory workshops to develop an assessment of existing levels of forest governance. This approach was estimated to cost approximately $35,000–40,000 to cover the fees of experts to support the process and a small number of workshops.

See Annex I for further information on these case studies and the approaches taken.

Step 6: Write the Work Plan

Having considered why you are doing the assessment, the scope of what information you want, and how you will be able to collect it, you can develop a work plan and corresponding budget. Having a work plan will help you in further developing your methods as well as providing a guide for when things should occur. It may also help you explain your work to stakeholders, potential funders, or supporters.

Depending on the complexity of the assessment, it may be appropriate to start this by clearly outlining the key elements of the assessment in a short summary covering:

• Objectives. What will it achieve?
• Timeline. How long will the assessment take?
• Scope. What is the technical, geographic, and social scope?
• Methods and approach. What methods will be used—qualitative, quantitative, participatory, other?
• Groups involved. Who will be involved in funding, sponsoring, and implementing the assessment, and from whom and how will information be sought?
• Cost. Do you have an outline budget that includes basic resources (e.g., vehicles, venues, staff, computers, and so forth)?
• Outputs. What are the tangible outputs that will be produced from the assessment?

The key elements can then be built into a work plan, including assessment planning and development (which you are already involved in), implementation (which will include stakeholder engagement, data collection, and analysis), and the dissemination and application of results. In its simplest form the work plan should lay out what activities will be conducted, when those activities will be conducted, who will conduct those activities, and, through a supporting budget, how much these activities will cost.

The easiest way to approach this process may be to draw an initial timeline (working in weeks or months and identifying the current time and the time by which the assessment needs to be completed). A range of tools can help you develop this plan, including a number of computer packages; a simple linear timeline with a list of activities beneath often provides the most practical approach, as it is easy to amend, discuss, and share with others. Once developed, you can review this plan against your objectives and the assessment of context developed in Chapter 1 to identify if it is likely to achieve the objectives while remaining relevant and practical.
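In the simple linear form just described, a work plan is little more than a dated list of activities with a supporting budget, and it can be kept as plain records. The sketch below is a hypothetical outline; every activity, month, duration, and dollar figure is invented for illustration.

```python
# Hypothetical work plan: (activity, start month, duration in months, lead, cost in USD).
work_plan = [
    ("Stakeholder engagement and context analysis", "Jan", 2, "core team",   5_000),
    ("Desk review of secondary data",               "Feb", 2, "consultant",  8_000),
    ("Key informant interviews, three provinces",   "Apr", 3, "field team", 12_000),
    ("Analysis and draft report",                   "Jul", 2, "core team",   6_000),
    ("Validation workshop and dissemination",       "Sep", 1, "core team",   9_000),
]

# A simple linear timeline with the activities listed beneath it.
for activity, start, months, lead, cost in work_plan:
    print(f"{start}: {activity} ({months} mo, {lead}, ${cost:,})")

# The supporting budget is just the sum of the activity costs.
total = sum(cost for *_, cost in work_plan)
print(f"Total outline budget: ${total:,}")
```

Grouping the cost column by budget area, as the next paragraphs suggest, then makes the trade-offs between activities explicit.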
Once you have identified the list of activities you will be working on, you can then develop a list of key items for budgeting. You must work through the full list of activities when undertaking this, as well as consider what activities may be more expensive than initially anticipated (e.g., the cost of field work or workshops).

Grouping different activities within budget areas may also help clarify the choices that you need to make on how the assessment will be implemented. For example, the cost of developing a website to house information on the assessment may be twice the price of translating the report into a local language and running two provincial workshops, providing you with a choice as to which you think would be more beneficial.

See Annex III for an example of an assessment budget outline and Annex IV for an example outline of an assessment work plan.

PRACTICE TIP
BOX 14: REVIEW YOUR WORK PLAN
Having identified the key elements of your approach, you should review it to decide whether it is possible, realistic, and achieves what you want to achieve. One way of doing this is to consider it from four sequential viewpoints:

• Inputs. Do you have the inputs in terms of financial support, human capacity, and time to achieve what you are planning?
• Process. Is the way you are planning to conduct your assessment appropriate to the context in which you are working and the objectives you want to achieve? Within this you should consider if you are covering the right technical, geographical, and social areas, engaging the right people in the right way, and conducting the assessment at the right time.
• Outputs. What will the outputs you produce look like? How will they be developed from the work that you have done? Will they be relevant to your target audience? Will they help achieve your objectives?
• Outcomes. What will the outcomes of your assessment be? Do the different elements of it contribute to these and can they be improved? Are there any potential negative outcomes that could occur, and how can you mitigate against these?

Consideration of these points will help you review and revise the approach you are taking. It will also present a number of key questions that will require more detailed thinking. Chapter 3 will help in answering some of these, particularly with regard to which methods to select and how to develop and further define these methods.
Points on Process: Communicating and Managing the Process

As you begin to define the plan for your assessment, it will be increasingly important to ensure that you communicate clearly with other groups and manage the development process effectively. A well-managed, transparent process will help ensure that all groups understand what the assessment is about, who is engaged, why, and what to expect at the end. They may agree or disagree with these points, but as long as they are aware of them and their justifications, any challenges to the assessment can be dealt with effectively. Failure to provide this information may lead to some stakeholders trying to discredit the assessment as biased or unrepresentative, or lead to excessive expectations of what the outputs will deliver. As such, it is important to keep a number of guidelines in mind during the development process:

• Strive to be transparent and inclusive. Build trust within your effort by being trusting and trustworthy.
• Manage expectations. People participating should find no surprises about their own roles and responsibilities; they should also have reasonable expectations about what the assessment can achieve. In addition, they should understand the practical constraints of the assessment.
• Develop a shared, clear statement of the assessment's objectives, including a statement of the perceived problem or need that the assessment will address, anticipated outputs, and ways of achieving these. It may be relevant to get each stakeholder to first do this independently—and then share these as a step toward developing a smaller number of shared objectives.
• Get people to think through and explain how they see the assessment helping to resolve the problem or fill the need.
• Get people to think through and explain who will need to use the results, who should be influenced by the results, and what that implies for the objectives and planning of the assessment.
• Encourage people to view the problem from varied perspectives. Will the assessment advance the interests of a large set of stakeholders? Are we overlooking the interests of hard-to-represent groups? For example, are we considering the interests of youth, women, landless people, and indigenous peoples?
• Use simple language and avoid terms unfamiliar to your stakeholders. Technical language and jargon should be avoided. Try, when possible, to explain things in the local language.

CHAPTER 3: PLANNING FOR DATA COLLECTION

In Chapter 1 you identified clear objectives for your assessment; Chapter 2 helped you to identify your approach. This chapter will help you to focus on exactly what elements of governance you want to look at, where you can find information, how you can access that information through different methods, and how those methods can be refined to help you get the information you want in an effective way. Whether you are planning to use an existing tool or approach or are planning a new one, this chapter will help you think about your proposed data collection methods, make sure they meet your objectives, and consider whether you need to fine-tune or adjust them. It provides up to five steps for refining your methods; for some assessments, some of the steps are quite simple.
STEP 1: DECIDE WHAT ASPECTS OF GOVERNANCE TO ASSESS. To guide your data collection, you need to add detail to your description of scope.

STEP 2: IDENTIFY POTENTIAL SOURCES OF INFORMATION. You need to understand where you can find the information you want.

STEP 3: SELECT DATA COLLECTION METHODS. You must decide how you will tap your information sources.

STEP 4: DEVELOP TOOLS FOR EACH METHOD. You need to work out how you will apply your methods. In the process, you may develop protocols for interviews, questionnaires for surveys, sampling plans, and so forth.

STEP 5: FINALIZE YOUR WORK PLAN AND DEVELOP A DATA COLLECTION MANUAL. You can now fill in details to your assessment work plan and, if necessary, write instructions for the people who will collect the data.

POINTS ON PROCESS: VETTING THE METHODS. Like many other parts of the assessment process, defining the method can benefit from drawing on knowledge and values outside of the assessment team. This may mean vetting the choice of methods with outside experts or stakeholders, or small-scale testing of a method followed, if needed, by revisions.

Step 1: Decide What Aspects of Governance to Assess

At this point in the planning process, you need a detailed statement of what to measure. A general statement, such as "this assessment will measure the state of forest governance at the national level," is too broad and abstract. An assessment guided only by this statement would be difficult to repeat with consistency and is unlikely to deliver meaningful results.

Chapter 2 introduced the idea of using existing forest governance frameworks to help you identify the technical scope (the elements of forest governance) of your assessment. You should now refine this further and consider exactly what the constituent parts of these elements are and which parts you (and your target audience) are most interested in. If you are interested in looking at transparency, for example, what are its key elements? Do you want to look at existing legislation (what level of transparency is required by law), its implementation (what really happens in practice), what procedures exist to address failings in transparency, or a combination of all of these?

To help you review this, it may be useful to bring a group of stakeholders together to discuss the key elements of governance you want to assess (you can also link this with starting to identify how you will assess it).

One way to work through this process is to consider the following steps, labeled (a) through (e):

a) Recall the overall scope and objectives. Chapters 1 and 2 of the guide covered setting of scope and objectives, and you should not lose sight of them while thinking about the detail. Always consider whether the elements you are discussing will be of importance to the quality of your assessment and of interest to your target audience.

b) Look at existing forest governance frameworks. Existing frameworks provide a basis around which you can frame your discussion and can save you getting lost in long technical arguments about forest governance itself. Even if you are planning on using an existing assessment framework, you should still review it to ensure that it covers the points you are interested in and does not use a lot of resources covering elements you are not interested in.

BOX 15: BE CLEAR ON WHAT YOU ARE ASSESSING (PRACTICE TIP)

Being clear on what you are assessing will be critical to all elements of your assessment design and implementation. It will help you communicate what you are doing, why you are doing it, and how. Ensuring there is a shared understanding among your team will also help ensure that data is collected accurately and effectively, with all members working to gain the specific information you need. Confusion on this matter can result in significant effort being put into collecting information that does not address the issues you were interested in. So spend time planning, and revisit these plans as you refine the assessment, to make sure you are going in the right direction and that everybody knows which direction that is!

c) Decide how detailed you want to be. In looking over approaches in other sources, you will see variation in how precisely the descriptions are set out. Some approaches list over 100 aspects of governance to be evaluated or measured, while others describe fewer than a dozen. Key areas in which decisions will need to be made are the same as those considered for the general scope of the assessment and include:

a. The level of technical detail—in how much detail you want to assess the element of governance you are interested in.
b. The geographical detail—what areas will be included and whether you are interested in showing differences geographically (and, if so, at what scales).
c. The social scope—whether you are interested in assessing the different experiences of governance between different social groups (e.g., genders, economic groups, ethnic groups, and so forth).

These considerations will be affected not only by your objectives but also by such practical considerations as time, resources, and capacity. In addition, if the decision on a detailed description is up to a group, the group may find it easier to agree on a few broad statements than on a large number of narrow statements.

d) Decide how you will specify what you want to measure. Will you use a narrative description or indicators? This decision will influence the remaining steps of your assessment, from design of data collection, to analysis, to reporting.

BOX 16: EXAMPLE OF NARRATIVE DESCRIPTION OF WHAT TO ADDRESS (LOOKING DEEPER)

The following is an example of a narrative description of what to assess regarding forest tenure, based directly on the components of the PROFOR-FAO Framework. The assessment will evaluate:

• The extent to which the legal framework recognizes and protects forest-related property rights, including rights to carbon.
• The extent to which the legal framework recognizes customary and traditional rights of indigenous peoples, local communities, and traditional forest users.
• The consistency between formal and informal rights to forest resources.
• The extent to which the legal framework provides an effective, due process means of resolving disputes.
• The comprehensiveness and accuracy of documentation and accessibility of information related to forest tenure and rights.
• The existence and effectiveness of implementation of processes and mechanisms for resolving disputes and conflicts over tenure and rights.
• The effectiveness of compensation mechanisms when rights are taken away.
• The adequacy of measures and mechanisms to ensure the tenure security of forest owners and rights holders.

A narrative states what you are going to measure in sentences and paragraphs. Good ones draw on an existing model or framework of governance to provide organization and detail. Box 16 gives an example of a narrative description of what an assessment will evaluate concerning governance of forest tenure, based on the PROFOR-FAO Framework (PROFOR & FAO 2011). For another example, see the Sustainable Conservation Approaches in Priority Ecosystems (SCAPES) tool (USAID 2013), which bases a narrative description on a model that sees governance in terms of legitimacy, capacity, and power.

If you choose a narrative description, its underlying framework will point toward what data you need to collect, how to make sense of the data, and how to describe what the data tell you.

Assessments using indicators go one step further. First they develop a list or description of what the assessment is interested in evaluating and then they set these out in a structured format. The PROFOR-FAO Framework (PROFOR & FAO 2011) sets out a list of "components" and "subcomponents." The WRI GFI Framework (Davis et al. 2013) calls these "themes" and "subthemes." In other references you may find these called "criteria."

Assessments using indicators next develop specific measurable indicators (an indicator set) that will shed light on the components and subcomponents. An indicator is simply "a quantitative, qualitative, or descriptive attribute that, if measured or monitored periodically, could indicate the direction of change in a governance subcomponent" (PROFOR & FAO 2011, p. 31). Each subcomponent can have one or more indicators.
Box 17 has examples of indicators for governance of forest tenure, either taken from existing tools or based on existing frameworks; further information on developing indicators is provided in Annex V.

If you choose to adopt an indicator approach, the data collection tools will have to be built around your indicators. The first step in the analysis will be to score the indicators, and the report will have to include those scores and present them in an understandable way.

Recent practice seems to favor the use of indicator sets. Indicator sets make data collection planning easier because they give you definite questions to answer. In addition, the strong structure of indicator sets makes assessments easier to repeat with consistency. But indicator sets can also have shortcomings. Poorly designed indicators, for example, can be too focused to give you a complete understanding of what you want to evaluate. (See the discussion of indicator design in Annex V for other possible weaknesses.)

Compared to indicators, a narrative description will give you less direction to shape your data collection but more flexibility to inquire into problems; a narrative based on a clear model of governance, meanwhile, can provide structure for analysis. In other words, both narratives and indicator sets can lead to good assessments—and both have limitations in the way they shape assessments, so both options should be carefully considered.

e) Set out the description in writing. Guided by your work in Steps (a) through (d), describe what you want your assessment to address as a set of indicators or as a narrative of the things to measure. This description will be critical in undertaking the subsequent steps in this chapter, as it will provide you with a clear understanding of what information you want to gain and will shape the data-gathering process that you are designing.

BOX 17: THREE EXAMPLES OF INDICATORS (LOOKING DEEPER)

Here are three examples of indicators. Each deals with an aspect of the governance of forest tenure.

The WRI GFI Indicator Framework (Davis et al. 2013) has nine indicators under the heading of "Forest Ownership and Use Rights"; each indicator has four to six "elements of quality" that can be evaluated as being present or absent. The first indicator is "To what extent does the legal framework recognize a broad spectrum of existing forest tenure rights and rights-holders?" Its elements are:

• Individual rights. The forest tenure rights held by individuals and households are recognized in the legal framework.
• Communal rights. The forest tenure rights collectively held by local communities and other relevant groups are recognized in the legal framework.
• Traditional rights. The forest tenure rights traditionally held by indigenous peoples and other groups with customary tenure systems are recognized in the legal framework.
• Rights of women. The legal framework does not discriminate against the forest tenure rights of women.

The GFI manual suggests that the elements be scored as present or absent, and the indicators be scored on a scale of one to ten based on the element scores.

The FAO Voluntary Guidelines on the Responsible Governance of Tenure (2012) do not have indicators, but they do have 25 principles (each of which has sub-principles and, sometimes, sub-sub-principles). Many of these are normative: they begin "States should" and could be the basis of indicators. For example, sub-principle 3.1 says that States should:

• Recognize and respect all legitimate tenure holder rights.
• Safeguard legitimate rights.
• Promote enjoyment of legitimate rights.
• Provide access to justice to deal with infringements.
• Prevent tenure disputes, violent conflicts, and corruption.

(The actual guidelines give more detail explaining each of these points.) The assessment could gather data to score each normative sub-principle and sub-sub-principle on a one-to-five scale, and then rate overall conformance with each principle as red (poor), yellow (fair), or green (good) based on these scores.

The PROFOR tool (Kishor and Rosenbaum 2012) has 15 indicators based on tenure-related subcomponents from the PROFOR-FAO Framework. One of them is:

Do forest-dependent communities have secure access to the resources that they depend on?

Rationale: It is a basic human right for forest-dependent communities to have secure and equitable access to the forest resources on which they depend for their livelihoods. Their rights should not be arbitrarily changed or taken away.

Possible responses:
a) All forest-dependent communities have secure access to necessary forest resources.
b) Most forest-dependent communities have secure access to necessary forest resources.
c) Some forest-dependent communities have secure access to necessary forest resources.
d) No forest-dependent communities have secure access to necessary forest resources.

To score this indicator, the assessment needs to choose one of the possible responses.

Note that despite their format, the PROFOR indicators are not intended as survey or interview questions. Like all indicators, they pose questions for the assessment to answer by gathering data. Step 4 of this chapter has more to say about designing good survey and interview questions.
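To make the GFI-style roll-up in Box 17 concrete, here is a minimal sketch in Python. The rule for converting element scores into a one-to-ten indicator score is a simplifying assumption for illustration; consult the GFI manual for its actual scoring rules.

```python
# Minimal sketch of indicator scoring in the style of the WRI GFI
# framework: elements judged present/absent, rolled up to a 1-10
# indicator score. The proportional roll-up rule is an illustrative
# assumption, not the GFI manual's actual rule.

elements = {
    "individual rights recognized": True,
    "communal rights recognized": True,
    "traditional rights recognized": False,
    "rights of women protected": True,
}

def indicator_score(element_scores):
    """Scale the share of elements present onto a 1-10 scale."""
    present = sum(element_scores.values())
    share = present / len(element_scores)
    return round(1 + 9 * share)

print(indicator_score(elements))  # 3 of 4 elements present -> 8
```

Writing the roll-up rule down explicitly, in whatever form, is what makes an indicator-based assessment repeatable by a later team.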
Step 2: Identify Potential Sources of Information

Once you have a sufficiently detailed description of your measurement aims (what you want to assess), you need to identify where you can gain access to this information (this step), what methods to use to gain access to it (Steps 3 and 4), and how you will prepare to use them (Step 5).

Movement through these steps will need to be iterative—you will go back and forth between steps, adjusting to ensure that you are best able to collect data from different sources within your budget and time frame. During this process your selected methods may change as you become aware of new data sources, review the time and effort that will be required to gain access to effective information from other sources, and discuss practical considerations (such as availability of experts and budget).

BOX 18: SETTING THE FOUNDATIONS (PRACTICE TIP)

Chapter 2 provides information on the first steps toward identifying data sources and methods. It covers some key methods as well as the different types of data (including primary, secondary, qualitative, and quantitative) and the methods used to collect these different types of data. If these terms are new to you, you may find it helpful to review Chapter 2, Step 2, prior to working through the steps in Chapter 3.

BOX 19: USING EXISTING METHODS AND TOOLS (PRACTICE TIP)

Some assessments reach this step having already made basic decisions about sources, methods, and tools—or are required to use a specific tool. Nonetheless, reviewing exactly what information you want to collect (Step 1), where you can get it (Step 2), and how you will get it (Steps 3 and 4) will help you get the most out of your chosen approach and, if necessary, allow you to customise it to make it more relevant to your situation.

BOX 20: INPUTS, PROCESS, OUTPUTS, AND OUTCOMES (PRACTICE TIP)

Some sources will shed direct light on the questions that you are trying to answer. More often, however, you will find sources that contain indirect measures of things that cannot be measured directly. For example, forest law enforcement is difficult to measure directly, but you may find information about the inputs to law enforcement, such as the number of enforcement officers and the size of enforcement budgets; about the process of enforcement, describing how patrols are structured or how suspects of forest crimes are prosecuted; about the outputs of enforcement, such as statistics on arrests made, cases brought, or offenders sentenced; and perhaps about the outcomes of enforcement, such as trends in deforestation from illegal logging. Gathering all this information will allow you to develop a compelling description of forest law enforcement as well as, potentially, to identify exactly where challenges may exist.
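One practical way to keep such indirect evidence organized is to tag each piece of data with the Box 20 category it belongs to, so that gaps show up early. The sketch below is a minimal illustration of that bookkeeping; the example entries are hypothetical.

```python
# Minimal sketch: tagging indirect evidence about forest law
# enforcement with the Box 20 categories (inputs, process,
# outputs, outcomes). The example entries are hypothetical.

from collections import defaultdict

evidence = [
    ("inputs",   "number of enforcement officers", "agency staff list"),
    ("inputs",   "annual enforcement budget",      "published budget"),
    ("outputs",  "arrests for forest offenses",    "court statistics"),
    ("outcomes", "deforestation trend",            "remote sensing study"),
]

by_category = defaultdict(list)
for category, measure, source in evidence:
    by_category[category].append((measure, source))

# A category with no entries flags a gap in the data collection plan
# (here, nothing yet describes the enforcement *process*).
for category in ("inputs", "process", "outputs", "outcomes"):
    print(category, by_category.get(category, "no sources identified yet"))
```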
Assessments generally have used three broad classes of sources: written materials, people, and physical evidence.

• Written materials usually provide secondary data and are at the heart of desk reviews. They are sometimes used to help with other methods (e.g., in framing questions for interviews or selecting samples to be surveyed).
• People provide primary data and are the main source of information for the most widely used assessment methods: expert consultations, key informant interviews, focus group discussions, workshops, and surveys.
• Physical evidence is used less frequently in governance assessments, but relates to information on the physical environment that is affected by governance.

Further information on each of these is provided in the subsequent pages. When looking for information sources, remember that governance is intangible, and you often have to measure it indirectly. Box 20 has more on the kinds of indirect information that assessments have used.

Written Materials

Written materials can provide a vital source of data. Some sources, such as government statistical offices, may have spent substantial funds compiling data regularly over long time periods. Other sources, such as official publishers of laws or government records, provide information that cannot be found in any other place.

Usually analyzed through desk reviews, written data sources can provide a low-cost way to gain significant information, either as background to an assessment or as an assessment in their own right. Useful data for your assessment may be found in a range of written sources:

BOX 21: SEARCHING FOR DATA (PRACTICE TIP)

Information relevant to governance comes in many different forms. Do not limit your search to assessments focused on governance. General assessments of the forest sector often include evaluations of governance components (such as policy or public expenditures). In addition, look for governance assessments in related fields to see how they obtained data. The approaches that they used may suggest ways to find forest sector information.

Look in the easiest places first. Reports and other data may be very easily accessible at government or NGO offices or online, even if they are not presented in an ideal format. Searching in these places first will be cheaper than developing approaches to collecting primary data and will also show government and NGO staff that their information and systems are being used. Government websites, in particular, can contain a wealth of data (including reports, statistics, and organisational information) and can be a valuable starting point for data collection. (This was a starting point for research in the Grupo FARO case in Ecuador discussed in Annex I.)

• Recent assessments by others. Although not always available, these may have direct answers to questions that you are seeking to answer or contain findings that you can compare with your own to show changes in governance. Good assessments will explain how the authors arrived at their findings, which will point you to other sources of information.
• Government forest inventories, censuses, and other compilations of statistics. This is a varied group of information sources with many possible uses. If you have a non-specific, outcome-oriented indicator, like the rate of deforestation, you may be able to score it directly from these sources. Similarly, you may be able to find useful statistics on forest law enforcement (arrests, prosecutions, and convictions). Sometimes you can analyze government statistics in creative ways (see Box 22). Finally, these sources can provide data that facilitates your use of other sources and tools; for example, census data can help you design sampling plans for surveys.
• Published laws and policies. Sometimes these will yield a direct answer to one of your information needs. For example, one of the PROFOR tool's indicators asks whether the country has committed its forest policy to writing. Find a written forest policy and you have answered that question. More often, these will provide the basis of expert opinions on the adequacy of laws and policies as written, or the beginning of an inquiry into whether laws and policies are being implemented fully.
• Gray literature. Government offices generate much information that is never officially published. This may include licensing records, arrest records, internal evaluations, progress reports, and so forth. These are rarely indexed; people familiar with the internal workings of an agency, however, may be able to point you to rich sources of data buried in print and computer files.
• Statistics compiled outside of government by NGOs, international development partners, public opinion firms, and others. These have many of the same uses as government statistics. In particular, assessments have used measures of public opinion—corruption reputation polls, citizen report cards, and the like—as sources of information about integrity and public trust in government agencies.
• Budgets, organizational diagrams, staff lists, and other agency documents. These are often used the same way as published laws and policies: as the basis for expert evaluation. An expert can give an opinion as to whether budgets and staffing are adequate and properly allocated. An expert can also use these as the beginning of an inquiry to see whether budgets are followed, whether officials are actually tending to the duties in their job descriptions, and so forth. The Liberia case (see Annex I) analyzed concession contracts to determine their level of compliance with laws.
• Media reports. These can be a rich source of anecdotes and illustrative examples. Beyond this, you can use techniques like content analysis (see Box 26 and Annex II for sources describing the techniques of content analysis) to draw rigorous inferences. For example, Chatham House did a content analysis of media reports to score coverage of illegal logging (Lawson and MacFaul 2010).
• Academic studies of forestry or government. These can be as valuable as recent assessments. They can also be a good source of information about the history and context of the forest sector.

BOX 22: USING DATA IN DIFFERENT WAYS (LOOKING DEEPER)

With imagination and insight, assessments have used statistics collected for other purposes to throw light on governance. One approach has been to compare official harvest or trade figures with measures of consumption or demand to understand whether forest commodities are moving through lawful channels. For example, the Chatham House assessment of illegal logging (Lawson and MacFaul 2010) compared data on legal harvests with data on forest product use and demand to produce estimates of illegal logging. Similarly, an analysis of governance of charcoal production in Tanzania compared revenue collection for charcoal licenses with census data on household spending for charcoal to show that most of the charcoal trade escaped government regulation (World Bank 2010).

Sometimes changes in production or trade statistics over time demonstrate the impacts of changes in governance. A recent market analysis in Uzbekistan compared international trade figures on wild-grown liquorice roots with the dates of changes in law and policy to show how governance was influencing collection and trade (FAO report forthcoming).
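To make the Box 22 supply-versus-demand comparison concrete, here is a minimal sketch of the arithmetic. All figures are invented for illustration; the real studies used far more careful estimation and handled trade flows, stocks, and uncertainty.

```python
# Minimal sketch of the Box 22 comparison: official (legal) supply
# versus estimated consumption. All figures are hypothetical; real
# studies adjust for stocks, cross-border trade, and survey error.

legal_production = 120_000       # e.g., tonnes licensed per year
households = 800_000             # households in the study area
consumption_per_household = 0.5  # estimated tonnes per household per year

estimated_demand = households * consumption_per_household
unaccounted = estimated_demand - legal_production
share_outside_legal_channels = unaccounted / estimated_demand

print(f"Estimated demand: {estimated_demand:,.0f} t")
print(f"Unaccounted for:  {unaccounted:,.0f} t "
      f"({share_outside_legal_channels:.0%} of demand)")
# With these invented numbers, 70% of consumption has no lawful source,
# which is the kind of gap the Tanzania charcoal study highlighted.
```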
BOX 23: EXAMPLES OF FINDING DATA IN WRITTEN MATERIALS (LOOKING DEEPER)

Suppose that you were looking for secondary data on governance of forest tenure, using indicators like those in Box 17. What written materials would you consider using?

• Published laws might be a rich source about how the tenure system appears on paper.
• Media reports might give you anecdotal evidence of strong conflicts over forest tenure.
• Academic literature, if you are lucky, might contain studies on tenure disputes.
• Statistical data from the courts or enforcement officials might provide information on the frequency of lawsuits or crimes tied to forest tenure conflicts.
• Gray literature (internal records) might give you information on the adequacy and accuracy of land tenure records.
• Government websites and organizational diagrams might suggest where you could find people to tap for primary data.
• If you are fortunate, a prior assessment will have information on forest tenure.

People

People provide an extremely rich source of information. Stakeholders in government, academia, business, NGOs, and civil society organizations hold knowledge that you can tap through the use of experts, key stakeholder interviews, focus group discussions, workshops, and surveys, as well as some of the minor methods described in Box 25 in Step 3 of this chapter.

A general point to remember when seeking out people as information sources is that people have biases. Assessments must try to balance, or at least disclose, the potential biases of their sources, where they are known or anticipated.

• Government. Much knowledge about governance resides in the brains of people working for the government. Many assessments have tapped government officials as experts and key information resources, as well as for focus groups and workshops. Finding government employees and officials is usually easy, especially if the government is cooperating with the assessment. However, perceptions from within government can be one-sided and often need to be balanced with the views of those outside government.
• Academia. Universities, institutes, and laboratories are often good sources of experts. Assessments have also included academics in group processes like workshops. If an academic is widely respected and seen as relatively neutral, he or she may make a good facilitator for workshops bringing together people of diverse opinions.
• Business. Businesspeople have distinct perceptions and knowledge and can be valuable sources of information. For example, the recent World Bank assessment in Russia (Kuzmichev et al. 2012) used heads of forest enterprises and business associations among the experts it asked to score indicators. Some assessments have had trouble engaging businesspeople in lengthy processes (e.g., multi-day workshops), unless the businessperson was acting as a paid consultant or had a clear interest in the outcome of the process. It may be easier to get businesspeople to participate in short activities, such as surveys or stakeholder interviews.
• NGOs and other civil society organizations. People in these organizations can be quite knowledgeable and willing to cooperate in roles ranging from expert to survey participant. An assessment's stakeholder map, done early in planning, will point to valuable organizations. You can often find key people within these organizations through networking (for example, by drawing on the connections of the assessment's advisory group or by asking one key informant to suggest others to contact). As with other stakeholder groups, the views of these groups may reflect the standpoint of their organization and so should be balanced by the views of other stakeholder groups.
• Development and donor agencies. People in development agencies may prefer not to express opinions or score indicators directly; they can be useful sources of background information, however, and can help in vetting and validating information. They can also direct assessments toward key documents and knowledgeable people.
• Other stakeholders. Some assessments may want to gain an understanding of forest governance from a broad cross-section of stakeholders, including those not in prominent positions (see, for example, the participatory governance assessment in Indonesia discussed in Annex I). Accessing knowledge from a broader group of people will require you to consider technical issues, such as how to sample a group effectively (you could not, for example, ask every person living in a rural area their views). Designing an approach to sampling is covered briefly in Chapter 2 and in more detail in Box 33.

Physical Evidence

Extensive use of physical evidence is uncommon in governance assessments. Governance is abstract, and you cannot weigh it or measure its physical dimensions. However, some of the outputs and outcomes of governance (see Box 20) are concrete and can provide compelling information to frame discussions on governance, or evidence of challenges or successes within a system. For example, information on existing levels of deforestation or forest degradation can attract considerable attention and can be compared against governance elements. At a more specific level, an assessment could make field visits to compare actual forest conditions with those set out in management plans, or to inspect the quality of forest surveys and boundary markings. In theory, it could even conduct random inspections of transport and processing activities.

BOX 24: EXAMPLES OF FINDING PEOPLE WHO CAN PROVIDE DATA (LOOKING DEEPER)

Suppose you were looking for primary data on governance of forest tenure using indicators like those in Box 17. What people could serve as sources?

A wide variety of stakeholders will have knowledge of forest tenure governance. Local users of forest products, based on their life experience, will be able to tell you whether the formal system of forest tenure allows them reliable access to resources, is consistent with informal tenure systems, is leading to unresolved conflicts, and so forth. You could gather this kind of information through focus groups, workshops, and surveys.

Several kinds of people may have detailed knowledge based on training or work experience. Lawyers could comment on the laws as written and as implemented. Government land managers may have data on how well property boundaries are located or how reliable government property records are. Private land owners could talk about the security of land rights. An NGO official might be able to give you a broader picture of the concerns of rural or indigenous peoples than you could get from speaking with a few of these people individually. You might also find an academic who has been studying tenure issues, or an official at a donor agency who has been tracking tenure problems. You could gather data from these people by retaining them as experts, through key stakeholder interviews, and through focus groups, workshops, or surveys.
One difficulty in using physical evidence is the cost of doing so on a scale that yields a full picture of governance issues. Inspecting a significant number of licensed forest operations, for example, would take time, would require staff with good technical backgrounds, and would end up being costly. Remote sensing, however, can cover large areas quickly at a reasonable cost, which might make it a useful source of physical evidence of forest activity.

Step 3: Select Data Collection Methods

Your task in this step is to identify the general methods that you will use to obtain your data. Table 5 provides an overview of the six most common methods; you can find more information on their strengths and limitations in Chapter 2, Step 2. Besides these six major methods, Box 25 details some less common methods of information gathering.

TABLE 5: COMMON METHODS OF DATA COLLECTION

Desk Reviews: Researching existing and available information (some effort may have to be put into collecting reports and documents from different sources).

Surveys: Asking questions and analyzing some or all of the responses statistically. The term "survey" encompasses a range of tools, including structured and semi-structured questionnaires and field-based observations, which can be administered at a range of scales (i.e., large or small numbers of respondents across a large or small geographical area) and which can collect quantitative or qualitative information.

Expert Analysis: Using one or a number of individual experts to provide analysis based on their own knowledge, research, or experience of the sector. Experts on forest governance may form key elements of your implementation team or can support the process through advisory groups, steering committees, or paid or unpaid consultations.

Key Informant Interviews: Interviewing key individuals who know about the specific areas of forest governance you are interested in. These interviews provide information on the sector and often point you to further information. Using a structured or semi-structured interview protocol can help ensure you gain the information you want from the discussion and that there is consistency across interviews.

Focus Groups: Bringing together selected stakeholders to discuss specific issues in a form of group interview. The stakeholders can be experts or a sample of the specific social groups you are interested in. Focus group discussions often provide an opportunity to talk about positions and to validate findings from other forms of assessment.

Workshops: Bringing together a broad range of stakeholders to share information and discuss key issues. Workshops tend to be longer events than focus group discussions and feature more complex agendas. Workshops offer a good opportunity to provide information to, and to get information from, a range of stakeholders.
BOX 25: LESS FREQUENTLY USED DATA COLLECTION METHODS (LOOKING DEEPER)

Testing. Engage in a governance process to evaluate its existence, effectiveness, or efficiency. Examples:

• Apply for licenses to collect firewood at several forest offices in order to evaluate the efficiency and fairness of licensing procedures, noting the time it takes to process applications and the adherence to legal standards in granting licenses.
• Ask for copies of government forestry documents to evaluate the implementation of transparency provisions; assess their availability, the time it takes to get the documents, and the government's adherence to legal standards in deciding whether to release the documents.

Observation. Watch a process in action to evaluate its existence, effectiveness, or efficiency. Examples:

• Attend a public hearing or citizen workshop to assess how well it fosters stakeholder engagement.
• Observe trials of people accused of forest offenses to determine whether the process and outcomes are fair.
• Observe auctions of forest concessions to determine whether the proper rules are followed.

Field visits. These are like observations (and can be combined with them), but they aim at finding evidence of what has already happened. Go into the field to determine how actual conditions compare with conditions as described on paper. Examples:

• Visit sites of past forest harvests or other management activities to determine whether actions were in compliance with management plans. (As noted in Step 2 of this chapter, such "field visits" might be done remotely through satellite imagery.)
• Visit sites subject to government reports or evaluations to verify that the reports and evaluations are accurate.

Story collection. Ask a large number of stakeholders to tell brief stories about their experiences with particular agencies or programs. Use content analysis (see Box 26) to find patterns and common threads. Story collection is something like a survey and something like a series of key stakeholder interviews, but the interaction with subjects is less structured. The structure comes instead from the coding and analysis of the collected stories. The charitable organization GlobalGiving uses this technique to evaluate the impact of development projects (see http://www.globalgiving.org/story-walk-through-4-vap/, visited 24 January 2014). Examples:

• Ask local forest users to tell stories about their interactions with forest officers. Evaluate whether the stories tend to be positive or negative in overall tone, whether the officers and forest agency are viewed favorably or unfavorably, how frequently bribes or corruption are mentioned, and so forth.
• Invite local forest users to tell stories about conflicts that have involved the forest. Code the stories to identify the issue at the heart of the conflict (e.g., land ownership, right to participate in management planning, right to enter land or graze livestock), the parties in conflict, the persistence of the conflict, whether the conflict hampers sustainable management of the forest, whether a third party was asked to resolve the conflict, and so forth.
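As a minimal illustration of how coded stories like those in Box 25 can be tallied, here is a sketch in Python. The codes and story records are hypothetical.

```python
# Minimal sketch: tallying coded stories from a story-collection
# exercise. The codes and records are hypothetical examples.

from collections import Counter

# Each story has been coded for overall tone and whether bribes came up.
coded_stories = [
    {"tone": "positive", "mentions_bribe": False},
    {"tone": "negative", "mentions_bribe": True},
    {"tone": "negative", "mentions_bribe": True},
    {"tone": "neutral",  "mentions_bribe": False},
]

tone_counts = Counter(s["tone"] for s in coded_stories)
bribe_rate = sum(s["mentions_bribe"] for s in coded_stories) / len(coded_stories)

print(tone_counts)  # e.g., Counter({'negative': 2, 'positive': 1, 'neutral': 1})
print(f"bribes mentioned in {bribe_rate:.0%} of stories")
```

The analytical work lies in the coding itself; once stories are coded consistently, the summary step is as simple as this.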
The discussion below provides examples of how assessments have used particular methods to meet particular needs. It considers four broad data needs that assessments often face: getting background information, getting information on governance as it is designed to operate, getting information on decision-making processes, and getting information on governance as it works in practice. In deciding what methods to use, you should consider the cost of implementing them, the time it will take to conduct the data collection, the capacity you have to implement the method, the persuasive value of the results, and how well the method serves the objectives of your assessment.

Finding Information on Background and History

All assessment reports need to include some background and history of governance issues. Without these, the reader cannot understand the need for the assessment or the options for dealing with any problems the assessment discovers. This is usually a small part of the data-gathering task of an assessment and is sometimes overlooked—but it is essential.

The background information does not need to be detailed, and it does not need to bring new facts to light. Because of this, the two most common sources of background information are desk reviews and experts. In Uganda, the PROFOR pilot test (see Annex I) hired an expert to write a background report, drawing on the expert's own knowledge and on government reports and statistics. In Indonesia, the PGA report drew on the studies of academics, NGOs, and development partners (Situmorang et al. 2013). The report of the 2012 governance assessment in Russia (Kuzmichev et al. 2012) cites few sources but was clearly written by experts with working knowledge of Russian forests.

Finding Information on Governance Design

This information corresponds to Pillar 1 of the PROFOR-FAO Framework (see Figure 2 and PROFOR & FAO 2011, pp. 14–15). It covers what governance looks like "on paper"; for this reason, much of the information can be found through desk reviews of laws, policies, and organizational plans. However, almost any tool can shed some light on governance design.

Desk reviews and experts are natural sources of information. The Indonesia PGA, for example, relied on document reviews to score the "law and policy" components of its indicators. Assessments might turn some parts of a desk review, such as a review of the quality of the forest law, over to an expert in the area, in this case someone with expertise in law. Experts and key stakeholders are also useful indirect sources of information who can often suggest where to find the desired documents to review.

Assessments have also used focus group discussions and workshops to develop data on governance design. Some assessment tools (e.g., those developed by PROFOR (Kishor & Rosenbaum 2012) and USAID (USAID 2013)) get almost all their information from group processes, including information on governance design. Indeed, some information on design will be subjective—for example, whether the distribution of forest access under the law or policy is equitable—and the best way to get such perception-based information is through contact with stakeholders. That suggests interviews, focus group discussions, workshops, and/or surveys.
Finding Information on Planning and Decision-making Processes

This category corresponds to the second pillar of the PROFOR-FAO Framework (PROFOR & FAO 2011, pp. 15–16). Information here concerns the extent and impact of public participation, the transparency of decision-making processes, and the roles of stakeholders and the media generally.

If these decision-making processes leave a paper trail, desk review of that trail may provide useful data. The more common approach among assessments, however, is to contact the people involved in the process through interviews, focus group discussions, and workshops. Surveys might also be used, along with the (less common) approaches of testing, observation, and story collection (see Box 25).

Finding Information on Implementation

This category corresponds to the third pillar of the PROFOR-FAO Framework: how governance works in practice (PROFOR & FAO 2011, pp. 17–18). Here, again, a desk review might provide answers to specific questions. In Tanzania, for example, a World Bank study of the governance of charcoal production was able to compare licensing and revenue collection records (the amount of charcoal legally produced) against household surveys of demand (the amount of charcoal actually used) to show that most charcoal production was happening outside of legal controls (World Bank 2010). The Chatham House illegal logging study performed content analyses of media reports to compare trends in illegal logging in chosen countries (Lawson & MacFaul 2010). A desk review may also be able to shed light on whether inventories and plans are current, whether budgets are followed, whether key staff positions are filled, and similar issues that involve well-documented facts.

There are, however, many subjective issues here—for example, whether budgets are adequate and appropriate to the problems at hand, whether government agencies coordinate, whether the rule of law is consistently followed, and whether the resulting implementation is equitable. Assessments generally reach these issues through interviews, focus group discussions, workshops, and/or surveys.

Step 4: Develop Tools for Each Method

A method is a general way to gather data; a tool is a specific application of a method. Step 4 will help you progress from having identified your methods to developing working tools. Even if you are starting out with an existing tool for your assessment, you might need to take some steps to customize it for your particular use.

To achieve this you will need to start by undertaking an initial review of:

• Which method will cover which elements of data collection (against the information needs you identified in Step 1). For example, you may decide to use a desk review to gather background information, expert analysis to evaluate the laws and policies as written, focus group discussions to evaluate implementation, and a stakeholder workshop to review and refine the findings of the other methods. Note that using multiple overlapping methods is one way to validate the information you are gathering. (See the process point at the end of this chapter for more about validation.)
• Your timeline (discussed in Chapter 2). You will want to fill in more detail about data gathering. If you are using experts, for example, how long will you give yourself to write terms of reference and locate the experts? How long will you give the experts to complete their tasks?
• Your budget (discussed in Chapter 2). You may want to divide the general allotment of funds among specific data collection tasks, or you may want to revise your budget entirely.
• How you intend to analyze and use your data (discussed in more detail in Chapters 5 and 6). Early consideration of how you will use your data will help you ensure that you collect the right information in the right formats. This will affect not only what information you are asking for but also how you collect and store it (e.g., coded responses that are entered into a spreadsheet for use in graphs versus anecdotal stories presented as part of a narrative report to give a human face to key issues).

The next planning decisions depend on the particular methods that you intend to use. Table 6 lists some of the decisions you may need to make for each method. In each case the first step is to develop a clear description of the task that the tool must accomplish (i.e., its desired outputs).

If you are going to employ the tool yourself, an informal description will be enough. If other staff are eventually going to use the tool, you may want to write a formal description of the tool's task now. This will be useful later on when you are writing a data collection manual (the next step in this chapter) or are training staff (the first step in the next chapter). If you are going to give the task to an outside expert (e.g., commissioning an expert analysis) or to a consultant, then a full written description of the desired output of the task will be useful when you write formal terms of reference.

In each case the description should include the formats in which information will be collected and supplied to you. This will help ensure the standardization of data collection across groups and that information collected by one person or group is accessible for use by others in your assessment team and in the future.

The remaining discussion in this step covers these choices, dealing with each method in turn. The references for further information point to Babbie (2010), a widely used American text on social research, and to Bryman (2012), a widely used British text. See also the coverage of tool design resources in Annex II.

TABLE 6: PLANNING AND DESIGN CHOICES FOR THE USE OF DATA COLLECTION TOOLS

Desk Reviews. Method-specific planning: Will you just gather data or perform new analyses? Design and implementation choices: Content analysis techniques.

Expert Analysis. Method-specific planning: Kinds of experts, and perhaps terms of reference. Design and implementation choices: Terms of reference and choice of experts.

Key Informants. Method-specific planning: Means of selection; interview type. Design and implementation choices: Interview structure, questions, and coding.

Focus Group Discussions. Method-specific planning: Sampling and stratification; means of convening. Design and implementation choices: Interview questions and coding.

Workshops. Method-specific planning: Sampling, stratification, and workshop tasks. Design and implementation choices: Identifying participants, structuring tasks, and choosing facilitators.

Surveys. Method-specific planning: Sampling and stratification; sample size. Design and implementation choices: Question design and coding.
Desk Reviews

Desk reviews of written material fall into three rough categories:

• Information collection. Collecting documents and material to gain access to the facts (be they statistics, historical information, or even the results of a prior assessment) that the material contains and that the assessment can use directly.
• Qualitative information analysis. Collecting documents and material that are then analyzed qualitatively. An example is a desk review of the forest law where the review goes beyond the facts that are directly stated and draws conclusions based on an evaluation of the law.
• Quantitative information analysis. Collecting documents and material that are then coded for quantitative analysis. An example is the Chatham House review (Lawson & MacFaul 2010), which analyzed news stories on illegal logging to develop an assessment of the problem in five countries (e.g., the frequency of reports of violent conflict over forest resources).

These categories of review require different levels of expertise and preparation from both the reviewers and you as the assessment organizer. Collecting information on the history of the forest sector might require a bright student, an outline of the information needed, and a bit of oversight. Evaluating the quality of a law or policy or the adequacy of a published budget, however, would take someone with a bit more knowledge; you might need to specify exactly the type of skills the person will require through terms of reference. Designing and carrying out a quantitative content analysis of media reports takes researchers with another form of specialized ability. If you do not have the capacity to perform a complex desk review on a particular topic, you may need to shift the task to experts—or you may need to allow extra time and budget to gain the capacity to carry out the review.

For further information on content analysis, see Box 26 and the discussion of coding in this chapter and Chapter 4. For more on using existing data, see Babbie, pp. 344–50; Bryman, chs. 5, 14 & 23.

BOX 26: CONTENT ANALYSIS (LOOKING DEEPER)

Content analysis is a term that covers many ways of systematically analyzing communications. Those communications are usually documents (in past assessments, things like laws, policies, logging contracts, and news reports), but they could also be other kinds of communications (e.g., e-mail, web pages, audio or video recordings, and transcripts of interviews).

Qualitative Analysis

As mentioned in the main text, assessments often analyze documents qualitatively. Typically, that analysis draws upon an analyst's specialized skill (e.g., knowledge of law). If several people will be performing qualitative analysis independently, good practice requires giving them some guidance so that each performs a similar analysis. The guidance could be a set of specific questions to answer or indicators to score. If only one person or a small, closely coordinated team is performing the analysis, assessments may provide oral or informal guidance; the better practice, however, is to set out guidance in writing. If nothing else, this written guidance will be valuable to any subsequent assessment.

Quantitative Analysis

Quantitative content analysis is more complex. The analyst begins with a set of communications, codes each communication using agreed-upon guidance, and then analyzes the resulting scores. For example, an analyst might go through a set of court records for forest offences and code the offence, the outcome, and the sanction imposed for each case. This would allow the assessment to make findings about the conviction rate and the average sentence.
The coding could also note the magnitude of the crime, to allow the assessment to analyze whether both petty and grand offenses were being prosecuted.

Quantitative content analysis can also code more subjective variables. For example, an analyst could review recordings of key informant interviews to score whether the informant's view of the forest agency was strongly favorable, somewhat favorable, neutral, somewhat unfavorable, or strongly unfavorable. By also assigning the informants to groups (forest officers, small business managers, large business managers, local government officials, NGO officials, rural residents, and so forth), the assessment could make findings on how the perception of the agency varies among different stakeholders.

For more on content analysis, see Babbie, pp. 333–344; Bryman, ch. 13.

BOX 27: USING TECHNOLOGY IN DATA GATHERING AND MANAGEMENT (LOOKING DEEPER)

Developments in technology provide exciting opportunities for data collection tool development. Technology can change how you engage your respondents (e.g., e-mail or online surveys such as SurveyMonkey or QuestionPro as opposed to face-to-face meetings), how data is collected (e.g., via text messages or web-based forms), how data is stored (e.g., on computers as opposed to paper), and how data is displayed (e.g., use of graphics, video, or presentations). A number of benefits and challenges of using new technology are listed below. You should carefully consider context, technical focus areas, existing skill sets, budgets, and related issues when deciding how to use technology within your assessment.

Benefits
• Can provide a cost-effective way to access large numbers of people (e.g., online surveys, e-mail surveys).
• Can provide a mechanism to engage with distant groups and sustain engagement (e.g., online surveys, e-mail, Skype, and text-message-based information updates).
• Can provide financial savings (due to reduced travel time or a reduced need for multistage data entry).
• Can increase the honesty of responses if there is a feeling of anonymity.
• Can increase speed by allowing for rapid data collection and efficient data management.

Challenges
• Can create a bias within an assessment by being accessible only to those who are technologically literate, have access to relevant devices (e.g., computers, mobile phones), and are motivated to respond.
• Can require significant technical expertise and limit flexibility (if reliant on an external specialist).
• Can be subject to practical challenges (e.g., limited battery life of laptops, vulnerability to weather conditions).
• Can lead to data sets or collection tools becoming unusable because of changes to software or hardware.
• Existing tools or programs may not be available in relevant languages (e.g., SurveyMonkey is available in 16 languages, but most of these are European languages).

Providing up-to-date guidance on specific tools in this area is difficult due to the rapid pace of change. Many development agencies and NGOs are, however, expanding their focus on the use of technology within governance work. Many maintain web portals with new reports and case studies, such as the World Bank's IC4D (Information and Communications for Development) site. Some relevant recent publications include Forest Governance 2.0: A primer on ICTs and Governance (Castrén & Pillai 2011) and ICT Applications for Data Collection and Monitoring and Evaluation (World Bank 2013), the latter of which includes information on a number of tools.
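Returning to the court-records example in Box 26, here is a minimal sketch of how coded records can be summarized once the coding is done. The records, codes, and field names are hypothetical.

```python
# Minimal sketch: summarizing coded court records, as in the Box 26
# example. Records, codes, and field names are hypothetical.

records = [
    {"offence": "illegal felling", "convicted": True,  "sentence_months": 6},
    {"offence": "illegal felling", "convicted": False, "sentence_months": 0},
    {"offence": "encroachment",    "convicted": True,  "sentence_months": 3},
    {"offence": "illegal felling", "convicted": True,  "sentence_months": 12},
]

convictions = [r for r in records if r["convicted"]]
conviction_rate = len(convictions) / len(records)
avg_sentence = sum(r["sentence_months"] for r in convictions) / len(convictions)

print(f"conviction rate: {conviction_rate:.0%}")       # 75%
print(f"average sentence: {avg_sentence:.1f} months")  # 7.0 months
```

The hard part is the agreed-upon coding guidance; once each case is coded consistently, the quantitative summary follows mechanically.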
Expert Analysis

Step 2 identifies some of the locations in which experts can be found. It is now time to consider how many experts you intend to consult and what you expect them to do.

The potential for bias should be considered in deciding how many experts you will consult and from what organizations. You may want to follow the example of the World Bank assessment in Russia (Kuzmichev et al. 2012) and seek experts from government, business, NGOs, and academia so that you get opinions from varied perspectives. Alternatively, you may decide to try to attract experts who will be seen as inherently neutral, such as people from outside the country, academics, or retired professionals. Or you may want to vet the selection of experts with stakeholders, such as via a stakeholder advisory committee (see the process point on vetting at the end of this chapter).

In considering what an expert will do, review the existing methods noted here. Will your expert be responsible for undertaking one of these methods (e.g., undertaking a desk review), or be a subject for one (e.g., as a key informant or a participant in the Delphi method—see Box 28)? Will you give the expert some freedom to choose methods or design tools? Will your expert be involved in the assessment beyond the data-gathering phase and into analysis or dissemination?

Whatever task you assign to an expert, you must make sure that the expert understands the objectives of the assessment, the information that you are interested in, and exactly what role you want the expert to play. You can do this both informally, through discussion, and formally, through the development of clear terms of reference that set out the nature of the assessment and the expert's role within it (including outputs, timelines, and responsibilities to work with others).

LOOKING DEEPER

BOX 28: THE DELPHI METHOD—A SPECIALIZED WAY TO USE EXPERTS

The Delphi method is a paper-based exercise originally developed for forecasting using experts. You can adapt it to other purposes, such as scoring indicators.

To use the Delphi method in an assessment, locate a set of experts and ask each independently to score indicators or answer questions about governance, including comments explaining their scores or answers. Take these responses and summarize them, keeping the experts anonymous. The summary should point out where the experts agree and disagree. Give the summary back to the experts and ask them to react. Allow them to revise their scores or earlier comments or make new comments. They can also comment on the whole process. Summarize these new responses. Repeat this process of scoring, summarizing, and revising until the group reaches consensus or until it becomes clear that further rounds will not produce new insights.

The idea is to get some of the benefits of a focus group discussion—group interchange and sharing of reasoning—while reducing the role that egos and personalities play in the process.

See Annex II for references on the Delphi method.
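As a rough illustration of how a Delphi coordinator might summarize one round of scoring (see Box 28), the Python sketch below averages anonymous expert scores per indicator and flags indicators where agreement is still weak. The indicator names, scores, and the simple spread-based rule for "consensus" are hypothetical choices made for this sketch; Box 28 leaves the summary format to the assessment team.

    # A minimal sketch of summarizing one Delphi round: anonymous expert
    # scores per indicator are averaged, and indicators where the experts
    # still disagree are flagged for another round. Indicator names,
    # scores, and the spread threshold are hypothetical.
    from statistics import mean, stdev

    round_scores = {
        "Transparency of concession allocation": [2, 3, 2, 2, 3],
        "Capacity of forest agency staff": [1, 4, 2, 5, 3],
    }

    for indicator, scores in round_scores.items():
        spread = stdev(scores)
        status = "consensus emerging" if spread < 1.0 else "discuss in next round"
        print(f"{indicator}: mean={mean(scores):.1f}, spread={spread:.1f} -> {status}")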
• Avoid leading questions (i.e., ones that suggest to the subject that you are looking for a particular answer).
• Put questions in a positive form. Experience shows that people can easily mistake a negative question in an interview for one with the opposite meaning.
• With closed questions, offer a balanced set of possible responses.

Sources: Bryman, pp.254–59; Babbie, pp.255–62.

Key Informants

In planning to use key informants, the first decisions to make are the kind and number of people you need to consult, for what information, and in how formal a style. Some of these questions are much like the questions that you face with experts. You want to avoid bias, so you want to hear from a variety of informants. Usually this is easier to do with informants than with experts because you are tapping into a larger pool of people.

The formality of style is a matter that has implications for planning and management of the assessment. Some assessments use participants quite informally. For example, the PROFOR Uganda assessment (Annex I) used people to vet the results of the assessment. The interviews were unstructured, with participants told a summary of the draft results and allowed to react. The Indonesia PGA (Annex I) used semi-structured interviews. Deciding on which approach to use will be guided by both technical considerations about how you intend to use your results and practical considerations such as how long you have to collect them.

If you are going to use more formal interview approaches, you will need to develop interview questions, which form part of your interview protocol (see Box 31). Depending on how you plan to process the outputs of the interviews, you may need to develop protocols for content analysis or coding of the responses (see Boxes 26 and 32). Once developed, questions and coding may also need to be pilot tested (Box 35 also has points that apply by analogy to piloting interview protocols). If several researchers will be using the same protocol, you should expect to monitor their first uses to assure consistent application. You might also want to invite researchers to make suggestions for improving the protocol after their initial experience with it. All this design, testing, and revision takes extra skill and time, and planning needs to take that into account.

LOOKING DEEPER

BOX 30: OPEN AND CLOSED QUESTIONS

Open questions are questions without set responses. "Tell me about how you use the forest" is an open question. Closed questions are questions with set responses (e.g., yes or no, multiple choice, or numbers on a scale). "Do you use fuel wood for cooking?" is a closed question.

Open questions can bring out unexpected information. They can be less likely to suggest to the subject what answer the interviewer might be looking for, and thus less likely to introduce bias. On the other hand, the responses from open questions can be harder to code and analyze in a standard way.

Open questions are most useful when you don't quite know what answer to expect or what information you need. You might use open questions in semi-structured interviews with key informants or in some focus group situations. Formal interviews and surveys tend to use closed questions. In developing a survey, though, you might create a draft with open questions, try it out (pilot it) on a few subjects, and let the responses suggest how to create closed questions for the actual survey.

For more on creating possible answers for closed questions, see Box 32 on coding. For more on open and closed questions, see Babbie, pp.256–57; Bryman, pp.246–47.
LOOKING DEEPER

BOX 31: INTERVIEW PROTOCOLS AND STRUCTURE

In key informant exercises, focus groups, and many surveys, researchers tap knowledge through interviews. These interviews may be informal, semi-structured, or structured. In each case, the interviewer should go into the interview with a protocol. In an informal interview, this may just be an outline of points to cover. In a semi-structured interview, the protocol may include a guiding set of questions, which often include open questions. In a formal interview, the protocol tends to include more closed questions and tends to be followed more rigidly. (See Box 30 for more on open and closed questions.)

A protocol typically calls for an introductory phase in which the researcher tells the participant about the assessment and the nature and purpose of the interview. This might include information about the length of the interview and topics to be covered. The protocol should allow the person to ask questions about the assessment and the interview. The participant may have concerns about attribution and confidentiality, so the researcher should bring these up and come to a mutual understanding. (See the discussion of ethics at the end of Chapter 4.) If the researcher is recording the interview, the researcher should get the informant's permission to do this.

A protocol usually has a second phase for collecting information about the participant. This may include the person's full name, contact information, official position, and background in the sector.

A protocol's central phase deals with gathering information. The protocol provides questions that the researcher should ask. In informal and semi-structured interviews, the researcher should have some freedom to adapt the questions to the circumstances. It may make sense to ask questions in a different order or to omit questions in areas where the participant clearly has no knowledge. In a structured interview, the interviewer generally follows the protocol carefully, so that every interview subject has a comparable experience.

Finally, the interview should close with another opportunity for the participant to ask questions, an expression of thanks for the person's cooperation, and a request to be able to contact the person again to confirm answers or to get additional information. The researcher should also discuss whether the participant will have an opportunity to review the assessment report before it becomes public.

For more on structured interviewing, see Babbie, pp.274–279; Bryman, chapter 9. For more on semi-structured interviewing, see Bryman, chapter 20.

Focus Group Discussions

Focus group discussions are essentially group interviews with key informants, and they share some of the same planning issues: How many focus groups will you convene? What kinds of people will participate? What information will you seek, and how formal a style will you use? In planning focus groups, you need to think about how group interactions will affect the responses. For example, you may want to talk with junior and senior forest officers separately, because the junior officers might not be candid about problems that put the senior officers in a bad light. Similarly, you may want to separate villagers from local officials.

Preparation of specific tools to implement focus groups is like preparing for key informant interviews (see Boxes 29 and 31 on design of questions and interview structure), with the main phases of the interview process followed. The main information-gathering phase of a focus group session is, however, usually more structured—providing questions with specific procedures to answer them. For example, some questions might be answered by asking the group to brainstorm, thus producing multiple responses. Some might be answered through inviting and recording individual opinions. Some might be answered through a voting exercise, and some by asking the group to arrive at consensus.

If the plan is to conduct the same exercise with different focus groups, the protocol should call for the moderator to ask the same questions, in the same order, of each group. The moderator should have some flexibility, however, if the group clearly has no expertise to answer a question or if time is limited and some questions must be omitted.

The final phase should be similar to that of an interview. It should give the participants another chance to ask questions, to clarify their ongoing role (if any) in the assessment, and to thank them for their cooperation.

For more on focus group discussions generally, see Babbie, pp.322–23; Bryman, chapter 21.

LOOKING DEEPER

BOX 32: CODING OF INTERVIEW AND SURVEY RESPONSES

Coding means assigning responses to categories to allow analysis. You need to consider coding when you design questions, and your team needs to be aware of coding when they collect data (Chapter 4).

Closed questions (see Box 30) tend to be "pre-coded," meaning that the categories are set out as possible responses for the respondent to choose. For example, a closed question of whether a benefit-sharing system is equitable might ask the respondent to choose a number on a five-point scale, where one is very equitable and five is very inequitable. (For more on using indexes and scales, see Babbie, chapter 6.)

Open questions tend to be "post-coded," meaning that a researcher ends up assigning the response to a category. A researcher might ask, "Tell me about the benefit-sharing system; is it fair?" and take down notes on the response. Later, the researcher will assign the response to categories (e.g., "yes," "mixed," or "no").

A good coding for a question has three properties.
• The categories (possible responses for a closed question) don't overlap.
• The categories are complete (they cover all possible responses).
• The rules for assigning responses to categories are clear.

For the best coding of responses, the questions themselves must be clear and easy to understand. Sometimes that means the question needs to be carefully explained to the subject, and sometimes that means that the interviewer needs careful instructions as to how to record responses. For example, if the response to a yes-or-no "Do you eat bush meat from the forest?" is "Not very often," the interviewer needs to know whether to record that as a yes or a no—and all interviewers need to follow the same set of guidelines for recording such responses.

Chapter 4 has more on data collection and coding. For more on question design, see Iarossi (2006), chapter 3; Bryman, chapter 11; Babbie, chapter 9.
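The Python sketch below illustrates the kind of post-coding rule book Box 32 describes: a small set of keyword rules maps free-text responses to fixed, non-overlapping categories, with a catch-all so the category set stays complete. The example question, keywords, and category names are hypothetical; a real assessment would set such rules out in its data collection manual.

    # A minimal sketch of "post-coding" open responses (see Box 32).
    # Every response must land in exactly one category, and a catch-all
    # category keeps the set complete. Keywords and categories are
    # hypothetical illustrations for the question "Is benefit sharing fair?"
    CATEGORY_KEYWORDS = {
        "yes": ["fair", "equitable", "satisfied"],
        "no": ["unfair", "inequitable", "cheated"],
        "mixed": ["sometimes", "depends", "partly"],
    }

    def code_response(text: str) -> str:
        """Assign a free-text response to exactly one coding category."""
        words = set(text.lower().replace(",", " ").replace(".", " ").split())
        for category, keywords in CATEGORY_KEYWORDS.items():
            if words & set(keywords):  # keyword lists must not overlap
                return category
        return "response not understood"  # catch-all keeps the set complete

    responses = [
        "The system is broadly fair to our village.",
        "We were cheated out of our share last year.",
        "It depends on which committee handles the money.",
        "No comment.",
    ]
    for response in responses:
        print(code_response(response), "<-", response)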
Workshops

For planning purposes, you should be able to state how many workshops you will conduct, with what groups participating, how long they will take, and what information you expect them to produce. You may also want to consider whether to hire an independent facilitator to run the workshops. If you are going to be discussing sensitive topics or are aiming for a consensus finding, then a neutral, experienced facilitator can be a useful addition to your team.

Once you have decided on these elements, you will need to devise a protocol (sometimes simply called an agenda). Many of the guidelines for designing focus group sessions apply to workshops. However, workshops tend to be longer and more varied in organization than focus group sessions, and they can involve more people. Workshops can also include educational and ceremonial components.

Workshops may start with some sort of formal welcome from the sponsors. Often, then, the leaders (facilitators) of the workshop discuss the purpose and agenda and any ground rules for participants. The facilitators may also discuss matters of confidentiality and attribution of remarks.

The next step is, sometimes, to present introductory remarks on the topic at hand. These might, for example, provide background on the nature of the forest sector in the country. Keep these remarks neutral; avoid saying things that might prejudice the data that you are about to gather.

As with a focus group, the data-generating phase of a workshop can take many forms. You can have short presentations, open discussions, brainstorming, mapping exercises, strengths-weaknesses-opportunities-and-threats (SWOT) evaluations, voting exercises, consensus exercises, or any of dozens of tools and variations to capture the thoughts of the participants. When dealing with large numbers of participants, you can break into smaller groups and conduct parallel processes, coming back to the plenary to report results.

The conclusion of the workshop often involves some kind of summing up of the event, often by either a sponsor or a facilitator. On occasion, there is a ceremonial closing speech by a sponsor.

If a workshop is to be repeated in different regions or with different stakeholders, the basic structure should largely remain the same. You may have to adjust the length of the workshop, or the number of breakout groups, or the key speakers. However, the tools used and the problems addressed should be similar.

LOOKING DEEPER

BOX 33: FACTORING IN SAMPLING AND STRATIFICATION

Chapter 2 introduced the concepts of sampling and stratification. You will use sampling if your assessment uses key informants, focus group discussions, workshops, or surveys. You cannot contact every possible key informant, include every stakeholder in a focus group or workshop, or survey everyone in the country. You have to take samples.

For some assessments, especially those using surveys, sampling is a quantitative problem. You need randomness and a large enough sample size to be able to draw conclusions from your results with confidence. You need to deal with these matters in detail when you design your survey, and your data collection plan should reflect that you will have to spend time and effort on sampling. Chapter 4 of Iarossi (2006) covers practical issues of survey sampling.

Most assessments must deal with sampling qualitatively. Indonesia (Annex I) and Russia (Kuzmichev et al. 2012) provide good examples. Both involved large countries. Neither could afford to gather data throughout the country. The solution was to select representative provinces to study. The selections reflected the desire to get information from a variety of places that fully reflected conditions throughout the country—but they were not strictly random. Sampling is not limited to country-wide assessments. In Liberia, for example (Annex I), the assessment chose to sample seven communities affected by logging concessions.

Stratification (dividing samples into "strata" and measuring each separately) is another concern. Stratification can help to improve the accuracy of a survey or other sampling technique for a given sample size, and it can allow details to emerge about individual parts of governance. If, for example, you have more than one agency managing forests, you may want to gather data about them separately (e.g., score the same indicator separately for each agency). If different regions of the country are likely to have different governance problems, you may want to assess each region separately. If you think that junior staff will have different perspectives on agency problems than senior staff, you may want to sort them into separate focus groups. Stratification may require extra time and resources, so you should begin to consider it during this planning step.
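To make the stratification idea in Box 33 concrete, the Python sketch below draws a simple stratified random sample: respondents are grouped into strata (here, hypothetical regions) and a fixed number is drawn from each, so every stratum is measured separately. The region names, frame contents, and per-stratum quota are illustrative assumptions only.

    # A minimal sketch of stratified sampling (see Box 33): group the
    # sampling frame into strata and draw randomly within each stratum.
    # Region names, frame contents, and the per-stratum quota are
    # hypothetical illustrations.
    import random

    sampling_frame = [
        ("North", "Household N-01"), ("North", "Household N-02"),
        ("North", "Household N-03"), ("South", "Household S-01"),
        ("South", "Household S-02"), ("South", "Household S-03"),
    ]
    PER_STRATUM = 2

    strata = {}
    for region, household in sampling_frame:
        strata.setdefault(region, []).append(household)

    random.seed(42)  # fixed seed so the draw can be documented and repeated
    for region, households in strata.items():
        chosen = random.sample(households, PER_STRATUM)
        print(region, "->", chosen)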
Surveys

In planning for a survey, you should allow time and budget to develop a sampling plan, design the questions, test the questions, administer the survey, code the responses, and analyze the results. This will require careful planning and awareness of the logistical and operational challenges within the areas in which you are working.

You will also need a survey instrument or questionnaire. The wording of the survey questions, the order in which they are asked, whether they are asked in person or over the phone, and the attitude of the survey taker are some of the many factors that can affect the outcome of a survey, so it is important that these factors are both considered and standardized.

Survey questions should be brief, objective, simple, and specific (BOSS) (Iarossi, pp.30–43). Being brief and being simple both have the aim of avoiding confusion and misunderstanding. Ask about one thing at a time, and don't load questions with assumptions. For example, don't ask a household if its access to forest resources is fair before you ask if they want and have access to forest resources. Try not to ask long questions, but don't make questions so short that they become confusing. Use language that your respondents will understand.

Being objective means avoiding biases. Don't ask questions that favor a particular answer (either because of the way the question is phrased or because of the possible responses offered). Don't give the respondents biased background information and then expect an objective response. Avoid emotionally charged wording. Try not to give an impression that a particular answer would be more polite or friendly than another. Be sensitive to the culture of your subjects and try to ask questions that will get honest, not just polite, responses.

Being specific means avoiding language that's loose or subject to more than one meaning. For example, "Do you often go on the public forest land?" depends too much on what a person thinks is "often."

Box 29 restates these and a few more rules of thumb that apply to survey and interview questions.

The style and order of questioning can affect answers. For example, given a list of questions with similar options for responses, people have a slight tendency to answer them all identically. Studies have shown that people may tend to favor positive responses over negative ones and that position may affect choice when there is a long list of possible responses. People usually don't like questions that are irrelevant to them, that are hypothetical, that invade their privacy, or that might make them look bad.

Other considerations in survey design include the survey length, the physical layout of the survey forms (see Box 34 for some practical tips), and the translation of the survey into local languages. If the data collectors have limited skills in the local language, this too may be a consideration. The questions must be simple enough for the collectors to explain them and to understand the answers.

Babbie, chapter 9, and Bryman, chapters 10 and 12, have more information on survey planning and design. Iarossi (2006) is a book-length resource and is available for download online.

PRACTICE TIP

BOX 34: DESIGNING DATA COLLECTION FORMS

Design of forms for recording information from interviews, focus groups, or surveys is an art in itself. Iarossi (2006), pp.81–85, offers some practical advice:

• Give each form a unique identification number so you can tell if any become lost.
• Number each question on the form; don't skip or repeat numbers or restart numbering in each section. This will help the interviewer to avoid skipping questions and help later in transferring the data.
• Leave ample space for answers and notes. Don't crowd.
• Include instructions. Use different formats to distinguish instructions for the interviewer from instructions to be read to the subject.
• Use bold printing in questions to show the interviewer which words to emphasize.
• Use boxes and other symbols to guide the interviewer in the entry of standardized responses, such as yes-or-no, multiple choice, or numbers.
• Use arrows in instructions to indicate skips (e.g., "If the answer to this question is no, → skip to Question 10").
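As a small illustration of the form-design advice in Box 34, the Python sketch below represents a questionnaire as numbered, pre-coded questions with an explicit skip rule, so the interviewer (or an electronic form) always knows which question comes next. The question wording, codes, and skip targets are hypothetical examples, not part of the guide's method.

    # A minimal sketch of a questionnaire with numbered questions,
    # pre-coded answers, and a skip instruction (see Box 34).
    # Question wording, codes, and skip targets are hypothetical.
    QUESTIONS = {
        1: {"text": "Do you collect fuel wood from the forest?",
            "codes": {"Y": "yes", "N": "no"},
            "skip": {"N": 10}},          # "If no, skip to Question 10"
        2: {"text": "How many days per week do you collect it?",
            "codes": {str(n): n for n in range(1, 8)},
            "skip": {}},
        10: {"text": "Do you use the forest for any other purpose?",
             "codes": {"Y": "yes", "N": "no"},
             "skip": {}},
    }

    def next_question(current: int, answer_code: str) -> int:
        """Return the number of the next question, honoring skip rules."""
        skip = QUESTIONS[current]["skip"]
        if answer_code in skip:
            return skip[answer_code]
        later = [n for n in QUESTIONS if n > current]
        return min(later) if later else -1  # -1 signals end of form

    print(next_question(1, "N"))  # -> 10 (no fuel wood questions needed)
    print(next_question(1, "Y"))  # -> 2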
Assessments also use piloting to improve the questions used with key informants and focus groups. Indicators, like interview questions, can be misunderstood or off target. When assessments devise their own indicators, they may pilot them among experts to make sure they are clear and relevant.

For more on piloting, see Babbie, p.267; Bryman, pp.263–64; Iarossi (2006), pp.10–12 & 86–94.

Step 5: Finalize Your Work Plan and Develop a Data Collection Manual

Now that you have identified in more detail what information you need (Step 1), who you will get it from (Step 2), and how and who will get it (Steps 3 and 4), you can update your work plan (see Chapter 2) to be more specific in terms of both technical and operational details. If you have identified the need to develop specific protocols for key elements, you should also finalize these and link them to the work plan to ensure that there are no discrepancies. For example, if your survey protocol says that you will do five surveys a day in rural areas and you need twenty surveys, you will need to ensure that the right number of days are allocated in the work plan and allocate time for transport between areas and for rest days.

As noted in Step 4, developing clear guidance for using tools is important, especially if the tool will be applied many times by many different people. In recognition of this, some assessments prepare data collection manuals to guide collection and assure uniformity.

Unlike a data collection plan, which forms part of the work plan and which is written for project managers, the data collection manual is written for the people actually collecting data in the field. Sometimes those people will be making sampling choices. For example, your plan may call for surveying 10 households in each sampled village, but the task of identifying the households is left to the researcher in the field. The data collection manual should explain how to make a random selection (for example, by getting a complete list of the households in the village and using a random number table or lottery to pick the households to be visited).

For a survey, the manual may include a script to follow in asking the questions. It may provide tips on answering questions that subjects asked during the piloting of the survey. It may provide instructions on how to report the data or forms to capture the responses. If it is necessary to protect the identity of those surveyed, it may include instructions on how to do that.

For key stakeholder interviews, the manual may include a protocol listing the questions to ask. These interviews tend to be much less structured than survey interviews. The interview subjects often tell stories that yield information out of sequence to the protocol or information that is completely unexpected. The manual needs to include not only a protocol listing the questions but also an explanation of the objective of the interview, giving the interviewer some freedom to improvise.

For workshops and focus groups, the manual may give instructions on determining whom to invite, the basic agenda to use, and the basic method to use to collect information. On the last point, for example, the manual might direct the assessors to simply allow free discussion and capture the variety of opinions, to conduct a voting exercise, or to try to get the participants to reach consensus on scoring an indicator.

For desk reviews, the manual may instruct the researchers on what sources to use, what format to use to record the data, and what information to collect about the source to allow citation or verification.

In each case the manual should provide guidance not only on the collection of data but also on its storage and management. This is critical to ensure that information is not lost between collection and analysis. More information on this is provided in Chapter 4.
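The random household selection described above (listing every household and drawing lots) is straightforward to script as well. The Python sketch below is one minimal way to do it; the household list and sample size are hypothetical placeholders, and recording the seed makes the draw reproducible for later verification.

    # A minimal sketch of the random household selection described above:
    # list every household in the village, then draw the sample by lottery.
    # The household list and sample size are hypothetical placeholders.
    import random

    village_households = [f"Household {i:02d}" for i in range(1, 41)]
    SAMPLE_SIZE = 10

    random.seed(2014)  # record the seed so the selection can be verified
    selected = random.sample(village_households, SAMPLE_SIZE)
    for household in sorted(selected):
        print(household)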
Points on Process

Vetting the Methods

Vetting is a process of inviting constructive criticism from stakeholders. Almost all assessments use vetting at some stage. Probably the most common is to allow comments on the results of the assessment. However, using vetting early and often during an assessment has benefits. You tap the collective wisdom of the stakeholders. You increase the transparency of the assessment. You encourage stakeholders to buy in to the process of assessment and make it more likely that they will accept the results.

When to Vet

You can use vetting at each of the steps outlined in this chapter.

Vetting your description of what to measure can yield surprising results. People have different interests and values, and these lead them to see the world differently. What is a matter of forest governance depends greatly on your point of view.

For example, you may want to perform a general assessment of forest governance, focusing on the forest department, and you may develop your description accordingly. When you vet the description with stakeholders, you may hear from the forest officers that they have real problems coordinating enforcement with prosecutors and courts, so the assessment must look beyond the forest agency and capture this problem. You may hear from rural communities that they have issues with land tenure, and the assessment should evaluate such things as conflict resolution, land use policy, the location of new roads and communication towers, and other pressures that lead to conversion of forest lands to non-forest uses. You may learn of problems with revenue collection, unfair administration of timber sales and licensing, corruption, and other issues that were simply outside your original scope. Similarly, you may find that your focus on some aspect of budgeting or staffing is relatively unimportant.

Vetting an indicator set can point out gaps in your planned assessment. For example, it may be that you have forest law enforcement as a criterion, but no indicator addressing forest officers' authority to make arrests. This may be a major issue with local communities, or with the forest officers themselves, and may deserve its own indicator.

Vetting at this stage may also disclose that stakeholders are misunderstanding the purpose of the assessment. This will allow you to clarify what the assessment aims to achieve.

Vetting your methods can expose bias or impracticality. Stakeholders may point out that your methods do not treat all stakeholders fairly or that they entirely exclude some voices. They may point to the need to expand your expert panel, to translate materials to local languages, to hold regional focus group sessions, and so forth.

How to Vet

Vetting employs some of the same methods used in gathering data but is generally more informal. For example:

• Rather than recruiting experts, you might simply send a draft of your scope to several interested people and invite them to comment.
• Rather than hold an organized focus group discussion, you could invite several stakeholders to an informal workshop.
• Rather than have a carefully written interview protocol, you could informally visit some representative stakeholders, explain what you were planning, and invite their reactions.

Ideally, you have already identified the key stakeholders as part of your planning and you know whom to contact. You must take care not to introduce bias through your selection of stakeholders. You should aim for a representative sampling of interests. You do not want to exclude views; you may even want to issue some sort of public invitation to comment on your proposed methods.

As a matter of transparency, you might routinely post documents of your review on the web or make them available in government offices. As part of this, when you post information about scope or methods, you could provide an e-mail address for people to send you comments. Be aware, though, that only the most motivated and technologically capable stakeholders may participate this way. To assure broader participation, it is better to actively recruit people to comment.

Another tool that assessments have used in vetting is to tap existing advisory committees. If the forest agency has a citizen advisory board that represents many different stakeholders, or if there is an association of academic foresters that could offer technical comments on sampling and measurement, you can invite these bodies to give advice. For more specific ideas, see the process note at the end of Chapter 1 for a discussion of engaging other stakeholders in planning.

SECTION II: IMPLEMENTING YOUR ASSESSMENT

OVERVIEW

• Section 2 provides guidance on implementing your assessment. It focuses on collecting your data and then processing it to produce outputs that can help you achieve your objectives. The Section is divided into two chapters: the first (Chapter 4) is on Data Collection; the second (Chapter 5) is on Interpretation and Analysis.

• Chapter 4, Data Collection, covers the steps you need to take to acquire your data. These include assembling and training a data collection team, collecting data, and taking steps to assure its quality, as well as taking note of some of the ethical considerations that come with data collection activities.

• Chapter 5, Interpretation and Analysis, covers moving from raw data to providing more useful findings and conclusions. These steps include processing the data from the field so that it can be displayed and visualized in a way that is relevant to your target audience, analyzing the data to make sure you draw clear conclusions from it, and making recommendations that will help your assessment have an impact and provide a clear path forward. It also notes the value of having your data vetted and validated by a range of stakeholders to increase acceptance of the assessment's findings.

• While this section is presented in a sequential order, planners and implementers will need to consider many of the elements concurrently. Clear links need to be identified within your assessment between how data is collected and analyzed to ensure data formats are usable and contain the information needed.
This section also has strong links to both preceding and subsequent sections. Users should not lose sight of their objectives and target audience (discussed in Chapters 1 and 2). Links between tool design (Chapter 3) and data collection are paramount. You will need to continually review design during implementation to ensure that it is efficient and effective. How you interpret and analyze your data also has a strong relationship with how you plan to use that information, which is linked to both your objectives and your dissemination strategy (Chapter 6). Finally, it's never too early to think about self-evaluation and improvement (Chapter 7), which should happen throughout the process.

4 DATA COLLECTION

Data collection begins with recruiting a collection team. Collection itself can take a number of forms. Most assessments use more than one data collection method. Once you have data, you will want to make sure that it is accurate and complete.

STEP 1: ASSEMBLE AND TRAIN A DATA COLLECTION TEAM
The team can be a few researchers or a large number of survey takers. It all depends on your tools and budget.

STEP 2: COLLECT DATA
Collection can take many forms, from mining existing (secondary) data to acquiring new (primary) data through experts, key informants, focus groups, surveys, or workshops.

STEP 3: ASSURE DATA QUALITY
Once the data are in hand, you will want to make sure the data are accurate and complete. You may spot-check that the data collector actually made the measurements according to the protocol. You may try to confirm the data by comparing it with similar measurements. You may comb through the data and investigate what appear to be data-entry or other obvious errors.

POINTS ON PROCESS: PRACTICAL AND ETHICAL DATA COLLECTION
When you collect data from people, your subjects must trust you. To earn trust, the project must be trustworthy. Following some basic rules of ethics and safety will avoid misunderstandings, improve the project's reputation, protect participants, and, in the end, give you better results.

Step 1: Assemble and Train a Data Collection Team

Assessments vary on the number and skills of people they need to collect data. Part of the variation depends on the scale of the assessment and part depends on the tools being used. For example, the PROFOR assessment in Uganda (Annex I) gathered its data in a single national workshop and used six people: one national expert, one national facilitator, two international experts, and two people providing administrative support. The NAFORMA assessment (Annex I) used sixteen field teams to survey thousands of households via interviews.

The amount of training that data collectors need depends on the scale of the assessment, the size and existing skills of the team, and the tools used. A large assessment using a large team faces a challenge in keeping data collection uniform. This is especially true if the team is using tools that rely on interviewing people and coding their responses. In the NAFORMA case, the assessment gave the data collectors a full month of training. In the PROFOR case, where the tool indicators were scored in a single national workshop, the national consultants needed only short, informal training on the use of the tool. In every case, however, it is important to ensure that training covers the overall goals of the assessment and its technical basis, the specific task a team member will need to perform, and the basic logistical and operational requirements.

Team Composition

Establishing a team to undertake the assessment will require consideration of the resources you have available in terms of time, human capacity, and budget, as well as the methods you have chosen. A small team will be easier to manage but may lack certain skills or may be unable to undertake an assessment at the scale needed (e.g., conducting 200 interviews). As a team expands it will require more coordination and management, and the role of the team leader or manager should not be underestimated. Table 7 provides an indicative list of positions within an assessment, although even large projects may not require individuals in all these roles. Some people in your team may take on more than one role, and sometimes people will share roles. Large teams almost always designate specific people as leaders or managers; small teams may be less hierarchical.

PRACTICE TIP

BOX 36: IDENTIFYING GOOD TEAM MEMBERS

UNDP (2007) offers some specific points to keep in mind when selecting people or organizations to collect primary data:

• They should have reputations for trust and integrity, especially among the people they will be interviewing. Sources must be comfortable talking with the data collectors.
• Their knowledge should be adequate for their roles. They should be familiar with the subject matter. If they are working in the field, they should be familiar with the local geography.
• They should be reasonably impartial. Every data collector carries values and biases into the process. These should not be so great that they slant the data.

TABLE 7: KEY ASSESSMENT TEAM MEMBERS

Assessment Manager
Role: The manager will coordinate the effort. They should be familiar with the task in all its aspects, including the data collection design, overall objective of the assessment, and budget. The manager should know the local context where data will be collected and also the administrative context of the assessment, including the expectations of sponsors and initiators.
Skills: This person should have skills in managing both data and people. Sometimes the data collection manager is also responsible for recruiting the team.

Researcher
Role: Depending on the design of the data collection, researchers may have the task of collecting secondary data from agency records, libraries, and the Internet, or they may be collecting primary data through interviews, focus group discussions, or surveys. They should understand the nature of their assignment and should have the capacity to do the research.
Skills: Researchers should be reliable and able to follow directions. If they are doing tasks like interviewing, they should be good at communicating and building rapport—and potentially have good language skills. Because research seldom goes exactly as planned, they should be resourceful and able to respond to challenges.

Logistics Coordinator
Role: The logistics coordinator will manage many of the operational elements and will ensure that activities are carried out in a timely and safe manner. Key jobs might include arranging travel, renting meeting space, tracking expenses, securing support personnel, and so forth. This is particularly relevant in a large field-based assessment; logistical support may also be relevant for smaller assessments that require workshops and other group activities to be organized.
Skills: They should be familiar with the local context and, ideally, have experience running similar activities in the past. They must be well organized and able to communicate effectively with team members.

Data Manager
Role: The data manager is responsible for taking the data from the researchers, assuring its quality (see Step 3 in this chapter), and keeping it in forms accessible to others working on the assessment.
Skills: The data manager should have excellent organizational skills and a high level of attention to detail. Within certain assessments, they may also have experience using statistical or other data management software.

Facilitator
Role: Facilitators can be used in focus group sessions and workshops to assure a well-run, unbiased meeting with free and full participation. The facilitator may sometimes also have the task of recording the data; other times, a separate researcher will have that task.
Skills: A facilitator should have skills in running group processes, experience in remaining neutral, and, ideally, the respect of the people they will be facilitating.

Training Needs

Ensuring that the team is appropriately trained is critical to an assessment. Here is a list of some topics that data collection training might cover.

• Explanation of the assessment. The collection team should understand the underlying objectives of the assessment and its overall approach. Team members will have to make decisions in the field about many things, from subject selection to coding of responses. Anticipating all these questions will be impossible. Sometimes you have to rely on the team member's judgment. The better the team member understands the objectives of the assessment, the more likely it will be that the team member makes decisions that further these objectives.

• Selection of subjects. If the data collectors are responsible for selecting subjects for surveys, focus groups, or workshops, they need to understand the procedure and standards for subject selection. They may need to understand how to conduct a random sampling of a community. They may need to know how to document their selection decisions so that reviewers or analysts can know how to treat the data.

• Interviewing and facilitation. Interview and facilitation training should cover two areas. The first is good practice in interviewing and facilitation. The next step in this chapter talks a little about general good practice in these areas. The second is good practice in the specific types of interviews or group events to be conducted: how formal they will be, how much freedom the team member will have to deviate from the script, how the team member should explain questions if the subject appears to misunderstand them, how to ensure all participants in a group are fully engaged, and so forth.

• Ethical practices. Data collectors need to keep themselves and their subjects safe. They need to keep promises made to subjects, such as promises to keep responses anonymous. The point on process at the end of this chapter has more to say about data collection ethics.
• Coding. The data collectors need to write down responses in a consistent way. You may need to train them on how to handle ambiguous responses or responses that don't fit the expected categories. The next step talks more about issues of coding.

You should not limit training to written materials and lectures. Team members should have exercises, including role-playing ones, so that they can practice their skills and get feedback from their instructors.

Step 2: Collect the Data

If you have designed your tools well and written a careful data plan (both activities covered in Chapter 3), you should have a clear idea of how to collect the data. This step and the next cover a few practical points about data collection. This step has tips on interviewing and recording data in the field. The next step discusses handling the data that comes from the data collectors and assuring its quality.

Interview Techniques

Many assessments rely on interviews. Interviewing is part of working with key stakeholders, focus groups, and survey subjects. Box 37 has some basic practice tips for interviewers. The discussion below goes into more detail for conducting survey interviews, where the quality of technique often determines the quality of the resulting data.

Facilitation Techniques

Leaders of workshops and focus groups need facilitation skills. A facilitator applies many of the same practices as an interviewer, including good manners and good listening skills, but group interactions make facilitation more complicated than interviewing.

A facilitator must be able to manage a meeting's order and timing. This entails setting and applying ground rules with the group, getting agreement on an agenda, and keeping the meeting on schedule.

A facilitator must be aware of both individual behavior and group dynamics. Every participant should understand what is under discussion. Every participant should feel comfortable engaging in the process. To conduct a discussion that is open and understood by all, sometimes the facilitator must know something about the participants' capacities, interests, and relationships prior to the meeting.

PRACTICE TIP

BOX 37: PRACTICAL TIPS FOR INTERVIEWERS

A good interviewer:

• Honors basic courtesy.
• Remembers participants' names and titles and uses appropriate forms of address.
• Uses body language and eye contact appropriately to engage with participants.
• Spends most of the interview listening rather than talking.
• Uses active listening, follow-up questions, and other techniques to assure that answers have been correctly understood and to encourage people to contribute complete information.
• Keeps the purpose of the interview in mind and uses good judgment to achieve that purpose (even if it means deviating a bit from the protocol).
• Remembers to thank participants.
• Does not raise undue expectations about what participants' input or the assessment will deliver.
The way interviewers present themselves can also encourage participation. Interviewers should Survey Administration be appropriately dressed, well prepared, likeable, Interviewers administer most surveys in forest happy, sensitive to the mood of the respondents, governance assessments. Surveys administered honest, patient, and willing to answer questions. through paper or electronic copies have a built-in bias toward educated respondents. The interview subjects should feel that the in- terviewer is impartial and not trying to influence Ideally, you want everyone identified in your their responses. Occasionally, governments want sample to respond to your survey. To increase to send minders to accompany interviewers. the percentage of people who respond, inter- Discourage this practice if it will cast doubt on viewers can contact people more than once, the neutrality of the process. perhaps beginning with a letter or visit to explain 94 ASSESSING FOREST GOVERNANCE: A PRACTICAL GUIDE TO DATA COLLECTION, ANALYSIS, AND USE If the interviewer needs to bring along an inter- For example, some allow the interviewer to ask preter, the interpreter should have traits similar a follow-up question about what appears to be to a good interviewer. Don’t forget to train your a partial or non-responsive answer from the interpreters on good practice. respondent. The interviewer should have clear instructions on points like these. With each respondent, the interviewer should begin with introductions. The interviewer should Some surveys use two interviewers: one to ask show the respondent the interviewer’s official questions and one to take notes. Some surveys identification and should explain the purpose of use one interviewer with an electronic device the survey, the potential benefits of participating, to record the interviews. In the latter case, the and the confidentiality of responses. interviewer should also try to take notes at the interview, as recording devices sometimes fail or At this point, the interviewer may be faced with become lost. a reluctant participant. Some training in persua- sion, especially in the face of common reasons At the end of the interview, the interviewer for not participating, will increase the interview- should thank the respondent for participating, er’s success in getting people to take the survey. answer any questions about the assessment that may have occurred to the respondent dur- Some survey protocols require the interviewer ing the survey, and get permission to contact to read only the text of the questions as writ- the respondent again if necessary to verify or ten. Others give the interviewer some flexibility. clarify responses. Data Collection 95 BOX 38: DATA COLLECTION AND ENTRY PRACTICE TIP When undertaking data collection, it is easy to feel that you and your team have a full grasp of exactly what information is being collected. However, when you return to that information later in the office, or provide it to another team member responsible for data entry, you may realize that you cannot remember key points, have missed details, or have different interpretations of key facts. Good practice is critical to ensure that both the right data are collected and the information can be easily used. Here are a few rules of thumb. Before collection: • Standardize response formats where possible. This goes back to tools design (Chapter 3). 
PRACTICE TIP

BOX 38: DATA COLLECTION AND ENTRY

When undertaking data collection, it is easy to feel that you and your team have a full grasp of exactly what information is being collected. However, when you return to that information later in the office, or provide it to another team member responsible for data entry, you may realize that you cannot remember key points, have missed details, or have different interpretations of key facts. Good practice is critical to ensure that both the right data are collected and the information can be easily used. Here are a few rules of thumb.

Before collection:
• Standardize response formats where possible. This goes back to tool design (Chapter 3). Standardization may vary, from giving interviewers bullet points on a blank sheet of paper to guide note taking to providing a full set of closed questions (see Box 30). Standardizing does not necessarily limit opportunities for further detail to be added; it does, however, provide you with a clear baseline for analysis and helps to reduce the subjectivity of interpreting information.
• Give teams clear instructions. This goes back to training. Interviewers should understand what data the interview or survey needs to capture, how to handle non-responsive answers, and how and when to prompt reluctant subjects or probe for more details.
• Identify roles. Identify who will ask questions, who will take notes on what issues, and who will enter the data. Having clear roles helps ensure information is collected in every area and managed effectively.

During collection:
• Remember to capture some basic information. This basic information includes who (is interviewing and being interviewed), how (the data are being collected), where (location), and when (time and date). These can be used to help clarify data and may have a bearing on your analysis.
• Write clearly. Illegible handwriting can mean that data are lost or require considerable time to enter.

After collection:
• Enter it early. The sooner you enter the data into a final format (whether a master spreadsheet or your own compilation of interview notes), the more likely you are to remember any points you forgot to note down or how to interpret your own notes. Quick data entry will also help you identify any limitations in your approach early, as opposed to after undertaking all your data collection and only then realizing you have missed a key point.
• Encourage feedback. Have data collectors report back on what worked and what did not work in the field. You may find you need to adjust wording of frequently misunderstood questions or add further questions to bring out details. Consider, too, whether you are really getting the data you need. Assessments sometimes must revise their survey instruments or interview protocols when they discover that they are not collecting the data they need to fulfill their objectives.

Coding

The term "coding" refers to translating information into a form suitable for analysis. The kind of coding that you need depends on the kind of data that you collect and how you intend to analyze your data. For example, if you are only going to use interviews to provide illustrative examples, you may be able to code your research as a set of narrative answers to standard questions. The case studies in Annex I show information coded in this informal way.

Another informal example of coding frequently happens with the input from vetting sessions. A common practice is to make a table of the comments that you receive, with columns for the source of the comment, the substance of the comment, and the response of the assessment team to the comment.
In contrast, coding of survey answers is more formal. The design of a system of formal coding begins when you design your tool. For example, if you design a closed survey question (see Box 30 in Chapter 3), you have already described all the valid responses to the question. The coding job of the interviewer is to assign the survey subject's answer to one of the valid responses.

Having multiple coders opens the possibility of inconsistent coding for even the simplest questions. To increase consistency in coding, you can establish rules in your data collection manual (Chapter 3) and provide training for handling cryptic and blank responses (Step 1 of this chapter). After a non-standard response, you may want to direct the survey taker to ask again, reading off the list of possible responses. If the reply is still cryptic, you may want to give the survey taker the option of coding the response as "response not understood," "did not respond," "not applicable," "did not know," or "refused to answer," depending on the nuance of the response.

Another option is to have a single coder for the ambiguous responses. To do this, when people in the field encounter an ambiguous response, they record it exactly, either in writing or via audio or video, and flag it for the coder. A single coder acts as referee, looking at the recorded response, interpreting it, and filling in the forms. Although body language and other information may not come across in the recorded response, having a single referee assures some uniformity in how the assessment treats cryptic responses. A few assessments take this a step further and have a single person review and code every response, not just the questionable ones.

If a tool uses open questions, or—like content analysis—looks at material not prompted by specific questions, the coding task is more subjective. Here you will usually want to come up with coding categories when you design the tool. (For example, for the question, "Does the community have fair access to fuel wood from the forest?" you could code the responses as positive (fair), neutral, or negative (unfair).) As you pilot the question, you may decide that you need to add coding categories (e.g., "not sure," "it's different for different people in the community," or "it varies").

Once you define a coding system, generally you will want to give your data collectors forms that are consistent with that system. Particularly if the system is formal, data collectors will be able to code the data in the field. Design of forms to avoid errors in entering data in the field and transcribing data in the office is an art in itself. Box 34 offers some practical tips for form design.

Obviously, this discussion only introduces the topic of coding. For more on coding in surveys, see Iarossi (2006), p.187 and Bryman (2012), pp.247–49; for coding in content analysis, see Babbie (2010), pp.338–42 and Bryman (2012), pp.298–304, 576–78.

Step 3: Quality Assurance

Quality assurance encompasses the steps you take to make certain that data collection is correct and complete.

In some circumstances, quality assurance is aimed at preventing harm to data. You might be concerned about the integrity of the data as it travels from the researcher collecting it to the data manager. These days it is usually fairly easy to send the data directly, through e-mail, with little chance of it being altered. However, if the data is going to pass through multiple hands, including the hands of people who might have an interest in the outcome of the assessment, you may want to consider methods to safeguard the data. These might include sealing the data in a tamper-evident envelope, documenting the chain of people with custody of the data, or sending the data by trusted courier.

In most assessments, the emphasis of quality assurance is to check the data already collected. Assessments can take four kinds of these actions: editing, cleaning, verification, and triangulation.

Editing

Editing includes steps to make sure the data from the field are complete and readable. The data manager should review all the data coming from the data collectors. If the manager finds problems, the manager can check with the data collector to resolve them. For example, survey questionnaires may come back without answers entered for some questions. The manager can ask the survey taker if this was a failure to ask the question, a failure to record a clear response, or a decision not to record an unclear response. Handwritten notes from an interview or workshop may be difficult to read. The manager can go over unclear portions with the note taker.

During editing, the data manager may also spot problems with accuracy or consistency in scoring. This may call for improvements in coding (discussed in Step 2).
Step 3: Quality Assurance
Quality assurance encompasses the steps you take to make certain that data collection is correct and complete.

In some circumstances, quality assurance is aimed at preventing harm to data. You might be concerned about the integrity of the data as it travels from the researcher collecting it to the data manager. These days it is usually fairly easy to send the data directly, through e-mail, with little chance of it being altered. However, if the data is going to pass through multiple hands, including the hands of people who might have an interest in the outcome of the assessment, you may want to consider methods to safeguard the data. These might include sealing the data in a tamper-evident envelope, documenting the chain of people with custody of the data, or sending the data by trusted courier.

In most assessments, the emphasis of quality assurance is on checking the data already collected. Assessments can take four kinds of actions: editing, cleaning, verification, and triangulation.

Editing
Editing includes steps to make sure the data from the field are complete and readable. The data manager should review all the data coming from the data collectors. If the manager finds problems, the manager can check with the data collector to resolve them. For example, survey questionnaires may come back without answers entered for some questions. The manager can ask the survey taker if this was a failure to ask the question, a failure to record a clear response, or a decision not to record an unclear response. Handwritten notes from an interview or workshop may be difficult to read. The manager can go over unclear portions with the note taker.

During editing, the data manager may also spot problems with accuracy or consistency in scoring. This may call for improvements in coding (discussed in Step 2).

Cleaning
Cleaning begins during editing, by flagging data that stand out or raise suspicions of an error (either because of the way they were collected or because the values are so different from the other data). If the data manager can determine that the method of collection was irregular, the data can be set aside or reported with an accompanying caution. If an entry is obviously flawed (for example, a six reported on a scale of one to five, or a missing entry, or a “yes” recorded for an entry that should have a numerical value), sometimes the data manager can trace the error and correct it.

If a piece of data is simply an outlier, the situation is more delicate. Almost every group of measurements is likely to include some outliers. To throw those out automatically would introduce bias and make your findings look more certain than they actually are. Investigate how the outliers were collected. If you find irregularities in circumstances or procedures, document them and pass the information on to the analysts to decide how to treat the anomaly.

Data editing and cleaning can also include “tidying up” the appearance of the data: standardizing spelling and capitalization, correcting typographical errors, checking for the same data unintentionally entered twice, and so forth. In these cases, the cleaning should not actually change the values of the data.
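Simple range and completeness checks like these are easy to automate. The sketch below flags impossible values and missing entries on a one-to-five scale for the data manager to trace; the record layout and field names are invented for illustration.

```python
# Flag records that need review: missing entries and values outside the
# valid scale. Flagged records go to the data manager for tracing and
# correction, not automatic deletion, per the cautions about outliers above.

records = [
    {"id": "S001", "score": 4},
    {"id": "S002", "score": 6},     # impossible on a one-to-five scale
    {"id": "S003", "score": None},  # missing entry
]

def review_reason(record, low=1, high=5):
    """Return a reason for review, or None if the record looks clean."""
    score = record["score"]
    if score is None:
        return "missing entry"
    if not low <= score <= high:
        return f"value {score} outside the {low}-{high} scale"
    return None

for record in records:
    reason = review_reason(record)
    if reason:
        print(f"Review {record['id']}: {reason}")
```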
Verification
Verification includes steps to check that the data was properly collected and transmitted. It may be as simple as comparing a sample of the data received against the copy kept by the researcher. The data manager may review recordings to make sure that the researcher followed the protocol and coded the responses accurately. The data manager may contact participants to be sure they actually did participate, or may send participants summaries of their responses and ask them to confirm the accuracy. In some cases, the data manager can repeat the measurement. With secondary data, the manager can go to the secondary sources and verify the data there.

Triangulation
Triangulation is an attempt to confirm the measurements by finding another source that has made similar findings. Finding identical measurements is rare; some sources, however, may have information close enough to suggest whether the new data is consistent with what is already known. The data manager can also pass findings by experts or well-informed stakeholders to see if the new findings are consistent with accepted understandings. If they are not, it does not automatically mean that the new data is flawed, but it invites the data manager (or, later, the data analysts) to explore the reasons for the differences.

Points on Process
Practical and Ethical Data Collection
When you collect data from people, they must trust you. Otherwise, they may withhold information or even lie. To gain that trust, you must be trustworthy. To be trustworthy, your actions must be open, honest, and ethical.

The following are some ethical and practical considerations in data collection. In the end, they are simply steps to avoid harm to the public, the collection team, the reputation of the assessment, and the quality of the results. You may wish to incorporate some of them in your data collection manual or stress them in your training.

Be more than transparent—practice outreach. There is a range of formal and informal notification steps that you can take to let people know exactly what the assessment is doing. It is sometimes mandatory to get permission from national or local government to collect data, and it is almost always a matter of courtesy to let leaders of affected groups know what you are planning. This may mean going to the national government, the local government, community leaders, and even to business leaders if you will be working with their people or talking about their lands. In some cases this may result in leaders suggesting respondents or ways of working. While this can help to ensure you gain insight from the best people in a culturally appropriate manner, it may also restrict your access and bias your findings. As such, care should be taken to maintain representative samples and participation by a range of stakeholders.

Beyond getting official permission, people who might be contacted by your team (as well as, in some cases, the community as a whole) should know what you are doing and why. Do not give rumors a chance to start. For example, if you are paying people compensation to participate in a survey, you want people to know that participation is by invitation, not according to who shows up to meet you.

Gain consent from participants. People who participate directly in data gathering should give their prior informed consent to participation. Sometimes this will be a formal matter, with the assessment documenting that it received consent. Sometimes it will be informal or implicit.

BOX 39: PROVIDING COMPENSATION (PRACTICE TIP)
Providing financial compensation to survey, workgroup, or focus group participants is a complex and politically charged process. It may enable you to access respondents who would not normally have the time or inclination to participate in an assessment, thus giving you a more representative sample—but it also raises ethical and practical considerations relating to levels of compensation, how it is paid, and who has access to it (for example, who from an organization should be invited). Approaches to addressing these challenges vary widely by country, region, and organization. Seek advice from others on this matter. Check with your funders, political/institutional sponsors, and other implementers (see Chapter 2, Step 3) to identify what an appropriate approach might be.
For example, you may send out invitation packets to a workshop explaining all about the event and its role in the assessment. If people accept the invitation, there may be an implicit grant of consent, though you may want to discuss it further with the participants at the start of the workshop to make sure there is no misunderstanding.

Consider, discuss, and respect confidentiality. If confidentiality is an issue, you should come to an understanding about it with participants and honor the expectations that you create. People who are taking part in a survey should know whether people in power could find out if they participated or how they answered. People at group events should similarly know whether their remarks could be attributed to them. Bring the topic up and make sure that you and the participants share the same understanding.

If discussing confidentiality might not be enough to take care of participants’ concerns about the consequences of being candid, you must take reasonable steps to reassure them. If responses could be affected by the presence of an observer from an agency, from local government, from a local business, or anywhere else, do what you can to exclude that observer. If participants might change their answers because they see that a government employee is conducting the event, see if you can get a neutral party to be the data collector, such as a researcher from the local university or a field worker from a trusted NGO.

Keep everyone safe. If it appears that participation in the data collection could somehow later cause problems for the participants, take extra caution. Consider if there might be additional ways to protect the participants. Consider whether their consent to participate was truly given freely. In the end, even with consent, safety should be your primary consideration. If you cannot assure the safety of participants, do not involve them in the assessment.

Have a similar concern about the safety and integrity of your team. Be sure they are isolated from outside pressures to produce particular results. When you send people to the field, make sure they have adequate transport, communication, training, and security to assure their safety.

Guard the integrity of the data. Be aware of your own biases, and do not let them affect data collection. Do not distort people’s comments or take them out of context. Do not alter or invent data. If it does not impinge on confidentiality, keep records of your raw data so that they can be used later to verify findings.

You may also want to have researchers take steps to make it easier for the data manager to check the quality of the data. These steps include having the researchers document how they collected the data, keep recordings of interviews and meetings, and keep copies of all the data they submit.

BOX 40: ETHICAL RULES OF THUMB (PRACTICE TIP)
• Be candid with everyone.
• Get people’s informed consent to participate.
• Honor reasonable expectations of privacy and confidentiality.
• Keep participants and your team safe.
• Guard the integrity of the data.

CHAPTER 5: INTERPRETATION AND ANALYSIS
At this point, depending on the tools you have used, many assessments have already begun to process and analyze their data. Almost all assessments, however, have further interpretive work to do. The interpretive portion of an assessment has one, two, or three steps, depending on the nature of the assessment. The first step is common to all assessments: organizing and processing the data. The second step is common to almost all assessments: analyzing the data.
The third step is found in many assessments: making recommendations for action.

STEP 1: PROCESS THE DATA. Assessments often produce complex data sets. You must process the data to make it more readily understood. This may mean aggregating the data, calculating composite measures, or presenting the data graphically.
STEP 2: DO THE ANALYSIS. Once you have the data in an understandable form, you can analyze it. For example, you might make comparisons over time, among regions, or between institutions.
STEP 3: MAKE RECOMMENDATIONS. Having analyzed the data, you can identify priorities and make recommendations for action.
POINTS ON PROCESS: VETTING AND VALIDATION OF ANALYSIS. You should go to experts and knowledgeable stakeholders to vet and validate your analytic work.

Interpretation takes three steps with progressively broader visions:
• Processing the data, which involves dealing with individual data points and, often, producing basic summaries of it.
• Analyzing the data, which involves seeking patterns and meaning.
• Making recommendations, which involves placing patterns and meaning in the larger context of governance and drawing inferences about how to improve governance.

The distinctions between these steps—and sometimes between data gathering (Chapter 4), interpretation (Chapter 5), and dissemination (Chapter 6)—can blur, and the steps can overlap. For example, the workshop in the PROFOR case study in Uganda (Annex I) not only scored indicators but also prioritized them, taking on tasks in Chapters 4 and 5. The cleaning of survey data (Chapter 4, Step 3) may take place during data entry and require an understanding of which data points are suspiciously far from the average response, an understanding that only comes after processing some of the data (Chapter 5, Step 1). And analysis (Chapter 5, Step 2) should produce results geared to the target audience for dissemination (Chapter 6). For that reason, you should read this chapter in conjunction with Chapters 4 and 6.

This chapter ends with a discussion of the vetting of data processing and analysis methods. You should keep track of your methods. You will have to explain them if you vet the methods with stakeholders, and probably again in any assessment reports that you produce (Chapter 6).

BOX 41: DRAWING ON INTERPRETIVE TECHNIQUES FROM OUTSIDE THE FOREST SECTOR (PRACTICE TIP)
Chapters 3, 4, and 5 begin with planning steps (which require specific understanding of the forest sector), move to gathering steps (which tend to apply to governance assessment in many sectors), and end with interpretive steps (which apply to many kinds of evaluation activities and social research). As you work on interpreting your data, be aware that you can get ideas from cases and tools outside the forest sector and even outside the field of governance assessment.

Step 1: Process the Data
Assessments commonly process raw data in three ways. They organize the data, they summarize the data, and, in the case of quantitative data, they describe it statistically. Often, they put the data in a visual form for easier comprehension.
In all three steps, for all but the simplest data sets, assessments frequently use computers.

Data Organization
It is possible to organize paper copies of data, but assessments usually enter data into digital form. Digital records are easier to access, share, and edit than paper records. With the power of computers, digital entry often makes summarizing and visualizing the data easier as well.

Assessments use word processors to organize text, such as interview notes and transcripts, reports of experts, and summaries of outputs from focus group sessions and workshops. The assessment analysts can then search the entered text for keywords, copy and paste passages to group together text addressing similar subjects, or copy quotations for use in reports.

Assessments use spreadsheet and database applications to handle both qualitative and quantitative data. For example, if an assessment used the same protocol in several key interviews, it could enter summaries of the interviews in a spreadsheet (with each row devoted to a single respondent and each column devoted to a single question in the interview protocol). This would make it easy for an analyst to look at all the responses to one question.

Sometimes, as with the PROFOR Uganda case (Annex I), the tools produce indicator scores. Spreadsheets are then a handy way to record and organize the scores. If the assessment has scored the indicators more than once—for example, by regions as in the World Bank Russia assessment (Kuzmichev et al. 2012)—a spreadsheet allows easy comparison of the scores.

Assessments can find computer applications specifically designed for qualitative data entry and analysis. Babbie (2010), pp.406–13, lists several and gives an introduction to the use of NVivo 8 (formerly NUD*IST).

BOX 42: ASSURING QUALITY IN DATA ENTRY (PRACTICE TIP)
Data entry can introduce errors. The data manager should apply a quality assurance method to catch errors of transcription, even if the method is as simple as proofreading the entries. A common error is to fail to enter a particular interview or survey response at all. Keep careful track of the number of items to be entered and compare them against the number of items actually entered. Give each survey or interview a unique identification number to make it easier to see if an item has been entered twice or if an item is missing.

BOX 43: SEARCHING FOR SOFTWARE (PRACTICE TIP)
The text in this section provides an outline of only a few of the different software applications available to manage both quantitative and qualitative data. Web searches for “qualitative data analysis software” or “statistical packages” will produce links to many other applications, some of them free and open-source and some of them web-based. The site http://solutionscenter.nethope.org/ profiles ICT products for international humanitarian work, including database, analysis, and monitoring and evaluation products. Take time to review the options available and seek advice where possible on which will be the most practical. Iarossi (2006, pp. 191–95) gives advice on selecting and using software for data collection in survey work.
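The row-per-respondent layout and the checks in Box 42 can be combined in a few lines of scripting. The sketch below, written against a hypothetical CSV file and column names, counts duplicated and missing identification numbers and pulls every response to one question; any spreadsheet or database tool can do the same.

```python
# Load interview summaries organized as one row per respondent and one
# column per protocol question, then run the Box 42 entry checks.
# The file name, column names, and expected count are all illustrative.
import pandas as pd

df = pd.read_csv("interview_summaries.csv")  # columns: resp_id, q1, q2, ...

# Box 42: unique IDs make duplicated or missing items easy to spot.
duplicated_ids = df[df["resp_id"].duplicated(keep=False)]
missing_ids = df[df["resp_id"].isna()]
print(f"{len(duplicated_ids)} rows share an ID; {len(missing_ids)} rows lack one")

# Box 42: compare items entered against the field team's tally.
expected_interviews = 45
print(f"Entered {len(df)} of {expected_interviews} interviews")

# With this layout, all responses to one question form a single column.
print(df["q1"].dropna())
```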
For quantitative data, assessments can also find computer applications specially designed to facilitate data entry and analysis. For example, the NAFORMA study (see Annex I) used Open Foris, a set of applications under development by FAO through the FAO-Finland Forestry Programme. FAO is designing the applications specifically to support forest biophysical, socioeconomic, and governance assessments. Open Foris Collect is a data entry and management application that also assists in quality assurance and cleaning (Chapter 4, Step 3). Open Foris Calc (as of February 2014, not officially released) will perform common quantitative data processing calculations. The Open Foris web page is at http://www.fao.org/forestry/fma/openforis/en/.

A large number of free and proprietary applications for data entry and statistical analysis are available. See Box 43 for some suggestions for locating them. Bryman (2012), chapter 16, offers a detailed introduction to one commonly used application (the Statistical Package for the Social Sciences (SPSS), currently marketed by IBM).

BOX 44: ARCHIVING (PRACTICE TIP)
The data manager will want to select a stable format and place to archive electronic data so that they are available to other researchers, to critics, and to others doing similar assessments in the future. If the data are to be accessible over several years, the best formats are ones in wide use. A format used by a specialized program is more likely to become obsolete than the format used by a popular word processing, spreadsheet, or database application; many specialized applications, however, can export data in common formats for sharing and archiving. If the data volume is large, the manager may want to use a compression application to reduce the size of the archive. Again, the safest choice is to use a method currently in wide use.
For stability, an institution is probably a better keeper of the data archive than an individual. Better still is to have multiple copies stored in multiple places. Stable businesses, international development organizations, NGOs, government agencies, or university libraries are possible homes; so too are online storage facilities (often referred to as “the cloud”), which will hold data and allow access by different groups. If parts of the data are sensitive or confidential, the manager will want to establish rules for access and will want to find storage sites willing to apply these rules.

Data Summary
If you have a complex set of data, you will want to capture its meaning in a simpler form. What that means depends on the kinds of data, the methods, and the tools you have used. You can summarize the outputs of some tools most easily in words. With loosely structured key stakeholder interviews, for example, you can write a few paragraphs capturing the key points of the interview or the responses to key questions. Some tools, including surveys and tools using content analysis, produce outputs that need quantitative processing. Some tools, such as expert scoring of indicators, produce outputs that already include analysis. Depending on how those outputs are structured, they may be open to further quantitative comparison (see, for example, the World Bank assessment in Russia, Kuzmichev et al. 2012) or better suited to qualitative treatment.

Qualitative processing. Assessments can use various means to make qualitative data easier to grasp and summarize. One option is the addition of keywords to the margins or text of qualitative data (i.e., interview transcripts). Adding keywords is a form of coding, a concept introduced in Chapter 4 in the context of entering survey answers. If you are using indicators, the keywords can correspond to indicators or the components that the indicators reflect. Adding keywords makes locating relevant passages easier.

You can also add more detailed marginal notes to transcripts. These may be simple summaries of the content, cross-references to similar or contrasting content, or the start of a more thorough analysis of the data.

BOX 45: CODING WRITTEN MATERIALS (PRACTICE TIP)
Unlike survey coding, where you set the codes when you write the questions, you may write or revise the codes for interview transcripts and other textual material after the material is collected. Bryman (2012), pp.576–77, offers some tips for this kind of coding:
• Code as soon as possible after your data are collected.
• Read through the full set of documents once without stopping to take notes.
• Read the set again, this time indexing significant remarks and observations. These will be the basis for your coding.
• Look at your index entries and try to sort them, connecting them to your narrative framework or components and indicators. Assign a standard code to each group of sorted entries.
• Remember that you can assign a single passage to more than one group.
• Don’t worry at first about having many potential codes; as you sort, however, aim to combine groups to highlight connections among related materials. You may find that you can reduce the number of codes without compromising your understanding of the documents.
• Remember that coding is only a first step. You will still have to analyze the coded data.
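Keyword tagging of transcripts can also be supported with simple scripts when the volume of text is large. The sketch below is a crude illustration of the idea, with invented keyword lists tied to hypothetical indicator names; in practice a qualitative analysis package (see Box 43) gives far better control.

```python
# Tag transcript passages with indicator keywords by simple term matching.
# The indicator names, term lists, and passage are invented for illustration.

KEYWORDS = {
    "transparency": ["published", "disclosed", "public notice"],
    "participation": ["consulted", "meeting", "community input"],
}

def tag_passage(passage):
    """Return the indicator keywords whose terms appear in the passage."""
    text = passage.lower()
    return [indicator for indicator, terms in KEYWORDS.items()
            if any(term in text for term in terms)]

passage = ("The harvest plan was never disclosed, and the community "
           "was not consulted before the permits were issued.")
print(tag_passage(passage))   # ['transparency', 'participation']
```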
Quantitative processing. The full statistical treatment of quantitative data is a subject beyond the scope of this guide. Libraries and the Internet offer many guides to statistics. Most of the data entry and analysis applications referred to above have built-in capability to perform statistical analysis.

No statistical method is appropriate for all situations. Use only statistical methods that you understand. Beware of using statistics to give your findings a false appearance of validity.

This guide will touch briefly on the topic of averaging, a basic way to summarize data and one that has come up in assessments to date. Box 46 provides a quick overview of the concept of averaging. The key lesson is that there are three kinds of averages and that they can yield different results. Most people are familiar with the kind of average known as the mean, but assessments may also report medians and, less commonly, modes.

BOX 46: THREE WAYS TO REPORT AVERAGES—MEANS, MEDIANS, AND MODES (LANGUAGE CHECK)
Three common aggregates are kinds of averages: the mean, the median, and the mode. Say you surveyed a village and asked about income. You found that 10 households made $1,000/year, 10 made $2,000/year, 15 made $3,000/year, and two made $50,000/year.
The mean income is what most people think about when they hear “average.” It is the sum of all the income divided by the number of households. In this case, that’s $4,730/year.
The median income is the level that has half the households earning as much or more and half earning as much or less. Among the 37 households, the median income would be $2,000/year.
The mode is the most common income level: $3,000/year.
In this case, just reporting the mean might seem to make the villagers better off than they actually are. Only two households in the village earn as much or more than the mean. Just reporting the median or mode would hide the fact that the village has some high-income people. Reporting both the mean and the median—the mean income is $4,730, but more than half the households in the village earn $2,000 or less per year—gives a fuller picture of income in the village.
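Box 46’s village numbers can be checked with Python’s standard statistics module; the snippet below reproduces the three averages exactly as the box describes them.

```python
# Recompute the Box 46 village example: 10 households at $1,000/year,
# 10 at $2,000, 15 at $3,000, and two at $50,000 (37 households in all).
from statistics import mean, median, mode

incomes = [1000] * 10 + [2000] * 10 + [3000] * 15 + [50000] * 2

print(round(mean(incomes)))  # 4730: total income of 175,000 divided by 37
print(median(incomes))       # 2000: the 19th of the 37 sorted values
print(mode(incomes))         # 3000: the most common income level
```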
Averages can also be weighted. That is, the components being averaged can be multiplied by a weight, then summed, and then divided by the sum of the weights to calculate a weighted mean. If the sum of the weights equals 1.00—for example, if the weights are a set of percentages that add to 100—then the final step of division is unnecessary.

As an example, a local governance assessment in Paraguay assigned weights to its indicators (UNDP 2009, pp.86–89). The three most important indicators, dealing with service delivery, had weights of seven percent. The 17 least important indicators had weights of two percent. The remaining 11 indicators had weights between two and seven. The total weights summed to 100 percent.

Besides assigning weights to indicators, an assessment could assign weights to provinces or regions if data were collected regionally, or to agencies if separate data were collected to rate the performance of each agency. If the data collection is stratified (see Chapter 3), each stratum can have its weight in determining a score for the whole.

If using weights, you should decide on them before you collect data, or you should allow stakeholders or experts to set the weights as part of the data collection process. Setting the weights later, after you know the initial scores of the items to be weighted, opens the door for bias.

If you intend to calculate a composite score based on the scores of several indicators or the scores of several provinces, and you decide not to use special weighting, you have made a tacit judgment that all the scores are worthy of the same weight. In other words, you cannot escape making a judgment by deciding not to use weighting. You may wish to make this judgment early in the process, as early as when you are designing the data collection tool.
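The weighted mean described above takes only a few lines to compute. In this sketch the indicator scores and percentage weights are invented; only the arithmetic follows the text.

```python
# Weighted mean: multiply each score by its weight, sum, and divide by the
# sum of the weights. Scores and weights here are invented for illustration.

scores = [3.2, 2.5, 4.0]       # three indicator scores
weights = [0.07, 0.07, 0.02]   # percentage weights, as in the Paraguay example

weighted_mean = sum(s * w for s, w in zip(scores, weights)) / sum(weights)
print(round(weighted_mean, 2))  # 2.99

# If the weights for the full indicator set sum to 1.00 (100 percent),
# the final division by sum(weights) is unnecessary, as noted above.
```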
Visualizing Results
A visual representation of data can help people grasp them more easily. Many spreadsheet programs can quickly turn quantitative data into graphs and charts. For creative ideas about presentation of quantitative data, see sources like Gapminder (http://www.gapminder.org/), Tufte (2001), and other resources listed in Annex II.

Qualitative data can sometimes be translated into quantitative formats and displayed visually. In a recent forest governance assessment in Russia, for example, the analysts came up with a percentage representing the difference between “ideal” forest governance and the scoring of their indicator set in Russia. They presented it as a pie chart, analogous to the one in Figure 3.

FIGURE 3: GRAPHIC CONVEYING THE DIFFERENCE BETWEEN IDEAL SCORING AND ACTUAL SCORING OF FOREST GOVERNANCE INDICATORS IN RUSSIA (pie chart contrasting the ideal model of forest governance, 100 percent, with the current status in Russia, 65 percent). Source: Kuzmichev et al. 2012, p.85.

Assessments have used simple color schemes (red=poor, yellow=fair, green=good) or shading to convey the meaning of large groups of indicator scores. Figure 4 presents a shaded visualization. For an example of how color can quickly convey scores on large tables, see the annexes of the Indonesia PGA (Situmorang et al. 2013).

FIGURE 4: EXAMPLE OF A TABLE USING SHADING TO CONVEY THE RELATIVE QUALITY OF SCORES
Indicators are scored on a zero-to-four scale. Under 2 is poor, between 2 and 3 is fair, 3 or better is good.
Government rating for … | Province A | Province B | Province C | Province D
Forest Planning | 3.6 | 2.5 | 1.8 | 2.6
Agency Capacity | 2.9 | 1.9 | 1.7 | 2.8
Adequacy of Budgets | 2.2 | 1.0 | 1.0 | 3.0
Conflict Resolution Effectiveness | 2.6 | 1.0 | 1.0 | 1.1
Parliamentary Engagement in Forest Issues | 2.0 | 1.6 | 1.4 | 2.0
Average | 2.66 | 1.6 | 1.38 | 2.3

Figure 5 is from a report on governance assessment in Liberia. It conveys the value of indicator scores by both color and bar height. Tall, red bars are poor scores. Short, green bars are good scores. Items with no bars at all are best scores. This method allows the reader to quickly see the difference between the actual score, represented by the colored bar, and the ideal score, which has no bar at all.

FIGURE 5: USING COLOR IN A BAR GRAPH TO DISPLAY INDICATOR SCORES FROM LIBERIA. Source: Halton et al. 2013.

Radar or spider graphs can also show the difference between the actual situation (the inner line) and the ideal situation (the outer line). Figure 6 is a radar graph showing part of the indicator scoring in the PROFOR Uganda case (Annex I). The assessment used a spreadsheet chart-making function to create this graph.

FIGURE 6: RESULTS FROM SCORING INDICATORS IN UGANDA PRESENTED IN A RADAR GRAPH (axes, scored zero to five, include data accessibility, notice to public, transparency in products allocation, independence and freedom of media, availability of venues to report concerns, feedback from communities, security of access to resources, knowledge of formal rules, consultation and local knowledge, local involvement in forest management, local influence on forest policy, performance evaluation, and accountability of forest agency officials, each plotted for the ideal condition and the actual condition). Source: Kishor and Rosenbaum 2012.

Techniques like word clouds can convey a general sense of the key issues discussed in a report or workshop. Side-by-side placement of word clouds generated from outputs of different focus groups or workshops will show the differences in the concerns that they discussed and the emphasis that they gave to them. Figure 7 offers an example.

FIGURE 7: HOW WORD CLOUDS COULD BE USED TO COMPARE CONCERNS RAISED BY GOVERNMENT OFFICIALS (TOP) AND NGO OFFICIALS (BOTTOM). Source: Generated by the authors of this guide using Wordle.net.

If the audience that you want to reach has limited literacy, you may want to convey results through symbols other than numbers or words. You can devise icons for various attributes: a coin to represent budgets, a pair of shaking hands to represent conflict management, a speaking figure to represent public participation, and so forth. These can be shown in different colors, numbers, or sizes to indicate importance.
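A radar graph like Figure 6 can come from a spreadsheet chart function, as in the Uganda case, or from a short script. The sketch below uses Python’s matplotlib with invented axis labels and scores to plot an actual-versus-ideal comparison.

```python
# A minimal ideal-versus-actual radar (spider) graph in matplotlib.
# Axis labels and scores are invented; only the chart style follows Figure 6.
import math
import matplotlib.pyplot as plt

labels = ["Data accessibility", "Notice to public", "Local consultation",
          "Feedback from communities", "Performance evaluation"]
actual = [2, 3, 1, 2, 3]
ideal = [5] * len(labels)

# One angle per axis; repeat the first point to close each polygon.
angles = [2 * math.pi * i / len(labels) for i in range(len(labels))]
angles += angles[:1]
actual += actual[:1]
ideal += ideal[:1]

ax = plt.subplot(polar=True)
ax.plot(angles, ideal, label="Ideal condition")
ax.plot(angles, actual, label="Actual condition")
ax.set_xticks(angles[:-1])
ax.set_xticklabels(labels)
ax.set_ylim(0, 5)
ax.legend(loc="lower right")
plt.savefig("indicator_radar.png")
```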
Step 2: Do Analysis
An assessment needs to distill the mass of data into something that makes sense within the local context. You may have calculated the average response to your survey questions or the average scores that stakeholders gave your indicators, but what does this mean for the country’s forest sector?

You can apply analytic frames to turn your data into more easily understood information. Here are some suggestions (based in part on Babbie (2010), pp. 394–95, citing Lofland et al.):

• Show cause and effect. Identify conditions that may be contributing to poor governance. Use the data to strengthen the argument that these conditions are truly causes of the problems. For example, show how small budgets are limiting an agency’s capacity to engage stakeholders.
• Show outcomes. Use the data to identify the outcomes that follow from poor governance. For example, compare regional governance scores with reported deforestation rates.
• Make comparisons. If you have used stratification (Chapters 2 and 3), comparing strata can offer insights. You may be able to compare regions within your country or agencies within your government.
• Note the unexpected. Unexpected patterns or values in data can lead to new understanding. For example, if a survey of market prices shows that charcoal prices in one city are consistently higher than in others, that invites a discussion of possible reasons tied to governance. If one and only one region has had no convictions for forest offenses in the last year, that fact invites explanation. It could be simply a lack of forests, but it could be a failure of enforcement.
• Report frequency. Present evidence on the frequency of problems, such as illegal trade or lack of forest access.
• Report magnitude. Present evidence of the size or relative importance of problems.
• Show structure and relation. Use the data to explain what different types of governance problems exist and how they influence each other. For example, explain how lack of coordination among ministries may explain poor customs control or lack of successful prosecution of forest crimes.
• Show process. Identify patterns in the problems and how one problem might tend to lead to another. For example, discuss how lack of control over corruption may lead rural people to seek illegal access to resources rather than to ask for official permission, leading to lack of government control over the resources.
• Score your indicators. If your tool has not produced indicator scores directly, use the data to score your indicators and report the scores. Indicate the ideal score for each indicator and compare the actual score with the ideal.
• Use your data to apply a political, social, or economic theory or framework. For example, use what you have learned from interviews with key stakeholders to evaluate the principles from the PROFOR-FAO Framework (PROFOR & FAO 2011), such as transparency and accountability. Discuss how the actual situation compares with the ideal, or with the practically achievable situation (i.e., a sort of gap analysis).
• Construct a history. Use the data to illustrate the changes in governance over time and to show the trends. This is a useful approach if information from prior assessments is at hand.
• Use anecdotes. Anecdotes serve two functions in analysis. One is to illustrate points established by more robust analysis.
Stories simply carry more rhetorical weight than numbers. (Take care to use anecdotes responsibly, and do not use them as rhetoric to cast doubt on valid findings.) For example, if the collected data show good coordination between the forest administration and other sectors, an example of how the forest agency and the communications ministry worked together to site a radio tower could make the point stick in the minds of readers. If the data show poor coordination, a story of waste or working at cross-purposes would also make the finding more memorable.

The second role for anecdotes is to deal with significant occurrences that are too rare to address by other means. If a war in a neighboring country has sent an influx of refugees onto public forests, you may lack measures to capture the extent of the problem statistically. Your next best option may be to discuss it anecdotally, with stories you have taken from news reports or directly from affected stakeholders. Similarly, if a crisis (e.g., fire, wind, flooding, insects, or disease) has overwhelmed the ability of the forest administration to cope, your data tools may not be tuned to pick out the details of the problem. The best way to add detail may be through anecdotes.

Step 3: Make Recommendations
Not every assessment starts with the purpose of making recommendations for reform, but recommendations can be a first step to turn findings into action (a process discussed further in Chapter 6).

Making recommendations can be the most subjective and controversial part of the analytic phase of the assessment. Recommendations often represent a particular point of view and have political implications.

If the assessment was designed as a reform or advocacy tool and the organizers have been transparent about their point of view from the beginning, then controversy is not really a concern. If an NGO is organizing the assessment, the recommendations can be consistent with its advocacy. If the government is organizing the assessment, then the recommendations can be consistent with the government’s standing policies or political philosophies.

If the assessment is designed as a neutral evaluation, it may want to be more circumspect in its recommendations to avoid charges that it is pushing a political agenda. One approach is to place the task of coming up with recommendations in the hands of respected and independent experts. Another is to give the task to a committee representing diverse interests and ask them to reach a consensus.

Two common kinds of recommendations are identification of priorities and suggestions for action.

Priorities
Priorities are the areas of governance needing improvement most urgently. Identifying priorities makes it easy to report the assessment’s findings in summary form, because the priorities flag the main conclusions that the summary should feature. Making a list of “top 10 priorities for reform” is a simple way to highlight findings for decision makers.

The assessment team may identify priorities based on its own expertise, or it could consult stakeholders to ask them to prioritize the issues identified in the analysis of the assessment’s data. A priority-setting exercise is relatively easy to perform in a short workshop or by survey. As an example, the PROFOR tool (Kishor & Rosenbaum 2012) has stakeholders select a small set of high-priority indicators that can be used to monitor the progress of reforms.

Some items may be priorities because they lay the groundwork for other improvements. For example, if a country has defects in its forest policy, reform of the policy should ordinarily come before reform of laws and institutions implementing the policy. If poor revenue collection is starving the budget of the forest agency, this could be the cause of many governance problems and should be promptly addressed.

Some items may be priorities because they touch on strongly held cultural values or political commitments. A failure of the government to provide indigenous peoples their traditional access to forests, for example, might be a higher priority than a failure of the government to maintain permanent forest inventory sample plots.

Actions
To identify good recommendations for action, consider the SMART criteria introduced in Chapter 1. Good recommendations should be:
• Specific. They should point to concrete changes. Priorities can be general, but actions should be specific.
• Measurable. People should be able to track and verify implementation.
• Achievable. They should fit the available capacity and resources of the people who will implement them.
• Realistic. They should fit the context, including the politics.
• Time-Bound. They should come with a deadline for implementation.

Points on Process
Vetting and Validation of Analysis
You should vet or validate two aspects of your analytic work. The first is your choice of methods for processing and presenting data. The second is your draft analysis and recommendations.

Validating data processing methods. Your choices of methods for processing data are going to be technical ones, but they could have policy implications. For example, you can shade your results by what weights you give the components of averages, and even by how you design graphs and diagrams. You will want to validate your methods, particularly if they are of your own design and are not part of a standard approach.

Assessments tend to validate these methods among technical peers (e.g., seeking criticism from colleagues within your own organization, asking for review by an academic expert, or consulting the assessment’s technical review panel, if it has one).

Because these choices may have policy implications, you could offer stakeholders the opportunity to comment. If stakeholders have enough interest and resources, they can seek assistance from outside experts to review the methods so that they can be sure the assessment is treating them fairly.

If you do not vet your methods as a separate step, stakeholders will still have an opportunity to comment on methods later in the process (when they see your findings, recommendations, or conclusions). By then, however, it will be more difficult for you to make changes in response to comments, and changes might require you to reprocess your data—at some cost.

Vetting analysis and recommendations. As discussed in the preceding chapters, stakeholder involvement is central to both assuring the quality of the assessment and securing support for, or acceptance of, your findings. You should vet your analysis and recommendations. As with other kinds of vetting, you have many options concerning when and how you do it.

BOX 47: VETTING AND VALIDATION (LANGUAGE CHECK)
The dictionary definitions of vet and validate show that the words overlap. To vet is to examine closely and critically. To validate is to check for truth, accuracy, and acceptability.
In assessment work, vetting usually refers to opening your work to outside scrutiny and criticism. Criticism from vetting may be objective, but it often explores matters of values and opinion. Validation usually refers to examination of work on objective points—is this data collection method sound, are the collected data accurate, and do the findings follow from the data? The terms are used loosely, however, and they do overlap.

When to vet. Some assessments vet as they go along. This option fits well with assessments that use stakeholders to develop data. For example, the PROFOR tool pilot in Uganda (Annex I) developed its basic findings in a stakeholder workshop. It then shared these findings with other stakeholders in a series of interviews. This effort was part triangulation (getting information on the same issues from a second source) and part vetting (getting the second source’s reaction to the findings of the workshop). If you want to do both triangulation and vetting, you should get the second source’s views on the issues before you reveal the first source’s views. Otherwise, you risk influencing the second source’s views, making it not a truly independent triangulation.

Some assessments vet when data collection and the initial analysis are complete but before the report is written. For example, the Indufor assessment in Kenya (Indufor 2011) developed its findings through secondary data analysis and expert interviews, then vetted them in a stakeholder workshop.

Some assessments put off the vetting of analysis and interpretation until the vetting of the draft report (Chapter 6).

How to vet. The examples above have suggested a few common ways to vet. The options are similar to other options for reaching out to stakeholders or experts. Some assessments release written copies of findings and seek written responses. Some make oral presentations in meetings or workshops and take oral feedback. Some assessments vet narrowly, for example to a representative sample of stakeholders, to an advisory group, or to a group of professional peers. In doing so, they must take care not to introduce bias or favoritism through the selection of reviewers. Other assessments vet their findings widely, most often through publication and invitation to comment orally or in writing.

SECTION III: USING YOUR ASSESSMENT
OVERVIEW
Section III provides information on how best to use the results of your assessment through active dissemination, as well as how to learn and improve throughout the assessment process through active evaluation. The section builds on the work done as part of the preceding chapters to help you use knowledge of your objectives, target audience, and results to develop and implement a dissemination strategy (Chapter 6). It also provides guidance on how to continually learn from the assessment process, both to improve its quality and to draw lessons for further assessments (Chapter 7).

• Chapter 6, Application of the Results, discusses how to get your results into the hands of people who can use them to improve governance. Usually this means writing a report, although you will want to have a complete strategy for dissemination—and it may call for the results to go out to different audiences in different forms.
You will also want to begin setting the stage for the next assessment, even if that will happen at some unknown date. This means finding ways to institutionalize the assessment process.
• Chapter 7, Learning and Improvement, describes ways to capture lessons from your assessment to make the next assessment better.

A short postscript, about improving the art of forest governance assessment, follows Chapter 7.

CHAPTER 6: APPLICATION OF THE RESULTS
To have an impact, you must communicate the results of your assessment to decision makers, stakeholders, and others. Often, simply writing a technical report is not enough. You must write reports that your target audiences can use, and you must help your audiences find and use your reports. Finally, you should think about how to build upon your work by laying the groundwork for the next assessment.

STEP 1: DECIDE ON A DISSEMINATION STRATEGY. Create a plan on when to disseminate results and how to craft reports or other outputs and distribute them.
STEP 2: IMPLEMENT YOUR STRATEGY. Create your report or other outputs, vet them, publish them, and let people know about them. Spread the word.
STEP 3: INSTITUTIONALIZE FURTHER ASSESSMENT. Some assessments are designed as limited exercises; others are intended as the start of ongoing or periodic assessments. If this is the case with your assessment, you may want to find a permanent “home” for the assessment process and a base of supporters. Having an institution take ownership of the task of doing assessments will make it more likely that records of this assessment will be preserved and that future assessments will happen.
POINTS ON PROCESS: FACILITATE USE OF YOUR FINDINGS. Build the capacity of decision makers and other stakeholders to understand your findings and apply them to forest sector problems.

Once you have completed your analysis, you will want to communicate the results to others in order to maximize their impact. At this stage you should again review your objectives and the planning you undertook in Chapter 2 to consider what you want to achieve through the results and who your target audience is. For example, if your objectives include encouraging reform in response to the results, you must think about how to sow the seeds of change.

Although assessments have many options for reporting results, most assessments first produce a written summary of findings (i.e., a report). Most assessments vet their report internally through peer review or externally with stakeholders. You may consider several other ways to communicate findings, including short versions of the report aimed at particular users, electronic versions, and audiovisual versions. When the report is first released, the assessment team may also hold briefings for senior officials, stakeholders, or the press, or hold workshops to present the findings and discuss what could happen next.

After you release your report, you may want to think about ways to amplify its impact. These could include helping stakeholders to make better use of the report and institutionalizing the process of assessment to assure that future assessments build on your work.

BOX 48: THINKING BEYOND THE REPORT (PRACTICE TIP)
Assessments have made the mistake of seeing the report as the end of the process. This is especially common if the assessment is given over to a consultant whose “deliverable” is the report.
From the beginning, dissemination and communication should be part of the work plan (Chapter 2).

Step 1: Decide on a Dissemination Strategy
When you first wrote a work plan and budget for your assessment (Chapter 2), well before you had any findings, you began to think about dissemination. As you complete your analysis, you will want to revisit your initial plans and polish your dissemination strategy. What outputs will you use to report your results, and how will you get those outputs to your intended audience?

A dissemination strategy is for the internal guidance of your team. It does not need to be lengthy or elaborate.

In devising or revising your strategy, begin by restating the objectives of your assessment, which should be clear from your initial planning (Chapter 1). Those objectives will point to a target audience for your work. You can then consider whether this group is still the most relevant given your results or whether other target groups need to be added.

The target audience is the people you hope will read the assessment and be influenced by the results. If you did a political economy analysis as part of your early planning, the results of that analysis may be helpful in identifying your target audience (and you can revisit the analysis now).

You may have many target groups—from high-level politicians to rural stakeholders—and they will have different levels of sophistication. Even within particular groups, your audience will vary. For example, senior officials in the forest department will have different technical strengths and use a different technical language than targeted officials in sister agencies dealing with law, finance, or trade.

Next, consider if there are any constraints or requirements in place regarding how you disseminate your results. The constraints you face will commonly come from three sources:

BOX 49: RETHINK AND REVISE (PRACTICE TIP)
Even if your initial work plan already includes a detailed dissemination strategy, you should consider revising it before you start to produce outputs. The context of the sector may have shifted since you began work. For example, a change in leadership, a scandal, or a natural disaster may have made the government more interested in specific kinds of reforms. That may suggest a new emphasis for your report or publicity plan. You may have learned new things about your target audience that suggest better ways to reach it. You may have encountered new constraints of time or budget that require changes in dissemination approaches. You may have new insights on how to organize or present your findings. In any case, as the time to write up your findings approaches, you will want to think again about dissemination.

• Directives. For example, an assessment done to fulfill a requirement related to REDD must produce a report that meets that requirement. An assessment funded by an outside donor probably must present a report to the donor. An assessment done by a government or organization with more than one official language may face a requirement to produce its report in more than one language.
• Limited resources. You may lack the budget to produce specialized summaries of the report or to conduct regional workshops on it for stakeholders. You may lack the capacity to support a website after the assessment wraps up and the assessment team moves on to other assignments. From the available options, you must choose the best way to spend your limited resources.
• Constraints of authority (particularly if you are working within a government or large institution). A government office may lack the authority to brief elected officials unless the elected officials request the briefing. Local law may prohibit an educational institution from engaging in activities that appear to be political. Organizational rules may prohibit technical staff from sending out press releases.

These constraints should have been considered during the planning stage but may need to be revisited now, particularly if your results are relevant to a target audience that you had not previously considered.

Once you are clear on the objectives, the target audience, and the constraints, you should try to answer three questions: what kind of outputs the assessment will produce, how the assessment will make the outputs available, and how the assessment will draw attention to the outputs.

Kinds of Outputs
Box 50 lists some frequently used output formats. You should pick outputs that match your target audience. If the target audience is varied, your outputs should reflect that. For example, say that the primary objectives of your assessment are to prepare a report to the cabinet on the status of forest governance and to encourage improvements in governance. You will need to produce a main report for the cabinet. Your actual target audience, however, will be broader and include elected officials, technical staff at the forest agency, and influential stakeholders. You may want your main report to contain technical details to be convincing to the scientifically minded, along with an executive summary or chapter summaries that will communicate to the less technically minded. You may want to prepare separate summaries of the report, perhaps each a few pages long, aimed at particular stakeholder groups. If your target audience includes rural people, you may want to consider steps such as preparing report versions geared to their level of education and preparing versions in local languages. If your audience includes international actors, such as funders, you may want to provide report summaries in the funders’ languages.

BOX 50: SOME OUTPUT FORMATS (PRACTICE TIP)
Most assessments prepare some sort of detailed written report. In addition, you may want to think about:
• Summary versions for decision makers.
• Summaries aimed at key stakeholders.
• A website reporting the results.
• Versions in multiple languages.
• Versions on compact disk.
• Audio or video summaries.
• PowerPoint presentations.

Making Outputs Available
In considering how to distribute your outputs, you should again think about your objectives and target audience. Printed copies of materials are useful for formal presentation to decision makers and sponsors. They are essential to reach people who lack access to computers and the Internet. Electronic copies are less expensive and easier to produce as needed. Oral presentations reach fewer people, but they assure that you have the attention of your audience. Box 51 offers some distribution ideas.

BOX 51: POSSIBLE WAYS TO DISTRIBUTE THE ASSESSMENT FINDINGS (PRACTICE TIP)
• Printed copies. These can be sent to:
–– All stakeholders involved in the process.
–– Key decision makers.
–– Members of the press.
–– Libraries.
Or they can be:
–– Made available at low or no cost on request from a central location.
–– Made available at low or no cost on request at district offices.
• Electronic copies
–– Having the key results on the project’s own website.
–– Having the full report downloadable.
–– Having the full report on compact disks and distributing them like printed copies.
–– Having video or audio (podcast) summary versions available online.
• Oral “copies”
–– See “Educational Outreach” (Box 52).
Remember that techniques can build on each other. For example, oral presentations or summaries may pique interest in reading the full report.

Drawing Attention to the Findings
After considering objectives, constraints, and options, you do not need to write out a separate dissemination strategy if you decide that all you will do is produce a single version of your report and send it to decision makers and stakeholders. If you have decided on a strategy that involves more than one or two steps, however, or if you have arrived at a more detailed idea of what you want your outputs to be, you should capture that in a short memo for later reference by yourself and the team.

You should also think about appropriate ways to let your target audience know that the report exists and that they can get copies of it. These may include press strategies, electronic media strategies, and educational outreach. See Box 52 for some specific ideas.

BOX 52: POSSIBLE WAYS TO DRAW ATTENTION TO THE ASSESSMENT FINDINGS (PRACTICE TIP)
Publicity
• Traditional press strategies
–– Press release sent to interested journalists.
–– Press briefing for interested journalists.
–– Appearances on radio call-in programs or TV public affairs programs.
–– Paid notices or advertisements.
• Electronic media strategies
–– Postings about the findings on blogs.
–– Postings on discussion boards.
–– Postings on social media.
–– E-mails to interested groups through their list servers.
–– Postings on Twitter accounts.
Educational outreach
• Talks, presentations, or briefings for the general public or target groups—especially for key decision makers and stakeholders.
• Scholarly papers and conferences.
• Workshops aimed at particular targets, such as timber operators or rural community leaders.
• An educational website.

Step 2: Implement the Strategy
The following are the typical steps for implementing your dissemination strategy:
• Create a draft of your main or most comprehensive output.
• Vet the draft.
• Revise it to produce a final main output.
• Produce supplemental outputs.
• Publish your outputs.

Create a Draft of your Main Output
Quite often your main output is a report. Writing takes a major commitment of staff time. Assessments may spend a third of total staff hours writing outputs, and your budget (Chapter 2) should reflect that.
If you have already written up your findings as part of vetting your analysis (Chapter 5), you may be more than halfway toward finishing your output. If not, expect to spend a significant amount of staff time on writing.

Writing an assessment report is much like writing any other report. The previous step in this chapter mentioned the key consideration: keep your objectives and target audience in mind. Write for that audience, using language and examples that they will understand.

Quite often the team members responsible for analyzing the data will be involved in writing up the findings. They may not be skilled writers. Do not hesitate to bring in a good writer to help them, either as a co-author or an editor. The writer needs enough of a grasp of forest governance and assessment to deal with the material accurately and fully. In addition, having a person unfamiliar with the specific assessment as an editor can sometimes help the writers avoid a common mistake: assuming that the average reader knows more than the reader actually does.

BOX 53 (LOOKING DEEPER): EXAMPLES OF REPORTS

Here are three web pages that include links to assessment reports. These are mostly "high end" reports, aimed at an educated audience, with professional layout and editing. Not every report needs to be so sophisticated or elaborate.
• PROFOR assessment in Burkina Faso: http://www.profor.info/knowledge/assessing-forest-governance-burkina-faso.
• World Bank assessment in Russia: http://www.profor.info/notes/results-are-assessing-forest-governance-russia.
• Indonesia Participatory Governance Assessment: http://www.undp.org/content/indonesia/en/home/library/environment_energy/participatory-governance-assessment--the-2012-indonesia-forest--.html.

Vet the Draft and Revise

How you will vet the draft report depends on the extent of prior vetting of your findings. Some assessments use the draft report as the vehicle for major stakeholder vetting. In that case you will want to follow the same sort of steps discussed for validation of results in Chapter 5. These may include internal peer review, external peer review, key informant interviews, general release to stakeholders for written comments, and/or stakeholder workshops.

If you use face-to-face meetings to vet your report, consider whether to use a neutral facilitator. If the subject matter is sensitive, people may feel safer dealing with a neutral party and having some promise of non-attribution of their comments. If the subject matter is emotional, the discussion will be easier to conduct if the cause of the anger or fear is not leading the group. In addition, a trained facilitator will know how to acknowledge and defuse emotional tensions.

If you have already vetted your findings, you may just need some internal or external peer review. Going to stakeholders too many times for vetting can result in "vetting fatigue," and you may have trouble getting people to give you their full attention.
If the report's findings are likely to be controversial or touch on sensitive matters, however, you may nevertheless want to do full outreach to stakeholders. You may also need to follow your organization's procedures for reviewing documents prior to publication.

Create Supplemental Versions or Outputs

Once you have a final version of your main report, your strategy may call for producing summaries, translations, or simplified versions. The timing of production of these can vary, and you may want to produce some after release of the main report. In fact, based on how people react to the main report, you may come to see that additional versions would be of value. Different stakeholder groups may see different parts of your findings as key (rural communities, for example, may be particularly interested in your findings on access to forest resources or benefit sharing), and you may want to produce notes or summaries aimed at them and highlighting these findings. You must take care, though, not to appear to be an advocate for one group or another unless that is your acknowledged role.

Publish Your Outputs

Remember that publication is more than delivery of a printed copy to whoever commissioned the assessment. See Box 51 for ways to distribute your report. Publication should typically include building awareness of the report's availability.

BOX 54 (PRACTICE TIP): PROTECT YOUR SOURCES

If you promised confidentiality to key informants or participants in workshops and focus groups, be sure to honor that promise in your report. Do not attribute quotations to people promised anonymity. Omit details that might point to a source of the information. If you gathered some data with the understanding that it was "off the record," do not report it at all unless you have a separate "on the record" source.

See Box 52 for ideas on publicizing the report's availability. Your dissemination strategy should be your guide.

Timing of your publication can be important. If your report has political implications, releasing it before a key vote or before an election might give it more impact than releasing it afterward. Similarly, a report showing flaws in the governance of forest concessions will have more impact if released before a major concession auction than after. If you want your report to draw press coverage, it may be better to release it mid-morning early in the week rather than on a Friday afternoon.

Sometimes you can link publication to an event that is already drawing attention. For example, you might be able to present the report when a new, reform-minded minister or agency head takes office. Or you might release the results at an international conference, where the report could catch the eye of potential donors who might otherwise overlook it.

You may wish to publish your report simultaneously in multiple formats and link them. A printed report can refer people to a website for updates and specialized information for particular audiences. The website can allow visitors who have not seen the full report to download it.

BOX 55 (PRACTICE TIP): TRUST AND IMPACT

People may try to assign ulterior motives to the assessment and cast doubt on its conclusions as biased. Countering this depends on trust; building trust begins with transparency and candor in initial planning (Chapter 2) and continues through data collection (Chapter 4), analysis (Chapter 5), and dissemination.

Step 3: Institutionalize Further Assessment

Some assessments are designed as one-time events, but the impact of assessment is greater if assessment is a regular exercise. The first assessment then serves as a baseline for the second, allowing you to discuss trends. The collective experience from prior assessments can provide a base of institutional knowledge that makes the next assessment better than its predecessors (see the discussion of evaluation in Chapter 7 for more on this topic). For this to happen, however, some institution should commit to serve as a permanent home for the assessment process; the assessment process should have some wider base of support, in law, from institutions, or from stakeholders; and leaders should emerge to champion the process.

An Institutional Home

The institutional home of the assessment can maintain the records, including the data from prior assessments. It may be able to house key members of the assessment team as long-term employees. It should be able to plan and budget for the next assessment and raise funds if necessary.
The ideal situation is to establish the institutional home as early as possible. In other words, this is a question best settled when dealing with issues of who will sponsor and carry out the initial assessment (Chapter 2), or even before that. Then, as the first assessment proceeds, the institutional home will be closely involved, building institutional ownership of the process (see A Base of Support, below) and in-house infrastructure and capacity.

If the government conducts the assessment, then a government agency with an established monitoring and evaluation division is the natural home. This could be a forest agency, a planning agency, a statistical agency, or an internal oversight agency.

If the assessment is conducted by an NGO, then the NGO is a possible home, but stability of long-term funding could be an issue. If the original host cannot promise ongoing funding, it may be better to associate the assessment with another civil society organization, possibly even a foundation or university, with a stable financial base. The new host could express its commitment to maintaining the assessment through a binding contract with some of the other participants. (See Chapter 2 for more discussion of who should conduct the assessment.)

If no one will assume responsibility for doing periodic assessments, the next best institutional safeguard is to find someone to serve as keeper of the current assessment's data and records. That way, if a new host emerges, some of the documents and memory of the assessment will be available to tap. Universities, research institutions, and libraries are possible document repositories.

A Base of Support

To assure that people will devote the funds and energy needed to conduct the next assessment, the process needs a base of support. That base must be strong enough to ensure that when the time for the next assessment arrives something will actually be done.

In a rule-of-law society, that assurance can come from a statute, a regulation, or other binding mechanism. It may be impractical, however, to establish such a legal mechanism. The report may point to the need, but the people organizing the assessment are rarely in a position to create a binding mechanism by themselves.

A parallel influence is social support for assessments. When a binding mechanism can be put in place, social support works to bolster it. Where there is no binding mechanism, social support becomes the best hope for continuing assessments.

Social support can be built throughout the process, using transparency and stakeholder engagement to educate potential supporters. If you have engaged stakeholders well, as suggested in Chapter 1 and throughout this guide, support may emerge automatically. Stakeholders will come to see the assessment as a way to voice their concerns, to be heard, and even to have a measure of power over the course of forest sector decisions. They will want to have future assessments.

Good publicity (Step 2 of this chapter) can also strengthen stakeholders' interest in holding future assessments. Showing people how specific findings link with policies they wish to influence or actions they favor can make them supporters of assessments.

Leadership

Stakeholders as a whole may like the assessment but give it only passive support.
It will often take the actions of a leader to persuade people, to organize groups, and to catalyze action.

Leadership is hard to guarantee over time. Individuals come and go. The best course is often to seek leadership from institutions. A respected donor or NGO that cannot serve as home to the assessment could still become a leading advocate for assessment. The host institution itself can become a champion for the next assessment if it has the respect of decision makers and stakeholders.

The full process of recruiting institutional commitments is beyond the scope of this guide, but it begins with engaging individuals within the institutions. These must be people who know how to bring their own institutions to make commitments.

Do not stop at a single supporter or leader. The assessment process will be best served if it has many supporters, with effective leaders to galvanize that support.

Points on Process: Facilitate Use of Your Findings

This guide does not intend to go beyond assessment into the details of policymaking. This chapter does offer a few ideas, however, on making the assessment more valuable in the succeeding steps of policy development. If you have followed Steps 1 and 2 above, you have already helped facilitate action by creating outputs that are geared to your target audience, making the outputs available, and publicizing them.

If you developed a theory of change during your planning (Chapter 1, Step 3), now is the time to revisit it. Think about how you envisioned your outputs leading to outcomes, and think about what you can do now to advance those outcomes. The needed actions may be as simple as delivering your outputs into the right hands.

You may, however, need to do more. For example, your target audience may need additional capacity. Stakeholders may lack capacity to fully understand the assessment and its implications. They may lack capacity to engage effectively with decision makers or otherwise act on the assessment's findings.

You may want to assess what additional capacities your audience needs. A capacity needs assessment at its most formal is as elaborate a process as a governance assessment, but at its most informal it is just a few steps.

• Identify the ideal. What capacities should stakeholders have to take full advantage of the assessment?
This is largely a desk exercise, based on your knowledge of stakeholders, the assessment, and the local context, but you will want to verify your understanding of the ideal by discussing it with some stakeholders and/or experts.

• Identify the real. Go out and determine the capacities of the stakeholders, usually by talking to key informants and experts. You may already have data on this; some governance assessments view stakeholder capacity as a measurable subcomponent of governance.

Once you have identified the gap between the ideal and the real, you can begin to design training or other measures to fill the gap. You should try to pilot your measures before you roll them out full-scale.

Other steps that you might be able to take to facilitate use of findings:

• Identify stakeholders and potential leaders to follow up on needed changes.
• Work with these key stakeholders to develop a common vision (in effect, a revised theory of change or roadmap of next steps).
• Foster communication among people interested in change. Beyond stakeholders in the country, these may include stakeholders in other countries in the region facing similar issues, international civil society organizations, or international donors and development partners. Consider meetings, workshops, and social media.

CHAPTER 7: LEARNING AND IMPROVEMENT

This chapter will help you make this assessment and the next assessment better. An assessment should be a learning process for the assessment team. Through evaluation, you can capture lessons learned along the way to improve your current effort and lessons after the assessment is over to improve future work. Evaluation can strengthen the skills of your team members and help them in their next assignment, whatever it may be.

STEP 1: BEGIN SELF-EVALUATION DURING THE ASSESSMENT. Collect feedback from your team, from stakeholders, and from other participants as you go along.

STEP 2: HOLD AN EVALUATION AFTER THE ASSESSMENT. As soon after the assessment as you can, while the experience is still fresh in people's minds, arrange an evaluation. This can be a team self-evaluation or an evaluation conducted by an outsider.

STEP 3: MAKE THE EVALUATION RESULTS AVAILABLE. Store the evaluation in an archive where it will be available for the next assessment team to use; publish it in an open journal where others can learn from it.

STEP 4: KEEP THE DOOR OPEN TO RECEIVE FURTHER FEEDBACK. The impact of the assessment will not be apparent right away. Establish some way to collect ongoing feedback on the effort.

POINTS ON PROCESS: CONDUCTING A TEAM SELF-EVALUATION. You can "do it yourself": hold a workshop with your team to capture lessons learned from your work.

Step 1: Begin Evaluation During the Assessment

The previous chapters have already mentioned that an assessment can get feedback by vetting and piloting its methods, validating data, and vetting results. Here are some further methods for getting feedback.

Steering or advisory committee. The assessment can establish an independent group of professionals or stakeholders to oversee its work and periodically make recommendations for improvement. The committee can meet to give joint recommendations, or its members can give their opinions as individuals.

Post-event evaluations. At the end of focus group discussions or workshops, you can ask the participants to fill in a brief evaluation form or to give oral feedback on the event. See Box 56 for sample event evaluation questions.

Web forms. If you have participants or stakeholders who have access to the Internet, you can set up a web page where people can leave comments and feedback on the assessment.

Internal channels. You can provide ways for team members to report problems and make suggestions during the assessment (e.g., regular feedback meetings, back-to-office memos). You should keep a record of these submissions. Even if you cannot address them during the assessment, they may identify areas to explore in a post-assessment evaluation.

Step 2: Hold an Evaluation After the Assessment

Post-assessment evaluations are productive ways to capture lessons from the assessment experience in order to make the next assessment better. You should include a post-assessment evaluation in your work plan and budget (Chapter 2).
After you have finished the assessment, several things will be certain:

• You will know more about the process of assessment than when you began.
• The new knowledge will be spread among the team that did the assessment. No one person, even the manager, will know it all.
• You will have made some mistakes, some of which you might not be aware of but could profit from recognizing.

Through a post-assessment evaluation, you can capture some of this knowledge. Some of the knowledge will make the next assessment better. Some will relate to practices like stakeholder outreach or report dissemination and will be useful in many other projects.

Who will conduct the evaluation? You can conduct the evaluation using people on your team, someone in your organization who did not participate in your team, or someone outside of your organization. Table 8 summarizes some of the pros and cons of these options.

BOX 56 (LOOKING DEEPER): SAMPLE EVENT EVALUATION QUESTIONS

Here are samples of questions to give participants at the end of a workshop or other group event. Many of these are open questions (see Box 30). You can turn them into closed questions to make it easier to analyze the responses; however, you should leave a few broad open questions (like the last one) to catch concerns that you might not think to ask about specifically.

• How well did someone explain the event to you before you came? Was the event what you expected?
• If you received written information before the event, was it useful? How could it have been better?
• Rate the overall process (the organization, the agenda, the presentations, the moderation, and so forth) on a scale of 1 (very good) to 5 (very poor).
• Was the event too short, too long, or just about right?
• If you had run the event, would you have spent more or less time on:
– Introductions and background presentations.
– Plenary exercises and discussions.
– Small group exercises (breakouts).
– Breaks and meals.
• Did you have an opportunity to express yourself? Do you think people paid attention to what you had to say?
• If you had run the event, would you have invited different people to participate?
• If you had run the event, what topics would you have spent more time on? Less time on?
• How could we have improved the facilitation or moderation of the event?
• How could we have improved the logistics of the event (meeting space, refreshments, and so forth)?
• Do you think that this was a worthwhile use of your time? What would have made the event more valuable to you?
– What else could we do to make the next event better?

TABLE 8: WHO WILL CONDUCT THE EVALUATION?

Option I: Team self-evaluation.
Pros: Least expensive (and often the fastest) option.
Cons: Greatest probability of biases and blindness to faults. Greatest tendency of people not to speak candidly. Team may have little experience in project self-evaluation.

Option II: Evaluation run outside the team but inside the larger organization.
Pros: Likely to be more objective than Option I and less expensive than Option III. May be able to use people with expertise in evaluation.
Cons: May carry institutional biases and blindness. Occasionally becomes tainted by personal or organizational conflicts.

Option III: Independent evaluation.
Pros: Usually brings in people with special expertise in evaluation. Likely to be impartial. More likely to draw candid responses from those involved.
Cons: Often the most costly.

BOX 57 (PRACTICE TIP): KEY QUESTIONS FOR A PROJECT EVALUATION

Key questions to ask:
• What did we do well that we don't want to forget?
• What did we learn?
• What should we do differently next time?
• What still puzzles us?
Source: Kerth (2001).

If time, budget, and organizational constraints allow you to use an independent evaluator, then that is your best option. Even if you must put together a quick effort with your own team, however, doing some evaluation is better than doing none.

If you are hiring an independent evaluator or working with an evaluator outside your team, then let the evaluator take charge of the effort. Instruct your team to cooperate and be supportive. If you are doing an internal team self-evaluation, the next few pages will offer some suggestions.

What will the evaluation cover? Box 57 suggests four general questions for project evaluations. Box 58 suggests some questions and exercises for self-evaluation.

The precise focus of your evaluation should depend on the experience and problems that you encountered. The best practice is often to let the assessment team help design and set the focus of the evaluation, including the questions to be asked (Patton 1997, pp. 29–31). The team can point out what kinds of information will be useful to the people conducting the next assessment.

Step 3: Make the Evaluation Results Available

Evaluations teach lessons to the people who participated in the assessment, but the more important audience may be the people who will participate in the next assessment. For that reason, you must put the evaluation results in writing and store them in a place where the next assessment will easily find them.

The evaluation results may also be of use to other people doing similar assessments. These may include people doing governance assessments outside the forest sector in your country and people doing forest governance assessments in other countries. For that reason, you should consider publishing the evaluation where it can be widely available. That may mean preparing a scholarly paper, making it available through your organization's website, or publicizing it through social media.

Step 4: Keep the Door Open to Ongoing Feedback

One source of information will be missing from any early evaluation: the feedback from actual users of the assessment findings. The assessment report should invite readers to submit feedback. You may get ideas about new and useful criteria, new indicators to assess, and better ways to assess them.

The usual way to do this is to identify a point of contact for feedback in assessment publications. The contact could be one of the lead authors, but authors typically move on to other projects and may even change institutions. A better practice is to set up a permanent institutional contact, such as an office in a government agency or NGO.
A good choice is the institutional home of the assessment (discussed in Chapter 6, Step 3). That office can commit to collect, archive, and perhaps also analyze the feedback. The office can also keep contact information for the assessment team and key stakeholders so that they are easy to locate if their insights and evaluations are needed.

Another thing that cannot be covered in an early evaluation is the assessment's impact. You may want to conduct a review some months or years after conducting the assessment to determine what impact it had and how the next assessment could be more effective.

A good practice is to find an institution that is likely to be active in the area for several years and ask it to commit to sponsor a future evaluation of the assessment's impacts. If an external donor funded work on the assessment, the donor may be interested in long-term impacts. If the key sponsor of the assessment is a government agency or NGO, you may look into ways to get a follow-up evaluation put into the sponsor's long-term planning or budgeting.

Points on Process: Conducting a Team Self-Evaluation

The best time to conduct an initial self-evaluation is soon after the assessment is complete (although you can also conduct a quick evaluation midway through the assessment, looking for things that need correction). The process will still be fresh in people's minds. Finding the team members and other participants will be easiest before they have moved on to other positions and tasks.

The key questions to ask are in Box 57. You can use any of the methods for collecting qualitative data in Chapter 4 to answer them. For example:

• Key informant interviews. In this case, the key informants are the people on your team, the stakeholder representatives who participated in the process, and perhaps people who had roles funding, supporting, or overseeing the team.
• Focus groups. You can conduct focus group discussions with your team members or with stakeholders.
• Workshops. An evaluation workshop may be little more than an extended version of a focus group discussion. Some managers, however, prefer workshop-based evaluations structured as a retreat or a team-building exercise, particularly if the team will remain together to take on future assignments.

Box 58 offers some tools to use in these settings.

Encouraging candor. A good evaluation is going to depend on two things: that people think carefully about the work being evaluated and that people speak candidly about what they think. Assuring candor is often the greater challenge. This is especially true in team self-evaluations. People may fear damaging their relations with their colleagues, superiors, or funders.

Two ways to encourage candor are to protect the sources of information and to control the focus of the discussion. Here are some approaches to protect the sources of information:

• Confidentiality. Establishing shared expectations with participants about information use and confidentiality may make people more likely to be candid. You may want to assure people that you will not identify the sources of information in your evaluation report. Addressing confidentiality should be one of the first things you do in any evaluation interview or group meeting.
• Full or partial anonymity. You may want to allow people to comment without revealing their identity to the evaluator or to colleagues. The evaluation could accept unsigned written comments (either on paper or electronically). In a group setting, managers could be excluded when the group discusses their actions.
• Atmosphere of non-retaliation. The management of the organization can create an atmosphere of non-retaliation. They can be candid about admitting their own shortcomings. They can be gracious in accepting criticism. They can promise to protect whistleblowers, and they can reward people who make good suggestions for improvements. They should foster this atmosphere from the beginning of the assessment.

BOX 58 (LOOKING DEEPER): SOME EVALUATION EXERCISES AND TOOLS

Here are a few ideas for exercises to use in evaluation interviews, focus groups, and workshops.

Charting the timeframe (Kerth 2001).
This exercise encourages individuals or groups to begin thinking deeply about the assessment process. Take a large piece of paper, whiteboard, or chalkboard. Draw a horizontal line representing time and mark the milestones in the project: initiation, initial planning begins, the team is recruited, data collection begins, and so forth. The vertical axis will be the person's satisfaction with the process at that point. Ask the person to take a pen and trace a line indicating when he or she was feeling good about the process and when he or she was feeling worried, unhappy, or dissatisfied. Then ask the person to explain the highs and lows in the line. If this is done in a group, people can use different colored pens, chalk, or markers and each draw his or her own line on the same chart.

Strengths, Weaknesses, Opportunities, and Threats (SWOT). The SWOT tool is often used in workshops and is adaptable to focus group discussions and interviews. The individual or group is asked to analyze the assessment by identifying its strengths and weaknesses and discussing opportunities for improvements and threats to future work. In a setting of self-criticism, some people may be uncomfortable speaking of weaknesses and threats. Instead of a traditional SWOT analysis, you might reframe the exercise as seeking answers to the four questions listed in Box 57.

Paired lists. In this tool, the individual or group is given a pair of complementary questions, such as "What did we do well and want to repeat next time?" and "What do we want to do differently next time?" Another question pair might be, "What do we know now about assessments that we didn't know before?" and "What do we still need to learn?" Place each question at the top of a sheet of paper or on a board, and record multiple responses under each question.

Plus-Delta. A variation on paired lists, this exercise is potentially useful midway through the process. The idea is to ask each person to respond to four questions. On the plus side: What is the assessment doing well? And, more specifically, what has the respondent done personally that worked well? On the delta (change) side: What needs to change in the assessment? And, more specifically, what has the respondent been doing that needs to change? Ask people to respond in writing or collect oral answers during a group discussion.

Here are some approaches to change the tone of discussion and make it less threatening to participants:

• Appreciative inquiry. Some evaluators believe that the most productive way to improve performance is to focus on what went right rather than what went wrong and to encourage people to repeat and even expand the good parts of the process. Rather than see past performance as a collection of problems needing solutions, appreciative inquiry focuses on setting goals and finding ways to achieve those goals. You can find more about this approach at http://appreciativeinquiry.case.edu/.
• Presumption of good faith. The evaluation can adopt a presumption of good faith: that every team member was doing the best he or she could with the information and resources available at the time (Kerth 2001). This tends to turn the process away from looking for scapegoats.

POSTSCRIPT

Forest governance assessment is a developing art. It has grown from the publication of the IIED (2005b) "Pyramid" tool to the many tools and examples available today.
People have borrowed from other fields, experimented, and shared their experiences. The result is a rapidly evolving practice.

The aim of this guide is to provide an overview of planning and conducting an assessment as the art is practiced today. This guide cannot hope to stay current or complete forever. However, the basic information on planning and implementation should be useful for several years.

The sponsors of this guide hope to see the field of forest governance assessment advance through the sharing of practical experiences. If readers have suggestions for improving future versions of this guide, or if they would like to share their own lessons learned and outputs from their assessments, the sponsors encourage them to e-mail assessment@forestgov.info.

ANNEX I: CASE STUDIES

This annex describes five recent assessments:

• A national forest, land, and REDD+ governance assessment in Indonesia with regional components, using multiple data-gathering methods to score over 100 indicators and with high stakeholder involvement.
• A broad national forest inventory in Tanzania that included a governance component with six indicators, scored using data from a survey of 3,500 households and key informant interviews.
• A national assessment in Ecuador, intended to be repeated as periodic monitoring as part of an effort to create forest transparency report cards for multiple countries.
• An assessment in Liberia focused on governance and benefit sharing in seven forest concessions, scored using surveys and secondary data.
• A national assessment in Uganda, intended as an initial diagnostic, using a stakeholder workshop to score over 100 indicators.

The descriptions of each case parallel the organization of the chapters of this guide. That is, the descriptions begin with objective-setting and early planning, then cover tool design and data collection, then analysis and dissemination, and finally evaluation and learning.

Case Study: Indonesia
Piloting the Participatory Governance Assessment (PGA) for REDD+

Thumbnail description: Indonesian stakeholders, facilitated by the UN-REDD Programme, conducted the first pilot Participatory Governance Assessment (PGA) for REDD+ from 2011–2013. The process was entirely stakeholder-led, with additional stakeholder consultations conducted throughout the process at the national and sub-national levels. The assessment was based on three components of governance used to categorize 117 indicators measuring forest, land, and REDD+ governance.

What the case illustrates: This case illustrates how to engage stakeholders in the assessment process by putting assessment planning and oversight in the hands of an expert stakeholder committee and involving wider groups of stakeholders in verification and vetting.
In terms of data collection, the PGA demonstrates how to carry out geographic sampling in a large country and how to collect baseline data for future comparisons. It also shows how to use both qualitative and quantitative data to develop and score indicators and how to use assessments to develop specific policy recommendations.

What the case does not illustrate: This case does not provide an example of rapid assessment or of how to work within a strict budget of time and resources. It does not show how to plan and conduct an assessment with little governmental or international support.

Web link for reports or further information: http://www.id.undp.org/content/dam/indonesia/docs/envi/PGA%20Report%20English%20Final.pdf.

I. Setting Objectives

Defining the "Why"
Indonesia's national policymaking and international REDD+ commitments both demanded robust and credible baseline data on forest, land, and REDD+ governance as a first step toward improvements. The UN-REDD Programme agreed to pilot its "Participatory Governance Assessment for REDD+" in Indonesia based upon the interest expressed by relevant government and key civil society actors to actively contribute throughout the process.

Considering Context
In 2009, Indonesia's president committed to reducing the country's greenhouse gas emissions by 26 percent by 2020. Indonesia had received significant external support from UN agencies and foreign governments to advance Indonesia's REDD+ efforts, including a national climate and forest strategy. Some forest governance data was already available, but it was incomplete. The national REDD+ strategy and the Ministry of Forestry's 2010–2014 Strategic Plan include forest and REDD+ governance as core objectives, and Indonesia's Safeguards Information System (SIS) requires complete and credible forest governance data to meet international reporting obligations.

Setting Objectives
The objectives for this PGA were to gather robust and credible data to support REDD+ readiness and Indonesia's international climate change commitments, to improve forest governance generally, and to inform policymaking in other sectors that affect forests. At the same time, the PGA aimed to bring together different stakeholders to assess stakeholders' capacities to support REDD+ readiness initiatives and implementation and to establish baseline data against which to measure progress. Additionally, the PGA was seen as a potential data source for Indonesia's larger Safeguards Information System.

POINT ON PROCESS: INVOLVING STAKEHOLDERS IN OBJECTIVE SETTING
The PGA process was stakeholder-led; objectives were set by the multi-stakeholder Expert Panel and through broader stakeholder consultations.

II. Developing a Work Plan

Identifying the Scope of the Assessment
This PGA was conducted to assess forest, land, and REDD+ governance in Indonesia at the national level, in 10 provinces, and in 20 districts. Setting the scope of work required consensus among members of the multi-stakeholder Expert Panel, which was composed primarily of government agency and civil society representatives, as well as academia and private sector representatives.

Identifying the General Methods
The PGA used a range of methods, such as document review, content analysis of newspapers (coding key terms each time they appear, providing a quantitative measure of the occurrence of these terms in the newspapers), semi-structured interviews, and focus group discussions.
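To make the newspaper coding concrete, the sketch below shows the kind of term counting that such a content analysis produces. It is only an illustration: the key terms and article text are invented, and the PGA's actual coding was carried out by trained coders under an agreed protocol rather than by a script.

    import re
    from collections import Counter

    # Hypothetical key terms; the real PGA term list was agreed by stakeholders.
    KEY_TERMS = ["permit", "concession", "transparency", "REDD+"]

    def code_article(text):
        """Count occurrences of each key term in one article (whole words, any case)."""
        counts = Counter()
        for term in KEY_TERMS:
            # Lookarounds keep the match to whole words, even for terms like "REDD+".
            pattern = r"(?<!\w)" + re.escape(term) + r"(?!\w)"
            counts[term] = len(re.findall(pattern, text, flags=re.IGNORECASE))
        return counts

    # Invented sample articles standing in for a newspaper corpus.
    articles = [
        "The governor suspended one logging concession pending review.",
        "Villagers asked for transparency in how each permit is issued.",
    ]

    totals = Counter()
    for article in articles:
        totals.update(code_article(article))
    print(totals)  # aggregate term counts across the corpus

Counts like these are what give the "quantitative measure" of how often governance topics surface in the press.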
Identifying Who Would Conduct the Assessment
Indonesian government and civil society actors equally led the PGA process via their participation in the multi-stakeholder Expert Panel. A PGA Coordinator, who sat in UNDP Indonesia, was recruited by the UN-REDD Programme to facilitate the PGA process. The coordinator prepared the work plan, was responsible for stakeholder communication, and was in charge of the financial aspects of the PGA process. Three government agencies were heavily involved: the Ministry of Forestry, the Presidential Delivery Unit for Development Monitoring and Oversight (UKP4)/REDD+ Task Force, and the National Planning and Development Agency (Bappenas). National NGOs, including the Indigenous Peoples Alliance of the Archipelago (AMAN), the Indonesian Forum on the Environment (WALHI), and the Association for Community and Ecology-Based Law Reform (HuMa), were also actively involved. After preliminary stakeholder consultations, all stakeholders agreed that the data collection process should be based on joint agreement of data collection methods and conducted by a third party to ensure the results' objectivity and the report's credibility.

Figuring Out How Much It Would Cost
The PGA Coordinator determined the necessary PGA budget for the data collection phase by estimating how many people would need to be hired, including data collectors, consultants to conduct media analysis, consultants to input and transcribe data, and a coordinator for the whole process. In total, approximately 45 salaries were included in the budget, as well as meeting costs and transportation to and accommodation at the intended PGA locations. This part of the budget was estimated to be approximately $130,000 when the Request for Proposal for third-party data collectors was sent out.
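The budgeting described here is plain arithmetic: list the people to be hired and the logistics, then sum the line items. The sketch below shows the shape of that calculation with entirely invented figures; it is not the PGA's actual budget breakdown, which is not published at this level of detail.

    # Shape of a data-collection budget tally; every figure below is invented.
    budget_items = {
        "data collector salaries (e.g., 40 people x 5 months x $350)": 40 * 5 * 350,
        "media analysis and transcription consultants": 18000,
        "coordination": 12000,
        "meetings": 9000,
        "transport and accommodation at assessment locations": 20000,
    }
    total = sum(budget_items.values())
    for item, cost in budget_items.items():
        print(f"{item}: ${cost:,}")
    print(f"estimated total: ${total:,}")

Keeping the tally itemized this way makes it easy to revise the estimate when the number of locations or the length of fieldwork changes.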
Figuring Out How Long It Would Take
The Expert Panel did not establish a strict timeline for the first PGA phase, although it expected the process to take approximately two years based on UNDP Oslo Governance Centre estimates, upon which much of the PGA approach relies. It was estimated that attaining stakeholder buy-in and support would take two to three months, but this stage lasted nearly six months. Developing the PGA framework and indicators also took a long time; the Expert Panel held meetings to discuss all of the indicators. The actual data collection period lasted five months, and drafting the report lasted four to five months. In total, it took approximately two years to complete the baseline PGA.

Writing a Work Plan
The PGA Coordinator developed a five-step plan for the PGA project cycle, adapted from the Indonesia Democracy Index [6]: (1) develop the indicator set and select data collection methods; (2) produce the index; (3) disseminate results; (4) repeat step 2; and (5) repeat step 3. In the original work plan, only the first two steps, to be completed in 2011 and 2012, were detailed. In 2013, the Expert Panel planned how to approach step three.

[6] The Indonesia Democracy Index is a country-led assessment of democracy development at the provincial level. It is a joint initiative of Bappenas and UNDP.

POINT ON PROCESS: COMMUNICATING THE PROCESS
One of this assessment's successes was its participatory approach. The UN-REDD Programme held a series of meetings with government, civil society, international partners, and the private sector to identify potential Expert Panel members, who were then assessed and approved by all stakeholder groups. The UN-REDD Programme held consultations nationally and sub-nationally with all stakeholder parties throughout the PGA process. The PGA Coordinator met regularly with core stakeholders, including NGOs and government officials, to inform them on the PGA's progress.

III. Refining the Data Collection Method

Defining What to Measure
There were 117 indicators, reflecting six governance principles agreed upon by the Expert Panel: participation, transparency, accountability, effectiveness, capacity, and fairness. Each indicator also fit into one of six forest governance issue areas: forestry and spatial planning; regulation of rights; forest organization; forest management; law enforcement and control over legal processes; and REDD+ infrastructure. The Expert Panel reviewed each of the indicators to ensure their relevance, the differences between the indicators, and data availability. The SMART criteria (see Annex V on developing indicators) were also used to review the indicators. All of the indicators were categorized into one of three components: law and policy; actors' capacity; or performance of various actors. The capacity component was broken into four subcomponents: government capacity; civil society capacity; business capacity; and community capacity (of Indigenous Peoples, women, and local communities).

Identifying Potential Sources of Information
The PGA used government-issued legal and policy documents from the national, provincial, and district levels for document analysis where accessible; the PGA also used media analysis, interviews, and focus group discussions. The Expert Panel identified key sources at each location early in the process, and the data collection team at the Institute for Social and Economic Research, Education, and Information (LP3ES) used its networks to gain access to other data sources.

Selecting Data Collection Methods and Considering a Sampling Plan
The PGA used a mix of methods to collect and analyze quantitative and qualitative data. The assessment was organized by level of government administrative structure, namely central, provincial, and district. The data collectors gathered data at 31 assessment locations: at the national level; for 10 provinces (those that declared themselves REDD+ pilot provinces and had the most forested area per capita); and for the two best and worst districts within each of the 10 provinces (according to forest conditions and population density in and around the forests). Data collectors, in consultation with the Expert Panel and the PGA Coordinator, chose focus group participants based on whether (a) they were key sources of information at both the provincial/district and central levels; (b) they were representative of different stakeholders, accounting for gender and equity issues; and (c) they had already been interviewed.

Developing Data Collection Tools/Methods
The assessment hired a consultant to identify the data collection methods to be used for each indicator, and the Expert Panel discussed and agreed to the methods.
Creating a Data Collection Manual
The same consultant created a draft data collection manual, which included material on how to conduct interviews, focus group discussions, document analysis, and media analysis. The interview guide listed questions to ask, directed data collectors to inform the interviewee that questions could be asked off the record, and so forth. The PGA Coordinator worked closely with the consultants at LP3ES to finalize the manual. This process would have proceeded more smoothly if the same consultant(s) had been hired to follow the method identification process through to data collection.

POINT ON PROCESS: VALIDATING METHODS
Once a consultant developed a draft of methods, the PGA Coordinator reviewed the draft in coordination with LP3ES. In December 2011, stakeholders reviewed and discussed the methods at a national consultation. The methods went to the Expert Panel for final discussions and ultimate approval.

IV. Data Collection

Recruiting and Training Data Collectors
The Expert Panel chose LP3ES, a credible and experienced NGO, as the third-party data collection team, based in part on LP3ES's networks and familiarity with the topic. LP3ES and the Expert Panel held intensive meetings to review the data collection manual and discuss methods.

Collecting the Data
All 117 indicators were assessed at each of the 31 locations. For content analysis, LP3ES first determined whether the documents were available and then analyzed them for relevance. At some locations, archiving had not been properly conducted, so interviews were used instead. The data collectors were unable to interview some government employees due to bureaucratic barriers. Focus groups allowed the data collectors to gain more accurate data because group members would discuss the issue together and often come to a consensus.

Besides scoring the indicators, the focus groups identified which indicators pointed to the highest priority areas for reform, which government entities had the most control over reform in those areas, and which stakeholders could best support reform in those areas.

Assuring Quality of Data
The finalized data collection manual served as a guide for field data collection. Data collectors all used the same form and the same coding system. There was also a second person present at interviews to transcribe.

The coding system was checked by external experts in data management and data collection, and at random by the PGA Coordinator. Resources posted or e-mailed by stakeholders were coded based on the location and components and stored both in hard copy form and on external hard drives.

The PGA used multiple data sources. Conducting interviews allowed for a cross-comparison of responses, and focus group discussions led by the Expert Panel were used to validate data obtained in interviews. The Expert Panel also looked at the central and provincial media reports and checked soft and hard copies of government documents.

POINT ON PROCESS: PRACTICAL AND ETHICAL DATA COLLECTION
Some private sector stakeholders were concerned about the information provided to data collectors about illicit money exchanges in the forestry sector, particularly at the subnational level. The PGA Coordinator and data collectors discussed with the Expert Panel how to deal with this sensitive issue.

V. Analysis and Interpretation

Processing the Data
The Expert Panel assessed the data, guided by a scoring system; data were scored on a scale of 1 (insufficient) to 5 (very good) according to a matrix outlining the ideal conditions for each indicator. Each indicator could be measured using more than one "item"; these item scores had equal weightings and were averaged to calculate the indicator score. The indicators in each component category were then averaged to arrive at a composite component index score. The Expert Panel calculated an average index value for each of the 31 locations and averaged these to arrive at an overall index value of forest, land, and REDD+ governance in Indonesia.

Analyzing the Processed Data
All of the indicators were composites, which the Expert Panel used to create composite index scores that could be compared across provinces and districts and among the central, provincial, and district levels. The final product was an overall PGA index calculated using all of the indicators. After calculating indicator and index scores, the Expert Panel used statistical analyses to examine relationships among the components and among the good governance principles.
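The aggregation just described is a chain of simple averages: items into indicator scores, indicators into component scores, components into a location index, and location indices into the national index. The sketch below illustrates that scheme; the indicator names, component groupings, and scores are invented, and the equal weighting at the location level is an assumption for illustration rather than a documented detail of the PGA.

    def mean(values):
        return sum(values) / len(values)

    # Item scores (1-5) for each indicator at one hypothetical location.
    indicator_items = {
        "transparency_of_permit_data": [3, 4],   # two items, equally weighted
        "agency_staff_capacity": [2, 3, 3],
    }
    indicator_scores = {k: mean(v) for k, v in indicator_items.items()}

    # Indicators grouped into components (groupings invented for this sketch).
    components = {
        "law_and_policy": ["transparency_of_permit_data"],
        "actors_capacity": ["agency_staff_capacity"],
    }
    component_scores = {
        c: mean([indicator_scores[i] for i in names])
        for c, names in components.items()
    }

    # One index per location, then an overall index across all locations.
    location_index = mean(list(component_scores.values()))
    location_indices = [location_index]   # the PGA had 31 such locations
    overall_index = mean(location_indices)
    print(component_scores, location_index, overall_index)

Because each level is a plain average, a weak item can be traced upward to explain a weak component or location score, which is what made the index useful for prioritizing reforms.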
After calculating indicator and index scores, ETHICAL DATA COLLECTION the Expert Panel used statistical analyses to ex- amine relationships among the components and Some private sector stakeholders were con- among the good governance principles. cerned about the information provided to data collectors about illicit money exchanges in the Making Recommendations forestry sector, particularly at the subnational The Expert Panel identified key issues from its level. The PGA Coordinator and data collectors data analyses (which included consideration of discussed with the Expert Panel how to deal the priority areas for reform that focus groups with this sensitive issue. identified) and used the results to determine which indicators were strong, which were weak and which needed to be addressed. They draft- ed five main recommendations: technical policy recommendations, which were formulated using the focus groups’ insights, and macro policy rec- ommendations that would enable the realization of the technical policy recommendations. 144 ASSESSING FOREST GOVERNANCE: A PRACTICAL GUIDE TO DATA COLLECTION, ANALYSIS, AND USE previous REDD+ Task Force, which will work in POINT ON PROCESS: VETTING FINDINGS collaboration with all government agencies in- volved in REDD+ to collect data and make it ac- The Expert Panel’s suggestions and drafts cessible to stakeholders. This agency will also be were talked over by key stakeholders. In in charge of measurement, reporting, and veri- October 2012, the UN-REDD Programme invit- fication and work to integrate the findings and ed key district and province level stakehold- recommendations of this PGA into Indonesia’s Safeguards Information System. The agency is ers to a soft launch organized to validate the well situated to ensure regular measurement of PGA’s findings. and updates to the PGA indicator set. VI. Spreading the Results POINT ON PROCESS: MOVING FROM RESULTS TO ACTION Deciding on a Dissemination Strategy The five-step PGA project cycle planned for a dis- During the dissemination process in the provinc- semination strategy, which was elaborated upon es, some additional stakeholder feedback was in 2013. The detailed strategy was informed by received. The UN-REDD Programme is holding discussions with the Expert Panel and by the policy discussions to identify the actions needed Indonesia Democracy Index’s experiences with results dissemination at the province level. to address the PGA’s findings. It is also holding workshops to build government capacity to use Disseminating the Results the data for policy making and NGO capacity to The Expert Panel launched the PGA report in use the data for policy advocacy. They want to Bahasa Indonesia at the Ministry of Forestry in continue to provide technical assistance and are Jakarta in May 2013 and the English version in seeking partners and donors to look more deeply June 2013 during an information session for the into some of the issues. UN-REDD Programme. Later, province-specific findings from the larger report were pulled out together with province-specific recommenda- VII. Learning and Improvement tions. The PGA Coordinator and PGA Expert pan- el presented the findings to each of the four key Self-evaluating During the Assessment stakeholder groups in each province from July to Data collectors noted where they thought an October 2013. The panel will use the results and indicator or question was unclear or irrelevant. recommendations to conduct workshops for key They also provided feedback in their field notes. 
Institutionalizing Further Assessment
The Indonesian Government established a new REDD+ Agency in late 2013, replacing the previous REDD+ Task Force; the agency will work in collaboration with all government agencies involved in REDD+ to collect data and make it accessible to stakeholders. This agency will also be in charge of measurement, reporting, and verification and will work to integrate the findings and recommendations of this PGA into Indonesia's Safeguards Information System. The agency is well situated to ensure regular measurement of and updates to the PGA indicator set.

VII. Learning and Improvement

Self-evaluating During the Assessment
Data collectors noted where they thought an indicator or question was unclear or irrelevant. They also provided feedback in their field notes.

Evaluation After the Assessment
Several meetings have been held to discuss the process. The Expert Panel prioritized and streamlined the indicators for the second PGA cycle, reducing the number of indicators to approximately 32 based on lessons learned and a desire to make the data collection process less costly. These indicators were validated in meetings held with various stakeholders. Data collection instruments have also been revised accordingly. The objective is for the government to take responsibility for the PGA after the 2014 assessment, during which it received continued technical support from the UN-REDD Programme.

POINT ON PROCESS: CONDUCTING INTERNAL GROUP EVALUATIONS
The team sat down together in August and October 2012 for informal evaluations. For instance, when the data collectors submitted their first reports, they were asked to present them one by one. They also discussed the completeness and quality of field data, barriers faced by data collectors, and the data collection timeline.

Sharing Lessons Learned
The UN-REDD Programme expects to release a public five- to seven-page self-evaluation document that will share key lessons from the PGA process.

Keeping the Door Open for Further Feedback
At the province visits, the PGA Coordinator left business cards and encouraged stakeholders to send e-mails if they disagreed with the results.

Case Study: Tanzania
The Governance Component of Tanzania's National Forestry Resources Monitoring and Assessment (NAFORMA)

Thumbnail description: NAFORMA is a large-scale, field-based study of Tanzania's forest resources as well as their uses and management. It is the first ground-based inventory of biophysical and socioeconomic data that covers the entirety of mainland Tanzania. NAFORMA is designed to be a multi-source forest inventory, allowing for the combining of biophysical field data with remote sensing imagery to produce accurate data for small areas. This assessment has piloted the FAO-led Open Foris Initiative's open-source software tools (http://www.fao.org/forestry/fma/openforis/en/) and has been planned, funded, and supported by the Tanzanian government, the Finnish government, and FAO.

What the case illustrates: This case illustrates how to conduct a forest governance assessment as part of a large-scale data collection process for forest monitoring and assessment meant to inform national planning, policies, and priorities and to establish baseline data. NAFORMA is an example of a field-based approach that provides guidelines for data collection and makes use of more than 4,000 household surveys and key informant interviews. It demonstrates use of the open-source Open Foris data collection and statistical software.

What the case does not illustrate: The socioeconomic and governance sample for NAFORMA followed the biophysical sampling design; therefore, the sample is not representative of the Tanzanian population as a whole.
NAFORMA is not an example of a separate forest governance assessment; forest governance issues were incorporated into the socioeconomic survey as a supplementary section. NAFORMA does not illustrate how to conduct in-depth qualitative research, nor does it provide an example of how to engage multi-stakeholder groups in project design and validation.

Web link for further information: http://www.fao.org/forestry/17847/en/tza/.

I. Setting Objectives

Defining the "Why"
The Tanzania Forest Services (TFS) Agency, which is part of the Ministry of Natural Resources and Tourism (MNRT), conducted the National Forestry Resources Monitoring and Assessment (NAFORMA) to capture biophysical and socioeconomic data about the country's forest resources. Knowledge of the extent, condition, and uses of the forest was needed as the first step toward sustainable forest management. The insights from NAFORMA will provide baseline data, inform national policy, strategies, and planning, and help Tanzania meet international reporting obligations.

Considering Context
NAFORMA was originally going to be a biophysical and socioeconomic forest resources inventory, without a separate governance component. During a mid-2009 needs assessment, however, stakeholders pointed out that Tanzania should collect data that could be used in a possible REDD process. During 2010, a separate study was made to determine to what degree the original NAFORMA socioeconomic component addressed REDD+ preparedness. The conclusion was that adding an additional section on governance would make the inventory more useful for REDD+ preparedness. The separate section was added in January 2011. The governance work had to fit within the larger inventory effort.

Setting Objectives
One of NAFORMA's goals was to develop a robust biophysical and socioeconomic assessment of Tanzania's forest resources and forest-adjacent communities (communities living in or next to forests). These baseline data would guide national forest resource management decisions, with the primary aim of providing a sound platform for informed decision making and for the development of policies and plans concerning the country's forest resources. NAFORMA data can also help inform Tanzania's Safeguards Information System and feed into the national REDD+ strategy. Objectives also included creating a national database and maps of the assessment's data and strengthening the capacity of TFS and MNRT to collect, analyze, and update information about Tanzania's forests.

POINT ON PROCESS: INVOLVING STAKEHOLDERS IN OBJECTIVE SETTING
NAFORMA conducted stakeholder consultations with government agencies, research agencies, NGOs, and the private sector in mid-2009 to develop the information needs assessment, which provided the foundation for the indicators included in the initial survey.

II. Developing a Work Plan

Identifying the Scope of the Assessment
NAFORMA was designed as a broad national forest resources inventory, gathering physical, biological, social, and economic information. The governance questions covered six indicators, which mostly concerned government capacity and accountability.

Identifying the General Methods
Household surveys and key informant interviews were the core methods used for NAFORMA's socioeconomic and governance components.
Identifying Who Would Conduct the Assessment
NAFORMA was funded by the governments of Tanzania and Finland and implemented by the Tanzania Forest Services Agency under MNRT, with technical support from FAO. A National Project Coordinator and Chief Technical Advisor are responsible for day-to-day operations, and a Steering Committee composed of government agency officials, national and international actors, and academics meets occasionally to make key decisions. The National Project Coordinator, Assistant Project Coordinator, Chief Technical Advisor, heads of the four technical working groups, and national consultants form the Project Technical Unit.

Figuring Out How Much it Would Cost
The cost of adding governance to NAFORMA was small. NAFORMA had already planned and budgeted for sending teams to the field, so the project planner did not calculate additional costs for the governance assessment. Most extra costs came from spending slightly more time in the field conducting surveys and more time entering and analyzing data.

Figuring Out How Long it Would Take
The original work plan for the inventory (see below) allowed 14 months for survey data collection. This estimate reflected how many households were to be interviewed, how many teams would be working, how many interviews each team could do in a week (considering interview time and travel), and when the teams could work. For various reasons, from longer-than-normal rainy seasons to scheduling conflicts, data collection took 26 months.

The governance component comprised 15–20 minutes of each 90–120 minute household interview.

Writing a Work Plan
The Project Technical Unit drafted a work plan for the whole process in 2009. They have modified the work plan several times since, due to such unforeseen factors as competing TFS priorities, late appointment of staff, restructuring within MNRT, and delays in access to field sites and equipment delivery caused by prolonged rains in 2010 and 2011. The preparatory phase lasted 13 months rather than the predicted nine, and the implementation phase took 26 months instead of 14 (as predicted in the original project document). In 2010, NAFORMA's project duration was extended by eight months, to December 2012, for a total of 44 months; later, an 18-month extension phase was added, extending NAFORMA into mid-2014.

POINT ON PROCESS: COMMUNICATING THE PROCESS
The work plan incorporated stakeholder consultations into the inception phase and involved stakeholders in a Steering Committee that met on occasion.

III. Refining the Data Collection Method

Defining What to Measure
The NAFORMA team based its decisions about which governance indicators to measure on the results of a December 2010 technical workshop, during which attendees considered data collectors' preliminary feedback and the recommendations of the unpublished study "Measuring Forest Governance for REDD+." In January 2011, NAFORMA added a section on governance, which included 19 questions covering six indicators, to the socioeconomic survey. The indicators address accountability; conflict and dispute management; transparency; monitoring and enforcement; equity; and access to governance assistance/incentives for land-use alternatives. Indicators were based on the Chatham House, World Bank, and UNFCCC Social Safeguards frameworks.

Identifying Potential Sources of Information
NAFORMA was developed as a national inventory with the aim of promoting more sustainable management of the nation's forest resources; it always included a socioeconomic field survey. After a literature review, consultations, an unpublished study on "Measuring Forest Governance for REDD+," and expert meetings, the NAFORMA team determined that the survey should further consolidate the governance data collected by the socioeconomic component by allocating a separate section (Section K) on governance to the interview protocol.
Selecting Data Collection Methods and Considering a Sampling Plan
The governance survey was part of the socioeconomic survey and followed its overall sampling design and work plan for fieldwork.

In the original project, NAFORMA had intended to use the National Forest Monitoring and Assessment (NFMA) plot design. However, a 2009 study concluded that, by revising the design, the sampling could be done more quickly, more households could be sampled, and the statistical accuracy of the findings would improve. The new sampling design was based on 18 strata.

NAFORMA's biophysical data collection plan sampled heavily forested areas more intensely than lightly or non-forested areas. NAFORMA limited its socioeconomic data collection to households lying within a two-kilometer radius of the center of the biophysical clusters. As a result, most interviewees were from forest-adjacent communities, meaning that NAFORMA's socioeconomic findings are not based on a random sample of the whole of Tanzania.

Balancing the available funds for the fieldwork with the need for accuracy, the NAFORMA sampling design ended up with approximately 3,400 clusters containing about 32,000 biophysical plots. Twenty-five percent of the clusters are meant to be permanent (i.e., for future measurements for monitoring and updating of the findings), while 75 percent are now regarded as temporary. The percentage of permanent clusters, however, may change.

The sampling plan called for conducting household surveys at only half (1,700) of the clusters—at all of the permanent clusters and one-third of the temporary clusters. Before entering the field, data collectors were to identify and map the four households closest to each sampling unit's center and three additional "back-up" households.
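The household-selection rule lends itself to a simple distance filter. The sketch below is a hypothetical illustration of that rule (NAFORMA's actual workflow relied on mapping in the field): it keeps households within two kilometers of a cluster center and sorts them nearest first, so the four closest households and the back-ups are easy to pick. The coordinates and IDs are invented.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two points, in kilometers.
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def eligible_households(households, center, radius_km=2.0):
    # Keep only households within the radius of the cluster center,
    # sorted nearest first.
    lat_c, lon_c = center
    inside = [(haversine_km(h["lat"], h["lon"], lat_c, lon_c), h)
              for h in households]
    inside = [(d, h) for d, h in inside if d <= radius_km]
    inside.sort(key=lambda pair: pair[0])
    return [h for _, h in inside]

center = (-6.80, 37.65)  # hypothetical cluster center
households = [{"id": "A", "lat": -6.81, "lon": 37.66},
              {"id": "B", "lat": -6.90, "lon": 37.80}]
ranked = eligible_households(households, center)
primary, backups = ranked[:4], ranked[4:7]
print([h["id"] for h in primary])
```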
In addition to the household surveys, the teams were to try to interview two key informants for each cluster where they conducted household surveys. The data from key informants would complement and triangulate the data from households.

The new sampling design reduced the time available to cover each cluster, and the team had to reduce the length of the socioeconomic component so that both it and the biophysical measurements could be completed in one day (in the conventional NFMA design there were four to five days available on average per cluster for the southeast component).

Developing Data Collection Tools
FAO consultants developed the socioeconomic survey field forms (questionnaires) in early 2010, based on the NFMA protocol. They created one questionnaire and data collection protocol for household surveys, and a second questionnaire and data collection protocol for key informant interviews. The NAFORMA Project Technical Unit conducted field-testing and revised the field forms from May to December 2010; the team added governance questions to the household survey in January 2011.

Creating a Data Collection Manual
FAO and NAFORMA staff created a field manual for socioeconomic data collection between November 2009 and March 2010. The current manual is called NAFORMA Document M05-2010, the Socioeconomic Field Manual, and is available at http://www.fao.org/forestry/23485-0c45f59c134a7d94ee53613174fab93bb.pdf. It describes the identification of households for interviews and provides a protocol and code of conduct for socioeconomic field data collection, instructions for filling out field forms, the field forms for the household survey, and the key informant survey.

POINT ON PROCESS: VALIDATING METHODS
The field manual and field forms were tested over six months in the field. They were revised based on feedback from the field teams and on an unpublished study on additional indicators needed to consolidate the governance component.

IV. Data Collection

Recruiting and Training Data Collectors
The data collection team, which consisted of government employees from MNRT and local government authorities (such as District Forest Officers), conducted both the biophysical and socioeconomic surveys. Data collectors received one full month of training between November 2009 and March 2010.

Collecting Data
Sixteen NAFORMA field teams collected the data. Before going into the field, data collectors would try to contact the village executive officer and explain the project. They would also explain the purpose of the project to interviewees before conducting interviews.

In total, the field teams conducted socioeconomic interviews in 3,493 households in 1,066 clusters. Interviews were only conducted in 1,066 clusters instead of the planned 1,700 (about half of the total number of clusters) because some clusters were uninhabited.
Data collectors also interviewed 1,120 key informants, who were selected by the team leader with the help of forestry authorities, government employees, and NGO representatives. Key informants included village elders, local property owners, forestry officials, NGO representatives, and other individuals knowledgeable about local forest use. Key informant interview questions overlapped with household survey questions; these questions served to help triangulate the data.

One complicating factor with data collection involved language. The data collectors intended to use a version of the questionnaires in Kiswahili (the official language of Tanzania) to ensure that data collectors, household members, and key informants all had the same understanding of the questions. However, due to errors in the translated field forms, the team ended up using English field forms and translating the questions into Kiswahili during the interviews.

POINT ON PROCESS: PRACTICAL AND ETHICAL DATA COLLECTION
Road safety was the biggest practical concern in field collection. In terms of interview ethics, the interviewees had to be at least 18 years old. One concern about data quality is that interviewees may have been hesitant to answer all the questions honestly, especially regarding sensitive issues such as illegal forest resource use, because the data collectors were government employees.

Assuring Quality in Field Data Collection
Four individuals reviewed the data before they were analyzed:
• Data collectors rechecked and signed their field forms, confirming that the forms were correctly and completely filled out.
• Field team leaders verified that the entered data were correct and complete. The field team leader would then sign the form and submit it to the data management team.
• One member of the data management team would enter the data into the NAFORMA database.
• Another team member would clean the data to ensure they were error free and ready for analysis.

When filling out the field forms, data collectors noted the numbers to code each response and wrote any notes in English, making it easy for data cleaners to code and record the field forms' data.

The database application was gradually improved, and logical checks were built in to capture obvious errors. Two quality assurance teams double-checked biophysical data for about 10 percent of the clusters; the complexity of finding household respondents made it impractical to recheck the socioeconomic surveys.

The original field forms are stored systematically on shelves in data management rooms and organized by zone, district, and cluster number. All of the data entered are stored on a server and backed up digitally offsite (i.e., at FAO HQ) and via Dropbox.
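The logical checks mentioned above can be as simple as range and consistency rules applied to each coded record. The sketch below illustrates the idea; the field names, codes, and rules are hypothetical and are not drawn from NAFORMA's actual codebook or software.

```python
# Hypothetical valid codes for two governance questions ("Section K" style
# names are invented for illustration).
VALID_CODES = {
    "k1_attended_meeting": {1, 2},         # 1 = yes, 2 = no
    "k2_disputes_last_year": {0, 1, 2, 3}  # coded count categories
}

def check_record(record):
    errors = []
    for field, valid in VALID_CODES.items():
        if field not in record:
            errors.append(f"{field}: missing")
        elif record[field] not in valid:
            errors.append(f"{field}: invalid code {record[field]}")
    # Cross-field (logical) check: someone who never attended a meeting
    # should not also report raising an issue at a meeting.
    if record.get("k1_attended_meeting") == 2 and record.get("k3_raised_issue") == 1:
        errors.append("k1/k3: inconsistent answers")
    return errors

print(check_record({"k1_attended_meeting": 2, "k3_raised_issue": 1}))
```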
V. Analysis and Interpretation

Processing the Data
The data management team based at NAFORMA's office at MNRT cleaned and entered the data from the field forms. The FAO-Finland team at FAO Headquarters in Rome coordinated with the Tanzania-based team to develop and continually improve FAO-Finland's open-source data management application, Open Foris Collect.

Analyzing the Processed Data
NAFORMA analyzed the governance data using another of FAO-Finland's software tools, Open Foris Calc. It is statistical analysis software that produces averages, percentages, error estimates, and other statistical data, as well as graphs and tables. Results of queries can be exported to Excel for further processing by the user. As of September 2013, Open Foris Calc was still under development. During 2013, the NAFORMA team compared results of statistical analysis to other studies and local knowledge of conditions on the ground.
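For readers unfamiliar with such estimates, the sketch below shows the basic arithmetic behind a stratified proportion and its standard error, the kind of summary statistic this sort of software reports. It is a simplified illustration with invented numbers: it ignores the finite population correction and the cluster structure of the real design, and it is not Open Foris Calc's actual algorithm.

```python
import math

# Hypothetical per-stratum summaries: total households (N), sampled
# households (n), and the share answering "yes" to one governance question.
strata = [
    {"N": 120000, "n": 800, "p_yes": 0.42},
    {"N": 60000,  "n": 500, "p_yes": 0.61},
    {"N": 20000,  "n": 200, "p_yes": 0.55},
]

N_total = sum(s["N"] for s in strata)

# Stratified estimate: weight each stratum by its population share.
p_strat = sum(s["N"] / N_total * s["p_yes"] for s in strata)

# Approximate standard error of the stratified proportion.
var = sum((s["N"] / N_total) ** 2 * s["p_yes"] * (1 - s["p_yes"]) / s["n"]
          for s in strata)
print(f"estimate = {p_strat:.3f}, standard error = {math.sqrt(var):.3f}")
```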
Making Recommendations
Staff at TFS and FAO-Finland are jointly compiling the final report. The key findings of NAFORMA will feed into the review of Tanzania's National Forest Programme.

POINT ON PROCESS: VETTING FINDINGS
MNRT held a final workshop in May 2013 to present and discuss provisional findings. Each section of the final NAFORMA report is being compiled by the national consultants and their counterparts at TFS. The draft sections are being sent to the Chief Technical Advisor for review. The sections will then be compiled into a final document, which will be circulated among the NAFORMA/FAO-Finland team for comments.

VI. Spreading the Results

Deciding on a Dissemination Strategy
The NAFORMA data-sharing guidelines and communication strategy were developed in 2013 through a process of stakeholder consultations and national endorsement.

Non-sensitive data will be available for free access:
• Processed data and .pdf versions of maps will be available in a free and transparent manner.
• Raw data will only be shared where written agreements exist between TFS and a collaborating institution and only where the collaboration is contributing to a more sustainable management of the forest resources.

Sensitive data, including data that may compromise national security or privacy, disclose locations of red-listed species, disclose plot locations, and so forth, will not be accessible.

Disseminating the Results
FAO-Finland is supporting the development of a self-service web platform where the public can access and query NAFORMA data and results in Open Foris Calc. If the budget permits, MNRT will also conduct some targeted efforts to get the NAFORMA findings into the media.

Institutionalizing Further Assessment
NAFORMA is meant to be institutionalized as a routine assessment conducted by TFS's Forest Resources Monitoring and Assessment Section. TFS is awaiting the release of NAFORMA to use as a baseline to guide revision of the National Forest Programme, which expired in 2010.

Sharing Lessons Learned
TFS has accommodated visit requests from neighboring countries, such as Kenya and Malawi, and has sent field staff to Zambia to assist in training staff there in conducting Integrated Land Use Assessments. It has also held workshops and will release a report on lessons learned for designing and implementing the socioeconomic survey. The National Project Coordinator and the Chief Technical Advisor participated in an information-sharing consultation at FAO HQ in March 2013 with other FAO-Finland pilot countries.

Keeping the Door Open for Further Feedback
The NAFORMA team is currently developing a data-sharing policy, a communication strategy, and a web-based platform for dissemination. The website will include a mechanism for providing feedback.

POINT ON PROCESS: MOVING FROM RESULTS TO ACTION
NAFORMA will feed directly into the review of Tanzania's National Forest Programme as a baseline on the state and extent of forest resources.

VII. Learning and Improvement

Self-evaluating During the Assessment
The NAFORMA team held a mid-term evaluation of its objectives and progress to date in May 2011. At this meeting, data management was identified as an area in need of attention due to a data entry backlog. FAO proceeded to recruit 13 data entry clerks to help clean field data and conduct data entry.

Evaluation After the Assessment
The final NAFORMA report, to be jointly released by FAO and Tanzania's MNRT, will include an evaluation of the process and lessons learned.

POINT ON PROCESS: CONDUCTING INTERNAL GROUP EVALUATIONS
The team management has used informal conversations to identify problems and spread lessons learned.

Case Study: Ecuador
Grupo FARO's Forest Transparency Report Card for Global Witness' "Making the Forest Sector Transparent" Initiative

Thumbnail description: In August 2010, Grupo FARO joined the international initiative "Making the Forest Sector Transparent," led by the international NGO Global Witness. From 2010–2013, Grupo FARO used the Forest Transparency Report Card to annually monitor Ecuador's forest-related legal, policy, and regulatory frameworks, as well as the availability, disclosure, and dissemination of forest sector information (i.e., forest management plans, logging permits, revenues, and infractions). This Report Card is the first global tool to assess transparency and access to information in the forest sector in forest-rich countries and is a partnership between eight NGOs.

What the case illustrates: This case provides an example of using periodic assessments to monitor changes in specific aspects of forest governance. It demonstrates how to tailor an assessment to the country context while collaborating with global partners and how to make use of stakeholder coalitions. It also exemplifies how to use a qualitative approach to scoring indicators that contributes to building a global baseline.

What the case does not illustrate: Grupo FARO's Forest Transparency Report Card does not illustrate concrete linkages between transparency and forest governance, which is contested ground globally. Greater in-depth analysis is required to understand and explain linkages. Also, as its focus is on national institutions and agencies, it does not provide an example of how to evaluate forest transparency at the local or regional level.

Web links for further information: http://www.foresttransparency.info/ecuador/2012/ and http://www.grupofaro.org/sites/default/files/archivos/publicaciones/2012/2012-05-29/op-mvillacis-dyoung-echarvet.pdf.
I. Setting Objectives

Defining the "Why"
"Making the Forest Sector Transparent" is an international initiative aiming to improve forest sector policy and practice in seven forest-rich countries: Ecuador, Peru, Guatemala, Liberia, Ghana, Cameroon, and the Democratic Republic of Congo. The assessment focuses on assessing forest transparency and using the results to advocate for improvements. Grupo FARO looked specifically at the availability of and access to information and at public participation in decision making in Ecuador.

Considering Context
In 2009, Global Witness piloted the Forest Transparency Report Card with partners in four countries, three of which were in Africa and one in Latin America. When expanding the project in 2010, Global Witness sought to better represent Latin American countries. Global Witness chose Ecuador because its legal frameworks and institutional structures were amenable to assessment via the Report Card mechanism. It asked Grupo FARO to be its Ecuadorian partner in light of the organization's expertise in monitoring transparency, compliance, and access to information.

Setting Objectives
The international initiative's main objective was to assess transparency and access to information in the forest sector in Ecuador and other forest-rich countries. Grupo FARO was concerned primarily with the legal and regulatory frameworks of the forest sector and with examining the public finance commitments to forest sector regulation. It aimed to use the results to advocate for better forest governance; to make the Ecuadorian government more responsive and accountable to the public; and to build civil society's capacity to access information and participate in decision making.

POINT ON PROCESS: INVOLVING STAKEHOLDERS IN OBJECTIVE SETTING
Grupo FARO established an informal coalition of 15 to 20 other organizations working on forest governance and access to information. Grupo FARO identified many of these through stakeholder mapping. The coalition was involved in the whole project. Although the main objectives were already set by the international project, Grupo FARO and national stakeholders had the opportunity to decide on the actions to take to achieve the key objectives.

II. Developing a Work Plan

Identifying the Scope of the Assessment
Global Witness's Forest Transparency Report Card, which acted as the diagnostic tool, defined the technical scope of the assessment. Grupo FARO chose to apply the tool to national agencies and institutions. It collected data from over a dozen agencies to determine the status of 20 indicators.

Identifying the General Methods
Grupo FARO and Global Witness determined that Grupo FARO could score the indicators by using stakeholder mapping to identify key stakeholders in the Ecuadorian forest sector, engaging in a desk review of available data from forest-related agencies, and conducting structured interviews with key informants.

Identifying Who Would Conduct the Assessment
Grupo FARO coordinated and conducted the in-country technical work, including data collection and analysis. It received technical support from Global Witness and funding from the UK Department for International Development (DfID).

Figuring Out How Much it Would Cost
Grupo FARO estimated the project budget in coordination with Global Witness. In order to estimate the budget, Grupo FARO calculated the costs for two full-time employees' salaries, field collection (holding meetings, conducting interviews, and travel expenses), and analysis (fees for peer reviewers and workshop expenses). Approximately half of the $100,000 annual budget was dedicated to mini grants that aided small organizations in capacity building. For instance, the Ecuadorian Center for Environmental Law (CEDA) conducted workshops with government officials and civil society to gauge their knowledge about the transparency law and build their capacity to use it.

Figuring Out How Long it Would Take
Grupo FARO planned for the whole assessment, from preparation to dissemination, to take two to three months. Due to the amount of coordination and the number of actors involved in the assessments, however, each annual assessment took more than six months.

Writing a Work Plan
Grupo FARO and Global Witness designed a three-year work plan, which covered conducting assessments through 2012. The groups reviewed and adjusted the work plan annually.

POINT ON PROCESS: COMMUNICATING THE PROCESS
Grupo FARO used the stakeholder coalition to keep stakeholders informed and to provide general feedback throughout the process.
III. Refining the Data Collection Method

Defining What to Measure
Global Witness and partner NGOs in the four pilot countries (Liberia, Ghana, Cameroon, and Peru) designed the report card in April 2009. Each country partner developed and used different indicators and methods based on its country's context, with the goal of contributing to a common data set. Indicator scores were based on yes-no questions, meant to be objective and straightforward and supported by evidence collected in-country. The first common Forest Governance Report Card template, which included 70 indicators covering 15 components, was refined at a May 2010 workshop based on lessons from the pilot countries.

In 2011 and 2012, the report card focused on 20 indicators at the core of forest governance: 12 "framework indicators" to assess whether the legal, policy, and regulatory frameworks include provisions for forest sector transparency and good governance, and eight "data indicators" to assess whether key documents and data on forest sector activities are comprehensively and regularly published. Grupo FARO tailored the indicators to the national context and its focus on access to information.

Identifying Potential Sources of Information
A stakeholder mapping exercise identified 13 institutions that were relevant to Ecuador's forest governance. Grupo FARO employees then looked through the institutions' websites to assess the data available and either requested additional data or requested interviews with key informants at these institutions.

Selecting Data Collection Methods and Considering a Sampling Plan
Grupo FARO primarily used desk reviews to collect data, relying on direct government sources and information published by other stakeholders through official channels. Primary data collection, mainly in the form of interviews, was done via contact with stakeholders directly involved in forest-related policy and decision making and complemented the secondary data.

When Grupo FARO joined the initiative in 2010, it assessed the three national-level agencies with direct forest sector involvement and nine others with indirect forest sector responsibilities. In 2012, this was expanded to 20 institutions.

Developing Data Collection Tools
Grupo FARO chose which data to collect based on the Report Card indicators. It wrote an interview protocol, which it then validated with Global Witness.

Creating a Data Collection Manual
Due to the nature of its data collection methods, Grupo FARO did not create or use a data collection manual for its Forest Transparency Report Card.

POINT ON PROCESS: VALIDATING METHODS
Global Witness worked closely with Grupo FARO throughout the assessment process, which included validating the data collection tools and methods.

IV. Data Collection

Recruiting and Training Data Collectors
There was no need for external data collectors. Internal collectors received informal guidance and feedback from the Global Witness team.

Collecting Data
In addition to searching institutions' websites, Grupo FARO made use of Ecuador's legal and regulatory measures to access institutions' documents and assess their transparency. To do so, it invoked Article 91 of the 2008 Ecuadorian Constitution and Article 7 of the 2004 Organic Law of Transparency and Access to Public Information (LOTAIP). It also used external search engines, such as Lexis's legal regulation search engine, which required payment of a licensing fee.

When conducting interviews, Grupo FARO's two data collectors followed their interview protocol. They explained the assessment's goal and methods before showing key informants the data they had been able to publicly access. The interviewees then validated, added to, or expanded upon the data.
Assuring Quality in Field Data Collection
There were only two data collectors; they used an interview protocol when conducting interviews with key informants from relevant agencies. Grupo FARO always sent the information to the key informants who had been interviewed in order to validate the information before it was published. In addition, Grupo FARO shared findings with the stakeholder coalition, which pointed the researchers to additional data sources.

Grupo FARO used the online program Zotero to record the information that it collected from the web. This tool allows the taking of screen shots of the visited websites in order to have a backup.

POINT ON PROCESS: PRACTICAL AND ETHICAL DATA COLLECTION
The data collection methods posed no practical or ethical concerns.

V. Analysis and Interpretation

Processing the Data
Taking qualitative findings about agency transparency, Grupo FARO applied its own method, developed in 2005, to arrive at composite scores for several agencies involved in forestry. These scores are expressed as percentages. Also, using the Global Witness Report Card protocol, Grupo FARO assigned a red, yellow, or green dot to convey a visual sense of the score of twenty transparency indicators, and assigned one of five symbols to convey whether the indicator's score had significantly improved, improved, not changed, worsened, or become significantly worse.
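The scoring conventions just described boil down to two small mappings, sketched below. The thresholds are hypothetical (the guide does not specify the cut-offs Grupo FARO used), but the structure mirrors the dot-and-trend-symbol scheme.

```python
def dot(score_pct):
    # Map a composite percentage score to a traffic-light dot
    # (illustrative thresholds).
    if score_pct >= 66:
        return "green"
    if score_pct >= 33:
        return "yellow"
    return "red"

def trend(current_pct, previous_pct, big=10, small=3):
    # Map a year-on-year change to one of the five trend categories
    # used in the Report Card (cut-offs invented for illustration).
    change = current_pct - previous_pct
    if change >= big:
        return "significantly improved"
    if change >= small:
        return "improved"
    if change <= -big:
        return "significantly worse"
    if change <= -small:
        return "worsened"
    return "not changed"

print(dot(72), "/", trend(72, 58))  # green / significantly improved
```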
Global Witness gathered ports in Quito, issued a press release with Global all of the country partners annually to evalu- Witness, and mailed copies of the reports to key ate the project. During a May 2011 workshop stakeholders and the provinces. It also made use of in Cameroon, for instance, the partner NGOs its networks and was invited to present the Report agreed on a reduced indicator list. Card at such events as a Forest Roundtable with GIZ, Solidaridad Internacional, and others; a re- Evaluation After the Assessment gional conference in Guatemala on transparency Global Witness hired an external evaluation team in infrastructure; and Transparency International’s in 2012 to conduct an evaluation of the entire International Anti-Corruption Conference in Forest Transparency Report Card process. The December 2012. Grupo FARO has also collabo- evaluators contacted project partners, govern- rated with other organizations and the government, ment officials, and others involved in any part of publishing a collection of articles about forest gov- the project process. The team then provided rec- ernance with its 2012 Report Card. ommendations to Global Witness and the civil society organizations with which it collaborated. Institutionalizing Further Assessment Grupo FARO and regional partners are seeking Sharing Lessons Learned funding and general support to continue the Global Witness has published the results of its monitoring activities. The initiative was framed evaluation: http://www.foresttransparency.info/ as a pilot, with the intention of refining it and report-card/2012/lessons-learnt/. replicating it in other Latin American countries. Keeping the Door Open for Further Feedback Grupo FARO is always open to receiving feed- back from the general public through meetings, POINT ON PROCESS: MOVING FROM e-mail, or any other means. RESULTS TO ACTION Through its Report Card assessments, en- POINT ON PROCESS: CONDUCTING gagement of government officials, and other INTERNAL GROUP EVALUATIONS capacity-building work, Grupo FARO has raised awareness about how to improve for- Sometimes evaluations must take place on est transparency in Ecuador. With partners in multiple levels. This project had three levels Guatemala and Peru, Grupo FARO is exploring of organization—the Grupo FARO data col- funding opportunities to build on this work. lection team, the larger stakeholder coalition, and the international Global Witness effort— and each held reviews. As noted above, Grupo VII. Learning and Improvement FARO organized periodic reviews on the first two levels and Global Witness organized an- Self-evaluating During the Assessment nual reviews on the top level. Grupo FARO held internal meetings every month to assess its progress and three review meetings 158 ASSESSING FOREST GOVERNANCE: A PRACTICAL GUIDE TO DATA COLLECTION, ANALYSIS, AND USE Case Study: Liberia Sustainable Development Institute (SDI) Independent Forest Monitoring to Produce the First Social Audit of the Forestry Sector Thumbnail description: This assessment, by Considering Context the Forest Governance Program of SDI and the The Liberia-EU voluntary partnership agreement Civil Society-Independent Forest Monitors (CS- includes provisions for civil society monitoring. IFM), examined how community benefit-sharing It was through this provision that the study was and governance mechanisms were working in funded. Liberian logging concessions. 
What the case illustrates: This case illustrates how third-party observers have conducted a focused audit of a government forestry program, using community input gathered through surveys along with data from government documents and records.

What the case does not illustrate: This is not a national assessment or a broad examination of all aspects of forest governance.

I. Setting Objectives

Defining the "Why"
The purpose of this work was to assess whether logging concessions were helping to meet the national forest policy's objectives of economic development, equitable forest access, and stakeholder participation. In particular, the assessment was to measure the impact of the concessions on poverty reduction through contributions to communities and contributions to government revenues. A secondary objective was to set a baseline to help document the social effects of implementing the Liberia-EU Voluntary Partnership Agreement (VPA).

Considering Context
The Liberia-EU voluntary partnership agreement includes provisions for civil society monitoring. It was through this provision that the study was funded.

Setting Objectives
As well as providing baseline data, the hope was that the information contained in the social audit would influence government policy and highlight areas that need new regulation or modifications to existing regulation. Visiting communities to conduct the interviews provided an opportunity to share information with them and to keep them updated on relevant forest governance developments.

POINT ON PROCESS: INVOLVING STAKEHOLDERS IN OBJECTIVE SETTING
Other NGOs and the Forestry Development Authority (FDA) were involved in the Making the Forest Sector Transparent project. This assessment followed from that project.

II. Developing a Work Plan

Identifying the Scope of the Assessment
The geographic scope was limited to communities affected by seven logging concessions. The subject matter scope covered four main areas: whether the concessions were fulfilling their legal requirements; whether the communities have access to forest management planning documents; whether the communities receive benefits; and how well the communities manage the receipt of community benefits.

Identifying the General Methods
The main method of social data collection was surveys conducted by interview. The interview subjects were community members, particularly members of community forestry development committees (CFDCs), local government officials, and local leaders. The assessment used stakeholder workshops to validate the information from the social audit.

There was also a desk-based component. This included a fiscal audit, using data on forestry tax payments provided by financial updates from SGS, a private consultancy verifying timber harvests and revenue collection for the government. Contracts and other forest management documents, such as environmental and social impact assessments (ESIAs), were also used to obtain information on the level of compliance that each concession had with forestry law.

Identifying Who Would Conduct the Assessment
The assessment was conducted by the Civil Society-Independent Forest Monitors, led by members of the SDI Forest Governance Program. The assessment was funded by the European Union and the UK Department for International Development (DfID).

Figuring Out How Much it Would Cost
The Civil Society-Independent Forest Monitors' program head and finance manager wrote the budget. The resources were obtained through EU and DfID funding to undertake civil society monitoring of the VPA. The initial estimated costs were:
• Transportation and field trips: $3,500
• Data entry: $1,250
• Data analysis and report writing: $8,000
• Publishing and printing costs: $6,000

Figuring Out How Long it Would Take
The assessment estimated that the entire project would take one year, including data collection, analysis, report writing, and report publication.

Writing a Work Plan
There wasn't a work plan specifically for the social audit itself, but there was one for the larger project. It covered things like reporting and vetting findings with stakeholders before publication.

POINT ON PROCESS: COMMUNICATING THE PROCESS
The assessment included a stakeholder meeting to discuss the initial findings and to provide feedback, which was then incorporated into the final report.
III. Refining your Data Collection Method

Defining What to Measure
The assessment wrote out a set of questions that it needed to answer, laid out under four main objectives or themes.

Identifying Potential Sources of Information
The assessment needed to understand the legal duties of the government and concession holders and the rights of the communities. These it found in the forestry laws. It needed to learn about benefit-sharing in practice. The information sources for this were the community members and the public records kept on the concessions. To validate some of the information, the assessment sought official documents from the Forestry Development Authority and other agencies.

Selecting Data Collection Methods and Considering a Sampling Plan
The assessment used survey interviews, workshops, and desk studies.

The assessment selected areas with active concessions. The surveys took place only in communities that were affected by the concessions and that had CFDCs. The assessment chose interview subjects from among CFDC members, local government officials, and traditional community leaders. These people were more likely to have a greater understanding of forest governance issues than the community at large. The assessment set out to interview 10 people per affected community, although it was not always possible to interview this many people due to logistical and transportation issues.

Developing Data Collection Tools
The assessment developed a survey questionnaire for interviews, with 36 questions under four main themes. The questionnaires were mainly yes/no questions, but interviewees could add information if they needed to. In addition, the assessment used templates for collecting desk-based data.

The decision on which type of analysis to use wasn't made until after most of the data collection had been done. In retrospect, the surveys collected a combination of qualitative and quantitative data; this made analysis more difficult. The CS-IFM team will improve the template for the next social audit. There will be separate quantitative and qualitative sections, making the data easier to analyze.

Creating a Data Collection Manual
The assessment did not create a data collection manual.

POINT ON PROCESS: VALIDATING METHODS
The assessment did not seek separate validation of its methods, but it plans to do this for an upcoming social audit to ensure a robust study design that will be relatively straightforward to analyze.

IV. Data Collection

Recruiting and Training Data Collectors
All the data collectors were members of the Civil Society-Independent Forest Monitoring team. The survey team was trained in interview skills.

Collecting Data
The survey interview process involved making a series of visits to affected communities to conduct the interviews. Notes were taken on the questionnaire, and these notes were transcribed into the questionnaire templates. This information was collated and recorded in tables.

The tax payment data were obtained directly from the SGS financial updates.

Data on legal compliance were obtained, where available, directly from contracts and official documents.

The three strands of information were used to analyze the situation within each affected community in relation to each of the four objectives.
Assuring Quality in Field Data Collection
The data collection team members were all trained in interview techniques. The questionnaire and assessment template were designed according to SDI's experience in the sector, knowing what to look for and what breaches of the law there have been in the past.

Every effort was made to follow the sampling plan; because the interviews were carried out in remote parts of Liberia, however, it was not always possible for all targeted interviewees to be contacted.

The data collectors transcribed the raw interview data into the templates; the forms were then checked for any inconsistencies and errors and corrected before the data were collated and analyzed. The data forms were organized into files and folders for easy access. These were stored on a number of computers to ensure that the data would not get lost.

As mentioned above, the assessment used stakeholder workshops to validate information from the social audit. To an extent, the assessment also validated the field data by looking for consistency with information from laws, contracts, official documents obtained from government agencies, and tax payment documents and receipts.

POINT ON PROCESS: PRACTICAL AND ETHICAL DATA COLLECTION
There were no practical or ethical concerns, or worries about the safety of data collectors.

V. Analysis and Interpretation

Processing the Data
The assessment did counts on the responses to ascertain the patterns within each community and between the different affected communities. It used the tax data to make calculations that were relevant to the assessment—the total amount due, the total amount paid, and the total amount owing. This was then further broken down into the amount that was due to communities (as the study focused on community benefits).
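The counting and tax arithmetic described above is simple enough to sketch in a few lines. The field names, figures, and community share below are invented for illustration; the real audit worked from its questionnaire tables and the SGS financial updates, and actual benefit shares are set by Liberian law and contracts.

```python
from collections import Counter

# One dict per interviewee in a hypothetical community.
responses = [
    {"attends_meetings": "yes"},
    {"attends_meetings": "no"},
    {"attends_meetings": "yes"},
]
counts = Counter(r["attends_meetings"] for r in responses)
print(counts["yes"], "attend meetings vs", counts["no"], "who do not")

# Tax position for one concession (invented figures).
due, paid = 250000.0, 180000.0
owing = due - paid
COMMUNITY_SHARE = 0.30  # illustrative share only
print(f"owing = {owing:,.0f}, of which due to communities = "
      f"{owing * COMMUNITY_SHARE:,.0f}")
```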
Analyzing the Processed Data
Because the assessment had a small sample size for each of the concessions it assessed, it wasn't meaningful to statistically analyze the data. As a result, the assessment used the counts for analysis and conclusions (e.g., the number of people attending community meetings versus the number of people not attending community meetings).

Making Recommendations
The assessment found some striking patterns in all of the concessions. It made recommendations based on these patterns. It also held a stakeholder meeting to discuss the findings, and the inputs received during this meeting were useful in refining the recommendations.

POINT ON PROCESS: VETTING FINDINGS
The assessment held a stakeholder meeting on the findings and gathered feedback and comments. It also contacted partner organizations and obtained feedback on the entire assessment.

VI. Spreading the Results

Deciding on a Dissemination Strategy
The assessment was slated to publish its report in late 2013. It was organizing an official launch, inviting stakeholders, drafting press releases, and inviting the press. The assessment was also preparing briefing papers on the findings to be released alongside the report. It will also be producing community-specific versions of the social audit to highlight relevant issues to rural communities.

The assessment will call on international and national partner organizations to publicize the report on their websites and newsletters and will also arrange meetings with relevant government agencies to discuss the findings and next steps (i.e., in terms of policy change and implementation of current legislation).

Disseminating the Results
Not yet done at the time the case study was written.

Institutionalizing Further Assessment
Not done.

Sharing Lessons Learned
The assessment expects to produce a lessons-learned document that can be disseminated to partners and published on its website.

Keeping the Door Open for Further Feedback
The e-mails of the report authors will be included so that readers are able to give feedback and ask further questions. The stakeholder meeting during the launch will also provide space for feedback on the assessment.

POINT ON PROCESS: MOVING FROM RESULTS TO ACTION
The organizations conducting the assessment will be meeting with relevant government ministries and agencies to discuss next steps. They will also be educating communities and assisting them in organizing their responses to the findings and relevant developments in light of the issues highlighted by the social audit.

VII. Learning and Improvement

Self-evaluating During the Assessment
This will be done during the process of making adjustments to the assessment template and questionnaire (which will be done before the next set of data collection begins).

Evaluation After the Assessment
None yet.

POINT ON PROCESS: CONDUCTING INTERNAL GROUP EVALUATIONS
As noted above, this will be part of the process of reevaluating the social audit design and improving the process of data collection.

Case Study: Uganda
The World Bank Piloting of the PROFOR Diagnostic Tool

Thumbnail description: The World Bank/PROFOR developed a forest governance diagnostic tool based on its "Roots for Good Forest Outcomes: An Analytic Framework for Forest Governance Reforms." The tool used a set of about 130 indicators, scored in a consensus-oriented stakeholder workshop. The case, in Uganda in 2010, was the first pilot test of the tool.

What the case illustrates: The case is an example of taking an off-the-shelf tool, adapting it to local conditions, and using it. Because the PROFOR tool relies on stakeholder scoring of indicators, the case illustrates one avenue for involving stakeholders. It is also an example of a fairly quick assessment that does not require a great deal of data processing and management skill.
What the case does not illustrate: The case does not offer examples of use of surveys or quantitative analytic tools, or of complex data management. Because the case used an off-the-shelf tool, many of the choices about scope and method were decided beforehand. The case does not provide a good example of post-assessment implementation of recommendations.

Web page for further information: http://www.profor.info/events/workshop-forest-governance-reforms-uganda.

I. Setting Objectives

Defining the "Why"
This was a pilot test of a new governance assessment tool. The "why," therefore, had two parts: (1) to conduct a general diagnosis of forest governance in Uganda, and (2) to learn more about the diagnostic tool itself.

Considering Context
The key element of country context was the willingness of the Government of Uganda to participate in the diagnostic exercise. The World Bank had looked into using a number of countries to test the tool, and Uganda was among the first to agree to participate. The country's willingness was due partly to the importance of forestry to the country's national development plans. In addition, a pair of corruption scandals had recently hit the National Forest Authority. The scandals raised awareness among all stakeholders of the need for reform. Because one of the scandals affected use of donor funds, international development partners had frozen grants and lending to government forest projects. The government was eager to move past these problems, and the diagnostic assessment seemed a good way to start.

Setting Objectives
The primary objective of the pilot project was to conduct a broad, diagnostic assessment of forest governance in Uganda and to identify priority areas for improvement. The secondary objectives were to field-test the new PROFOR diagnostic tool and to foster consensus about reform among stakeholders. Fostering stakeholder consensus is a generic secondary objective built into the PROFOR tool, which scores its indicators in a consensus-oriented workshop.

POINT ON PROCESS: INVOLVING STAKEHOLDERS IN OBJECTIVE SETTING
PROFOR set the piloting objective. The government invited PROFOR and the World Bank to conduct a broad assessment. PROFOR and the government did not consult other stakeholders in setting the objectives.

II. Developing a Work Plan

Identifying the Scope of the Assessment
The PROFOR tool uses a broad definition of forest governance that has about 130 indicators. It is possible to alter the scope by adding or removing criteria or indicators. After review by local experts, the assessment added a few indicators (e.g., one on honoring human rights in the enforcement of forest laws) and thinned the full set to about 100 key indicators. This did not narrow the breadth of the assessment, but it did slightly reduce its depth and complexity.

Identifying the General Methods
The PROFOR tool comes with a basic method, which users can vary as needed. The Uganda pilot test was to follow the basic method. A local expert wrote a background paper on the sector and customized the indicator set for Uganda. A local facilitator conducted a stakeholder workshop to score the set of indicators, by consensus if possible, and to identify a smaller set of priority issues. Local stakeholders then reviewed and validated the findings of the workshop. The local expert and the facilitator prepared the report on the findings.

Identifying Who Would Conduct the Assessment
The World Bank, under an invitation from the Ministry of Water and Environment of Uganda, sponsored and conducted the assessment.

Figuring Out How Much it Would Cost
The exercise had a budget of roughly $60,000. This included the cost of bringing two tool designers to Uganda from Washington to participate in the stakeholder workshop.

Figuring Out How Long it Would Take
Once the local expert and facilitator were hired, the diagnostic design called for completion in six weeks. The expert review of the sector and customization of the indicators was to take three weeks. The stakeholder workshop was to take one week. Tabulation of results and vetting was to take two weeks.

Writing a Work Plan
The testing of the tool in Uganda was part of a larger project to develop the tool. That project had a work plan (a "concept note"). The Uganda pilot did not have a separate work plan.

POINT ON PROCESS: COMMUNICATING THE PROCESS
The local expert prepared a list of stakeholders in the sector. From the time they were invited to participate in the scoring workshop, stakeholders knew about and played a central role in implementing the assessment.
III. Refining your Data Collection Method

Defining What You Intend to Measure
The World Bank had developed a five-building-block model of forest governance, with each block broken into components and subcomponents. The subcomponents formed the initial criteria. (Note that the current version of the PROFOR tool now uses the PROFOR-FAO Framework three-pillar model as the basis of its criteria.) A pool of experts at the World Bank had developed indicators for each criterion. The Uganda expert helped customize these indicators for Uganda.

Identifying Potential Sources of Information
The tool design called for stakeholder scoring at a single workshop. A key task was to identify a representative group of stakeholders. The local expert and local facilitator compiled lists, in consultation with assessment coordinators at the World Bank. The tool design also called for preparation of a background paper on the forest sector in Uganda. The local expert wrote this paper using published sources and his own knowledge.

Selecting Data Collection Methods and Considering a Sampling Plan
The tool called for a stakeholder workshop to score the indicators, with the general results to be vetted through interviews with key stakeholders not at the workshop. The selection of people to invite to the workshop therefore constituted the sampling plan. Post-workshop vetting took place in Kampala, drawing upon stakeholders who were readily available in the city.

Developing Data Collection Tools
The primary tool (the indicator set and workshop format) was already developed. The local facilitator, in consultation with local and World Bank experts, designed the scoring workshop.

Creating a Data Collection Manual
The assessment did not have a data collection manual; each indicator did, however, include notes explaining its rationale and how to interpret it. The object of these notes was to avoid misinterpretation of the indicators by the workshop participants.

POINT ON PROCESS: VALIDATING METHODS
The method was set without consulting stakeholders; the project did, however, seek stakeholder feedback on the approach during the workshop.

IV. Data Collection

Recruiting and Training Data Collectors
The key data collectors were the workshop facilitator, the local expert, and the World Bank experts who designed the tool. The administrative staff of the World Bank office in Kampala provided the data collectors with logistical support. The facilitator and local expert already had the skills to conduct the workshop. Through reading the tool and discussing it with the tool designers, they learned what they needed to know to use the workshop to score the indicators. In a way, the participants at the workshop were data collectors when they scored the indicators in their breakout groups. The workshop included a session explaining the tool and how to score the indicators.
Collecting the Data
The assessment collected the data in a two-day stakeholder workshop. The workshop participants scored the indicators and then selected fifteen indicators that they considered to be the highest priorities.

Assuring Data Quality
The tool includes detailed notes on each indicator. The participants in the workshop were given the indicators and the notes in advance and were able to consult them as they scored the indicators.

The data were simply the scores of the indicators. Data transcription and storage were not major concerns. The workshop report included all the data that the scoring workshop produced.

POINT ON PROCESS: PRACTICAL AND ETHICAL DATA COLLECTION
The team sought to be transparent about what it was doing (in terms of collecting data) and why. The ground rules of the stakeholder workshop required the participants not to attribute remarks to individual speakers or their organizations.

V. Analysis and Interpretation

Processing the Data
The scores from the workshop did not need extensive processing. The team did devise ways to display the scores graphically, including in spider web diagrams.

Analyzing the Processed Data
The spider web diagrams allowed the team to plot the actual scores against the ideal scores. (All the PROFOR indicators are normative and have ideal scores.) The workshop itself did some analysis by identifying the highest priority issues.
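A spider web diagram of this kind is easy to reproduce. The sketch below is a minimal matplotlib illustration, not the team's actual plotting code; the indicator names and scores are invented, and the 0 to 4 scale is only an assumption for the example.

```python
import math
import matplotlib.pyplot as plt

labels = ["Transparency", "Participation", "Accountability",
          "Capacity", "Rule of law"]
actual = [2, 3, 1, 2, 3]
ideal = [4, 4, 4, 4, 4]  # normative indicators each have an ideal score

angles = [2 * math.pi * i / len(labels) for i in range(len(labels))]
# Close the polygons by repeating the first point.
angles += angles[:1]
actual += actual[:1]
ideal += ideal[:1]

ax = plt.subplot(polar=True)
ax.plot(angles, ideal, label="ideal")
ax.plot(angles, actual, label="actual")
ax.set_xticks(angles[:-1])
ax.set_xticklabels(labels)
ax.legend()
plt.savefig("spider.png")
```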
VI. Spreading the Results

Deciding on a Dissemination Strategy
The plan called for a workshop report and an annex with recommendations. However, because the key stakeholders were at the scoring workshop and the workshop was open to the press, the raw results were spread by word of mouth.

The primary target audience was decision makers within the government. International development partners, who control funding of key forestry projects, were also an important audience; so too were nongovernmental stakeholders generally.

Disseminating the Results
There were two official reports. In addition, there was considerable "word of mouth" from people who attended the scoring workshop. There were also informal discussions after the workshop between the tool implementers and target audience members. Outside of Uganda, PROFOR discussed the Uganda pilot when it released the guide to its tool, and the people who worked on the tool and the assessment have discussed it in scholarly publications.

Institutionalizing Further Assessment
Although the assessment identified priority indicators that the government could use to monitor its progress in governance reform, actually institutionalizing future assessment was beyond the scope of this pilot test.

POINT ON PROCESS: MOVING FROM RESULTS TO ACTION

Beyond supplying the recommendations to the government, the project had no follow-up geared toward implementation.

VII. Learning and Improvement

Self-evaluating During the Assessment
A portion of the workshop was devoted to evaluation of the assessment tool.

Evaluation after the Assessment
The team did not do a formal self-evaluation exercise, but members did discuss the experience among themselves. These discussions influenced subsequent use of the tool.

Sharing Lessons Learned
The workshop report included the participants' critique of the tool. The guide to using the tool, which PROFOR published in 2012, reflects lessons learned.

Keeping the Door Open for Further Feedback
The people who worked on the pilot project continue to follow governance activities in Uganda. There is, however, no formal mechanism for feedback.

POINT ON PROCESS: CONDUCTING INTERNAL GROUP EVALUATIONS

As noted above, the project team did not conduct a formal post-assessment self-evaluation.

ANNEX II: METHODS, TOOLS, GUIDANCE, AND REFERENCES

This annex lists materials that may be of use in planning and carrying out assessments. It begins with a set of materials that provide general information on data collection, forest governance, and governance assessment and improvement. It then lists materials that may be useful in applying specific methods. If you are seeking more information on a method discussed in the guide, check the chapter entries that follow the general references. These offer more specific reference works and citations to specific pages within some of the general references.

General References

About Data Collection and Analysis in the Social Sciences

Earl Babbie. 2010. The Practice of Social Research. Twelfth Edition. Belmont, California, USA: Wadsworth/Cengage Learning.

H. Russell Bernard. 2006. Research Methods in Anthropology: Qualitative and Quantitative Approaches. Fourth Edition. Lanham, Maryland: AltaMira Press.

Alan Bryman. 2012. Social Research Methods. Fourth Edition. Oxford: Oxford University Press.

Giuseppe Iarossi. 2006. The Power of Survey Design. Washington: World Bank. https://openknowledge.worldbank.org/handle/10986/6975.

About Forest Governance

PROFOR and FAO. 2011. Framework for Assessing and Monitoring Forest Governance. Rome: FAO. http://www.fao.org/climatechange/27526-0cc61ecc084048c7a9425f64942df70a8.pdf.

The World Bank. 2009. Roots for Good Forest Outcomes: An Analytical Framework for Governance Reforms. Report No. 49572-GLB. Washington, DC: The World Bank. http://www.profor.info/sites/profor.info/files/docs/ForestGovernanceReforms.pdf.

About Forest Governance Assessment and Improvement

Crystal Davis, Lauren Williams, Sarah Lupberger, and Florence Daviet. 2013. Assessing Forest Governance: The Governance of Forest Initiative Indicator Framework. Washington: WRI. http://www.wri.org/publication/assessing-forest-governance.
FAO. 2010. Enhancing Stakeholder Engagement in National Forest Programmes: A Training Manual. Peter O'Hara. National Forest Programme Facility. Rome. http://www.fao.org/docrep/014/i1858e/i1858e00.pdf.

FAO. 2012. Strengthening Effective Forest Governance Monitoring Practice, by A. J. van Bodegom, S. Wigboldus, A. G. Blundell, E. Harwell, and H. Savenije. Forestry Policy and Institutions Working Paper No. 29. Rome. http://www.fao.org/docrep/015/me021e/me021e00.pdf.

FAO. 2013. Improving Governance of Forest Tenure: A Practical Guide, by J. Mayers, E. Morrison, L. Rolington, K. Studd, and S. Turrall. Governance of Tenure Technical Guide No. 2. London and Rome: International Institute for Environment and Development and FAO. http://www.fao.org/docrep/018/i3249e/i3249e.pdf.

Global Witness. 2005. A Guide to Independent Forest Monitoring. London. http://www.globalwitness.org/library/guide-independent-forest-monitoring.

IIED. 2005b. The Pyramid: A Diagnostic and Planning Tool for Good Forest Governance. http://www.policy-powertools.org/Tools/Engaging/docs/pyramid_tool_english.pdf.

Jens Friis Lund, Helle Overgaard Larsen, Bir Bahadur Khanal Chhetri, Santosh Rayamajhi, Øystein Juul Nielsen, Carsten Smith Olsen, Patricia Uberhuaga, Lila Puri and José Pablo Prado Córdova. 2008. When Theory Meets Reality—How to Do Forest Income Surveys in Practice. Forest & Landscape Working Papers No. 29-2008, 48 pp. Forest & Landscape Denmark, University of Copenhagen, Hørsholm. http://curis.ku.dk/ws/files/20573307/workingpapersno29.pdf.

Nalin Kishor and Kenneth Rosenbaum. 2012. Assessing and Monitoring Forest Governance: A User's Guide to a Diagnostic Tool. Washington: Program on Forests (PROFOR). http://www.profor.info/sites/profor.info/files/docs/AssessingMonitoringForestGovernance-guide.pdf.

About Other Kinds of Governance Assessment and Improvement

Arild Angelsen, Helle Overgaard Larsen, Jens Friis Lund, Carsten Smith-Hall, and Sven Wunder. 2011. Measuring Livelihoods and Environmental Dependence: Methods for Research and Fieldwork. London: Earthscan.

Klaus Deininger, Harris Selod and Anthony Burns. 2012. The Land Governance Assessment Framework: Identifying and Monitoring Good Practice in the Land Sector. Washington: World Bank. https://openknowledge.worldbank.org/handle/10986/2376.

J. Hinton and M. R. Hollestelle. 2012. Methodological Toolkit for Baseline Assessments and Response Strategies to Artisanal and Small-Scale Mining in Protected Areas and Critical Ecosystems. Published under the Artisanal and Small-scale Mining in and around Protected Areas and Critical Ecosystems (ASM-PACE) project of WWF & Estelle Levin Ltd. http://www.asm-pace.org/projects/methodological-toolkit.html.

IUCN. 2013. Governance of Protected Areas: From Understanding to Action. http://www.iucn.org/about/work/programmes/gpap_home/gpap_people/diversity_and_quality_of_protected_area_governance_2/.

Kate O'Neill, Erika Weinthal, Kimberly R. Marion Suiseeya, Steven Bernstein, Avery Cohn, Michael W. Stone and Benjamin Cashore. 2013. Methods and Global Environmental Governance. Annu. Rev. Environ. Resour. 38:11.1–11.31.

Mercy Corps. 2011. Guide to Good Governance Programming. http://www.mercycorps.org/sites/default/files/mcgoodgovernanceguide.pdf.

UNDP. 2013. User's Guide on Assessing Water Governance.
http://www.undp.org/content/dam/undp/library/Democratic%20Governance/OGC/Users%20Guide%20on%20Assessing%20Water%20Governance1.pdf.

UNDP. 2012. Institutional and Context Analysis Guidance Note. http://www.undp.org/content/dam/undp/library/Democratic%20Governance/OGC/UNDP_Institutional%20and%20Context%20Analysis.pdf.

UNDP. 2009a. A Users' Guide to Measuring Local Governance. http://www.undp.org/content/dam/aplaws/publication/en/publications/democratic-governance/dg-publications-for-website/a-users-guide-to-measuring-local-governance-/LG%20Guide.pdf.

UNDP. 2009b. Planning a Governance Assessment: A Guide to Approaches, Costs, and Benefits. http://www.undp.org/content/rbas/en/home/presscenter/events/2012/November/regional_governance_week/_jcr_content/centerparsys/download_8/file.res/Planning%20a%20governance%20assessment.pdf.

UNDP. 2009c. Practice Note on Supporting Country-Led Democratic Governance Assessments. http://www.undp.org/content/dam/aplaws/publication/en/publications/democratic-governance/oslo-governance-center/governance-assessments/supporting-country-led-democratic-governance-assessment-a-undp-practice-note/UNDP_Oslo_Eng_1.pdf.

UNDP. 2007. Governance Indicators: A Users' Guide. Second Edition. http://gaportal.org/sites/default/files/undp_users_guide_online_version.pdf.

USAID. 2013. Guidelines for Assessing the Strengths and Weaknesses of Natural Resource Governance in Landscapes and Seascapes. Washington: USAID. http://frameweb.org/CommunityBrowser.aspx?id=10650&lang=en-US.

Chapter References

Chapters 1 & 2: Objective Setting and Work Plan Development

• Context Analysis
–– Method: Political Economy Analysis (PEA)
Political economy analysis is a qualitative method used to identify factors that may promote or hold back changes in forest governance. Interviews and triangulation form a key component of PEA, which requires strong country and sector knowledge, access to key stakeholders, and the ability to communicate with people in their native languages.
»» Tool: Political Economy Assessment
This is a tool for analyzing formal and informal institutions and identifying stakeholders' underlying interests and incentives.
o See: The World Bank. 2011. Political Economy Assessments at Sector and Project Levels. http://gsdrc.org/docs/open/PE1.pdf.
»» Tool: Force Field Analysis
Force field analysis helps identify the forces or factors that are likely to drive or hold back a desired change in forest governance.
o See: ODI. 2009a. Management Techniques: Force Field Analysis. http://www.odi.org.uk/publications/5218-force-field-analysis-decision-maker.
»» Tool: Drivers of Change Analysis
The Drivers of Change tool looks at how and why change occurs in specific contexts. It can be used to examine the institutions and structural features that drive or hinder change in forest governance.
o See: Debbie Warrener. 2004. The Drivers of Change Approach. (ODI Synthesis Paper 3.) http://www.odi.org.uk/sites/odi.org.uk/files/odi-assets/publications-opinion-files/3721.pdf.
»» Tool: Net-Map
Net-Map uses interviews to produce a diagram showing actors, how they are linked, what their influence is, and what their goals are.
o See: "How Net-Map Works" and the links on the web page. http://netmap.wordpress.com/about/.
»» See for general guidance: DFID. 2009. Political Economy Analysis How To Note. (A DFID practice paper.) http://www.odi.org.uk/sites/odi.org.uk/files/odi-assets/events-documents/3797.pdf.
»» See for general guidance: Daniel Harris and David Booth. 2013. Applied Political Economy Analysis: Five Practical Issues. (ODI Politics and Governance note.) http://www.odi.org.uk/publications/7196-applied-political-economy-analysis-five-practical-issues.
»» Reference: Verena Fritz, Kai Kaiser and Brian Levy. 2009. Problem-driven Governance and Political Economy Analysis: Good Practice Framework. http://siteresources.worldbank.org/EXTPUBLICSECTORANDGOVERNANCE/Resources/PGPE_book_8-25-09.pdf?resourceurlname=PGPE_book_8-25-09.pdf.
»» Reference: Daniel Harris. 2013. Applied Political Economy Analysis: A Problem-Driven Framework. (ODI Politics and Governance note.) http://www.odi.org.uk/publications/7380-applied-political-economy-analysis-problem-driven-framework.
»» Reference: European Commission. 2008. Analysing and Addressing Governance in Sector Operations. (Tools and Method Series, Reference Document Number 4.) http://ec.europa.eu/europeaid/infopoint/publications/europeaid/documents/149a_governance_layout_090306_en.pdf.
–– Method: Poverty and Social Impact Analysis (PSIA)
Poverty and Social Impact Analysis provides a way to analyze the social and economic impacts that policy reforms may have on forest governance stakeholders, particularly those who are poor and vulnerable. PSIA is a qualitative method that can provide an analysis of why changes should or should not be made.
»» Tool: Poverty Impact Assessment
This tool analyses the social impacts of an intervention, such as a Voluntary Partnership Agreement (VPA), with an eye to reducing risks and enhancing positive impacts. It combines a number of different methods, including qualitative and quantitative analyses.
o See: Forest Trends. 2012. Poverty Impact Assessment for Reducing Social Risks and Enhancing Pro-Poor Outcomes of Voluntary Partnership Agreements. (Forest Trends Information Brief No. 4.) http://www.forest-trends.org/publication_details.php?publicationID=3267.
»» Reference: The World Bank. 2003. A User's Guide to Poverty and Social Impact Analysis. http://siteresources.worldbank.org/INTPSIA/Resources/490023-1121114603600/12685_PSIAUsersGuide_Complete.pdf.
–– Method: Institutional and Context Analysis (ICA)
»» Reference: UNDP. 2012. Institutional and Context Analysis Guidance Note. http://www.undp.org/content/undp/en/home/librarypage/democratic-governance/oslo_governance_centre/Institutional_and_Context_Analysis_Guidance_Note/.
–– Method: Participatory Mapping
This method uses an open and participatory process to produce maps that visually represent local communities' knowledge, including information about forest resources and resource management practices.
»» Reference: U.S. NOAA's Coastal Services Center. 2009. Stakeholder Engagement Strategies for Participatory Mapping. http://csc.noaa.gov/digitalcoast/sites/default/files/files/1366314383/participatory_mapping.pdf.
»» Reference: IFAD. 2009. Good Practices in Participatory Mapping. http://www.ifad.org/pub/map/pm_web.pdf.
»» Reference: PPGIS.net. Open Forum on Participatory Geographic Information Systems and Technologies. http://ppgis.iapad.org/.
»» Reference: Rainforest Foundation. 2011. La Cartographie Participative: Guide pour la Production des Cartes avec les Communautés Forestières dans le Bassin du Congo (a participatory mapping guide for forest communities in the Congo Basin).
http://www.mappingforrights.org/files/Guide%20methodologique%20pour%20la%20cartographie%20participative%20final%20Low%20Res.pdf.
»» Reference: Mapping for Rights. Video Training for Participatory Mapping. http://www.mappingforrights.org/video-training.
• Preparation
–– Method: Timeline and work plan development
The work plan is a key tool to guide forest governance assessments from preparation through to report dissemination. It is often based on a logical framework.
»» See: The sample outline of a basic work plan in Annex IV of this guide.
»» See: The Nature Conservancy (TNC). Conservation Action Planning Handbook. "Step 8: Develop Work Plans" & "Step 9: Implement Work Plans" in TNC. 2007. Conservation Action Planning Handbook: Developing Strategies, Taking Action and Measuring Success at Any Scale. The Nature Conservancy, Arlington, VA. http://www.conservationgateway.org/Files/Pages/8-develop-workplans-basic.aspx & http://www.conservationgateway.org/Files/Pages/9-implement-workplans-bas.aspx.
»» See: An example of a relatively detailed work plan with timeline for the assessment of the institutional structure of the UN: http://www.un.org/esa/coordination/pdf/swe_review-workplan.pdf.
–– Method: Development of a background document
Developing a background document is a useful method for objectively and concisely presenting sector background and the current state of forest governance. Contextual analysis methods and a literature review may be used to inform the background document.
»» See for an example: Kishor & Rosenbaum 2012 (general reference list, beginning of this annex): "Prepare Background Materials," pp. 11–12 & Appendix VII: "Sample Outline for Forest Governance Background Paper," pp. 109–10. http://www.profor.info/sites/profor.info/files/docs/AssessingMonitoringForestGovernance-guide.pdf.
–– Method: Project Budgeting
Creating and following a project budget and making the best use of available resources are key to project success. The following resources provide guidance on how to estimate and manage a project budget.
»» See: Sustainable Sanitation and Water Management (SSWM). "Budget Allocation and Resource Planning." (Web page.) http://www.sswm.info/category/planning-process-tools/implementation/implementation-support-tools/project-design/budget-al.
»» See for general guidance: John Cammack. 2013. Project Budgeting How to Guide. http://www.bond.org.uk/data/files/project_budgeting_how_to_guide.pdf.
»» See for general guidance: Mango. Undated. Guide to Financial Management for NGOs. (Web page.) http://www.mango.org.uk/Guide/GettingTheBasicsRight.
• Setting Objectives
–– Method: Logical Framework Approach
The logical framework approach is useful in early project planning. It helps project planners think logically about what they want the project to achieve.
http://www.hedon.info/ docs/logical_framework-CentreForInternationalDevelopmentAndTraining.pdf o See: SSWM. “Logical Framework Approach”. (web page.) http://www.sswm.info/ category/planning-process-tools/implementation/implementation-support-tools/ project-design/logical-f. o See for general guidance: BOND. 2003. Logical Framework Analysis (Guidance Notes No. 4). http://www.gdrc.org/ngo/logical-fa.pdf. o See for general guidance and an example: Government of the Republic of Serbia and EU Integration Office. 2011. Guide to the Logical Framework Approach: A key Tool for Project Cycle Management. (Second Edition.) Ch. 2, pp. 27–49; Template and Example, pp. 70–72. http://www.evropa.gov.rs/evropa/ ShowDocument.aspx?Type=Home&Id=525. o Reference: NORAD. 1999. Logical Framework Approach: Handbook for Objectives-oriented Planning. (Fourth Edition.) http://www.norad.no/en/ tools-and-publications/publications/publication?key=109408. • Stakeholders –– Method: Stakeholder Analysis Stakeholder analysis is a qualitative method best suited for the planning phase of a forest gov- ernance assessment. It is used to identify the stakeholders in forest governance and to charac- terize how these stakeholders interact with each other, the roles they play in forest governance, and the influence they have over programs, policies, and reforms in the forest sector. »» Tool: Stakeholder Influence/Interest (or Influence/Importance) Matrix Once relevant stakeholders have been identified, listed, and categorized, stakeholder influence/interest matrices can aid project planners in planning how to engage different types of stakeholders. o See: ODI. 2009b. Planning Tools: Stakeholder Analysis. http://www.odi.org.uk/ publications/5257-stakeholder-analysis. o See: UNDP. Handbook on Planning, Monitoring and Evaluating for Development Results. “2.2. Stakeholder Engagement” (web page). http://web.undp.org/evalua- tion/handbook/ch2-2.html. o See: The World Bank. Undated. “Stakeholder Analysis Guidance Note.”http:// www1.worldbank.org/publicsector/politicaleconomy/November3Seminar/ Stakehlder%20Readings/CPHP%20Stakeholder%20Analysis%20Note.pdf. Annex II: Methods, Tools, Guidance, and References 175 »» Tool: Four Rs The Four Rs is a tool to examine stakeholder roles that is best used as a participatory process aided by a neutral facilitator. It can be used after conducting context analysis to clarify, negotiate, and strengthen the roles and responsibilities of stakeholder groups and the relationships among them. o See: IIED. 2005a. The Four Rs. http://www.policy-powertools.org/Tools/ Understanding/docs/four_Rs_tool_english.pdf. »» Tool: Conflict Assessment (also called Conflictology) o See for general guidance: DFID. 2002. Conducting Conflict Assessment: Guidance Notes. http://www.conflictsensitivity.org/sites/default/files/Conducting_ Conflict_Assessment_Guidance.pdf. o See for general guidance: Conflict Sensitivity Consortium. 2012. How to Guide to Conflict Sensitivity. http://www.conflictsensitivity.org/sites/default/ files/1/6602_HowToGuide_CSF_WEB_3.pdf. o See for general reference: FAO. Conflict Management (web page with links to FAO publications on the topic). http://www.fao.org/forestry/conflict/56824/en/. »» See for general guidance: Robert Nash, Alan Hudson and Cecilia Luttrell. 2006. Mapping Political Context: A Toolkit for Civil Society Organisations. Ch. 8, “Stakeholder Analysis.” http://www.odi.org.uk/sites/odi.org.uk/files/odi-assets/publications-opinion- files/186.pdf. 
»» Reference: Catholic Agency for Overseas Development (CAFOD), Christian Aid and Trocaire. Undated. Monitoring Government Policies: A Toolkit for Civil Society Organisations in Africa. Ch. 3: "Identifying Policy Stakeholders," pp. 29–33, 36. http://www.commdev.org/userfiles/files/1818_file_monitoringgovernmentpolicies.pdf.
»» Reference: IIED. 2005c. Stakeholder Power Analysis. http://www.policy-powertools.org/Tools/Understanding/docs/stakeholder_power_tool_english.pdf.
»» Reference: Michael Richards, Jonathan Davies and Gil Yaron. 2003. "'Economic Stakeholder Analysis' for Participatory Forest Management" (ODI Forestry Briefing Number 4). http://www.odi.org.uk/resources/docs/810.pdf.
–– Method: Stakeholder Engagement
Stakeholder engagement tools are used to enhance relationships and trust among those conducting the assessment and various stakeholders. Stakeholder engagement should follow stakeholder analysis and context analysis and forms the core of participatory approaches to forest governance assessments.
»» Tool: Multi-stakeholder Workshops
Multi-stakeholder workshops bring together a range of stakeholders to perform such tasks as refining indicators, validating data, and validating assessment findings. A neutral facilitator is often recruited to lead these workshops, which may include breakout sessions.
o See: PARIS21. 2003. PARIS21 Workshop Guide: A Reference Manual for Running a Stakeholders Workshop. http://paris21.org/sites/default/files/18.pdf.
o See for an example: Kishor & Rosenbaum 2012 (general reference list, beginning of this annex): "Hold a Stakeholder Workshop," pp. 13–15.
»» Tool: Focus group discussions
Focus groups normally bring together stakeholders from one group to gather their point of view on an issue or validate findings. A facilitator is often recruited to lead these discussions, which are based on predefined questions or goals.
o See for general guidance: Eliot and Associates. 2005. "Guidelines for Conducting a Focus Group." (Posted on the web site of the Office of Assessment, Duke University.) http://assessment.aas.duke.edu/documents/How_to_Conduct_a_Focus_Group.pdf.
o See for general guidance: Richard A. Krueger. 2002. Designing and Conducting Focus Group Interviews. http://www.eiu.edu/~ihec/Krueger-FocusGroupInterviews.pdf.
o See: CARE. 2009. Climate Vulnerability and Capacity Analysis Handbook. "Field Guide 1: Facilitation Tips," pp. 30–32. http://www.careclimatechange.org/files/adaptation/CARE_CVCAHandbook.pdf.
o See: J. Hinton and M. R. Hollestelle. 2012. (general reference list, beginning of this annex). "Tool #4c: Focus Groups and Exercises," pp. 55–62.
o Reference: Alan Bryman. 2012. (general reference list, beginning of this annex). "Chapter 21: Focus Groups," pp. 500–19.
o Reference: Earl Babbie. 2010. (general reference list, beginning of this annex). "Focus Groups," pp. 322–23.
»» See for general guidance: Hilary Coulby. 2009. A Guide to Multi-stakeholder Work: Lessons from the Water Dialogues. "Section 4: Bringing Multiple Stakeholders Together," pp. 29–37; "Section 7: Building and Sustaining Multi-stakeholder Processes," pp. 51–65; "Section 8: Organizing and Conducting Multi-stakeholder Meetings," pp. 66–71. http://www.waterdialogues.org/downloads/new/Guide-to-Multistakeholder.pdf.
»» See: International Association for Public Participation (IAP2). 2007. Spectrum of Public Engagement.
http://www.iap2.org/associations/4748/files/IAP2%20Spectrum_vertical.pdf.
»» See: FAO. Enhanced Stakeholder Participation in National Forest Programmes (web page). http://www.fao.org/forestry/participatory/63974/en/.
• Scope
–– See: "Deciding what aspects of governance to assess," under resources for Chapters 3 & 4.

Chapters 3 & 4: Data Collection, Planning, and Implementation

• Deciding what aspects of governance to assess.
These references encompass two categories. The first few are general works on data concepts in social research. The rest contain methods or tools that use frameworks or indicators relevant to forest governance.
–– Reference: Alan Bryman. 2012. (general reference list, beginning of this annex). "Concepts and their measurement," pp. 163–67.
–– Reference: Earl Babbie. 2010. (general reference list, beginning of this annex). "Conceptualization, Operationalization, and Measurement," ch. 5, pp. 124–59.
–– Reference: PROFOR & FAO. 2011. (general reference list, beginning of this annex).
»» See for an example: Kishor and Rosenbaum. 2012. (general reference list, beginning of this annex).
–– Reference: Crystal Davis et al. 2013. (general reference list, beginning of this annex).
–– Reference: World Bank. 2009. (general reference list, beginning of this annex).
–– Reference: Montréal Process. 2009b. Technical Notes on Implementation of the Montréal Process Criteria and Indicators: Criteria 1–7. Third Edition. Criterion 7, pp. 67–77. http://www.montrealprocess.org/documents/publications/techreports/2009p_2.pdf.
–– Reference: International Tropical Timber Organization (ITTO). 1998. Criteria and Indicators for the Sustainable Management of Tropical Forests. Criterion 1: Enabling Conditions for Sustainable Forest Management & Criterion 7: Economic, Social, and Cultural Aspects. http://www.itto.int/policypapers_guidelines/.
–– Reference: Global Witness. 2010. Making the Forestry Sector Transparent: Transparency Indicators 2010. http://www.foresttransparency.info/cms/file/387.
–– Reference: IIED. 2005b. (general reference list, beginning of this annex).
–– Reference: FAO. 2013. (general reference list, beginning of this annex).
–– Reference: Institute for Global Environmental Strategies (IGES). 2013. Quality-of-Governance Standards for Carbon Emissions Trading: Developing REDD+ Governance through a Multi-stage, Multi-level and Multi-stakeholder Approach. http://pub.iges.or.jp/modules/envirolib/upload/4658/attach/Discussion_paper_Final_20130617_FLC.pdf.
–– Reference: Transparency International. 2011. Analyzing Corruption in the Forest Sector. http://www.transparency.org/whatwedo/pub/analysing_corruption_in_the_forestry_sector_a_manual.
–– Reference: Mercy Corps. 2011. (general reference list, beginning of this annex).
–– Reference: USAID. 2013. (general reference list, beginning of this annex).
• Tips on using indicators
–– See: UNDP. 2009b. (general reference list, beginning of this annex). "Section 7: Indicators—Existing vs. New Indicators," pp. 23–27.
–– Reference: UNDP. 2007. (general reference list, beginning of this annex).
• Identifying potential sources of information (existing and new)
–– See: CARE. 2005. Tips for Collecting, Reviewing, and Analyzing Secondary Data. http://pqdl.care.org/Practice/DME%20-%20Tips%20for%20Collecting,%20Reviewing%20and%20Analyzing%20Secondary%20Data.pdf.
• Selecting from among the possible data collection methods
–– See: UNDP. 2009b. (general reference list, beginning of this annex). "Section 5: Types of Data and Data Collection Methods," pp. 16–18.
• Deciding who to ask (sampling issues)
–– See: UNDP. 2009b. (general reference list, beginning of this annex). "Section 8: Sampling—The Basics," pp. 28–29.
–– See: Catholic Agency for Overseas Development (CAFOD), Christian Aid and Trocaire. Undated. Monitoring Government Policies: A Toolkit for Civil Society Organisations in Africa. "Unit 6.2: Gathering Evidence on Policy Implementation," pp. 74–75. http://www.commdev.org/userfiles/files/1818_file_monitoringgovernmentpolicies.pdf.
–– See: Arild Angelsen, et al. 2011. (general reference list, beginning of this annex). "Chapter 4: Sampling: Who, How, and How Many?" pp. 51–70.
–– See: Giuseppe Iarossi. 2006. (general reference list, beginning of this annex). "Chapter 4: A Practical Approach to Sampling," pp. 95–146.
–– Reference: Alan Bryman. 2012. (general reference list, beginning of this annex). "Chapter 8: Sampling," pp. 183–206.
–– Reference: Earl Babbie. 2010. (general reference list, beginning of this annex). "Chapter 7: The Logic of Sampling," pp. 187–228.
• Designing/applying tools for each method
–– Secondary data collection (Desk review)
»» Tool: Literature Review
Literature reviews may be conducted during the preparation and data collection phases of a governance assessment. The purpose of this qualitative tool is to review the existing research and literature on the topic(s) of study, including indicators.
o See: Hinton and Hollestelle. 2012. (general reference list, beginning of this annex). "Tool #1b: Conducting a Literature Review," pp. 23–24. http://www.asm-pace.org/projects/methodological-toolkit.html.
o See: CARE. 2005. Tips for Collecting, Reviewing, and Analyzing Secondary Data. http://pqdl.care.org/Practice/DME%20-%20Tips%20for%20Collecting,%20Reviewing%20and%20Analyzing%20Secondary%20Data.pdf.
o See: Catholic Agency for Overseas Development (CAFOD), Christian Aid and Trocaire. Undated. Monitoring Government Policies: A Toolkit for Civil Society Organisations in Africa. "Unit 2.3: How you can access policy information" & "Unit 2.4: Collecting policy documents," pp. 22–25. http://www.commdev.org/userfiles/files/1818_file_monitoringgovernmentpolicies.pdf.
o Reference: Alan Bryman. 2012. (general reference list, beginning of this annex). "Chapter 5: Getting Started: Reviewing the Literature," pp. 97–127.
o Reference: Cornell University Library Guide. "Critically Analyzing Data Sources." (A web page on qualitative selection of sources with reliable content.) http://guides.library.cornell.edu/criticallyanalyzing.
o Reference: Sarah Boslaugh. 2007. Secondary Data Sources for Public Health: A Practical Guide. Cambridge U. Press. Excerpt viewable on web: http://analysis3.com/An-Introduction-to-Secondary-Data-Analysis-download-w173.pdf.
o Reference: British Library for Development Studies. "Resources for Developing Country Researchers." (A web site with many pages of guidance on research.) http://blds.ids.ac.uk/about-us/resources-for-research/resources-for-developing-country-researchers.
»» Tool: Content analysis
Content analysis is a quantitative tool used to analyze the themes and terms found in chosen documents and media.
o See: Alan Bryman. 2012. (general reference list, beginning of this annex). "Chapter 13: Content Analysis," pp. 288–308.
o See: Earl Babbie. 2010.
(general reference list, beginning of this annex). "Content Analysis," pp. 333–43.
• Primary data collection:
–– Method: Surveys
Surveys are employed to collect quantitative or quantifiable data that can be used in statistical analysis. Using this method requires posing questions that can be coded for a significant number of respondents (individuals or households).
»» Tool: Interviewer-administered Questionnaire (Structured Interview)
Data collectors use this tool to gather data in person. They pose the same closed questions directly to individual respondents and note responses in a uniform field form.
o See: Giuseppe Iarossi. 2006. (general reference list, beginning of this annex). "Chapter 3: How Easy It Is to Ask the Wrong Question," pp. 27–94; "Chapter 5: Respondent's Psychology and Survey Participation," pp. 147–86.
o See for general guidance: FAO. Marketing Research and Information Systems. "Chapter 4: Questionnaire Design" (web-based publication). http://www.fao.org/docrep/w3241e/w3241e05.htm.
o See: Alan Bryman. 2012. (general reference list, beginning of this annex). "Chapter 9: Structured Interviewing," pp. 208–30.
o See: Earl Babbie. 2010. (general reference list, beginning of this annex). "Interview Surveys," pp. 274–78.
»» Tool: Self-completion Questionnaire
A self-completion questionnaire can be distributed to respondents via a variety of means. Data quality depends on respondents correctly following instructions and filling out the forms.
o See: Alan Bryman. 2012. (general reference list, beginning of this annex). "Chapter 10: Self-completion Questionnaires," pp. 231–44.
o See: Earl Babbie. 2010. (general reference list, beginning of this annex). "Self-Administered Questionnaires," pp. 267–73.
»» Tool: Mini Survey
o See: USAID. 2006. Conducting Mini Surveys in Developing Countries (revised edition). http://pdf.usaid.gov/pdf_docs/pnadg566.pdf.
»» Tool: Household Surveys
o See: Arild Angelsen, et al. 2011. (general reference list, beginning of this annex). "Chapter 7: Designing the Household Questionnaire," pp. 107–26.
»» Tool: Citizen Report Card
o See: Public Affairs Centre (PAC) & Asian Development Bank (ADB). Undated. Citizen Report Card Learning Toolkit. (Web-based training course.) http://www.citizenreportcard.com/.
»» For general guidance on designing questions and using surveys:
o Reference: Alan Bryman. 2012. (general reference list, beginning of this annex). "Chapter 11: Asking Questions," pp. 245–66.
o Reference: Earl Babbie. 2010. (general reference list, beginning of this annex). "Chapter 6: Indexes, Scales, and Typologies," pp. 160–86 & "Chapter 9: Survey Research," pp. 253–94.
o Reference: Giuseppe Iarossi. 2006. (general reference list, beginning of this annex).
o Reference: Work Group for Community Health and Development, University of Kansas. 2013. The Community Toolbox. "Section 13: Conducting Surveys" (web-based publication). http://ctb.ku.edu/en/tablecontents/sub_section_main_1048.aspx.
o Reference: UNDP. 2009b. (general reference list, beginning of this annex). pp. 14–15, 17–18.
o Reference: Jens Friis Lund et al. 2008. (general reference list, beginning of this annex).
–– Method: Use of experts (expert analysis)
Experts are people with specialized knowledge of governance. They may be commissioned to prepare reports, score indicators, or otherwise provide information on governance.
»» Tool: Expert Panels
The World Bank Land Governance Assessment Framework uses small expert panels to score components of a governance framework.
o See: Klaus Deininger et al. 2012. (general reference list, beginning of this annex). "Expert Panels," pp. 47–48.
»» Tool: Delphi technique
The Delphi technique involves submitting the same questions to several individual experts, giving summaries of the collection of answers back to the experts, and allowing each to revise his or her answers. This repeats until the experts reach consensus or the answers stop changing.
o See: Better Evaluation. 2014. "Delphi Study" (web site of resources for evaluation tools). http://betterevaluation.org/evaluation-options/delphitechnique.
o Reference: RAND Corporation. Undated. "Delphi Method" (website from the group that invented the technique in the 1950s; includes a brief description of the tool; some of the linked reports on the page have insights into the tool's use). http://www.rand.org/topics/delphi-method.html.
–– Method: Key informant interviews
These are interviews with people who, by training or experience, have special knowledge of the inputs, processes, or effects of governance. They may be officials, advocates, or just citizens who use forest resources. They tend to be more loosely structured than survey interviews, but the sources cited above about survey questions and interviews contain advice that often also applies to key informant questions and interviews.
»» Tool: Semi-structured interview
The semi-structured interview is primarily used to gain qualitative data about interviewees' opinions or experiences. It provides the data collector with flexibility to pose open-ended questions.
o See for general guidance: Tools4dev. 2013. "How To Do Great Semi-structured Interviews" (web page). http://www.tools4dev.org/resources/how-to-do-great-semi-structured-interviews/.
o See: SSWM. Undated. "Semi-structured Interviews" (web page). http://www.sswm.info/category/planning-process-tools/decision-making/decision-making-tools/gathering-ideas/semi-structure.
»» Tool: Guide to conducting the interview (protocol)
Protocols are used to ensure the validity and reliability of data collected in interviews. They provide data collectors with a general guide to how to interact with interviewees, obtain prior consent, and conduct the interview, as well as the questions to ask.
o See for general guidance: Stacy A. Jacob and S. Paige Ferguson. 2012. "Writing Interview Protocols and Conducting Interviews: Tips for Students New to the Field of Qualitative Research" in The Qualitative Report, 17: T&L Art. 6, 1–10. http://files.eric.ed.gov/fulltext/EJ990034.pdf.
»» See: Giuseppe Iarossi. 2006. (general reference list, beginning of this annex). "Conducting the Interview," pp. 178–85.
»» See: J. Hinton and M.R. Hollestelle. 2012. (general reference list, beginning of this annex). "Tool #4b: Preparing for Interviews and Interview Guides," p. 53.
»» See: FAO. Undated. Marketing Research and Information Systems. Marketing and Agribusiness Texts-4. "Chapter 5: Personal Interviews" (web document). http://www.fao.org/docrep/w3241e/w3241e06.htm.
–– Method: Workshops
»» See: Stakeholder Workshops, this annex, above, under resources for stakeholder engagement, Chapters 1 & 2.
–– Method: Focus Groups
»» See: Focus Groups, this annex, above, under resources for stakeholder engagement, Chapters 1 & 2.
• Choosing and training data collection staff
–– See: Arild Angelsen, et al.
2011. (general reference list, beginning of this annex). "Chapter 9: Preparing for the Field: Managing and Enjoying Fieldwork," pp. 147–62 & "Chapter 10: Hiring, Training, and Managing a Field Team," pp. 163–74.
–– See: Hilary Coulby. 2009. A Guide to Multi-stakeholder Work. "Choosing the Right Research Team," pp. 79–82. http://www.waterdialogues.org/downloads/new/Guide-to-Multistakeholder.pdf.
–– See: Jens Friis Lund et al. 2008. (general reference list, beginning of this annex). pp. 17–21.
–– See for interviewer training: (all from the general reference list, beginning of this annex) Giuseppe Iarossi. 2006. "Training," pp. 159–64. Also: Earl Babbie. 2010. "Coordination and Control," pp. 278–79; Alan Bryman. 2012. "Training and Supervision," pp. 225–26.
• Data Collection
–– Method: Interviewing
»» See: Resources cited above on primary data collection methods, especially under surveys and key informant interviews.
–– Method: Coding
»» Reference: Earl Babbie. 2010. (general reference list, beginning of this annex). "Coding in Content Analysis," pp. 338–39 and "Coding," pp. 400–04.
»» Reference: Alan Bryman. 2012. (general reference list, beginning of this annex). "Open or Closed Questions?" pp. 246–52; "Basic Operations in Qualitative Data Analysis," pp. 575–78.
–– Method: Use of ICT
»» Reference: World Bank. 2013. ICT for Data Collection and Monitoring & Evaluation: Opportunities and Guidance on Mobile Applications for Forest and Agricultural Sectors.
»» Reference: NetHope Solutions Center. ICT tools for international development work. http://solutionscenter.nethope.org.
• Data Management & Quality Assurance (Editing, Cleaning, Triangulation, etc.)
–– See: Giuseppe Iarossi. 2006. (general reference list, beginning of this annex). Chapter 6: "Why Data Management is Important," pp. 187–217.
–– See: Arild Angelsen, et al. 2011. (general reference list, beginning of this annex). "Chapter 11: Getting Quality Data," pp. 175–89 and "Chapter 12: Data Entry and Quality Checking," pp. 191–207.
–– See: Alan Bryman. 2012. (general reference list, beginning of this annex). "Error in Survey Research," pp. 205–06; "Missing Data," p. 333; "Reliability and Validity in Qualitative Research," pp. 389–98.
• Verifying data
–– See: J. Hinton and M.R. Hollestelle. 2012. (general reference list, beginning of this annex). "Tool #6: Reporting Back and Stakeholders' Recommendations," p. 62.
–– Reference: Jens Friis Lund et al. 2008. (general reference list, beginning of this annex).
• Ethical standards
–– See: Arild Angelsen, et al. 2011. (general reference list, beginning of this annex). "The Challenges of Field Research," pp. 28–31.
–– See: Alan Bryman. 2012. (general reference list, beginning of this annex). "Chapter 6: Ethics and Politics in Social Research," pp. 130–55.
–– Reference: U.S. Department of Health and Human Services. 1979. The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research. (Although this report was written for biomedical applications, the basic principles in Part B have wider application.) http://www.hhs.gov/ohrp/humansubjects/guidance/belmont.html.
–– Reference: Kimberley A. Barchard. 2003. Ethics in On-Line Data Collection. http://faculty.unlv.edu/barchard/onlinedatacollection/ethics_in_online_data_collection_Barchard.pdf.
–– Security Risks: J. Hinton and M.R. Hollestelle. 2012.
(general reference list, beginning of this annex). "Tool #1d: Assessing and Preparing for Security Risks," p. 28; Arild Angelsen, et al. 2011. (general reference list, beginning of this annex). "Personal Safety," pp. 158–61.

Chapter 5: Analysis

• Data processing
–– Method: Data entry
»» Tool: Open Foris Collect
o See: FAO. 2014. Forest Monitoring and Assessment Software Tools (web page). http://www.fao.org/forestry/fma/openforis/en. (Wiki) http://km.fao.org/OFwiki/index.php/Main_Page.
»» Tool: SPSS
o Reference: Alan Bryman. 2012. (general reference list, beginning of this annex). "Chapter 16: Using IBM SPSS for Windows," pp. 353–75.
»» See also: "Use of ICT" under "Data Collection" tools.
»» Reference: Earl Babbie. 2010. (general reference list, beginning of this annex). "Computer Programs for Qualitative Data," pp. 406–13.
»» Reference: Giuseppe Iarossi. 2006. (general reference list, beginning of this annex). "Electronic Data Entry," pp. 191–95.
• Data analysis
–– Method: Simple techniques (averages, tables, and more)
»» See: Catholic Agency for Overseas Development (CAFOD), Christian Aid and Trocaire. Undated. Monitoring Government Policies: A Toolkit for Civil Society Organisations in Africa. "Analysing Survey Data and Other Coded Information," pp. 77–83. http://www.commdev.org/userfiles/files/1818_file_monitoringgovernmentpolicies.pdf.
–– Method: Qualitative and narrative analysis
»» Reference: Alan Bryman. 2012. (general reference list, beginning of this annex). "Chapter 24: Qualitative Data Analysis," pp. 564–89.
»» Reference: Earl Babbie. 2010. (general reference list, beginning of this annex). "Chapter 13: Qualitative Data Analysis," pp. 393–420.
»» Tool: Regional or international comparisons
o Example: Sam Lawson and Larry MacFaul. 2010. Illegal Logging and Associated Trade: Indicators of the Global Response. London: Chatham House. www.illegal-logging.info/uploads/CHillegalloggingpaperwebready1.pdf.
–– Method: Statistical analysis
(general reference list, beginning of annex) “Reaching the Audience,” pp. 230–32. –– Reference: IAP2. 2006. Public Dissemination Toolbox. http://www.dvrpc.org/GetInvolved/ PublicParticipation/pdf/IAP2_public_participationToolbox.pdf. • Visualizing data –– See: Catholic Agency for Overseas Development (CAFOD), Christian Aid and Trocaire. Undated. Monitoring Government Policies: A Toolkit for Civil Society Organisations in Africa. “Tool 22: Creating Tables or Charts to Summarise Data,” pp. 79–80. http://www.commdev. org/userfiles/files/1818_file_monitoringgovernmentpolicies.pdf. –– See: Arild Angelsen, et al. 2011. (general reference list, beginning of the annex) “Results,” pp. 237–40. –– Reference: John Emerson. 2008. Visualizing Information for Advocacy: An Introduction to Information Design. Open Society Foundation. http://www.opensocietyfoundations.org/ reports/visualizing-information-advocacy-introduction-information-design. –– Reference: The books of Edward Tufte. http://www.edwardtufte.com/tufte/index. Annex II: Methods, Tools, Guidance, and References 185 • Institutional and follow-up actions –– See: Hilary Coulby. 2009. A Guide to Multistakeholder Work. “Moving from Research to Policy,” pp. 103–05. http://www.waterdialogues.org/downloads/new/Guide-to- Multistakeholder.pdf. –– See: FAO. 2012. (general reference list, beginning of the annex) “Institutional Embedding,” pp. 26–27. –– Reference: The Access Initiative. 2010. Advocacy Toolkit. http://www.accessinitiative.org/ node/625. Chapter 7: Learning and Improvement • Post-project evaluation –– Method: Self-evaluation »» Tool: Appreciative Inquiry Appreciative inquiry is an evaluation technique that focuses on the positive aspects of a project or program and encourages their expansion or continuance. For resources, see Appreciative Inquiry Commons (web site). http://appreciativeinquiry.case.edu/. »» Tool: Retrospectives This is an approach for project reviews; it was developed for computer software writing projects but has wider application. See: Kerth, Norman L. 2001. Project Retrospectives: A Handbook for Team Reviews. www.retrospectives.com. –– Reference: This is a guide to project evaluation as practiced by a particular foundation. Many of the techniques are elaborate, and the guide covers many of the same topics (planning, data collection, analysis) as this forest governance assessment guide does. Much of it is not applicable to quick team self-evaluations, but it has some useful ideas and further referenc- es: W.K. Kellogg Foundation. 1998. W.K. Kellogg Foundation Evaluation Handbook. http:// www.wkkf.org/knowledge-center/resources/2010/w-k-kellogg-foundation-evaluation-hand- book.aspx. 186 ASSESSING FOREST GOVERNANCE: A PRACTICAL GUIDE TO DATA COLLECTION, ANALYSIS, AND USE ANNEX III: CREATING BUDGETS Assessments vary so much that it is impossible to provide a sample budget that will serve all pur- poses. This annex presents steps to creating a budget, a checklist of costs to consider in a budget, and a list of references with further information. Budgeting Steps The following steps are based on Cammack (2013): • Recall your objectives. You should have already set your objectives by working through the steps in Chapter 1. Always keep in mind why you are doing the assessment and what you want to achieve. • Gauge your resources (see Box 5). It’s especially important to be aware of the limits of your funding. If your dreams exceed your funding, you will either have to scale back your aspirations or find additional funds. 
You should work out your funding challenges before you commit to spending money.
• Gather information.
–– If someone has done a similar assessment, see if you can get a copy of that assessment's budget.
–– If you know you are going to have particular costs (e.g., travel, meeting room rental fees, mobile phone fees, hiring of survey-takers), see if you can find information about the usual rates for these expenses.
–– Be aware of monetary costs. If you know that you will be changing currency, get the exchange rate. If you expect costs to change due to inflation, hedge for that.
• Consider income.
–– Think about grants and donations, and list only those that you are sure about.
–– Consider support from budgets of other programs and organizations. For example, a government agency or civil society organization might second a professional to work on the assessment. Another organization might lend an administrative assistant to coordinate travel logistics.
–– Look for in-kind support. The assessment may be able to borrow vehicles or have access to office space and computers. A stakeholder may be able to provide meeting space.
• Consider costs. The next section of this annex provides a checklist of common costs.
• Construct a budget.
–– You should typically break up expenses into time periods (i.e., time periods that sync with the budgeting practices of your host organization or your funder). For example, if your host organization tracks its finances quarterly, you should project your spending by quarter. If the funder wants to see your projected costs in each of the fiscal years that the donor uses for its accounting, you should project your costs by fiscal year. If you have multiple conflicting demands for formats, you may have to make multiple versions of the budget—but the overall income and expenditures for the assessment should be the same.
–– Some donors are interested in the cost per activity or output, for example, how much it will cost to hold each regional assessment workshop or to conduct each regional survey. In that case, you may want to prepare an "input" version of your budget to gauge your costs by category of expense and an "output" version to show costs per output. (A short sketch after this list illustrates these two views.)
–– Some donors may be interested in funding only a part of the assessment and want to see (or even approve) a budget for their part. If you have multiple donors, you may need to make separate sub-budgets for, say, data gathering funded by Donor A and dissemination funded by Donor B.
• Seek approval.
–– You may need budget approval from the organizations that are providing funds or other support for the assessment.
–– You may need budget approval from the organization that will be overseeing spending and taking responsibility for ensuring that the funds are properly spent.
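As promised above, here is a minimal sketch in Python of the "input" and "output" budget views. The line items, categories, outputs, and amounts are invented for illustration; the point is only that a single list of line items can be totaled by expense category, by activity or output, or by period.

# Invented line items, for illustration only.
line_items = [
    {"quarter": "Q1", "category": "Labor",    "output": "Background paper", "amount": 8000},
    {"quarter": "Q1", "category": "Travel",   "output": "Scoring workshop", "amount": 3000},
    {"quarter": "Q2", "category": "Meetings", "output": "Scoring workshop", "amount": 5000},
    {"quarter": "Q3", "category": "Labor",    "output": "Final report",     "amount": 6000},
]

def totals(items, key):
    # Sum the amounts, grouped by the chosen key.
    grouped = {}
    for item in items:
        grouped[item[key]] = grouped.get(item[key], 0) + item["amount"]
    return grouped

print(totals(line_items, "category"))  # "input" view: cost per expense category
print(totals(line_items, "output"))    # "output" view: cost per activity or output
print(totals(line_items, "quarter"))   # spending by period, to match funder reporting

Because all three views are produced from the same line items, the overall total is identical in each, which is exactly the consistency the "Construct a budget" step calls for.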
Checklist of Common Costs

• Labor and participant costs
–– Salaries for managers, data gatherers, support staff, and assistants.
»» Benefits such as pension contributions, continuing education support, health insurance, and holiday pay.
–– Consultant fees.
–– Honoraria for experts and other participants.
• Office expenses
–– Rent (or, if the office is owned, the costs of taxes, depreciation, and so forth).
–– Furniture (rental costs or depreciation of owned furniture).
–– Telecommunication, postage, and Internet charges.
–– Office insurance.
–– Photocopying.
–– Utilities (e.g., electricity, water).
–– Office equipment (e.g., phones, computers, routers, printers).
–– Computer software.
–– Office supplies (e.g., stationery, printer ink, toner).
–– Supplies for office kitchens and restrooms.
–– Books and other publications.
–– Legal, bookkeeping, and other professional services.
–– Housekeeping, maintenance, and security services.
–– Government fees and licenses.
–– Bank fees.
Note that if the office is part of a larger organization, that organization may charge "overhead" to cover many of the services and supplies listed above.
• Travel costs
–– Vehicle rentals, insurance, and fuel.
–– Local transportation (e.g., taxis, buses).
–– Tickets (e.g., bus, rail, airline, ferry).
–– Travel agency fees.
–– Lodging and meals. Governments and international development partners publish tables of standard per diem rates for lodging, meals, and incidental expenses.
–– Shipment of equipment, excess luggage, and so forth.
–– Travel health care and insurance costs.
–– Travel document fees.
• Field equipment
–– Voice recorders.
–– Laptops, tablets, and clipboards for data entry.
• Meetings
–– Rental space.
–– Interpreters.
–– Printing and distribution of background and follow-up materials.
–– Signage (e.g., banners, table tents, name tags).
–– Meals and coffee breaks.
–– Promotional materials (e.g., pads of paper, folders, pens, stickers).
–– Travel, honoraria, and other support for participants.
• Publication and dissemination
–– Consultant fees for editing, translation, and design.
–– Printing.
–– Mailing.
–– Website hosting.
–– Publicity and launch events.
–– Training.
Note that staff time (salaries) for publication and dissemination is usually included under the "Labor and participant costs" category at the top of this checklist. The rough rule of thumb is that report creation can take a third of total staff time in any project whose primary output is a report.
• Follow-up
–– Self-evaluation activities.
–– Collection of feedback from users.
–– Documentation of methods and lessons learned.
–– Archiving of data, methods, and lessons learned.
• Oversight
–– Audits or other oversight and reporting activities required by donors.
–– Oversight or reporting activities required by government.
–– Costs associated with transparency, such as maintaining a project website, responding to public requests for information, or publishing periodic reports or newsletters (if not included under staff and/or office and publication expenses).
• Contingency
–– You may wish to budget a reserve to cover unanticipated costs.

Publications on Budgeting

John Cammack. 2013. Project Budgeting How to Guide. London: BOND. http://www.bond.org.uk/data/files/project_budgeting_how_to_guide.pdf.

FAO. Forthcoming. "Annex 6: The Budget: An Example and Further Reading" in A Guide to Forest Policy Review. Rome: FAO.

Nalin Kishor and Kenneth Rosenbaum. 2012. "Appendix V: Sample Budget Worksheet" in Assessing and Monitoring Forest Governance: A User's Guide to a Diagnostic Tool. Washington DC: Program on Forests (PROFOR). http://www.profor.info/sites/profor.info/files/docs/AssessingMonitoringForestGovernance-guide.pdf.

Mango. Undated. "Guide to Financial Management for NGOs" website. http://www.mango.org.uk/Guide/GettingTheBasicsRight.

Sustainable Sanitation and Water Management (SSWM). "Budget Allocation and Resource Planning" web page. http://www.sswm.info/category/planning-process-tools/implementation/implementation-support-tools/project-design/budget-al.
ANNEX IV: SAMPLE WORK PLAN OUTLINE

• Introduction (or Executive Summary)
• Background and Context
–– Nature of the country's forests.
–– History of forest conservation and development in the country.
–– Current situation and broad concerns.
• Assessment Objectives
Goal:
–– To increase levels of transparency within the forest sector.
Outcomes:
–– To increase awareness of forest sector transparency.
–– To increase capacity to address issues of forest sector transparency.
Outputs:
–– Forest Sector Transparency Report Card widely disseminated amongst key stakeholders.
–– Increased number of organizations working on forest transparency.
• Assessment Timeline
–– Estimated to take approximately six months.
–– Assessment to be repeated on an annual basis.
• Scope of Assessment
–– Technical scope will focus on forest transparency, especially the legal element of transparency.
–– Geographical scope will be the national level, with a focus on national institutions.
–– Social scope will focus on national-level actors and be based primarily in the nation's capital.
• Assessment Methods
–– Develop a transparency "report card" using desk reviews of information plus key informant interviews.
–– Give small grants to other organizations to generate further quantitative and qualitative information to triangulate findings (as well as to build capacity).
• Groups Involved
–– Project finance will come through an international NGO and a bilateral donor agency.
–– Technical assistance will come from these same two sources.
–– Primary work and coordination will be the responsibility of the national NGO.
–– Supporting work and triangulation will come from government offices and small NGOs receiving grants administered by the national NGO.
–– National NGO will approach a variety of stakeholders to participate as key stakeholders.
• Budget
Total budget estimate: $100,000 per annum.
–– Cost of data collection plus dissemination = $50,000
»» Smaller portion to cover national NGO core staff team.
»» Larger portion for dissemination, including to run workshops and events to further raise awareness of forest sector transparency (using the report card as a tool).
–– Cost of small grants to other organizations = $50,000
• Assessment Outputs
–– A Forest Sector Transparency Report Card. This will be aimed at stakeholders generally and will also become part of the basis of the international NGO's global report on transparency.
–– A technical report explaining the report card, aimed at government officials, the press, international donors, and technically oriented stakeholders.
–– A press conference to release the report card.
–– Four regional dissemination workshops.

ANNEX V: CONCEPTS TO HELP IN DEVELOPING INDICATORS

This annex offers background and guidance on developing specific indicators, and is designed to be used after you have defined your scope in terms of components of governance. As Chapter 3 suggests, a good way to begin developing indicators is to look at what others have done. The concepts in this annex will help you understand some options that you have in developing indicators and may help you adapt existing indicators to your needs or create new indicators.

Qualitative vs. Quantitative Indicators

Indicators can be quantitative or qualitative. A quantitative indicator yields an amount—a number, often with associated units. For example, the area of forest lost to deforestation last year, the number of arrests for forest crime, or the percentage of rural households in a survey that say they have fair access to forest resources all could be quantitative indicators.

Qualitative indicators can take several forms. They can be true-or-false (Boolean): Does the country have a written national forest policy? They can be multiple choice: Do appointed forest officers hold the qualifications called for in their job description (a) always, (b) usually, (c) sometimes, or (d) never or almost never? They can use indexes or scales: On a scale from 1 (poor) to 5 (excellent), how well have forest officers been trained in crime prevention and detection? Or they can be open, allowing narrative responses: Describe the adequacy of training of forest officers.7

7. For more on scoring of subjective indicators, see UNDP (2007), Section III.
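If you are recording scored indicators electronically, the quantitative form and the four qualitative forms above can all be captured in one uniform record. The following is a minimal sketch in Python; the field names and example entries are invented for illustration and are not part of any published tool.

from dataclasses import dataclass
from typing import Union

@dataclass
class Indicator:
    text: str                                    # the question put to scorers
    kind: str                                    # "quantitative", "boolean", "choice", "scale", or "narrative"
    value: Union[float, bool, str, None] = None  # the recorded response

# One invented indicator of each form described above.
indicators = [
    Indicator("Area of forest lost to deforestation last year (ha)", "quantitative", 12500.0),
    Indicator("Does the country have a written national forest policy?", "boolean", True),
    Indicator("Do appointed forest officers hold the required qualifications?", "choice", "usually"),
    Indicator("Training of forest officers in crime prevention, 1 (poor) to 5 (excellent)", "scale", 3),
    Indicator("Describe the adequacy of training of forest officers.", "narrative",
              "Basic training exists, but refresher courses are rare."),
]

for ind in indicators:
    print(f"{ind.kind:>12}: {ind.value}")

Storing the form of each indicator alongside its value makes it easy to summarize the Boolean and scale indicators numerically while keeping narrative responses available for qualitative analysis.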
A quantitative indicator yields an amount—a number, often with associated units. For example, the area of forest lost to deforestation last year, the number of arrests for forest crime, or the percentage of rural households in a survey that say they have fair access to forest resources all could be quantitative indicators.

Qualitative indicators can take several forms. They can be true-or-false (Boolean): Does the country have a written national forest policy? They can be multiple choice: Do appointed forest officers hold the qualifications called for in their job description (a) always, (b) usually, (c) sometimes, or (d) never or almost never? They can use indexes or scales: On a scale from 1 (poor) to 5 (excellent), how well have forest officers been trained in crime prevention and detection? Or they can be open, allowing narrative responses: Describe the adequacy of training of forest officers. (For more on scoring of subjective indicators, see UNDP 2007, Section III.)

Broad vs. Narrow Indicators

From the above you can see that, like criteria, indicators can be broad or narrow. For example, the Montréal Process (2009a) has a single descriptive indicator for forest law enforcement: “Enforcement of Laws related to Forests” (Indicator 7.3.b). Its equivalent in the PROFOR-FAO Framework, “Forest Law Enforcement,” is a component (Component 3.2) with eight separate subcomponents under it, each of which could give rise to one or more fairly narrow indicators.

As with criteria, using several specific indicators instead of a single general one gives you a more organized and replicable assessment. However, it may also lead you to pay too much attention to specifics while missing some element important to the larger picture, and it may add to the cost of the assessment. You will have to keep this in mind and strike a balance between detail and organization on the one hand and flexibility and cost on the other.

Single vs. Multiple Values

An indicator’s score may be a single amount, but indicator scores can also be sets of values. For example, say you were measuring forest incomes. Rather than measuring the average income of a forest-dependent community and getting a single number, you could divide the population of the community into five groups depending on income: the lowest fifth, the second-lowest fifth, the middle fifth, and so on. Get the median income for each fifth. This set of five numbers would tell you more about income distribution and poverty than the single average. Get this information for 10 representative communities and you have 50 numbers allowing you to compare distribution of income in these communities. Track one community over time and you have another set of numbers that might tell you whether incomes are rising or falling and whether income distribution has become more or less skewed.

Qualitative indicators can also have sets of values. For example, you could score the trust that rural people place in the honesty of field officers at three agencies as low, moderate, or high and get the following three scores: Forestry Department, low trust; Wildlife Department, moderate trust; Agriculture Department, high trust. Measure these scores in several provinces, or by separately reporting the scoring of different experts, or by repeating the scoring periodically over time, and you have a larger set of scores conveying more information about the level of trust.
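To make the income-fifths example concrete, here is a minimal sketch in Python. The income figures are invented for illustration; real data would come from your household survey.

import statistics

# Hypothetical annual household incomes for one forest-dependent community.
incomes = sorted([120, 150, 180, 200, 210, 240, 260, 300, 310, 350,
                  380, 400, 450, 500, 520, 600, 700, 850, 1200, 2500])

# Split the sorted incomes into five equal groups (fifths) and take the
# median of each, rather than reporting a single community-wide average.
n = len(incomes)
fifths = [incomes[i * n // 5:(i + 1) * n // 5] for i in range(5)]
quintile_medians = [statistics.median(group) for group in fifths]

print("Median income by fifth:", quintile_medians)
print("Single average:", statistics.mean(incomes))

Here the five medians reveal the skew in the distribution (a few high earners pull the average well above the middle fifth), which the single average would hide.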
Inputs, Processes, Outputs, and Outcomes

Forest governance is an abstract concept. Measuring abstract concepts can be difficult: they do not have weight or physical dimension. You can measure some abstract ideas in terms of conventional units, like measuring monetary value in terms of the national currency. However, there are no conventional units for governance. As a result, most quantitative and qualitative measures of forest governance are indirect.

We cannot throw a tape measure around forest law enforcement, but we can try to quantify how many officers are in the field, how many hours of training they have, how much money is spent on enforcement, how many arrests are made, how many cases are prosecuted, how many hectares of forest are lost or degraded, how much tax money goes uncollected, or even how much potential private investment is discouraged due to the risks posed by crime.

For activities that do not generate records, such as corruption or human rights abuses, the indirect indicator may have to be several steps removed from the actual activity. A common way to track these is to measure reputation or public perceptions—for example, using opinion polls or focus group discussions to measure the reputation of the forest agency for resisting corruption.

When trying to come up with a range of possible indicators, it helps to think in terms of inputs, outputs, and outcomes. To take the forest law enforcement example again, officers in the field, training, and budgets are inputs. Arrests and prosecutions are outputs. Areas of degraded forest, lost tax revenues, and lost investments are all undesired outcomes. Assessments regularly use all three kinds of indicators. No type is inherently better than the others, and you do not need to have all three types for a single criterion. Thinking about each of these three aspects separately, however, will help you to brainstorm more possible indicators.

You may come across another category term: process indicators. People do not all use this term in the same way. Some use it as an umbrella term covering both input and output indicators; under this view, many of the input and output examples above would be considered indicators of how well the process of enforcement is going. Others reserve the term “process” for indicators that combine input and output data to show the efficiency of producing outputs from particular inputs; they would consider the number of arrests per officer to be a process indicator. Still others use the term for indicators of the existence of a process. To them, “Do people have practical access to fair and rapid forums to resolve forest-related conflicts?” is not about inputs and outputs—it is about process (and would thus be a process indicator).
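The following minimal sketch, using invented enforcement figures, shows how the same data can feed input, output, and outcome indicators, as well as an efficiency-style “process” indicator.

# Hypothetical law enforcement figures for one reporting year.
officers_in_field = 140      # input indicator
training_hours = 2100        # input indicator
arrests = 385                # output indicator
hectares_degraded = 12500    # (undesired) outcome indicator

# One common reading of a "process" indicator: an efficiency ratio
# combining an input and an output.
arrests_per_officer = arrests / officers_in_field
print(f"Arrests per officer: {arrests_per_officer:.2f}")
print(f"Training hours per officer: {training_hours / officers_in_field:.1f}")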
Neutral vs. Normative Indicators

Some indicators just describe things as they are, without judging—for example, “Area of Forest Under Local Community Ownership or Control.” Whether the resulting measurement is good or bad may not be apparent from the measurement alone. It may come out in the analysis when this indicator is compared with others, or it may come out over time as the indicator changes.

Some indicators carry an inherent sense of good or bad results. For example, “Does the government have adequate capacity to address forest-related crimes and illegal activities?” (from the PROFOR tool) clearly has favorable answers and unfavorable answers. This is a normative indicator.

Some normative indicators take things a step further: an unfavorable answer implies the need for action that is in the power of the government or another party to take. These are “actionable” indicators.

If your objective is to appear neutral, and to describe without judging or casting blame, you may want to use mostly neutral indicators—and use normative indicators only when there is no likely controversy associated with the norm. If your objective is to promote reform, normative indicators can be natural tools to point out needed change. You could, however, also introduce norms during analysis and use neutral indicators to draw normative conclusions.

Defining a Good Set of Indicators

Indicators must be good individually and good as a working set or portfolio. To decide whether a single indicator is worthwhile, you can use a variation of the SMART test used in Chapter 1. A good indicator should be:

• Specific. It should be clear and well-defined.
• Measurable. You should be able to assign a description or value to it.
• Achievable. You should have the resources to make that measurement and, if necessary, verify it.
• Realistic. The country context and other factors outside of your control should not stand in the way of an accurate assessment of the indicator.
• Time-bound. You should be able to make the measurement during the time frame of the assessment.

You may not know whether an indicator passes some of these tests until you actually try to measure it. It could be that a seemingly difficult indicator has already been scored as part of a routine government data collection process or a recent parallel assessment. It could be that what you thought was a simple indicator actually is quite difficult to score. You can get early warning of these kinds of problems by pilot-testing your indicators. (See Box 35 on pilot testing.) Whether you pilot-test or not, you may have to adjust your indicators somewhat as new information comes to light.

In addition, to be good an indicator must be suited to the task at hand. If you are conducting a one-time assessment, it may not help to know the area of land with forest cover; if you are planning a multi-year monitoring effort, however, this could be a key outcome indicator to watch. If you are trying to diagnose problems with governance, it could be good to use normative indicators, although you can introduce the normative element later in your analysis.

You should look at the quality of your indicators individually and also as a set. Taken as a set, the indicators must serve the objectives of the assessment. In particular, a good indicator set should be:

• Comprehensive. It should cover all aspects of forest governance that need to be examined to meet your objectives. It should be detailed enough to give you the information you need.
• Consistent. The individual indicators should avoid overlap. If they are normative, they should reflect consistent values.
• Organized. The organization of the indicator set should make it easy to see that the indicators are comprehensive and consistent. If individual indicators are going to be scored in different ways, this might be reflected in how the set is organized (i.e., to make it easy to see that scoring the set will be achievable and realistic).
What to Avoid in Choosing Indicators

In choosing indicators, here are some traps to avoid (source: FAO, forthcoming):

• A biased indicator set. Indicators will always cover some issues better than others. Your aim should be to have the set cover the most important issues well. Some indicators could show embarrassing results; these findings often turn out to be highly useful. Select indicators because they will provide important information, not because they will provide what people want to hear.
• Too narrow an indicator set. An indicator set should reflect the scope of the assessment. It should give weight to all the issues of concern, and not ignore the concerns of any particular stakeholders.
• Indicators that don’t really reflect your objectives. For example, it may be easy to measure the number of people who work for the forest agency. It may be true that, with too few people, the forest agency cannot perform well. However, an increase in employees may be poorly correlated with specific things that matter, like fairer allocation of permits, better collection of public revenues, or more responsiveness to stakeholders. Therefore, an “easy” indicator like this is really a poor indicator. Select indicators that really tell you what you need to know.
• Measuring X when the real issue is the trend in X. Some indicators that are fine for continuous monitoring are not good for one-time assessments. For example, it may be of little interest to measure forest area, biological diversity, or rural incomes if what you are really interested in is changes in those variables over time. This is especially true if little baseline information is available. It might be worthwhile, however, to measure and establish a baseline now for the benefit of future assessments.
• Indicators that cannot be scored. Assessments have been known to ask questions that cannot be answered with existing data, or that would take a great deal of time and money to answer well. For example, accurately scoring the access of rural communities to several forest resources in a large and varied country might require extensive surveys in remote areas. A narrower indicator, such as access to wood fuel, might be more practical to score—and existing survey data or reliable expert opinion to score that indicator might be available at low cost. Sometimes the perfect indicator must be set aside and a more practical indicator adopted.
• Too many indicators. It is possible to come up with a hundred or more potential indicators to score and analyze. In some cases, it may be practical to consider that many indicators, but in other cases it may be too costly. Some initial screening and selection will have to come first, with care not to introduce bias.

Strengthening Your Indicators

You can strengthen individual indicators and indicator sets by adding instructions, explanations, or examples to help score them. For example, the Montréal Process (2009a, 2009b) indicators come with rationales explaining why each indicator is included.
The ITTO indicators for sustainable management of tropical forests come with detailed reporting forms that break each indicator down into more specific questions and include instructions for supplying supporting documentation and descriptions of specific factors that fall under the indicator. The PROFOR indicators come with three scoring aids. The first is a brief statement of the rationale behind the indicator (i.e., the indicator is intended to be actionable, and the rationale states the norm reflected in the indicator). The second is a set of notes from the indicator’s authors explaining in more detail what they intended the indicator to mean. The third is a form that offers a set of multiple-choice options to score the indicator.

ANNEX VI: GLOSSARY

Approach: The way different methods are brought together to complete an assessment; the overall path an assessment takes to plan, gather data, and arrive at results.

Assessment: “Appraisal based on careful analytical evaluation” (PROFOR & FAO 2011, p. 31).

Closed Question: A question in an interview or survey with a limited set of possible answers. Examples include yes-or-no, true-or-false, and multiple-choice questions. Compare to open question.

Coding: The process of turning an actual response or other raw data into a recorded, often standardized form (i.e., assigning responses to categories to allow analysis). (See Babbie 2010, p. G2.)

Components: “Essential elements of a pillar” (PROFOR & FAO 2011, p. 31). Used in the development of indicators.

Criterion: An element of governance used in the development of indicators. This guide prefers the terms components and subcomponents.

Data Collection: The systematic gathering of information.

Diagnosis: “Examination to identify or determine the nature and characteristics of a system or aspect of a system” (PROFOR & FAO 2011, p. 31).

Evaluation: Study or measurement, often with an aim to compare the current situation with a past situation or a desired goal.

Indicator: “A quantitative, qualitative, or descriptive attribute that, if measured or monitored periodically, could indicate the direction of change in a governance subcomponent” (PROFOR & FAO 2011, p. 31).

Measurement: The size, amount, extent, status, or degree of something, or the act of finding the size, amount, extent, status, or degree of something. As used in this guide, it can apply to both quantitative and qualitative data collection.

Method: A way of undertaking an activity—for example, data collection or stakeholder engagement. A method lays out a specific set of actions to guide you in undertaking the activity.

Monitoring: “Systematic tracking or scrutiny for the purpose of collecting specified data or information” (PROFOR & FAO 2011, p. 31).

Open Question: A question offered in an interview or survey without a list of possible responses. Compare to closed question.

Outcomes: The changes in conditions or behaviors that result from the delivery of outputs.

Outputs: In the context of an assessment, its specific products or services (i.e., what the assessment directly produces).

Participatory Approach: An approach that engages stakeholders throughout the development, implementation, and evaluation of an assessment.

Pillars: “Fundamentals of good forest governance” (PROFOR & FAO 2011, p. 31). Used in defining governance and designing indicators.
Piloting: Testing a method or tool on a small scale, with the aim of improvement.

Primary Data: New data that an assessment generates.

Qualitative Data: Data expressed as words, not numbers—for example, expert opinions, focus group preferences, workshop findings, and anecdotal information such as individual stories, examples, or cases that illustrate a point. (Note that if you gather enough qualitative opinions in a public opinion poll or content analysis, you may be able to produce quantitative data—for example, 70 percent of people believe X, 20 percent of media reports state Y.)

Quantitative Data: Data expressed in numbers—for example, income levels, percentages, or budget figures.

Research: Data collection with a particular aim to shed light on specific questions.

Sampling: Measuring part of something to arrive at an estimate about the whole.

Secondary Data: Existing data (e.g., from prior assessments, censuses, scholarly studies, and so forth) that an assessment can use.

Stakeholders: “Any individuals or groups who are directly or indirectly affected by, or interested in, a given resource and have a stake in it” (PROFOR & FAO 2011, p. 32).

Stratification: Dividing a diverse collection of things into groups of similar things (strata) to make measurements more accurate and informative.

Subcomponent: “An identifiable element of a governance component and an important aspect of forest governance by which a component may be assessed” (PROFOR & FAO 2011, p. 32). Used in the development of indicators.

Terms of Reference: A description of what is expected from an employee, consultant, contractor, or project. This is typically the scope of work and the products or services to be delivered; it sometimes also includes the skills required and the time, budget, or other constraints that may apply.

Tool: A specific protocol for implementing part or all of a method or even an entire assessment. For example, a set of questions can be a tool for implementing a series of interviews or a survey. Some publications (e.g., USAID 2013, Kishor & Rosenbaum 2012) present tools for implementing an entire assessment.

Triangulation: Obtaining information on the same issues from more than one source to cross-check findings.

Validation: Objective examination to affirm quality. It may refer to verification of data and also to review of the methods of data collection and analysis. Done with outside scrutiny, it becomes a limited form of vetting.

Verification: Confirming with a data source that data are accurately represented.

Vetting: Opening work to outside scrutiny and criticism. Note that vetting can go beyond validation to include subjective and value-based criticism.

BIBLIOGRAPHY

The Access Initiative. 2010. Advocacy Toolkit. http://www.accessinitiative.org/node/625.

Angelsen, Arild, Helle Overgaard Larsen, Jens Friis Lund, Carsten Smith-Hall, and Sven Wunder. 2011. Measuring Livelihoods and Environmental Dependence: Methods for Research and Fieldwork. London: Earthscan.

Babbie, Earl. 2010. The Practice of Social Research. Twelfth Edition. Belmont, California: Wadsworth/Cengage Learning.

Barchard, Kimberley A. 2003. Ethics in Online Data Collection. http://faculty.unlv.edu/barchard/onlinedatacollection/ethics_in_online_data_collection_Barchard.pdf.

Bernard, H. Russell. 2006. Research Methods in Anthropology: Qualitative and Quantitative Approaches. Fourth Edition. Lanham, Maryland: AltaMira Press.
BOND. 2003. Logical Framework Analysis (Guidance Notes No. 4). London: BOND. http://www.gdrc.org/ngo/logical-fa.pdf.

Boslaugh, Sarah. 2007. Secondary Data Sources for Public Health: A Practical Guide. Cambridge University Press. Excerpt viewable online: http://analysis3.com/An-Introduction-to-Secondary-Data-Analysis-download-w173.pdf.

Bryman, Alan. 2012. Social Research Methods. Fourth Edition. Oxford: Oxford University Press.

Cammack, John. 2013. Project Budgeting How to Guide. London: BOND. http://www.bond.org.uk/data/files/project_budgeting_how_to_guide.pdf.

CARE. 2005. Tips for Collecting, Reviewing, and Analyzing Secondary Data. http://pqdl.care.org/Practice/DME%20-%20Tips%20for%20Collecting,%20Reviewing%20and%20Analyzing%20Secondary%20Data.pdf.

CARE. 2009. Climate Vulnerability and Capacity Analysis Handbook. http://www.careclimatechange.org/files/adaptation/CARE_CVCAHandbook.pdf.

Castrén, Tuukka and Madhavi Pillai. 2011. Forest Governance 2.0: A Primer on ICTs and Governance. Washington: PROFOR.

Catholic Agency for Overseas Development (CAFOD), Christian Aid, and Trocaire. Undated. Monitoring Government Policies: A Toolkit for Civil Society Organisations in Africa. http://www.commdev.org/userfiles/files/1818_file_monitoringgovernmentpolicies.pdf.

Conflict Sensitivity Consortium. 2012. How to Guide to Conflict Sensitivity. http://www.conflictsensitivity.org/sites/default/files/1/6602_HowToGuide_CSF_WEB_3.pdf.

Coulby, Hilary. 2009. A Guide to Multi-stakeholder Work: Lessons from the Water Dialogues. http://www.waterdialogues.org/downloads/new/Guide-to-Multistakeholder.pdf.

DFID. 2002. Conducting Conflict Assessment: Guidance Notes. http://www.conflictsensitivity.org/sites/default/files/Conducting_Conflict_Assessment_Guidance.pdf.

Davis, Crystal, Lauren Williams, Sarah Lupberger, and Florence Daviet. 2013. Assessing Forest Governance: The Governance of Forests Initiative Indicator Framework. Washington: World Resources Institute. http://www.wri.org/publication/assessing-forest-governance.

Deininger, Klaus, Harris Selod, and Anthony Burns. 2012. The Land Governance Assessment Framework: Identifying and Monitoring Good Practice in the Land Sector. Washington: World Bank. https://openknowledge.worldbank.org/handle/10986/2376.

DFID. 2009. Political Economy Analysis How To Note. (A DFID practice paper.) http://www.odi.org.uk/sites/odi.org.uk/files/odi-assets/events-documents/3797.pdf.

Eliot and Associates. 2005. Guidelines for Conducting a Focus Group. (Posted on the web site of the Office of Assessment, Duke University.) http://assessment.aas.duke.edu/documents/How_to_Conduct_a_Focus_Group.pdf.

Emerson, John. 2008. Visualizing Information for Advocacy: An Introduction to Information Design. Open Society Foundation. http://www.opensocietyfoundations.org/reports/visualizing-information-advocacy-introduction-information-design.

ENPI FLEG. 2013. European Community Grant to the Multi-Donor Trust Fund for Improving Forest Law Enforcement and Governance (FLEG) in the European Neighborhood Partnership (ENP) East Countries and Russia (Trust Fund No. 070964): Final Report.

European Commission. 2008. Analysing and Addressing Governance in Sector Operations. (Tools and Method Series, Reference Document Number 4.) http://ec.europa.eu/europeaid/infopoint/publications/europeaid/documents/149a_governance_layout_090306_en.pdf.

FAO. 2010. Enhancing Stakeholder Engagement in National Forest Programmes: A Training Manual. Peter O’Hara. Rome: FAO National Forest Program Facility.
http://www.fao.org/docrep/014/i1858e/i1858e00.pdf.

FAO. 2012. Strengthening Effective Forest Governance Monitoring Practice. (A. J. van Bodegom, S. Wigboldus, A. G. Blundell, E. Harwell, and H. Savenije, co-authors.) Forestry Policy and Institutions Working Paper No. 29. Rome: FAO. http://www.fao.org/docrep/015/me021e/me021e00.pdf.

FAO. 2013. Improving Governance of Forest Tenure: A Practical Guide. (Compiled by J. Mayers, E. Morrison, L. Rolington, K. Studd, and S. Turrall.) Governance of Tenure Technical Guide No. 2. London: IIED and Rome: FAO. http://www.fao.org/docrep/018/i3249e/i3249e.pdf.

FAO. Forthcoming. A Guide to Forest Policy Review. Rome: FAO.

Forest Trends. 2012. Poverty Impact Assessment for Reducing Social Risks and Enhancing Pro-Poor Outcomes of Voluntary Partnership Agreements. Forest Trends Information Brief No. 4. http://www.forest-trends.org/publication_details.php?publicationID=3267.

Fritz, Verena, Kai Kaiser, and Brian Levy. 2009. Problem-driven Governance and Political Economy Analysis: Good Practice Framework. Washington: World Bank. http://siteresources.worldbank.org/EXTPUBLICSECTORANDGOVERNANCE/Resources/PGPE_book_8-25-09.pdf?resourceurlname=PGPE_book_8-25-09.pdf.

Global Witness. 2005. A Guide to Independent Forest Monitoring. London: Global Witness, Ltd. http://www.globalwitness.org/library/guide-independent-forest-monitoring.

Global Witness. 2010. Making the Forestry Sector Transparent: Transparency Indicators 2010. http://www.foresttransparency.info/cms/file/387.

Government of the Republic of Serbia and EU Integration Office. 2011. Guide to the Logical Framework Approach: A Key Tool for Project Cycle Management. (Second Edition.) http://www.evropa.gov.rs/evropa/ShowDocument.aspx?Type=Home&Id=525.

Halton, Anna, James Tellewoyan, Nalin Kishor, Neeta Hooda, Peter G. Mulbah, Saah A. David, Jr., and Haddy Jatou Sey. 2013. Liberia: Assessment of Key Governance Issues for REDD+ Implementation through Application of PROFOR Forest Governance Tool (Supplement to the Liberia Forest Sector Diagnostic: Results of a Diagnostic on Advances and Learning from Liberia’s Six Years of Experience in Forest Sector Reform. December 2012). Washington: Forest Carbon Partnership Facility. http://www.forestcarbonpartnership.org/sites/fcp/files/2013/june2013/Liberia_Assessment%20of%20key%20governance%20issues%20for%20REDD%2B%20implementation.pdf.

Harris, Daniel. 2013. Applied Political Economy Analysis: A Problem Driven Framework. ODI Politics and Governance note. http://www.odi.org.uk/publications/7380-applied-political-economy-analysis-problem-driven-framework.

Harris, Daniel and David Booth. 2013. Applied Political Economy Analysis: Five Practical Issues. ODI Politics and Governance note. http://www.odi.org.uk/publications/7196-applied-political-economy-analysis-five-practical-issues.

Hinton, J. and M. R. Hollestelle. 2012. Methodological Toolkit for Baseline Assessments and Response Strategies to Artisanal and Small-Scale Mining in Protected Areas and Critical Ecosystems. Published under the Artisanal and Small-scale Mining in and around Protected Areas and Critical Ecosystems (ASM-PACE) project of WWF and Estelle Levin Ltd. http://www.asm-pace.org/projects/methodological-toolkit.html.

IAP2. 2006. Public Dissemination Toolbox. http://www.dvrpc.org/GetInvolved/PublicParticipation/pdf/IAP2_public_participationToolbox.pdf.

IAP2. 2007. Spectrum of Public Engagement.
http://www.iap2.org/associations/4748/files/IAP2%20Spectrum_vertical.pdf.

Iarossi, Giuseppe. 2006. The Power of Survey Design: A User’s Guide for Managing Surveys, Interpreting Results, and Influencing Respondents. Washington: World Bank. https://openknowledge.worldbank.org/handle/10986/6975.

IFAD. 2009. Good Practices in Participatory Mapping. http://www.ifad.org/pub/map/pm_web.pdf.

IIED. 2005a. The Four Rs. http://www.policy-powertools.org/Tools/Understanding/docs/four_Rs_tool_english.pdf.

IIED. 2005b. The Pyramid: A Diagnostic and Planning Tool for Good Forest Governance. http://www.policy-powertools.org/Tools/Engaging/docs/pyramid_tool_english.pdf.

IIED. 2005c. Stakeholder Power Analysis. http://www.policy-powertools.org/Tools/Understanding/docs/stakeholder_power_tool_english.pdf.

Indufor. 2011. Strategy Note for Forest Governance Reform in Kenya for the Miti Mingi Maisha Bora—Support to Forest Sector Reform in Kenya (MMMB) Programme. (Tapani Oksanen, Michael Gachanja, and Anni Blasten, co-authors.)

Institute for Global Environmental Strategies (IGES). 2013. Quality-of-governance Standards for Carbon Emissions Trading: Developing REDD+ Governance through a Multi-stage, Multi-level, and Multi-stakeholder Approach. http://pub.iges.or.jp/modules/envirolib/upload/4658/attach/Discussion_paper_Final_20130617_FLC.pdf.

ITTO. 1998. Criteria and Indicators for the Sustainable Management of Tropical Forests. http://www.itto.int/policypapers_guidelines/.

ITTO. 2005. Revised ITTO Criteria and Indicators for the Sustainable Management of Tropical Forests, including Reporting Format. ITTO Policy Development Series No. 15.

IUCN. 2013. Governance of Protected Areas: From Understanding to Action. http://www.iucn.org/about/work/programmes/gpap_home/gpap_people/diversity_and_quality_of_protected_area_governance_2/.

Jacob, Stacy A. and S. Paige Ferguson. 2012. “Writing Interview Protocols and Conducting Interviews: Tips for Students New to the Field of Qualitative Research” in The Qualitative Report, 17: T&L Art. 6, 1–10. http://files.eric.ed.gov/fulltext/EJ990034.pdf.

Kerth, Norman L. 2001. Project Retrospectives: A Handbook for Team Reviews. New York: Dorset House Publishing Co.

Kiyingi, Gaster. 2010. Forest Governance Reforms in Uganda: Workshop Organized by the Ministry of Water and Environment and the World Bank, Serena Hotel, Kampala, 15–16 June 2010. Workshop report.

Kishor, Nalin and Kenneth Rosenbaum. 2012. Assessing and Monitoring Forest Governance: A User’s Guide to a Diagnostic Tool. Washington DC: Program on Forests (PROFOR). http://www.profor.info/sites/profor.info/files/docs/AssessingMonitoringForestGovernance-guide.pdf.

Krueger, Richard A. 2002. Designing and Conducting Focus Group Interviews. http://www.eiu.edu/~ihec/Krueger-FocusGroupInterviews.pdf.

Kuzmichev, E.P., A. Mitchell, M.A. Kopeikin, N. Kishor, V.I. Nemova, K. Rosenbaum, M.I. Smetanina, V.V. Soldatov, and Yu.P. Shuvaev. 2012. Forest Governance Diagnostics in Russia: A Pilot Assessment from Four Regions. Working Paper. Moscow: World Bank. http://www.profor.info/sites/profor.info/files/ForestGovernanceDiagnostics-Russia-English.pdf.

Lawson, Sam. 2012. “Where Next for Forest Governance?” ETFRN News, Issue 53, pp. 127–35. (Moving Forward with Forest Governance.)

Lawson, Sam and Larry MacFaul. 2010. Illegal Logging and Related Trade: Indicators of the Global Response. London: The Royal Institute of International Affairs (Chatham House).
http://www.chathamhouse.org/publications/papers/view/109398.

Lund, Jens Friis, Helle Overgaard Larsen, Bir Bahadur Khanal Chhetri, Santosh Rayamajhi, Øystein Juul Nielsen, Carsten Smith Olsen, Patricia Uberhuaga, Lila Puri, and José Pablo Prado Córdova. 2008. When Theory Meets Reality—How to Do Forest Income Surveys in Practice. Forest & Landscape Working Papers No. 29-2008, 48 pp. Forest & Landscape Denmark, University of Copenhagen, Hørsholm. http://curis.ku.dk/ws/files/20573307/workingpapersno29.pdf.

Montréal Process. 2009a. Criteria and Indicators for the Conservation and Sustainable Management of Temperate and Boreal Forests. Fourth Edition. Available in multiple languages at http://montrealprocess.org/Resources/Criteria_and_Indicators/index.shtml.

Montréal Process. 2009b. Technical Notes on Implementation of the Montréal Process Criteria and Indicators: Criteria 1–7. Third Edition. http://www.montrealprocess.org/documents/publications/techreports/2009p_2.pdf.

Mercy Corps. 2011. Guide to Good Governance Programming. http://www.mercycorps.org/sites/default/files/mcgoodgovernanceguide.pdf.

Nash, Robert, Alan Hudson, and Cecilia Luttrell. 2006. Chapter 8, “Stakeholder Analysis,” in Mapping Political Context: A Toolkit for Civil Society Organisations. London: ODI. http://www.odi.org.uk/sites/odi.org.uk/files/odi-assets/publications-opinion-files/186.pdf.

The Nature Conservancy (TNC). 2007. Conservation Action Planning Handbook: Developing Strategies, Taking Action, and Measuring Success at Any Scale. Arlington, VA, USA: The Nature Conservancy. Web pages for chapters cited in this guide: http://www.conservationgateway.org/Files/Pages/8-develop-workplans-basic.aspx and http://www.conservationgateway.org/Files/Pages/9-implement-workplans-bas.aspx.

NORAD. 1999. Logical Framework Approach: Handbook for Objectives-oriented Planning. Fourth Edition. http://www.norad.no/en/tools-and-publications/publications/publication?key=109408.

ODI. 2009a. Management Techniques: Force Field Analysis. http://www.odi.org.uk/publications/5218-force-field-analysis-decision-maker.

ODI. 2009b. Planning Tools: Stakeholder Analysis. http://www.odi.org.uk/publications/5257-stakeholder-analysis.

O’Neill, Kate, Erika Weinthal, Kimberly R. Marion Suiseeya, Steven Bernstein, Avery Cohn, Michael W. Stone, and Benjamin Cashore. 2013. “Methods and Global Environmental Governance.” Annu. Rev. Environ. Resour. 38:11.1–11.31.

PARIS21. 2003. PARIS21 Workshop Guide: A Reference Manual for Running a Stakeholders Workshop. http://paris21.org/sites/default/files/18.pdf.

PROFOR & FAO. 2011. Framework for Assessing and Monitoring Forest Governance. http://www.profor.info/sites/profor.info/files/ForestGovernanceFramework_0.pdf or http://www.fao.org/climatechange/27526-0cc61ecc084048c7a9425f64942df70a8.pdf.

Rainforest Foundation. 2011. La Cartographie Participative: Guide pour la Production des Cartes avec les Communautés Forestières dans le Bassin du Congo (Participatory Mapping Guide for Forest Communities in the Congo Basin). http://www.mappingforrights.org/files/Guide%20methodologique%20pour%20la%20cartographie%20participative%20final%20Low%20Res.pdf.

Richards, Michael, Jonathan Davies, and Gil Yaron. 2003. “Economic Stakeholder Analysis” for Participatory Forest Management. ODI Forestry Briefing Number 4. http://www.odi.org.uk/resources/docs/810.pdf.
Situmorang, Abdul Wahib, Abdon Nababan, Hariadi Kartodihardjo, Jossi Khatarina, Mas Achmad Santosa, Myrna Safitri, Purwadi Soeprihanto, Sofian Effendi, and Sunaryo. 2013. Participatory Governance Assessment: The 2012 Indonesia Forest, Land, and REDD+ Governance Index. Jakarta: UNDP Indonesia.

StatSoft, Inc. 2013. Electronic Statistics Textbook. Tulsa, Oklahoma: StatSoft. http://www.statsoft.com/textbook.

Transparency International. 2011. Analyzing Corruption in the Forest Sector. http://www.transparency.org/whatwedo/pub/analysing_corruption_in_the_forestry_sector_a_manual.

Tufte, Edward R. 2001. The Visual Display of Quantitative Information. Second Edition. Cheshire, Connecticut: Graphics Press.

UNDP. 2007. Governance Indicators: A Users’ Guide. Second Edition. http://gaportal.org/sites/default/files/undp_users_guide_online_version.pdf.

UNDP. 2009a. A Users’ Guide to Measuring Local Governance. http://www.undp.org/content/dam/aplaws/publication/en/publications/democratic-governance/dg-publications-for-website/a-users-guide-to-measuring-local-governance-/LG%20Guide.pdf.

UNDP. 2009b. Planning a Governance Assessment: A Guide to Approaches, Costs, and Benefits. http://www.undp.org/content/rbas/en/home/presscenter/events/2012/November/regional_governance_week/_jcr_content/centerparsys/download_8/file.res/Planning%20a%20governance%20assessment.pdf.

UNDP. 2009c. Practice Note on Supporting Country-Led Democratic Governance Assessments. http://www.undp.org/content/dam/aplaws/publication/en/publications/democratic-governance/oslo-governance-center/governance-assessments/supporting-country-led-democratic-governance-assessment-a-undp-practice-note/UNDP_Oslo_Eng_1.pdf.

UNDP. 2012. Institutional and Context Analysis Guidance Note. http://www.undp.org/content/dam/undp/library/Democratic%20Governance/OGC/UNDP_Institutional%20and%20Context%20Analysis.pdf.

UNDP. 2013. User’s Guide on Assessing Water Governance. http://www.undp.org/content/dam/undp/library/Democratic%20Governance/OGC/Users%20Guide%20on%20Assessing%20Water%20Governance1.pdf.

University of Wolverhampton. Undated. A Guide for Developing a Logical Framework. http://www.hedon.info/docs/logical_framework-CentreForInternationalDevelopmentAndTraining.pdf.

U.S. AID. 2006. Conducting Mini Surveys in Developing Countries (revised edition). http://pdf.usaid.gov/pdf_docs/pnadg566.pdf.

U.S. AID. 2013. Guidelines for Assessing the Strengths and Weaknesses of Natural Resource Governance in Landscapes and Seascapes. Washington: U.S. AID. http://frameweb.org/CommunityBrowser.aspx?id=10650&lang=en-US.

U.S. Department of Health and Human Services. 1979. The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research. http://www.hhs.gov/ohrp/humansubjects/guidance/belmont.html.

US National Oceanographic and Atmospheric Administration Coastal Services Center. 2009. Stakeholder Engagement Strategies for Participatory Mapping. http://csc.noaa.gov/digitalcoast/sites/default/files/files/1366314383/participatory_mapping.pdf.

Warrener, Debbie. 2004. The Drivers of Change Approach. ODI Synthesis Paper 3. http://www.odi.org.uk/sites/odi.org.uk/files/odi-assets/publications-opinion-files/3721.pdf.

W.K. Kellogg Foundation. 1998. W.K. Kellogg Foundation Evaluation Handbook. http://www.wkkf.org/knowledge-center/resources/2010/w-k-kellogg-foundation-evaluation-handbook.aspx.

World Bank. Undated.
“Stakeholder Analysis Guidance Note.” http://www1.worldbank.org/publicsector/politicaleconomy/November3Seminar/Stakehlder%20Readings/CPHP%20Stakeholder%20Analysis%20Note.pdf.

World Bank. 2003. A User’s Guide to Poverty and Social Impact Analysis. http://siteresources.worldbank.org/INTPSIA/Resources/490023-1121114603600/12685_PSIAUsersGuide_Complete.pdf.

World Bank. 2009. Roots for Good Forest Outcomes: An Analytical Framework for Governance Reforms. Report No. 49572-GLB. Washington: World Bank. http://www.profor.info/sites/profor.info/files/docs/ForestGovernanceReforms.pdf.

World Bank. 2010. Enabling Reforms: Stakeholder-based Analysis of the Political Economy of Tanzania’s Charcoal Sector and the Poverty and Social Impacts of Proposed Reforms. Washington: World Bank. http://documents.worldbank.org/curated/en/2010/06/12445670/enabling-reforms-stakeholder-based-analysis-political-economy-tanzanias-charcoal-sector-poverty-social-impacts-proposed-reforms.

World Bank. 2011. Political Economy Assessments at Sector and Project Levels. http://gsdrc.org/docs/open/PE1.pdf.

World Bank. 2013. ICT for Data Collection and Monitoring & Evaluation: Opportunities and Guidance on Mobile Applications for Forest and Agricultural Sectors. Washington: World Bank.

But when you have bad governance, of course, these resources are destroyed: The forests are deforested, there is illegal logging, there is soil erosion. I got pulled deeper and deeper and saw how these issues become linked to governance, to corruption...
 —Wangari Maathai (Recipient, 2004 Nobel Peace Prize)

I have been struck again and again by how important measurement is to improving the human condition.
—Bill Gates (Founder, Microsoft Corporation)

Forest governance assessment is an expanding practice. People are using assessments to watch for developing problems, diagnose needs for reform, monitor progress of programs, and evaluate impacts. Governments, civil society organizations, development partners, academics, and coalitions of stakeholders have all performed assessments in recent years.

In 2012, an expert meeting at FAO headquarters in Rome recommended the creation of a guide to good practices in forest governance assessment and data collection. Under the guidance of a diverse committee of experts, FAO and PROFOR have overseen the production of this practical manual.

This guide presents a step-by-step approach to planning a forest governance assessment, designing data collection methods and tools, collecting and analyzing data, and making the results available to decision makers and other stakeholders. It also presents five case studies to illustrate how assessments have applied the steps in practice, and it includes references and links to dozens of sources of further information.

FOOD AND AGRICULTURE ORGANIZATION OF THE UNITED NATIONS (FAO)
VIALE DELLE TERME DI CARACALLA, 00153 ROME, ITALY
EMAIL: FAO-HQ@FAO.ORG
WEBSITE: HTTP://WWW.FAO.ORG/FORESTRY

PROGRAM ON FORESTS (PROFOR), THE WORLD BANK
1818 H ST NW, WASHINGTON DC 20433 USA
EMAIL: PROFOR@WORLDBANK.ORG
WEBSITE: HTTP://WWW.PROFOR.INFO

Cover Photos: Kenneth Rosenbaum and Flore de Préneuf

PROFOR is a multi-donor partnership supported by: