CIVIC TECH IN THE GLOBAL SOUTH

ASSESSING TECHNOLOGY FOR THE PUBLIC GOOD

EDITED BY TIAGO PEIXOTO AND MICAH L. SIFRY

The World Bank and Personal Democracy Press
Washington DC / New York

© 2017 International Bank for Reconstruction and Development / The World Bank
1818 H Street NW, Washington, DC 20433
Telephone: 202-473-1000; Internet: www.worldbank.org

Some rights reserved

This work is a product of the staff of The World Bank with external contributions. The findings, interpretations, and conclusions expressed in this work do not necessarily reflect the views of The World Bank, its Board of Executive Directors, or the governments they represent. The World Bank does not guarantee the accuracy of the data included in this work. The boundaries, colors, denominations, and other information shown on any map in this work do not imply any judgment on the part of The World Bank concerning the legal status of any territory or the endorsement or acceptance of such boundaries.

Nothing herein shall constitute or be considered to be a limitation upon or waiver of the privileges and immunities of The World Bank, all of which are specifically reserved.

Rights and Permissions

This work is available under the Creative Commons Attribution 3.0 IGO license (CC BY 3.0 IGO) http://creativecommons.org/licenses/by/3.0/igo. Under the Creative Commons Attribution license, you are free to copy, distribute, transmit, and adapt this work, including for commercial purposes, under the following conditions:

Attribution—Please cite the work as follows: Peixoto, Tiago and Micah L. Sifry, eds. 2017. Civic Tech in the Global South: Assessing Technology for the Public Good. Washington, DC: World Bank. License: Creative Commons Attribution CC BY 3.0 IGO

Translations—If you create a translation of this work, please add the following disclaimer along with the attribution: This translation was not created by The World Bank and should not be considered an official World Bank translation. The World Bank shall not be liable for any content or error in this translation.

Adaptations—If you create an adaptation of this work, please add the following disclaimer along with the attribution: This is an adaptation of an original work by The World Bank. Views and opinions expressed in the adaptation are the sole responsibility of the author or authors of the adaptation and are not endorsed by The World Bank.

Third-party content—The World Bank does not necessarily own each component of the content contained within the work. The World Bank therefore does not warrant that the use of any third-party-owned individual component or part contained in the work will not infringe on the rights of those third parties. The risk of claims resulting from such infringement rests solely with you. If you wish to re-use a component of the work, it is your responsibility to determine whether permission is needed for that re-use and to obtain permission from the copyright owner. Examples of components can include, but are not limited to, tables, figures, or images.

All queries on rights and licenses should be addressed to World Bank Publications, The World Bank Group, 1818 H Street NW, Washington, DC 20433, USA; fax: 202-522-2625; e-mail: pubrights@worldbank.org.

ISBN 978-0-9964142-27 paperback
ISBN 978-0-9964142-34 e-book

Cover design: CCM Design

TABLE OF CONTENTS

Foreword: Beth Simone Noveck
Acknowledgements
About the Contributors
Introduction: Civic Tech—New Solutions and Persisting Challenges: Tiago Peixoto and Micah L. Sifry
Chapter 1: When Does ICT-Enabled Citizen Voice Lead to Government Responsiveness?: Tiago Peixoto and Jonathan Fox
Chapter 2: The Case of UNICEF's U-Report Uganda: Evangelia Berdou and Claudia Abreu Lopes, with Fredrik M. Sjoberg and Jonathan Mellon
Chapter 3: MajiVoice Kenya—Better Complaint Management at Public Utilities: Martin Belcher and Claudia Abreu Lopes, with Fredrik M. Sjoberg and Jonathan Mellon
Chapter 4: Impact of Online Voting on Participatory Budgeting in Brazil: Matt Haikin with Fredrik M. Sjoberg and Jonathan Mellon

FOREWORD

The work of governing is traditionally the domain of professionals. The average voter does not typically play a role in making policy, nor does she participate substantively in affairs of state beyond the ballot box. Political philosopher Robert Dahl goes so far as to call any suggestion that ordinary people have the capacity for deeper participation in policymaking on complex topics "extravagant." The litany of complaints about the limits of citizen capacity is as long as it is familiar. Participation does not lead to effective decision-making or problem solving. It does not work because people lack the time, education, and motivation to participate in ways that are helpful. Furthermore, such engagement is unnecessary because interest groups amalgamate the views of citizens more efficiently and productively. Direct participation only adds "noise" to the signal.

Even those participatory and progressive democratic theorists who are less skeptical of people's cognitive abilities generally do not focus on concrete, specific opportunities for substantive engagement by those outside government in formal decision-making processes. To the contrary, since the presumed motivation for citizens to participate is not to enhance the outcomes of policymaking but to enhance the legitimacy of the process by ensuring more inputs from those who are ultimately to be governed, participatory theorists have tended to focus on deliberative practices of citizen debate. But those dialogues about political and administrative decision-making are not intended to impact directly how decisions are made; they are only meant to enhance the quality of discussion in the public sphere.

Given the presumption of the circumscribed role of citizen engagement, dialogue practices, whether petitions, public comment processes, referenda, or polls, are typically designed to ask people how they feel about policies made by others. Since there is little expectation that such engagement can genuinely enhance the epistemic quality of decisions, they are not designed to do so. In the deliberative democracy narrative, expectations of citizens are low. Citizen engagement is relegated to the realm of talk and participation to the realm of civil society. It is no surprise, then, that so few opportunities actually exist for the public to participate in governance and administration.

Despite the proliferation of new Internet-based tools for citizens to express themselves more directly vis-à-vis their governments, such as electronic petition websites like ePetitions in the United Kingdom or We The People in the United States, technology is not leading to greater impact as a result of citizen engagement. This is the striking finding of this path-breaking book edited by Tiago Peixoto and Micah Sifry, Civic Tech in the Global South, the first systematic empirical analysis of the impact of technologies for citizen engagement on governing.
Using both qualitative and quantitative analysis, the different authors confirm that there is a lack of clear evidence that citizen participation produces institutional response. Whether the focus is technology to support aggregated individual assessments, such as complaint hotlines, or tools for collective action and mobilization, Peixoto and Sifry's book rigorously lays bare that so-called "civic tech" in most cases is not producing changes in governing outcomes any more than old-fashioned dialogues in a church basement. There is no compelling evidence, the authors point out, that the U-Report text messaging system is helping Ugandans to hold their government or leaders accountable. The MajiVoice public utility complaint system in Kenya fares better. However, in a review of twenty-three studies on the use of different digital platforms to improve public service delivery through citizen engagement, the authors find little evidence that tools for "citizen voice" translate into "citizen teeth" to prompt action on the part of governing officials. There is a wide chasm between uptake of these tools by the public and institutional impact.

Coming from a book edited by two fierce advocates for participatory democracy, this conclusion would seem startling. Yet, counterintuitively, their work is one of the most important recent contributions to bolster and accelerate the use of technology to create a genuinely participatory democracy. By offering data on the performance of these citizen voice systems, they provide the empirical evidence needed to change how such systems are designed in the first place. By backing up anecdotes with hard social science, the authors have shown us that we are limiting ourselves by designing platforms to ask people what they think instead of what they know.

Take petition websites as a case in point. Although electronic petitions can get a topic on the public agenda by opening a channel of communication beyond lobbying or appealing to parliamentary officials, even the most poignant petition has no real impact on policymaking. As Sifry has pointed out in earlier writing, when the White House made the data from its petitions website available in May 2013, We The People had hosted 200,000 petitions with 13 million signatures, yet only 162 had received a response—and none could be directly connected to a decision made, dollar spent, or action taken by government.

Their book brings into stark relief the realization that we have been designing civic tech badly suited to producing impacts because we have been measuring citizen uptake without looking at institutional response. By shining a light on government responsiveness—and the lack thereof—the different authors are making the strongest possible case for building new kinds of platforms and processes that will make government more effective, not to mention more legitimate.

When we compare current "toothless" citizen voice projects with a next generation of platforms designed to impact decision-making and policymaking more directly, such as Challenge.gov, the United States federal government website that asks the public to collaborate with federal agencies to solve hard problems, the results are potentially going to be very different.
Since its inception in 2010, federal agencies have run more than 450 challenges on Challenge.gov, turning to the public to help ameliorate problems such as decreasing the "word gap" between children from high- and low-income families or increasing the speed at which saltwater can be turned into fresh water for farming in developing economies. The same applies to innovations from the Global South, such as multi-channel participatory budgeting processes, where citizens are expected to have a direct impact on the allocation of budgetary resources.

But the jury is still out on the impact of Challenge.gov and other technological innovations in governance. From crowdsourcing to open data to challenge platforms, we are seeing the emergence of innovative ways to tackle society's problems and make public institutions more effective. Yet little is known about which innovations actually work. We don't know when, why, for whom, and under what conditions. The work by the different authors in this book is so important because it provides the social science methods to ensure that research catches up to the rapid evolution of technology, providing the impetus to ask and answer: what are the impacts of new platforms and how are they changing how we govern? By bringing together research that focuses on the impacts of technology and how leaders respond, Peixoto, Sifry, and the contributing authors are helping to ensure that we design civic technology to foster meaningful citizen engagement.

Beth Simone Noveck, GovLab, New York University

ACKNOWLEDGEMENTS

The contributions to this book are part of research efforts coordinated by the World Bank Group's Digital Engagement Evaluation Team (DEET), led by Tiago Peixoto and Marieta Fall, with support from Fredrik M. Sjoberg (Lead Researcher), Jonathan Mellon (Senior Data Scientist), Sanna Ojanpera (Research Assistant), Kate Sunny Henvey (Research Assistant), and Daniel Nogueira-Budny (Research Assistant). The team, editors, and co-authors of this book are thankful to the scholars and practitioners who contributed reviews, comments, and valuable insights: Adalmir Marquetti (UFRGS), Alexander Trechsel (European University Institute), Alexander Furnas (Sunlight Foundation), Anna Levy (Columbia University), Anthony Zacharzewski (The Democratic Society), Archon Fung (Harvard), Benjamin Goldfrank (Seton Hall University), Beth Noveck (GovLab-NYU), Brandon Brockmyer (American University), Brendan Halloran (Open Budgets), Dennis Whittle (Feedback Labs), Duncan Edwards (Institute of Development Studies), Fernanda Scur (Royal Holloway University), Hollie Russon Gilman (Harvard), Quinton Mayne (Harvard), Rafael Cardoso Sampaio (UFPR), Rebecca Murphy (University of Cambridge), Raul Zambrano (UNDP), Renee Ho (Feedback Labs), Roberto Pires (Institute for Applied Economic Research), Rosemary McGee (Institute of Development Studies), Simon Dalferth (Council of the European Union), Susan Crawford (Harvard), Susan Stout (Georgetown University), and Vonda Brown (Open Society Foundations).
Thanks also to all of those who contributed their time and expertise to the field evaluations and the finalization of this book, including Eric Meerkamper (Riwi), Francis Kiribige (Hatchile Consulting), James Powell (UNICEF), Cary McCormick (UNICEF), Erik Frisk (UNICEF), Paul Mutebi (Uganda Scouts Association), Abraham Okiro (UNICEF), Ricardo Almeida (SEPLAG Rio Grande do Sul), Paulo Coelho de Souza (SEPLAG Rio Grande do Sul), Adalmir Marquetti (UFRGS), Renan Fortes Alba (COREDE), Marco Antonio Caselani (COREDE), Ricardo Agádio Kraemer (COREDE), Joao Motta (SEPLAG Rio Grande do Sul), Tarson Nunez (Secretaria de Relacoes Internacionais Rio Grande do Sul), Rebecca Murphy (University of Cambridge), Fernanda Scur (Royal Holloway University), Louis Dorval (Voto Mobile), Renato Michel (NRM Estatistica), Claire Hughes (ITAD), Benjamin Goldfrank (Seton Hall University), Paolo Spada (Harvard Kennedy School), Roseli Cardoso (Centro Gaucho de Pesquisas), Elisangela Tarouco (EST Language Solutions), Sergio Baierle (Baierle.me), and, last but not least, Erin Simpson (Civic Hall Labs), who provided invaluable editing support as this book was prepared for publication.

We are also grateful to those who provided input as staff of The World Bank Group, including Amy Chamberlain, Astrid Manroth, Boris Weber, Claire Davanne, Elena Georgieva-Andonovska, Harika Masud, Helene Grandvoinnet, Mary McNeil, Michael Woolcock, Stephen Davenport, Utpal Misra, and Zahid Hasnain. Finally, thank you to the leadership at the World Bank Group who have championed cumulative learning in the fields of citizen engagement and civic technology, notably Deborah Wetzel, James Brumby, Robert Hunja, Chiara Bronchi, and Jeff Thindwa.

ABOUT THE CONTRIBUTORS

THE EDITORS

Tiago Peixoto is a Senior Public Sector Specialist at the World Bank's Governance Global Practice. Having joined the Bank in 2010, Tiago focuses on working with governments to develop solutions for better public policies and services. He also leads the World Bank's Digital Engagement Evaluation Team (DEET). Prior to joining the World Bank, Tiago managed projects and consulted for a number of organizations, such as the European Commission, OECD, United Nations, Bertelsmann Foundation, and the Brazilian and UK governments. Formerly a research coordinator for the Electronic Democracy Centre at the University of Zurich, Tiago is currently a faculty member of New York University's Governance Lab. A board member for Our Cities Network and Intelligent Digital Avatars, he also sits on the advisory boards of a number of organizations, such as The Participatory Budgeting Project and Our City Thoughts. Featured in TechCrunch as one of the "20 Most Innovative People in Democracy," Tiago holds a PhD and a Masters in Political Science from the European University Institute, as well as a Masters in Organized Collective Action from Sciences-Po Paris.

Micah L. Sifry is co-founder and executive director of Civic Hall, New York City's community center for civic tech, launched in 2015. Since 2004 he has been co-founder and editorial director of its parent company Personal Democracy Media, curating its annual Personal Democracy Forum conference and editing its news site techPresident.com (now renamed Civicist.com), both focused on the ways technology is changing politics, government, and civil society.
From 2005 to 2015 he was a senior adviser to the Sunlight Foundation, which he helped found, and he serves on the boards of Consumer Reports and the Public Laboratory for Open Technology and Science. He is the author or editor of eight previous books, most recently A Lever and a Place to Stand: How Civic Tech Can Move the World (Personal Democracy Media, 2015) and The Big Disconnect: Why the Internet Hasn't Changed Politics (Yet) (OR Books, 2014), and in the spring of 2012 he taught "The Politics of the Internet" at Harvard's Kennedy School. He has an M.A. in Politics from New York University and a B.A. in Politics from Princeton University.

THE CONTRIBUTORS

Evangelia Berdou is a Research Fellow at the Digital and Technology Cluster, Institute of Development Studies, UK. Her expertise encompasses the use of new technologies in support of citizen expression and action, the opportunities and challenges of new forms of data for adaptive programme monitoring and evaluation, and the implications of increased connectivity for the lives and livelihoods of the poor. She has ten years' strategic and programme experience working in ICT4D. Over the course of her career, Evangelia has consistently sought to develop new vocabularies and practices between academics, technologists, and development practitioners. She has undertaken research for the UK's Department for International Development (DFID), UNICEF, and the World Bank.

Martin Belcher is an experienced international development professional and consultant working in the areas of ICT4D, research communications and uptake, impact assessment, and monitoring, evaluation, and learning. He has over 20 years' professional experience working in a wide variety of sectors, contexts, and geographies. He has published in a range of disciplines, including ICT4D, research capacity strengthening in developing countries, academic and research network management, the use of ICTs in teaching and learning, geographical information systems, and archaeological landscape visualization. He is currently the monitoring and evaluation global lead on a multi-year climate change and sustainable land use investment program funded by the British government.

Jonathan Fox is professor of development studies in the School of International Service at American University, where he is launching the Accountability Research Center (ARC). ARC is an action-research incubator that partners with public interest groups and policy reformers to develop research and learning strategies designed to inform their change initiatives. Prof. Fox received his PhD in political science from the Massachusetts Institute of Technology in 1986. His research addresses the interaction between citizen participation, transparency, and accountability. His related books include Accountability Politics: Power and Voice in Rural Mexico (2007), Mexico's Right-to-Know Reforms: Civil Society Perspectives (co-edited, 2007), Demanding Accountability: Civil Society Claims and the World Bank Inspection Panel (co-edited, 2003), The Struggle for Accountability: Grassroots Movements, NGOs and the World Bank (co-edited, 1998), and The Politics of Food in Mexico: State Power and Social Mobilization (1992). He has held research fellowships from the Council on Foreign Relations, the Inter-American Foundation, and the Woodrow Wilson Center. He worked closely with the Open Government Partnership, serving as a founding member of the International Expert Committee that guides the Independent Reporting Mechanism.
He currently serves as an external advisor to the MacArthur Foundation on its support for transparency and accountability initiatives in Nigeria and as a member of the board of Fundar, a Mexican think tank that works on transparency and accountability. His recent publications include articles in World Development and IDS Bulletin, a background paper for the 2016 World Development Report, and a series of practitioner-oriented evidence reviews and strategy proposals commissioned by the Global Partnership for Social Accountability, the Transparency and Accountability Initiative, U4: Anti-Corruption Resource Center, and Making All Voices Count.

Matt Haikin is an ICTs for Development (ICT4D) practitioner, researcher, and evaluator currently working at Aptivate in the UK. His primary interest is the use of technology to enable communities to participate in and control their own development. Matt has undertaken development and research work for DFID and the government of Nigeria (managing the YouWin! platform) and for Oxfam ("Development is going digital"—primary research into ICT4D opportunities in Africa), and he led the team that developed the Guide to Evaluating Digital Citizen Engagement referred to in this book. He has a Master's degree in ICTs for Development from the University of Manchester.

Claudia Abreu Lopes is a Research Associate at the Centre of Governance and Human Rights of the University of Cambridge and Head of Research and Innovation at Africa's Voices Foundation. She has coordinated several small and big data projects in East Africa that harness the widespread use of mobile phones and social media to consult citizens on governance and public health outcomes. With over ten years of experience in public opinion research, she has contributed to research projects funded by the Economic and Social Research Council (ESRC), the UK's Department for International Development (DFID), the European Commission, and the Wellcome Trust. She has also undertaken consultancies in the fields of media for development and big data for policy with BBC Media Action, the World Bank, and UN Women.

Jonathan Mellon is a Research Fellow at Nuffield College, University of Oxford, working on the British Election Study. Jonathan is a senior data scientist with the World Bank's Digital Engagement Evaluation Team (DEET), works on the BBC's election night forecasts, and has worked as a data scientist with the OSCE. He was awarded his DPhil in Political Sociology from the University of Oxford. His research interests include electoral behavior, cross-national participation, tools for working with big data in social science, and social network analysis.

Fredrik M. Sjoberg is a data analyst and political scientist with extensive experience in the public and private sectors. He has been working for the World Bank's Digital Engagement Evaluation Team (DEET) since September 2013. Fredrik has also worked with the EU, UNDP, and the OSCE on governance and election-related issues. He specializes in advanced empirical methods, including experimental methods, statistics, and data visualization. Fredrik was awarded his PhD from Uppsala University's Department of Government. He also has an MPhil degree from the London School of Economics (LSE) and a BA from Helsinki University. In 2008–2009 Fredrik was a Fulbright Scholar at Harvard University. Dr. Sjoberg has held post-doctoral appointments at Columbia University and New York University.
Introduction

Civic Tech—New Solutions and Persisting Challenges

Tiago Peixoto, Governance Global Practice, World Bank
Micah L. Sifry, Co-founder, Civic Hall

This book comprises one study and three field evaluations of civic tech initiatives in developing countries. The study reviews evidence on the use of twenty-three information and communication technology (ICT) platforms designed to amplify citizen voices to improve service delivery. Focusing on empirical studies of initiatives in the Global South, the authors highlight both citizen uptake ("yelp") and the degree to which public service providers respond to expressions of citizen voice ("teeth"). The first evaluation looks at U-Report in Uganda, a mobile platform that runs weekly large-scale polls with young Ugandans on a number of issues, ranging from safety to access to education to inflation to early marriage. The following evaluation takes a closer look at MajiVoice, an initiative that allows Kenyan citizens to report, through multiple channels, complaints with regard to water services. The third evaluation examines the case of Rio Grande do Sul's participatory budgeting—the world's largest participatory budgeting system—which allows citizens to participate either online or offline in defining the state's yearly spending priorities.

While the initiatives examined vary in a number of respects, their common denominator is the use of technology to engage citizens in public policies and services. In each case, the authors are breaking new ground, as there are few benchmarks available for comparison in terms of understanding what kinds of public engagement methods produce what outcomes. While the comparative study has a clear focus on the dimension of government responsiveness, the evaluations examine civic technology initiatives using five distinct dimensions, or "lenses." The choice of these lenses is the result of an effort bringing together researchers and practitioners to develop an evaluation framework suitable to civic technology initiatives.1 Each of the lenses, presented below, is accompanied by a set of questions that are relevant for both the design and implementation of civic technology initiatives.

Objective: What are the goals of the initiative, and how well is the project designed to achieve those goals?
Control: Which actors exert the most influence over the initiative's design and implementation, and what are the implications of this?
Participation: Which individuals participate in the initiative, and to what extent is their participation in line with their needs and expectations?
Technology: How appropriate was the choice of the technology, and how well was the technology implemented?
Effects: What effects did the project have, and to what extent can this impact be attributed to technology?

The results (effects) of an initiative depend largely on thoughtful alignment of goals (objectives), the people who are designing and implementing the initiative (control), the technological choices that are made (technology), and the actual engagement of citizens (participation). While it should be recognized from the outset that these lenses are by no means exhaustive, we believe that they provide a useful starting point for those who seek to evaluate civic technology efforts.
Though at first glance these lenses may appear abstract, the reader will note how their consistent application highlights a number of important issues that could go unnoticed if one were using metrics solely tailored to the granular outcomes of each project.

Equally important to the questions asked by the five lenses are the methodologies the authors use to answer them. On that front, each of the field evaluations takes a multidisciplinary approach that navigates the trade-offs of any one strategy by executing a hybrid-methodology analysis. From traditional qualitative interviews to mobile-based surveys to the data analytics of systems, the following chapters present a wealth of ways in which data can be collected and analyzed. We hope the findings and methodologies that these researchers have marshaled to look at participatory projects will help to inform future research and improve the practices of those working with participatory civic technologies. To better achieve these goals, in the following sections we provide a snapshot of the findings from each of the chapters, discussing their potential lessons for researchers and practitioners in the civic technology space. We then conclude with a brief discussion of the two major challenges faced by the civic technology movement.

Chapter 1: When Does Civic Tech Lead to Government Responsiveness?

ICT platforms designed to amplify citizen voices in order to improve public service delivery have burgeoned in recent years. Yet little is known about the extent to which these platforms lead to actual response from governments. To start addressing this gap, the study in Chapter 1 focuses on twenty-three initiatives in the Global South, highlighting both citizen uptake ("yelp") and the degree to which public service providers respond to expressions of citizen voice ("teeth").

The authors start by providing a conceptual distinction between the two ways in which civic tech platforms can mediate the relationship between service providers and users. Upwards accountability occurs when users provide feedback directly to decision-makers in real time, allowing policy-makers and program managers to identify and address service delivery problems, but at their discretion. Downwards accountability, in contrast, occurs either through real-time user feedback or through less immediate forms of collective civic action that publicly call on service providers to become more accountable, and it depends less exclusively on decision-makers' discretion about whether or not to act on the information provided. This distinction between the ways in which ICT platforms mediate the relationship between citizens and service providers allows for a precise analytical focus on how different dimensions of such platforms contribute to public sector responsiveness.

Another contribution of the study is its examination of the unclear relationship between uptake—understood as the number of users or participants in a platform—and the responsiveness of public service providers. Much of the first generation of research on civic tech platforms has focused primarily on citizen uptake—which is, clearly, easier to document and assess than institutional responsiveness.
In a similar vein, civic tech practitioners often present uptake as an indicator of the success of a platform on its own, neglecting whether citizens' participation has leveraged the intended response from service providers.2 However, as the study shows, the relationship between uptake and responsiveness is far from straightforward. The authors document a number of cases with high uptake and low responsiveness, and vice versa.

The authors also examine nine other factors that are expected to have an effect on responsiveness from public service providers, such as disclosure of feedback, combination of online and offline action, and partnerships between civic tech platforms and public service providers. The presence or absence of any of the factors examined by the authors did not seem to determine the degree of responsiveness for each of the platforms, suggesting that none of the factors examined can be considered a "silver bullet" for civic tech platforms to engender responsiveness. Yet one factor stands out in the analysis: in all of the cases that present a high level of responsiveness, the government is either leading the process or playing the role of a partner. While the analysis also shows that the involvement of a government (or service providers) is not the only important condition, it may well be an enabling one. These findings, although preliminary, call for further reflection on which types of civic tech platforms may require government involvement—and to what extent—if service providers' responsiveness is one of the goals.

The authors summarize their findings by suggesting that while civic tech platforms appear to have been relevant in increasing service providers' capacity to respond, most of them have yet to influence their inclination to do so. Finally, the authors put forward six propositions for discussion. These propositions emerge as the starting point of a more focused conversation around the prospects and limits of civic tech as a means to engage citizens in the achievement of more inclusive and better public services.

Chapter 2: Crowdsourcing in Uganda: SMS for Listening at Scale?

When it comes to the potential of mobile phones to promote citizen participation, few initiatives in the developing world have attracted as much attention as U-Report in Uganda. Created in 2007 by UNICEF, U-Report is an SMS-based platform running weekly polls with registered users on a broad array of issues connected with UNICEF's agenda, ranging from attitudes towards women to access to polio vaccination.

To join U-Report and start answering the polls, mobile phone users send an SMS with the word "join" to a toll-free number. The results of the polls are widely disseminated through the project's website and through diverse mass media outlets, including newspaper articles and radio shows. As the primary policy audience of U-Report, UNICEF provides Members of Parliament (MPs) with a weekly digest of results and access to the platform in order to reach out to their audiences.

UNICEF describes U-Report as a "'killer app' for communication toward achieving equitable outcomes for children and their families." In an online video,3 a UNICEF Goodwill Ambassador and well-known sports personality invites young people, who are U-Report's primary target users, to join it: "If you want to share your opinions on issues in your community that matter to you, U-Report is the way to do it.
It helps amplify your voice and allows you to hold your government and leaders to account on the issues that matter to you."

The number of registered users, often referred to as "U-Reporters," has grown steadily since the platform's launch. With an impressive 300,000 users in Uganda, U-Report lives up to the expectations of using mobile phones to "listen at scale."4 But who are the U-Reporters? This question is particularly relevant given that, apart from a few notable exceptions, there is a dearth of data on who the users of civic technology solutions are. In the absence of that information, one can only speculate whether civic technology is helping those who need it most or, rather, is providing additional means for the privileged to make their voices heard even more. From a policy perspective, understanding the profile of participants is equally relevant: to what extent should data that is collected through self-selection inform policies? As such, one of the core objectives of the U-Report evaluation was to gain a better understanding of its user demographics in Uganda.

The evaluation finds that when it comes to the age of U-Reporters, the demographics are well aligned with UNICEF's objective of engaging Ugandan youth. The largest group of U-Reporters (42 percent) is between twenty and twenty-four years old, well above the proportion of this age group in the general population (14 percent). If, given U-Report's target audience, such a bias toward youth is a positive finding, other statistics are more unsettling from an inclusiveness perspective. As described by the authors in detail in Chapter 2, the data suggest that U-Reporters are substantially more likely to be male and from privileged backgrounds in terms of education and professional occupation.

As the authors discuss, it is worth noting that half of the respondents in a household survey stated that they did not know how to send SMS. The ability to send SMS was unevenly distributed in favor of male and more educated individuals. Further research would be needed to effectively assess the extent to which the biases in the profile of U-Report users are an expression of the barriers implicit in mobile text messaging or of other aspects of the methodology used by UNICEF.

It is important to underline, however, that U-Report was never intended to run representative (i.e., probabilistic) surveys. Furthermore, while the data may not be entirely representative of Ugandan youth or the population as a whole, one can still draw conclusions about the views and preferences of those who can and choose to express their voices. In other words, U-Report plays an important role in enabling a segment of Ugandan youth to express itself. Additionally, as with any mechanism based on self-selection, while U-Report may fail to accurately capture the diversity of voices or the extent of a problem, it appears to provide a cost-effective means to quickly survey evolving problems,5 as in the case of an Ebola outbreak in the country in 2012.
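The scale of this self-selection bias can be made concrete with a simple representation ratio: a group's share among platform users divided by its share in the reference population. The sketch below, in Python, is illustrative only; it uses the single pair of figures quoted above (42 percent of U-Reporters versus 14 percent of the population aged twenty to twenty-four), and the function name and data layout are our own, not part of the evaluation's code.

    def representation_ratio(user_share, population_share):
        """Share of a group among platform users divided by its share in
        the reference population (>1 means over-represented)."""
        return user_share / population_share

    # The only pair of figures quoted in the text; additional groups
    # (sex, education, occupation) would come from the chapter's survey data.
    shares = {"aged 20-24": (0.42, 0.14)}

    for group, (user_share, pop_share) in shares.items():
        ratio = representation_ratio(user_share, pop_share)
        print(f"{group}: {ratio:.1f}x their population share")  # -> 3.0x

Read this way, twenty- to twenty-four-year-olds are three times as prevalent among U-Reporters as in the general population—an intended skew in this case, but the same ratio applied to sex or education would quantify the unintended ones discussed above.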
Even so, the socio-demographic findings add to a growing body of evidence6 that should temper the enthusiasm of donors and civic technology practitioners about the potential of SMS as a low-cost solution for promoting inclusive participation in developing contexts.7 As we shall see in another case, the combination of technology with traditional channels of participation demonstrated, perhaps unsurprisingly, a greater potential to promote inclusiveness than the use of technology alone.

Bearing these considerations in mind, what difference does U-Report make when it comes to accountability? As the reader will notice, the evaluators found no compelling evidence that U-Report is a platform that helps Ugandans hold their governments or leaders to account. It is possible that U-Report generates second-order effects, with certain actors successfully using U-Report data in their processes of mobilization, representation, and bargaining. Yet no systematic evidence of this surfaced during the evaluation. As with many civic technology initiatives, the lack of a clear link between voice and response remains a challenge.

Aligning with some of the findings in Chapter 1, U-Report shows that uptake, in the sense of the degree to which citizens actually use digital platforms, does not necessarily translate into a response from governments and leaders. That realization does not deny the intrinsic value of expressing citizen voice as a socially valuable practice and learning experience. Indeed, as shown by the authors, U-Reporters still extract value from their participation.8 Nevertheless, U-Report joins the growing list of civic tech cases where voice may not be enough to generate a response from authorities.9

Yet the potential of U-Report should not be underestimated. The U-Report team has developed a unique expertise in mobilizing large-scale participation through mobile phones. The challenge remains in translating that participation into more tangible results for those who need them the most.

Chapter 3: Tech-based Citizen Reporting: Lessons from the Water Sector

One of the best-known initiatives in the civic tech space, FixMyStreet, is a web-based platform in the United Kingdom that enables citizens to submit reports about problems in their community, such as potholes, broken streetlights, and graffiti. Once a report is submitted, FixMyStreet automatically forwards it to the relevant local authority. Since its launch, FixMyStreet has attracted the attention of the international media, the development community, and scholars from numerous fields.

Indeed, few approaches in the field of civic tech have drawn as much attention as those offering citizens the capacity to report public service delivery problems. Given the frequent systematic failure of public service delivery in developing countries, it did not take long for similar initiatives to emerge in the Global South. FixMyStreet itself has been replicated in Malaysia and the Philippines and has inspired a number of other similar platforms, such as Vecino Inteligente in Chile, I Change My City in India, and Huduma in Kenya. Similar platforms have also been dedicated to specific types of service delivery. CheckMySchool in the Philippines, for instance, allows citizens to report problems in public schools, and MyVoice in Nigeria enables citizens to provide feedback on the quality of health services.
In developing countries, however, few sectors have been as receptive to the use of civic technologies as the water services sector. Examples include Next Drop in India, Human Sensor Web in Zanzibar, and Maji Matone in Tanzania. The common denominator among these initiatives is the use of technology to enable citizens to report on access to and quality of water services. For instance, through Human Sensor Web, an initiative supported by Google.org and UN-Habitat, citizens can report "no water" or "bad water" via SMS to the Zanzibar Water Authority, which—at least hypothetically—responds to these reports.

The extent to which these initiatives have produced results remains uncertain. A recent review10 of civic tech initiatives in the water sector found that the majority of cases failed to produce substantive water service improvements. With respect to Human Sensor Web, as noted in the study, "successful reporting did not take place using ICTs (very few text messages were sent), successful processing of the reports did not take place and service improvements were not made based on the reports."

While the overall picture is grim, a few successes offer valuable insights for those who want to leverage technology's potential for improving public services. The case of MajiVoice is one of them. Officially launched in Kenya in 2014, MajiVoice is an integrated solution that facilitates the submission and handling of complaints by water services customers. Beyond the traditional walk-in centers, MajiVoice enables customers to report problems via telephone hotlines, SMS, social media, and a dedicated online platform. Once reports are submitted, a web-based task management solution helps water providers process and handle the complaints received, following clearly defined workflows. Customers, in turn, can track the status of their reports via a unique identifier number and are notified once their issue is resolved.

As shown in the evaluation, the results achieved by MajiVoice are by no means negligible, and a few numbers are worth highlighting. Since its implementation, the number of complaints recorded has risen by a factor of ten, from 400 reports per month to 4,000. Resolution rates have increased from 46 percent to 94 percent, and average resolution time has been cut in half. While these results speak for themselves, the reasons behind MajiVoice's success offer two valuable insights.

The first refers to an obvious yet underestimated fact in the civic tech space: for any technological innovation that enables citizens to report problems, there must be corresponding non-technological structures that ensure responsiveness to these reports. As shown in Chapter 3, part of MajiVoice's success can only be understood in light of the Kenyan regulatory framework and the role played by the governmental oversight agency in holding water service companies accountable for the services that they provide.

The second insight refers to the role that technology actually plays in citizen reporting solutions. The most attractive component of MajiVoice is that it offers water service users multiple channels to submit reports, such as web and SMS. Yet, as the evaluation shows, only a minority of reports are submitted through technological channels: the overwhelming majority arrive via traditional means.
Of the remaining individuals (3 percent) who used other means (e.g., SMS, e-mail) to report their issues, most (77 percent) declared they would have complained in another way had the channel they used not been available. In other words, as it stands, the impact of technology on MajiVoice's uptake is marginal at best.

If so, then what is the actual impact of technology on MajiVoice? As the report shows, one essential reason for MajiVoice's improved performance is the web-based task management solution, which improves the capacity of water service providers to process and handle complaints while reinforcing the capacity of the government to monitor the performance of these same providers.

This finding challenges conventional wisdom with respect to the role of technology in citizen engagement. Much of the enthusiasm stems from technology's potential to lower the barriers to participation, rendering actions such as voting and interacting with governments easier and more convenient. Indeed, the rationale that underpins an ever-growing number of civic tech solutions is precisely an attempt to reduce the transaction costs incurred by citizens. Yet, as illustrated by the authors, the effect of these costs may sometimes be overestimated. In the case of MajiVoice, the main role played by technology is to provide an internal management solution that facilitates the handling and monitoring of complaints, most of which are received through traditional offline channels. This seemingly trivial finding calls for a more nuanced view of the potential uses of technology between citizens and their governments or service providers. In some cases, the issue of transaction costs may be overestimated, and offering alternative channels of participation may not be the most—or the only—effective way to use technology.
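To picture the internal management layer that these findings point to, the Python sketch below models a complaint ticket with a unique identifier, a reporting channel, and timestamps, and derives the two headline indicators cited earlier (resolution rate and average resolution time). It is a minimal illustration of the general technique, not MajiVoice's actual data model; the record layout, field names, and sample records are hypothetical.

    from dataclasses import dataclass
    from datetime import datetime
    from typing import List, Optional

    @dataclass
    class Complaint:
        ticket_id: str                       # unique identifier given to the customer
        channel: str                         # "walk-in", "hotline", "sms", "web", ...
        opened: datetime
        resolved: Optional[datetime] = None  # None while the complaint is open

    def resolution_rate(tickets: List[Complaint]) -> float:
        """Share of complaints that have been resolved."""
        return sum(t.resolved is not None for t in tickets) / len(tickets)

    def average_resolution_days(tickets: List[Complaint]) -> float:
        """Mean days from opening to resolution, over resolved complaints."""
        closed = [t for t in tickets if t.resolved is not None]
        return sum((t.resolved - t.opened).days for t in closed) / len(closed)

    # Illustrative records only.
    tickets = [
        Complaint("WSP-0001", "walk-in", datetime(2014, 3, 1), datetime(2014, 3, 6)),
        Complaint("WSP-0002", "sms", datetime(2014, 3, 2), datetime(2014, 3, 5)),
        Complaint("WSP-0003", "hotline", datetime(2014, 3, 3)),  # still open
    ]
    print(f"resolution rate: {resolution_rate(tickets):.0%}")                        # 67%
    print(f"average resolution time: {average_resolution_days(tickets):.1f} days")  # 4.0

The point of the sketch is that every indicator here is computed from the back-office record, regardless of whether a complaint arrived by SMS or at a walk-in center—which is consistent with the chapter's finding that the technology's main contribution lies in processing and oversight rather than in the reporting channels themselves.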
Chapter 4: Internet Voting in Participatory Budgeting

Originating in the Brazilian city of Porto Alegre in 1989, participatory budgeting (PB) refers to the participation of citizens in the decision-making process of budget allocation and in the monitoring of public spending. Experts estimate that up to 2,500 local governments around the world have experimented with PB, from major cities such as New York, Paris, Seville, and Lima, to small and medium cities in countries as diverse as Poland, South Korea, India, Bangladesh, and the Democratic Republic of Congo. Internationally praised as a good governance practice, the implementation of PB has been associated with desirable outcomes such as increases in tax revenue, improved service delivery, and reduced infant mortality.

PB has also been a source of innovation in the use of technology as a means to promote transparency and participation.11 Over two decades ago, the city of Porto Alegre started to use the Internet as a means to facilitate citizen monitoring of its budget. In 1997, the medium-sized Brazilian city of Ipatinga started to provide online geo-referenced information about its budgetary allocation and the status of public works. It is noteworthy that both initiatives anticipated practices that would be popularized years later: using the Internet to foster budget transparency and the mapping of government spending.

In 2001, ICT was adopted within participatory budgeting itself, with the goal of increasing citizen participation: the municipalities of Ipatinga and Porto Alegre enabled their citizens to submit their demands for budget allocation via the Internet. Although embryonic, these initiatives are the origin of an entire new field of digitally enabled participation. Since then, the use of ICT to facilitate participatory budgeting processes has gone beyond the Brazilian context, offering a wealth of innovative practices for civic tech researchers and practitioners.

In this respect, the evaluation in Chapter 4 is important for three reasons. First, it is an early attempt to extensively document the world's largest PB project in terms of number of participants and geographical coverage, which takes place in the Brazilian state of Rio Grande do Sul. It is the first detailed account of a process in which citizens decide on part of the spending priorities of the state through a process that combines both offline and online voting (i-voting). Second, despite prior attempts to assess the effects of i-voting in PB processes, this is the first study to collect data from both offline and online voters. As the reader will notice, the evaluation employs a number of approaches, including both online surveys and exit polls, to capture the impact of technology on the profile of participants as well as on the results of the voting process. Finally, while the majority of i-voting studies have focused on the United States and Europe, this study looks at an experience from a middle-income country, Brazil.

The participation level in Rio Grande do Sul is, in itself, impressive. In 2014, over 1.3 million people took part in the process, corresponding to 15 percent of the state's voting-age population. This makes Rio Grande do Sul's PB project one of the largest participatory governance processes supported by digital technology. Though, as discussed earlier, uptake does not necessarily translate into response, in the case of PB all of the highest-voted spending proposals are automatically included in the state's official budget.

But what are the effects of technology on the process? The evaluation shows that the introduction of i-voting does bring new participants to the process, with nearly two-thirds of online voters stating that they would not have taken part in the vote if i-voting were not available. This evidence supports the view that technology increases participation among individuals who would not have participated otherwise, with an estimated 12 percent increase in overall turnout.12 Parallel to this, however, the study shows that introducing i-voting does not lead to a substitution effect, meaning that, for the most part, those who voted offline continue to do so despite the introduction of i-voting.

As discussed by the authors, while the overall introduction of technology generates an increase in turnout, online voters, when compared to offline voters, are substantially more likely to be male and from privileged socio-economic backgrounds. As the evaluation shows, online and offline voters may differ in their preferences and, in some cases, online votes do change the final selection of spending priorities. However, the extent to which online voting might affect PB's goals of social justice and pro-poor spending remains an empirical question, and only further research can answer it. This is particularly true because, as in many other PB cases, Rio Grande do Sul's PB design traditionally follows a redistributive logic that precedes the voting stage, pre-allocating budgets that prioritize poorer geographic areas and investments that favor less privileged sections of society.
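The turnout figures above can also be reconciled with a short back-of-envelope calculation. The sketch below takes the numbers quoted in this section (1.3 million participants, roughly two-thirds of online voters being genuinely new, and the estimated 12 percent increase in overall turnout) and infers the online share of the vote they jointly imply. That share is our inference under these assumptions, not a figure reported in the chapter.

    # Known from the chapter: ~1.3 million participants in 2014; roughly
    # two-thirds of online voters said they would not have taken part
    # otherwise; the authors estimate a 12 percent increase in overall
    # turnout. The online share of the vote is NOT given in the text --
    # it is inferred below from the other figures.

    total_voters = 1_300_000    # 2014 participants (15% of voting-age population)
    new_voter_fraction = 2 / 3  # online voters who would not have voted offline
    turnout_increase = 0.12     # authors' estimated increase in overall turnout

    # If s is the online share of all voters and f the fraction of online
    # voters who are genuinely new, the relative increase over the
    # counterfactual (no i-voting) turnout is s*f / (1 - s*f). Inverting:
    sf = turnout_increase / (1 + turnout_increase)
    implied_online_share = sf / new_voter_fraction

    print(f"implied online share of voters: {implied_online_share:.1%}")  # ~16.1%
    print(f"implied genuinely new voters:   {sf * total_voters:,.0f}")    # ~139,286

In other words, the three quoted figures are mutually consistent if roughly one in six votes was cast online—about 139,000 of the 1.3 million participants being people who would otherwise have stayed away.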
From a democratic standpoint, the introduction of i-voting does promote inclusiveness, increasing the diversity of participants and allowing the engagement of citizens who would otherwise not have participated. Taking this increased diversity as a starting point, one could hypothesize that the introduction of i-voting also leverages the collective intelligence of the process as a whole. A growing literature in the fields of decision-making and epistemic democracy suggests that as the diversity of participants increases, so does the quality of decision-making. In this respect, a sound hypothesis is that the combination of online and offline channels leverages the collective intelligence of Rio Grande do Sul's PB. In other words, the more cognitive tools, perspectives, heuristics, and knowledge inform the voting process, the more likely it is that voters will make superior choices. Testing that hypothesis was, unfortunately, beyond the scope of the evaluation, and only further research can address it.

Some more practical lessons are also worth highlighting. Among seasoned PB practitioners, a common concern with the introduction of i-voting is the risk of fraudulent behavior. However, as the evaluation shows, the largest security concerns relate to in-person offline voting. In comparative terms, the online voting process is significantly less vulnerable to fraud and manipulation. These findings have already prompted changes in the offline voting process, with the state of Rio Grande do Sul undertaking a series of new measures to reinforce its security.

The evaluation also underlines the importance of outreach and communication efforts to mobilize participants. While the uptake of Rio Grande do Sul's PB is high by any standard, the evaluation shows that the majority of those in the state who do not take part in the process abstain precisely because they are not aware of it. For civic tech initiatives where popular mobilization is an important intermediary result, the case of Rio Grande do Sul also highlights the need to understand why citizens do not participate. In other words, it is as important to understand who the participants in a certain type of initiative are as it is to know who the non-participants are and what their reasons are for not participating.

For enthusiasts of participatory budgeting, the evaluation in Chapter 4 is an opportunity to analyze the world's largest program, providing a unique perspective on the achievements and challenges of Rio Grande do Sul's PB. For civic technology researchers and practitioners, the evaluation offers a fascinating account of one of the largest experiences of online voting in the Global South.

Civic Technology: Disruptive Innovation in the Global South?

From a historical perspective, each period of innovation in communication technologies has been followed by enthusiasm over its potential to enhance civic participation.
For instance, French intellectuals in the eighteenth century saw the Napoleonic tele- graph as a way of establishing a new participatory democracy that Rousseau could not have anticipated. In a similar vein, two centuries later, in the 1970s and 1980s, technology scholars and enthusiasts saw in the emergence of cable TV an opportunity to CIVIC TECH—NEW SOLUTIONS AND PERSISTING CHALLENGES 45 fundamentally reinvent democracy, a “teledemocracy” era that would move institutions towards a more deliberative and direct model of citizen engagement. Repeatedly, these renewal ideals failed to meet the expectations of their time. Since the 1990s, the popularization of the Internet and other digital technologies has raised new hopes under different names such as e-democracy, e-governance, e-participation and, more recently, civic technology. Broadly defined as “the use of tech- nology for the public good” or, more specifically, “any technology that is used to empower citizens or help make government more accessible, efficient, and effective,” civic tech has been described as “the next big thing,” and is often associated with seductive adjectives such as “disruptive,” “transformational,” and “revo- lutionary.” Together with the historical evidence, the contribu- tions in this book invite the reader to consider a more nuanced perspective. While civic technology may enhance participation in some cases, it is far from altering two fundamental issues when it comes to citizen engagement. The first issue relates to unequal participa- tion, democracy’s “unresolved dilemma” in the words of political scientist Arend Lijphart. The findings in this book show that civic tech is not immune to inequalities in participation. This does not mean that users of civic technology platforms should perfectly mirror the socio-demographic traits of the larger populations from which they come (a standard that no modern democratic institution meets). However, and particularly in the Global South, it is important to consider whether civic tech facilitates the par- ticipation of individuals who are traditionally excluded or if—to the contrary—it further empowers the already empowered. 46  I N T R O D U C T I O N Technological access is unevenly distributed in society in both developed and developing countries, directly impacting the profile and participation of civic tech users. In addition, as shown by one of the studies, access to a certain technology (mobile phones) does not translate into the capability to use the features of that technol- ogy (text messages). Overall, the findings in this book suggest that in a number of cases the combination of technology with tradi- tional channels of engagement remains essential. Beyond matters of technological access and capabilities, the participatory design of civic tech initiatives may also be critical in achieving greater inclu- sion and diversity of participants. To date, civic tech initiatives have overwhelmingly relied on voluntary, self-appointed models of par- ticipation. They have lagged behind recent participatory innova- tions that have adopted more sophisticated models that combine proactive outreach with different methods of participant selection, such as random selection and stratification of participants. Further exploring the combination of online and offline participation, as well as alternative participatory designs, remain essential steps for civic tech if inclusiveness is a value to be pursued. The second issue relates to government responsiveness. 
While creating avenues for participation has never been so easy, responding is as hard as ever. On the one hand, civic tech dramatically lowers the costs for governments and third parties to establish channels for citizens to project their voices and express their needs. On the other hand, in most cases, the levels of willingness, capability, and resources available for governments to provide a meaningful response remain, at best, the same. This generates a voice-responsiveness deficit that cannot be narrowed by the mere creation of more civic technologies.

Addressing this imbalance between the costs associated with listening and responding requires a finer understanding of the mechanisms that drive government responsiveness in the first place. When it comes to civic tech, very little is known about these mechanisms. For example, consider the case of third-party platforms such as FixMyStreet in the UK and IChangeMyCity in India, where citizens publicly report problems to the authorities. Assuming governments take action, what prompts them to do so? So far, there is virtually no knowledge about whether it is government access to decentralized information about problems, or the publicizing of government responses, or both together, that affects responsiveness. While asking these questions may seem like a mere academic exercise, the answers could provide valuable insights into how to design these platforms to best leverage government responsiveness. Government responsiveness would likely still depend on a number of other factors, such as the existence of performance management mechanisms, the accountability institutions in place, the patterns of relationships with government, and the electoral incentives of politicians to respond, all of which will require a more in-depth understanding if the voice-responsiveness deficit is to be addressed.

To conclude, the challenges of inclusiveness and government responsiveness are not exclusive to civic technology and are certainly not new. Rather, they are the backdrop against which institutions and democracy have evolved throughout history. Whether civic technology makes a difference or not will ultimately depend on the extent to which it addresses these challenges as they are manifested today.

REFERENCES

Gilman, H. R. (2016). Participatory Budgeting and Civic Tech: The Revival of Citizen Engagement. Washington, DC: Georgetown University Press.

Spada, P., Mellon, J., Peixoto, T., & Sjoberg, F. M. (2016). Effects of the Internet on participation: Study of a public policy referendum in Brazil. Journal of Information Technology & Politics, 13(3).

Welle, K., Williams, J., Pearce, J., & Befani, B. (2015). Testing the Waters: A Qualitative Comparative Analysis of the Factors Affecting Success in Rendering Water Services Sustainable Based on ICT Reporting. Brighton: Making All Voices Count.

World Bank Group (2016). Evaluating Digital Citizen Engagement: A Practical Guide. Washington, DC: World Bank.

Chapter 1

When Does ICT-Enabled Citizen Voice Lead to Government Responsiveness?
Tiago Peixoto
Governance Global Practice, World Bank

Jonathan Fox
American University, Washington, DC

INTRODUCTION

Around the world, civil society organizations (CSOs) and governments are experimenting with information communication technology (ICT) platforms that try to encourage and amplify citizen voices, with the goal of improving public service delivery. This meta-analysis focuses on empirical studies of initiatives in the Global South, highlighting both citizen uptake ("yelp") and the degree to which public service providers respond to expressions of citizen voice ("teeth"). The conceptual framework is informed by a key distinction between two genres of ICT-enabled citizen voice: aggregated individual assessments of service provision and collective civic action. The first approach constitutes user feedback, providing precise information in real time to decision-makers. This allows policymakers and program managers to identify and address service delivery problems—but at their discretion. Collective civic action, in contrast, can encourage service providers to become more publicly accountable—an approach that depends less exclusively on decision-makers' discretion about whether or not to act on the information embodied in feedback. This conceptual distinction between two different ways in which ICT platforms mediate the citizen–service provider relationship allows for a more precise analytical focus on how different dimensions of these ICT platforms contribute to public sector responsiveness.

This study begins with a conceptual framework intended to clarify the different links in the causal chain between ICT-enabled opportunities to express voice (platforms) and institutional responses. In other words, how and why are these platforms supposed to leverage responses from service providers? The answers turn out not to be so obvious. Our approach was informed by a close review of the available evidence, primarily quantitative, about experiences with twenty-three ICT platforms in seventeen countries.13 This focus on unpacking causal chains is informed by two factors. First, the broader literature on the drivers of accountability increasingly emphasizes using causal chains to address the analytical puzzle of how to distinguish how and why citizen action may or may not lead to public sector response (Fox 2014; Grandvoinnet et al. 2015; Joshi 2014; Peixoto 2013). Second, analysis revealed that we do not see a generic type of platform leading to a generic type of response. Instead, we see key differences in the institutional (not technological) design of the interface that may be relevant for voice, citizen action, and institutional response. The evidence so far indicates that most of the ICT platforms that manage to leverage responsiveness somehow directly involve government.

While ICT-enabled voice platforms vary widely across many dimensions, this analysis emphasizes several differences that hypothetically influence both citizen uptake and institutional response. These include the degree of public access to information about the expression of voice—does the public see what the public says? Does the ICT platform document and disclose how the public sector responds? They also include institutional mechanisms for public sector response—do the agencies or organizations take specific offline actions to prompt service providers' response?
As a first step toward homing in on these variables, this paper maps the twenty-three platforms studied in terms of various empirical indicators of these distinct dynamics. This exercise is followed by a discussion of propositions that may or may not link voice to institutional response.

Note that this study does not focus on two ways in which service delivery agencies use ICT that are very relevant for understanding their full array of relationships with users. First, many public agencies are using mobile phones and social media to disseminate information efficiently. However, if those interfaces are one-way ("inside-out," or "top-down"), then they do not "count" as ICT-enabled citizen voice for the purposes of this study. Second, agencies can use ICT for internal administrative reforms that can bolster their capacity to respond to citizen concerns: by reducing the discretionary power of front-line providers, by increasing the capacity of managers to monitor service provider performance, and by helping to consistently track whether and how problems are being addressed. This study covers evidence of institutional response to ICT-enabled systems for users to exercise voice, rather than the broader set of relevant e-government initiatives.

CONCEPTUAL MAP: UNPACKING DIGITAL ENGAGEMENT

The broader analytical context for this paper involves three simultaneous trends in the literature on the role of information in leveraging public accountability. First, the number and diversity of practitioner-led digital engagement initiatives for service delivery continue to grow, involving both effervescent experimentation and efforts to scale up. Experimentation with social accountability tools has been growing within the portfolios of both large public and private aid donors for the past decade, and some of these tools involve ICT. For instance, many World Bank projects with "identifiable beneficiaries" now include some kind of feedback mechanism, and citizen engagement has become a policy framework that includes the use of ICT (World Bank, 2014a). Major private donors, such as the Omidyar Network and Google, are also making significant investments to encourage "civic technology"—in both the global North and South. New donor partnerships are also encouraging experimentation with civic technology in very low-income countries, led most notably by Making All Voices Count.14

Second, while growing media coverage of ICT-enabled voice platforms is often enthusiastic, social science research on the dynamics and impacts of these initiatives lags far behind, and the limited existing evidence does not yet support unqualified optimism.15 This study is distinctive in that it draws on a recent round of unusually comprehensive empirical studies that involve both large-scale surveys and access to government agency data. This new research suggests that the key dynamics that drive both voice and institutional response may be different from some of the popular impressions projected by the media, donors, and platform developers. Take, for example, the Kenyan urban water agency's MajiVoice, a large-scale user feedback system widely presented as an ICT-enabled voice platform.
Recent surveys find significant evidence of institutional response, grounded in an effective complaint tracking system—yet three-quarters of the complaints are filed in person, 21 percent by phone, and less than 3 percent by SMS or online (Belcher and Lopes, 2016).

Third, the focus on the potential for citizen voice to improve public service delivery involves at least four distinct yet overlapping arenas of practice: the open data movement, open government reforms, anti-corruption efforts, and social accountability initiatives. In spite of the apparent new policy consensus that all these good things go together, in practice, the limited synergy between these distinct approaches suggests that the whole is still not greater than the sum of the parts (Carothers and Brechenmacher, 2014). Most of these governance reform approaches rely heavily on the potential power of information to stimulate voice, yet they assign information different roles.

There are several conceptual challenges involved in specifying the causal mechanisms that may link voice and institutional response, aside from the empirical questions involved (documenting uptake is more straightforward than documenting institutional response). The first analytical challenge is to disentangle voice from responsiveness. Much of the first wave of research on ICT-enabled voice platforms focuses primarily on citizen uptake (e.g., Gigler and Bailur, 2014), without clear evidence that the feedback loop actually closes. In practice, the concept of the feedback loop is often used to imply that uptake (e.g., citizen usage of crowd-sourced platforms to report feedback) necessarily leads to positive institutional responses. In other words, there is a high degree of optimism embedded in the way the concept tends to be used. In contrast, the framework proposed here avoids this assumption by treating the degree of institutional response as an open question.

The second conceptual challenge is to specify the relationship between the role of ICT-enabled voice platforms and the broader question of the relationship between transparency and accountability. In spite of the pervasive view that "sunshine is the best disinfectant," the empirical literature on the relationship between transparency and accountability is far from clear (Fox, 2007; Gaventa and McGee, 2013; Peixoto, 2013). The assumed causal mechanism is that transparency will inform and stimulate collective action, which in turn will provoke an appropriate institutional response (Brockmyer and Fox, 2015; Fox, 2014).16 In this model, both analysts and practitioners have only just begun to spell out the process behind that collective action (Fung, Graham, and Weil, 2007; Joshi, 2014; Lieberman, Posner, and Tsai, 2014). In light of widely held unrealistic expectations about the "power of sunshine," convincing propositions about the causal mechanisms involved need to specify (a) how and why the availability of an ICT platform would motivate citizen action, and (b) why the resulting user feedback would motivate improvements in service provision. After all, decision-makers' lack of information about problems is not the only cause of low-quality service provision.

Third, the relationship between ICT-enabled voice platforms and the transparency/accountability question is complicated by the fact that, in practice, a significant subset of those platforms does not publicly disclose the user feedback.
Yet if citizen voice is not made visible to other citizens, where does its leverage come from? Such feedback systems aggregate data by asking citizens to share their assessments of service provision, but if the resulting information is not made public, then it cannot inform citizen action. In these systems, if users' input is going to influence service provision, voice must activate "teeth" through a process other than public transparency—such as the use of data dashboards that inform senior managers' discretionary application of administrative discipline.

These conceptual propositions suggest that it is relevant to distinguish explicitly between two different accountability pathways that link voice and "teeth"—shorthand for institutional willingness and capacity to respond (Fox, 2014). In downwards accountability relationships, service providers are held accountable by citizen voice and action. The arrow of answerability points downwards, insofar as it is driven by the potential political cost to policymakers of not responding to a publicly visible concern. In contrast, in upwards accountability relationships, front-line and middle-level service providers are held accountable to senior policymakers and program managers, who apply the user information to take administrative action. The arrow of answerability points upwards. In this approach, the incentives for policymakers to act on user information are less clear. Clearly, both mechanisms can operate together, but they are empirically and analytically distinct (see Table 1).

Table 1. How does voice trigger teeth? Upwards and downwards accountability

Voice pathway: Individual user feedback
Primary causal mechanism: Upwards accountability, from frontline service providers to managers and policymakers, by identifying problems and triggering administrative action

Voice pathway: Collective civic action
Primary causal mechanism: Downwards accountability, from the public sector to society, by bringing external pressure to bear and raising the political cost of non-responsiveness

Based on these conceptual propositions, this review of twenty-three ICT-enabled voice platforms distinguishes between two different types of citizen voice, "user feedback" and "civic action." While these two approaches can overlap in practice, they are analytically distinct. Their common denominator is the use of dedicated ICT platforms to solicit and collect feedback on public service delivery. The differences between them involve three dimensions: i) whether the feedback provided is disclosed; ii) whether citizens' preferences and views are expressed through an individual or a collective pathway; and iii) whether these mechanisms tend to promote downwards or upwards accountability. Note that this analytical approach differs from the World Bank's current policy framework, which considers user feedback to be a variant of "citizen engagement" (World Bank, 2014a). The approach proposed here, in contrast, does not treat the adjectives "citizen" and "civic" as pure synonyms (though they overlap). We use citizen (as in "citizen voice") to refer to individual, non-public actions, while civic refers to public, collective actions. The two approaches are, potentially, mutually reinforcing. In practice, some voice platforms combine them (see Figure 1).

Figure 1. Unpacking user feedback and civic action: Difference and overlap
With regard to the first dimension, we assess cases in terms of the extent to which the feedback provided by individuals is publicly disclosed or not, thus enabling citizens to act to hold governments accountable. Citizens' capacity to hold governments accountable depends, among other things, on the accessibility of publicly available, relevant, and actionable information (Fung, Graham, and Weil, 2007). In this respect, whether or not the feedback provided by citizens on service delivery is publicized is directly related to the extent to which citizens can hold governments accountable for their performance and actions. Thus, the first distinction between user feedback and civic engagement is that, while a growing number of ICT platforms collect input from individuals, only user feedback that is made public counts here as civic engagement (in Figure 1, this is the area of overlap between the two circles, involving both individual feedback and public disclosure).

For instance, in the case of the Punjab Proactive Governance model, the government solicits feedback via mobile phones on the quality of services provided, on a large scale and on an ongoing basis (Bhatti, Zall Kusek, and Verheijen, 2014). However, the feedback provided is not disclosed to the public, only to senior policymakers, as it is intended to inform internal administrative monitoring processes. This process does not contribute to citizens' ability to act based on the feedback. In contrast, Uruguay's PorMiBarrio is a mobile and web-based platform that enables Montevideo's citizens to report problems like vandalism and breakdowns of public infrastructure. The problems reported, and the actions taken in response by government (e.g., repaired or not), are displayed on a map on the public website. Not only is the government able to act on citizen reports; the publication of the feedback also makes it possible for citizens to hold governments accountable.

The second dimension that we use to categorize platforms assesses the mechanisms by which citizens' views and preferences are expressed, either individually or collectively. Individualized mechanisms are those that do not involve collective action: the feedback provided by a single individual is expected to trigger a response, possibly through aggregation in order to identify problem areas in public service delivery. This is the case, for instance, of web-based citizen reporting initiatives such as PorMiBarrio, FixMyStreet in Georgia, and I Paid a Bribe in India. In these cases, each individual report of a very specific service issue needing attention is assumed to be enough to lead to a governmental response. In contrast, collective mechanisms are those in which it is the magnitude, nature, and intensity of the aggregation of citizen concerns that is expected to trigger governmental action. Examples of platforms for collective voice include online petitions such as Change.org and mobile and web voting in Brazil's state-wide Rio Grande do Sul Participatory Budgeting (PB) process. In both initiatives, it is the collective mobilization around a cause or preference that is intended to trigger government responsiveness.
The core contribution of the technological platforms that support these mechanisms lies in reducing the transaction costs of collective action aimed at policy agenda-setting, in contrast to platforms that react to policy outputs. This collective dimension, we argue, is what gives the character of "civic-ness" to ICT-enabled voice platforms, insofar as they enable individuals to engage in collective action—or at least to address public concerns. In contrast to feedback systems that receive individual reactions to specific service delivery problems, ICT platforms that enable the public aggregation of citizens' views have more potential to constitute input into the setting of broader policy priorities. This potential civic agenda-setting contribution goes beyond the conventional understanding of feedback, in which the agendas to which citizens are supposed to respond are set from above (see Box 1).

Box 1. Whose voices are they?

Whose voices are expressing themselves on ICT-enabled governmental service delivery feedback platforms? What kinds of bias may be involved? ICT platforms can potentially select for some kinds of responses over others. This can happen in at least two distinct ways: differential access to communication of feedback, and categorization of user input that pre-selects for certain categories.

First, the subset of citizens who engage with ICT systems may or may not represent the concerns of those citizens who lack ICT access, such as rural women or people without access to formal education. This is the case with U-Report's "U-Reporters," one quarter of whom are government employees (Mellon et al. 2015), and who under-represent the low-income, rural citizens who are most in need of public services. Indeed, the whole notion of user feedback suggests that the target group is limited to those citizens who ostensibly should have access but who have problems in practice, such as those who have a water connection but lack water. This implicit framing excludes those who are not included in water systems, clinics, schools, or public security in the first place—and who are therefore not considered "users."

Second, as citizen concerns are input into government agency data systems for aggregation and transmission upwards to senior managers, administrative legibility requires them to be sorted into lists of preexisting categories, which may also select for some kinds of citizen priorities to the exclusion of others—as in the case of issues that are priorities for low-income urban women, as Ranganathan found in her study of e-redressal systems in Karnataka (2012).

To sum up, the framing of the main questions addressed in this study—whether or not ICT service delivery feedback platforms lead to uptake, and whether or not such voice in turn leads to service delivery response—does not address two relevant questions: whose voice is projected, and how inclusive the feedback agenda is.

Thus, our conceptual distinction can be summarized as follows: citizen feedback initiatives provide feedback from individual clients of services. Where such feedback is not publicly disclosed, the causal pathway to governmental response is via upwards accountability, from frontline and mid-level public servants to senior managers and policymakers. Conversely, civic engagement refers to mechanisms where the feedback is publicly disclosed, which allows for collective action and downwards accountability to also take place.
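To make this typology concrete, the following minimal sketch (in Python; the field and function names are ours and purely illustrative, not part of this study's coding materials) classifies a platform along the two dimensions just described and returns the dominant accountability pathway implied by the framework:

    from dataclasses import dataclass

    @dataclass
    class Platform:
        # Hypothetical record for illustration only.
        name: str
        feedback_disclosed: bool  # dimension i): is citizen input made public?
        collective_voice: bool    # dimension ii): individual reports or collective action?

    def categorize(p: Platform) -> tuple[str, str]:
        # Returns (category, dominant accountability pathway) under the framework above.
        if p.collective_voice and p.feedback_disclosed:
            return ("civic action", "downwards")
        if not p.collective_voice and p.feedback_disclosed:
            # Individually specific feedback that is publicly disclosed: the overlap zone.
            return ("citizen engagement", "upwards and downwards")
        if not p.collective_voice and not p.feedback_disclosed:
            return ("user feedback", "upwards")
        # Collective but undisclosed voice falls outside the model, since
        # collective action presupposes that the feedback is visible.
        raise ValueError("combination not covered by the conceptual model")

    # Illustrative codings drawn from cases discussed in the text:
    print(categorize(Platform("Punjab Proactive", False, False)))  # ('user feedback', 'upwards')
    print(categorize(Platform("PorMiBarrio", True, False)))        # ('citizen engagement', 'upwards and downwards')
    print(categorize(Platform("Digital PB (RS)", True, True)))     # ('civic action', 'downwards')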
Figure 1 illustrates our conceptual model. On the left side of Figure 1, feedback is individual and undisclosed, which we can describe as the typical case of governmental user feedback platforms. On the right side, citizen voice is simultaneously collective and disclosed, meeting the two criteria of our definition of civic engagement. At the intersection, however, we find platforms that both collect individually specific feedback and make those inputs public (sometimes also reporting whether and how the government responds). This overlap reflects the fact that, while individualized feedback mechanisms are not designed to spur online collective action within the platform itself, publicized feedback may nevertheless inform and facilitate collective action—offline as well as online. This may be the case, for instance, when the sum of individual feedback on a certain platform, such as FixMyStreet, reveals to the public patterns of failure in a certain service or in certain locations. In this case, even though the platform is not specifically designed to support collective action, the disclosure of evidence of patterns of failure in a given service may support well-targeted collective action to address service delivery problems.

Figure 2 shows the same diagram populated with the cases we analyze in this study. The platforms that generated a high degree of tangible response from the service delivery agencies are indicated (seven of twenty-three). High responsiveness to citizen voice is measured here as tangible service delivery agency action, registered in more than half of cases. In eight cases, user uptake was high—though only three of these cases were also among the seven cases of high responsiveness.

Figure 2. Mapping citizen voice platforms and degrees of institutional responsiveness (the twenty-three platforms plotted by category, with shading indicating high, medium, or low institutional responsiveness)

As shown in Figure 2, approximately a quarter of the cases fall in the user feedback category, another quarter in the civic action category, and fourteen of twenty-three at the intersection between the two, called citizen engagement here. The cases in the user feedback category are mostly web- and mobile-based systems for collecting citizen views on the provision of services in a specific sector, such as electricity, water, and health. Here the service provider plays either a passive or an active role in the collection of feedback. In the first role, the citizen voluntarily initiates contact to report an issue with public services via mobile- or web-based systems—sometimes in combination with offline, face-to-face citizen attention windows (as in the case of MajiVoice in Kenya). One large-scale example in this category is LAPOR, Indonesia's complaint handling system, which allows citizens to submit reports on issues ranging from teacher absenteeism to damaged roads through a number of channels, including SMS, mobile apps, and social media.
The user feedback category also includes a second mechanism by which data is collected, which we call "proactive listening"—also called "proactive feedback" by its practitioners (Bhatti, Zall Kusek, and Verheijen, 2015; Masud, 2015). Here, government service providers proactively reach out to citizens in order to gather feedback on the quality of services received. This mechanism is best illustrated by Punjab's Citizen Feedback Model, where a system generates SMS messages and calls to public service users, asking them about satisfaction with the services received and about potential corruption incidents. The Punjab government has deployed this approach on an unprecedentedly massive scale, with more than six million outreach calls so far. Recent large-scale surveys of service users have found that these outreach efforts actually reached and received responses from 15 percent of citizens called (Bayern, 2015; World Bank, 2015).

The citizen engagement platforms (those found at the intersection between user feedback and civic action) predominantly use web- and mobile-based mechanisms for reporting public service issues, similar to many of the user feedback platforms. What distinguishes these platforms, however, is that the user feedback provided to service providers is also disclosed publicly. For example, the Lungisa website allows Cape Town residents to report service delivery problems (e.g., sanitation, electricity) using an online form, which is then routed to the relevant government agency and further investigated by Lungisa staff. Unlike many user feedback systems, however, Lungisa allows residents to view all other reports that have been submitted, as well as the status of each issue (i.e., "in progress," "closed"). Indeed, if ICT platforms ultimately seek to facilitate disclosure about whether and how governments respond to citizen voice, then the capacity to track both citizen feedback and government response is a necessary, but not sufficient, design feature.

Citizen engagement platforms also seem to differ from user feedback platforms in terms of their ownership. While user feedback platforms tend to be built by service providers, citizen engagement platforms have been launched primarily by CSOs or donor organizations (see Table 3). Generally, platforms built by service providers tend to generate far more user uptake than those launched by CSOs or donors, with a few exceptions.

Finally, Figure 2 shows that several cases do not involve individualized user feedback and fall entirely within the civic action category. In these cases, the ICT platform's primary goal is to support collective action through the aggregation of individual citizen inputs. In other words, the role of individual inputs is not simply to identify specific service delivery problems, but to demonstrate the extent of citizen concern through the process of aggregation. The civic action cases considered here are significantly less numerous and more heterogeneous than either the user feedback cases or the citizen engagement cases. They include projects as diverse as web-based participatory budgeting in Rio Grande do Sul and the international online petitioning platform Change.org. If the scope of this research were broadened to include e-participation, crowdsourced political deliberation, or the role of social media in enabling political protest, the number of relevant ICT platforms would increase.
However, the focus here is on citizen voice platforms that specifically address public service provision.

DIGITAL ENGAGEMENT INITIATIVES: CATEGORIZING PLATFORMS IN TERMS OF VARIABLES OF INTEREST

In this section, we categorize our twenty-three ICT platform cases by considering a number of factors (i.e., independent variables) that may contribute to our outcome of interest: institutional response. We define "institutional response" as a clearly identifiable action taken by government/service providers following individual or collective input by citizens. For example, there is evidence of clear institutional response in the case of the Proactive Listening initiative of EDE Este, an electricity distribution company in the Dominican Republic. The initiative combines a traditional complaint handling mechanism with proactive outreach to users. This online/mobile phone platform allows citizens to report problems with electricity services, ranging from malfunctioning connections to bribe requests by maintenance crews. Following the handling of the complaint (e.g., reconnection of electricity), the company proactively re-contacts a random sample of users to gather feedback on the quality of services provided. The feedback received is systematically used to inform sanctions (e.g., administrative procedures) and rewards (e.g., performance-related wage bonuses for company workers). Since its implementation in 2011, the initiative has recorded growing resolution rates for reported issues, with close to 100 percent of the feedback provided indicating good or excellent levels of satisfaction. Instances of disrespectful treatment of clients fell drastically from the levels registered at the beginning of the project, and reported cases of corruption fell by 70 percent.

Turning next to our independent variables, we have identified ten factors that may have a relationship with institutional responsiveness—disclosure of feedback, disclosure of response, proactive listening, voicing modality, accountability directionality, uptake, combined offline action, driver, partnerships between public service provider and civil society organization(s), and level of government—along with the outcome variable, institutional responsiveness, itself. Of these, uptake—the degree to which citizens actually use digital platforms—deserves particular attention here.

Uptake is often used as a key outcome for evaluating ICT platforms. Yet, while uptake may be necessary, it is far from sufficient for triggering institutional response (as the data below show). As described above, our main outcome of interest here is governmental response. Rather than treating citizen voice as an end in and of itself, our analysis treats uptake as an intermediate output that is relevant to the extent that it informs governmental decisions about whether and how to respond (see Table 2). Making this distinction is not intended to diminish the intrinsic value of expressing citizen voice. To the contrary, citizen voice is a socially valuable practice with the clear potential to encourage learning. Nonetheless, differentiating between uptake as an output and institutional response as an outcome provides crucial conceptual clarity that allows us to disentangle the different hypotheses about how various factors might influence institutional responsiveness. Table 2 details this approach further, distinguishing between inputs, outputs, outcomes, and impacts.

Table 2. ICT-enabled voice platforms: inputs, outputs, and impacts

INPUT: Platform (channel for voice)
OUTPUT 1: Expression of citizen voice (uptake); publicity: disclosed or not?
OUTPUT 2: Aggregation of voices; publicity: disclosed or not?
OUTCOME: Institutional response (e.g., breaking bottlenecks, repairs, resource allocation); publicity: disclosed or not?
IMPACT: Tangible change in access to service delivery; publicity: disclosed or not?
Considering uptake as an output helps us to better understand the role that it may play in generating the outcome of interest, institutional responsiveness. Hypothetically, it should be relatively straightforward to find evidence supporting a causal relationship between uptake and responsiveness. All other things being equal, governments are more likely to respond when more citizens are engaged. Indeed, the odds of successful collective action increase as the number of participants grows (Lohmann, 2000). In a cross-national study of online petitioning by the World Bank (2015), the higher the number of signatories to a petition, the more likely governments were to respond. In fact, a number of both traditional and digital citizen participation platforms are explicitly designed to trigger governmental response only when citizen participation reaches a pre-set benchmark. This is the case with citizen initiatives, referenda, and the official e-petitioning systems in the United Kingdom and the United States.

However, some development practitioners argue that sustained uptake itself can be used as a proxy for government responsiveness. Otherwise, the argument goes, citizens would not "keep coming back." While this assertion is partially supported by empirical evidence (e.g., Sjoberg et al., 2017), there are a number of instances where one finds sustained uptake despite low levels of institutional responsiveness, perhaps best exemplified in Downs' (1957) work on the "paradox of voting." Thus, treating citizen uptake as an indicator of government responsiveness remains problematic (as we shall demonstrate later).

Below, we provide a description of each variable of interest. While this list is by no means exhaustive, the selection of these variables is informed by the literature on digital engagement and institutional responsiveness, and reflects the availability of data across all cases. Further analysis would be necessary to assess the relative weight of each variable. The main focus of the subsequent discussion will be on broad patterns that emerge across all twenty-three cases. For brevity's sake, discussion of specific cases and the explicit rationale used to code them will be limited.

DESCRIPTION OF VARIABLES

Disclosure of feedback – Refers to the extent to which the feedback provided by the citizen is made public or not.

Disclosure of response – Refers to whether the official response to citizen feedback (individual and collective) is publicly disclosed or not. This reveals the extent to which citizen input has led to institutional responsiveness.

Proactive listening – Indicates whether at some point the service provider proactively contacts the citizen in order to collect feedback on the quality of services provided.

Voicing modality – Whether the feedback provided through the ICT platform is individual or collective. This indicates whether ICT-enabled collective action is involved in triggering a response.
Accountability directionality – Determines whether the causal pathway is more likely to promote accountability between service providers and higher authorities (upwards accountability) or between citizens and service providers (downwards accountability).

Uptake – An essentially quantitative measure of the number of individuals who provide feedback or who join a collective action. Uptake was coded in absolute terms of input provided (e.g., number of votes or reports) in a discontinuous range of low (between 1 and 10,000), medium (between 10,001 and 100,000), and high (above 100,000); see the sketch following these definitions.

Combined offline action – Identifies whether additional actions are taken offline in order to encourage government responsiveness. This could refer to a structured process of citizen follow-up on participatory budgeting, or to dedicated DE platform staff who follow up with the relevant authorities (e.g., Lungisa).

Driver – Identifies the main institution driving the initiative. Coding options include government-led, CSO-led, and donor-led.

Partnerships between public service provider and civil society organization(s) – Refers to the existence of formal and/or informal relationships between government and civil society, with some degree of coordination toward a common outcome. This is the case, for example, for the Por Mi Barrio project, a partnership between the organization DATA and the municipal government of Montevideo, which allows for direct communication between the digital platform (developed by DATA) and the government's existing complaint response mechanism. Another example is the formal partnership between the IPaidABribe.com project and Indian governmental authorities, which facilitates communication and allows for coordinated follow-up of bribes reported to the government.

Level of government – Describes the level at which services are provided and feedback is given, sub-divided into national, sub-national, and local.

Institutional responsiveness – Reflects the degree to which there is clearly documented evidence of government response to feedback provided through ICT platforms (including combined online/offline action). Whenever possible, coding categories for institutional responsiveness reflect the share of citizens' inputs addressed, ranging from low (less than 20 percent of citizen issues addressed) to medium (between 20 and 50 percent) and high (50 percent and above). When that was not possible, researchers compared the current and prior status quo with regard to the explicit and implicit goals of the project. Responsiveness ratings were based on existing data (e.g., I Change My City), original data analysis (e.g., Change.org), and in some cases, interviews with DE platform staff, who were asked to provide clear evidence of responsiveness to feedback provided through the platforms. This approach is limited by its dependence on self-reported administrative data in cases where verifiable system data and/or user surveys are not available. Cases that lacked sufficient evidence with which to assess the degree of institutional responsiveness were not included.
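Read together, the uptake and institutional responsiveness codings amount to simple threshold rules. The sketch below (in Python; the function names and example values are ours and purely illustrative, not drawn from the study's replication materials) restates those thresholds exactly as defined above:

    def code_uptake(n_inputs: int) -> str:
        # Absolute number of inputs provided (e.g., votes, reports).
        if n_inputs > 100_000:
            return "high"
        if n_inputs > 10_000:
            return "medium"
        return "low"

    def code_responsiveness(share_addressed: float) -> str:
        # Share of citizens' inputs addressed, where documented.
        if share_addressed >= 0.50:
            return "high"
        if share_addressed >= 0.20:
            return "medium"
        return "low"

    # A hypothetical platform with 25,000 reports, 55 percent of them addressed,
    # would be coded as medium uptake and high responsiveness.
    print(code_uptake(25_000), code_responsiveness(0.55))  # medium high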
Table 3 presents the final coding of cases, followed by Table 4 with the specific evidence for the coding of institutional responsiveness outcomes.

Table 3. Mapping uptake and institutional response to ICT-enabled voice platforms

CASE | COUNTRY | VOICING MODALITY | DRIVER | GOVT LEVEL | RESPONSE
Proactive Listening Electricity | DO | Individual | Government | Sub-national | High
Maji Voice | KE | Individual | Government | Local | High
Lungisa | ZA | Individual | CSO | Local | High
Rio 1746 | BR | Individual | Government | Local | High
Digital State Participatory Budgeting | BR | Collective | Government | Sub-national | High
I Change My City | IN | Individual | CSO | Local | High
Por Mi Barrio | UY | Individual | CSO | Local | High
Maji Matone | TZ | Individual | CSO | Local | Medium
Pressure Pan | BR | Collective | CSO | All | Medium
Change.org | INT | Collective | CSO | All | Medium
Punjab Proactive | PK | Individual | Government | Sub-national | Medium
I Paid a Bribe | IN | Individual | CSO | All | Low
U-Report | UG | Collective | Donor | National | Low
Chemi Kucha | GE | Individual | CSO | Local | Low
Check My School | PH | Individual | CSO | Sub-national | Low
LAPOR | ID | Individual | Government | All | Low
MyVoice | NG | Individual | Donor | Sub-national | Low
Huduma | KE | Individual | CSO | All | None
IMCO | MX | Individual | CSO | National | None
Karnataka BVS | IN | Individual | Donor | Sub-national | Low
e-Chautari | NP | Individual | Donor | Sub-national | Low
Barrios Digital | BO | Individual | Donor | Sub-national | Low
Sauti Za Wananchi | TZ | Individual | CSO | National | Low

Table 4. Evidence for assessing institutional responsiveness

Proactive Listening Electricity – Response: High. Criteria: reduction of corruption reports by 70%; increased levels of user satisfaction. Source: interview with distribution company; system reports provided by company.

Maji Voice – Response: High. Criteria: increase in percentage of solved complaints, with time of response reduced by half since implementation; survey reveals 60% satisfied customers. Source: system data analysis (n=57,809); customer survey (n=1,064) (Belcher & Lopes, 2015).

Lungisa – Response: High. Criteria: 98% of complaints reported as solved. Source: website (http://www.lungisa.org/, March 28, 2015).

Rio 1746 – Response: High. Criteria: 99% of complaints reported as solved; user satisfaction at 74%. Source: 1746 statistical report (http://www.1746.rio.gov.br/, March 28, 2015).

Digital State Participatory Budgeting – Response: High. Criteria: institutional response based on 100% of prioritized projects submitted to official budget. Source: World Bank report, "Impact of online voting in participatory budgeting in Brazil" (Haikin, 2015).

I Change My City – Response: High. Criteria: 51% of complaints resolved. Source: website (http://www.ichangemycity.com/, March 28, 2015).

Por Mi Barrio – Response: High. Criteria: 50% of complaints resolved. Source: Montevideo's data report; interview with project staff.

Maji Matone – Response: Medium. Criteria: service provider actions were taken in 40% of reports received. Source: website (http://blog.daraja.org/2012/02/so-what-have-we-learnt-summarising.html, March 28, 2015).

Meu Rio's Pressure Pan – Response: Medium. Criteria: 24% of campaigns supported by the organization are successful. Source: system data analysis provided by Meu Rio.

Change.org – Response: Medium. Criteria: average individual signature has a 25% chance of generating a response. Source: data analysis of 3.9 million users' data from Change.org through open API; World Bank analysis, 2015.

Punjab Proactive – Response: Medium. Criteria: with nearly one million citizens contacted, the government has taken about 6,000 administrative actions, mostly warnings but also formal apologies, with limited instances of suspension and dismissal of civil servants; there is a systematic proportion of fake and incorrect citizen cell phone numbers entered by government officials, indicating constraints on senior manager oversight capacity. Source: World Development Report 2016 (draft).

I Paid a Bribe – Response: Low. Criteria: responses to reports sent by IPAB to authorities described as "very limited." Source: interview with IPAB staff (e-mail, Sunil Nair, 19 December 2014).

U-Report – Response: Low. Criteria: limited to anecdotal data on MPs' (rather limited) interest in U-Report data. Source: Berdou & Abreu-Lopes, 2015.

Chemi Kucha – Response: Low. Criteria: 4% of reports fixed within one year of activity. Source: website (https://www.chemikucha.ge/en/, March 28, 2015).

Check My School – Response: Low. Criteria: 11% response rate to reported school issues. Source: Crowdsourcing Citizen Participation: The CheckMySchool 3G Experience, 2014.

LAPOR – Response: Low. Criteria: the government considers that 8% of reports received got a response. Source: system data provided by LAPOR team.

MyVoice – Response: Low. Criteria: out of 314 messages sent, only six were responded to using the system, and eighteen were tagged for follow-up. Source: Lee & Schaefer (2014).

Huduma – Response: None. Criteria: assessment shows that out of the 3,000 reports submitted via SMS, email, and Twitter, none were resolved. Source: Bott & Young (2012).

IMCO – Response: None. Criteria: no evidence of responsiveness on the part of education authorities to school issues reported on the platform. Source: interview with project staff (phone).

Karnataka Beneficiary Feedback – Response: Low. Criteria: despite a study looking for evidence of institutional responsiveness, no clearly documented evidence of government responsiveness on service delivery issues. Source: Georgieva Andonovska (2014); Madon (2014).

e-Chautari – Response: Low. Criteria: despite a study looking for evidence of institutional responsiveness, no clearly documented evidence of government responsiveness on service delivery issues. Source: On Track Evaluation Report, Keystone Accountability, 2015.

Barrios Digital – Response: Low. Criteria: despite a study looking for evidence of institutional responsiveness, no clearly documented evidence of government responsiveness on service delivery issues. Source: On Track Evaluation Report, Keystone Accountability, 2015.

Sauti Za Wananchi – Response: Low. Criteria: despite an interview with staff, no clearly documented evidence of government responsiveness on service delivery issues. Source: interview with project staff (e-mail).

The majority of platforms make their citizen feedback public (eighteen of twenty-three). Of the five that do not disclose the feedback, two are governmental and three involve donor agencies in collaboration with governments. Conversely, all of the CSO-driven initiatives publicize the input given by citizens. This finding makes particular sense if one considers the directionality of accountability relations. User-feedback initiatives (i.e., those whose feedback is not disclosed) are more likely to be implemented by governments or donors, where service providers are held accountable to a higher authority (upwards accountability). Conversely, given that CSOs have few means to hold providers directly accountable, they rely essentially on downwards accountability mechanisms, where the driving force of institutional responsiveness—at least hypothetically—is the exposure of the behavior of service providers vis-à-vis citizens. No pattern seems to emerge when looking at disclosure of feedback and institutional responsiveness, however. In user-feedback initiatives (where feedback is not disclosed and there is no collective action), the four cases are equally split between low and high levels of institutional responsiveness. A similar pattern emerges when examining citizen engagement initiatives: public disclosure of feedback does not seem to lead—per se—to increased responsiveness from providers.

In fourteen cases, the provision of input through the dedicated platform is complemented by some type of offline action to prompt governments to respond and/or to monitor government responsiveness. This is the case, for instance, of the Rio Grande do Sul PB process, in which citizens are periodically elected to monitor the implementation of investments prioritized through the voting process (Spada et al., 2016). In MajiVoice, the responsiveness of the water service agency is actively monitored by the members of the Water Services Regulatory Board, which can trigger legal actions against service providers when they fail to meet pre-established quality standards (Belcher and Lopes, 2016). Yet offline action does not seem to ensure responsiveness by itself, as illustrated by the cases of e-Chautari in Nepal and Barrios Digital in Bolivia. Moreover, among the fourteen cases, the evidence is insufficient to verify how the intensity and regularity of these offline actions vary.

In the category of civic action initiatives, where response involves online collective action, we find four different cases with varying degrees of institutional responsiveness. The Rio Grande do Sul Digital PB process has a high level of institutional responsiveness, while the online petition platform Change.org and the Brazilian initiative Pressure Pan both have medium levels. A possible explanation of the different responsiveness levels is the difference in institutional design. Digital PB in Rio Grande do Sul is a governmental initiative mandated by state legislation. As such, all of the citizen-generated social investment proposals that are approved through the participatory process are officially included in the state's budget, with a number of them effectively carried out by the state government.

The other two initiatives are platforms that allow any citizen to initiate collective action to petition or exert pressure on the government to act on any public agenda. This open-endedness means that the platforms host both campaigns that trigger extensive uptake and mobilization, and many that fail to generate follow-up. This potential for a large denominator, in terms of the total number of initiatives, would affect the overall percentage of petitions that trigger responsiveness.
Indeed, some data seems to suggest the importance of mobilization capacity: online petitions on Change.org are substantively more likely to be successful when sponsored by an organization (World Bank, 2015), and citizen campaigns through Pressure Pan are three times more likely to succeed when receiving mobilization support from Pressure Pan's staff. This evidence resonates with the proposition that the effectiveness of digital technologies in social mobilization depends on offline structures of organization and influence (Fung, Gilman, and Shkabatur, 2013). Finally, we find the widely recognized case of U-Report in Uganda, with a low level of institutional responsiveness, which we shall discuss later.

In terms of the institutional actors that drive the voice initiatives, twelve are led by CSOs, six by governments, and five by donors. Of the seven initiatives with high levels of responsiveness, four are government-led and three CSO-led. Civil society and governments seem equally capable of creating platforms and processes that engender responsiveness. However, the three high-response CSO initiatives all share a common trait: they involve partnerships with government. In other words, in all of the cases of high institutional responsiveness, the government is either leading the process or plays the role of a partner. Not all of the initiatives that involve government–CSO partnerships led to high levels of institutional responsiveness, however, as illustrated by the cases of I Paid a Bribe and Check My School, both of which had low percentages of citizen-raised issues leading to documented agency responses. Seen together, these findings suggest that while partnership with government is not a sufficient condition for the responsiveness of CSO-led initiatives, it may well be an enabling one. Finally, while the initiatives that show medium and high degrees of institutional responsiveness involve both CSO- and government-driven efforts, we find no donor-driven platforms that led to institutional responsiveness. While we do not claim our sample to be representative, and the results may be skewed by the small number of donor-driven cases analyzed, these patterns suggest future research paths focusing on the role that different drivers may play in institutional responsiveness.

When examining uptake, the results in Table 4 support our previous argument that citizen use of platforms (an output) should not be equated with institutional responsiveness (an outcome). This sample includes significant cases that combined high uptake with low responsiveness. The case of U-Report (UR), UNICEF's social monitoring system for young Ugandans, provides compelling evidence for this point. Created in 2007, this SMS-based platform runs weekly polls with registered users on a broad range of issues (e.g., child marriage, access to education). To inform public debate, the results of the polls are widely disseminated through the project's website and diverse mass media outlets, in a variety of formats including newspaper articles, radio shows, and even a documentary broadcast on major Ugandan TV channels. Members of Parliament (MPs) are UR's main policy audience.
Aligned with a vision of real-time data collection to inform policymaking that goes beyond sending weekly newsletters with poll results to MPs, UNICEF also provides MPs with access to the platform to reach out to their audiences. The number of registered users (U-Reporters) has grown steadily since the platform's launch, recently surpassing 299,000 (Bayern, 2015; World Bank, 2015b). UNICEF describes UR as a "'killer app' for communication towards achieving equitable outcomes for children and their families" (UNICEF, 2012). This enthusiastic view of UR has resonated in development circles, with the free SMS-based platform currently being rolled out in countries such as Rwanda, Burundi, the Democratic Republic of Congo, South Sudan, Nigeria, and Mexico.

Uptake is not a problem for UR in terms of numbers, and it leverages the potential of mobile phones as a means to "listen at scale." However, 47 percent of UR participants have some university education and one quarter are government employees, raising questions about whose voices are being projected (see Box 1). Furthermore, until recently very little was known about the extent to which U-Report's uptake translated into any type of institutional responsiveness. A new detailed evaluation of U-Report finds no systematic evidence of U-Report affecting policy, let alone MPs' behavior in terms of representation, legislation, and oversight (Berdou and Abreu-Lopes, 2016). U-Report thus emerges as a significant case illustrating the need to separate uptake (as an output) from institutional responsiveness (as an outcome).

When examining the table above, one of the most noticeable patterns is the existence of numerous digital engagement initiatives that meet dead ends despite taking different pathways—at least in the short run. The majority of the twenty-three cases studied led to low levels of institutional responsiveness, with eleven reporting medium to high levels (defined conservatively as leading to at least 20 percent response rates). Notably, the multiple dead ends do not seem to be explained by the absence of any one specific factor. None of these variables appears to be a sufficient condition for institutional responsiveness, suggesting that none of these factors can be considered a "magic bullet." The findings suggest multiple pathways to institutional responsiveness, involving the convergence of multiple, mutually reinforcing factors. If one factor does stand out, however, it is government involvement, insofar as four of the six cases of government-led voice platforms were associated with high rates of service delivery responsiveness.

CONCLUSION

This study reviewed cases of ICT-enabled voice platforms where evidence of institutional response was available. As suggested in our introduction, in the "yelp" feedback loop model, proponents tend to assume that user feedback identifying service delivery problems is sufficient to induce service providers to respond. This review of the evidence from twenty-three ICT-enabled platforms finds that this implicit market model, in which (individual) demand for good services produces its own supply, is not sufficient to leverage institutional response.
This study organized the data from available empirical research in order to identify broad patterns of user uptake, public access to user feedback data, and institutional arrangements, and to provide an assessment of whether service providers respond to user feedback. This conclusion addresses some of the emerging issues that should be taken up in future work. Indeed, as the evidence base grows, more systematic explorations of the relationship between ICT-enabled citizen voice and institutional response should be possible.

The findings from the twenty-three cases where both user uptake and institutional response data were available indicate mixed results on both counts. In eight cases, user uptake was high. Institutional response was high in seven cases and intermediate in three. For the majority of cases, institutional response was low or non-existent. One reason for these mixed results, however, is that the umbrella category "ICT-enabled voice platforms" may have resulted in the selection of cases that are actually quite different from one another. Separating some of these approaches from one another may help to clarify the findings. Indeed, a similar approach has been used to unpack outcomes from the diverse initiatives that fall under the conceptual umbrella of "social accountability" (Fox, 2014). What looks like "mixed results" at first glance may simply be a case of conflating apples with oranges. Since this research collected data on a diverse array of independent variables, patterns in citizen uptake and institutional response can be revisited, revealing patterns that would not otherwise be visible. This conclusion highlights several variables that may be especially fruitful for future research on broad-based user feedback, civic engagement, and effective institutional response.

I) Does the feedback platform contribute to upwards accountability, downwards accountability, or both?

The institutional design of ICT-enabled voice platforms determines whether the role of citizen voice is limited to informing program managers and policymakers (i.e., upwards accountability), or whether voice is intended to contribute to public scrutiny and potential collective action, which in turn would create incentives for institutional response (i.e., downwards accountability). Through processes of upwards accountability, ICT-enabled user feedback can help senior policymakers to identify bottlenecks and address front-line service provision issues. For example, in one of the cases with the highest uptake—Punjab Proactive Feedback—citizen reports are not disclosed and there is no offline citizen engagement, so institutional response is left to the discretion of senior managers. However, there is evidence that many ICT-enabled voice platforms are conducive to downwards accountability as well: user feedback is publicly disclosed in eighteen of the twenty-three cases studied. In twelve of the twenty-three cases, ICT feedback was complemented by offline citizen engagement of some kind.

While platforms that enable upwards accountability (e.g., large-scale opinion surveys) are associated with only modest levels of institutional responsiveness, there appears to be a relationship between platforms that are conducive to downwards accountability and platforms that produce greater responsiveness: five of the seven high-impact platforms disclosed feedback.
Six of the seven high-impact platforms involved offline citizen engagement. In all of the high-impact cases, government was present either as a driver (four cases) or as a partner (three cases). This suggests that for downwards accountability to work most effectively, both public disclosure of feedback and public collective action may be necessary. In other words, civic engagement, in addition to information, is what generates the civic muscle necessary to hold senior policymakers and frontline service providers accountable.

II) What institutional design features can influence the willingness and capacity of service providers to respond to citizen feedback?

Another way to explore the role that citizen voice plays in driving institutional response is to explore the issue through the lens of a senior program manager. Their responsiveness to citizen feedback is determined both by their willingness—intent and motivations—and their capacity—the institutional leverage they have to actually change practice. In some cases, institutional design and a strong sense of commitment to the organizational mission by high-level officials are enough to encourage a program manager's willingness to respond. In these cases, the key role of ICT platforms is to bolster the capacity to respond—as with MajiVoice's water provision in Kenya.17 Some program managers may have a strong sense of mission, while others may be more concerned about the potential political risk associated with dissatisfied citizens. In either case, the systematic collection of citizen feedback can be a useful tool. In other words, the motivations for responsiveness do not appear to be directly influenced by ICT voice platforms. In contrast, the determinants of senior managers' capacity to respond to citizen voice are directly affected by ICT platforms' institutional and technical design. These features determine the precision with which user problems are identified, which is crucial for identifying which service providers are responsible. The cases studied suggest that it is crucial for user complaints to be routed to entities within the service-providing agency that have some incentive and capacity to respond. Specifically, experience with the highest-impact platforms, such as the Dominican electricity agency and MajiVoice in Kenya, suggests that direct links between governmental feedback reception systems and internal work order systems greatly increase policymakers' capacity to determine whether and how complaints have been resolved, which appears to be a necessary condition for effective institutional response. Similarly, two of the most successful CSO platforms—Por Mi Barrio in Uruguay and I Change My City in India—are connected to existing governmental service provider complaint systems. These are examples of the institutional questions that play crucial roles as intervening variables shaping whether or not voice triggers teeth to act.

The proposition that emerges here is that, regardless of their motivations, policymakers with a commitment to bolstering institutional responsiveness should in principle have incentives to: 1) institute tracking systems that directly link complaints to institutional responses; and 2) publicly disclose both citizen feedback and data regarding institutional response, in order both to inform and validate subsequent citizen action, and to potentially "name and shame" non-performing units within their agency.
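To make the first recommendation concrete, the sketch below shows one minimal way a tracking system could tie each citizen complaint to an internal work order and compute a disclosable response rate. It is a hypothetical illustration, not the actual design of MajiVoice, the Dominican system, or any other platform discussed here; all class names, fields, and figures are invented.

from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class WorkOrder:
    order_id: str
    assigned_unit: str          # unit inside the service-providing agency
    opened: date
    closed: Optional[date] = None

@dataclass
class Complaint:
    complaint_id: str
    category: str               # e.g., "water supply", "billing"
    received: date
    work_order: Optional[WorkOrder] = None  # the direct link to the response

    @property
    def resolved(self) -> bool:
        # A complaint counts as resolved only when a linked work order exists
        # and has been closed: the documented institutional response.
        return self.work_order is not None and self.work_order.closed is not None

def response_rate(complaints: list[Complaint]) -> float:
    # Share of complaints with a documented, completed response. Publishing
    # this figure per unit is one way to implement recommendation 2).
    if not complaints:
        return 0.0
    return sum(c.resolved for c in complaints) / len(complaints)

# Hypothetical usage
c = Complaint("C-001", "water supply", date(2016, 3, 1))
c.work_order = WorkOrder("WO-117", "District North Ops", date(2016, 3, 2), date(2016, 3, 9))
print(response_rate([c]))  # 1.0

The design choice the sketch highlights is simply that resolution is defined by the linked work order, not by the complaint record itself, which is what allows whole-of-agency response rates to be computed and disclosed.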
III) How can proactive listening systems broaden outreach to citizens and project voice more widely?

One of the relevant findings from this review of the evidence is that proactive listening systems are both relatively rare and quite significant. Two of the most well-known cases of ICT-enabled citizen voice—Punjab Feedback (which has the highest uptake of any case by far) and U-Report—involve proactive listening. Yet the evidence available indicates that neither of these platforms has triggered high levels of institutional response. While proactive listening in the Punjab Feedback case involves significant willingness by senior policymakers to respond to users, their capacity is constrained both by a limited complaint tracking system—citizens who have filed complaints often have their phone numbers misreported—and by limited leverage over civil service employees, since the ability to sanction is limited by civil service rules. Indeed, the fact that user feedback is not made public could be interpreted as an indicator of the fragility of the system's constituency within the government. Unlike the Punjab Feedback project, use of the U-Report system is not limited to users of basic services, and its reporting bias towards urban, male, well-educated citizens suggests that its voice may not be entirely representative of citizens. The most clear-cut case of a proactive listening system with high levels of uptake and institutional response is the Dominican electricity system. This uneven pattern of uptake and responsiveness in such a diverse set of proactive listening cases suggests that more institutional experimentation and innovation is needed, with a strong emphasis on connecting the dots between incentives for citizens to express voice and the capacity of service providers to respond.

IV) How can lessons from uptake of non-dedicated social media be applied to ICT-enabled service delivery platforms?

The majority of cases where social media has enabled collective action have concerned broad issues of civic concern, like corruption or authoritarian abuse, rather than specific service delivery issues. Moreover, these viral processes have been enabled by open, multi-purpose social media, rather than by dedicated, service-specific ICT platforms. The contrast between the track record of ICT-enabled civic engagement platforms dedicated to service delivery agencies and that of much broader non-dedicated social media platforms suggests that some of the lessons from the latter could be applied to the former. Crowdsourcing public grievances could, in principle, publicly legitimate citizen service delivery concerns, identify problem hotspots, and enable coordination for collective action that might encourage service provider responsiveness. Yet, in practice, the evidence (especially from CSO-led, crowdsourced citizen feedback platforms) suggests that this has actually happened far less often than one might hypothesize.

V) How can society-facing "targeted transparency" find synergy with government-facing "targeted citizen feedback" to stimulate virtuous circles of mutually reinforcing voice and teeth?

Intuitively, one would expect citizens to be more likely to report problems with service delivery to providers if they have reason
to believe that those service providers are likely to respond to that feedback. Conversely, non-responsiveness is likely to discourage citizen reports.18 This suggests the potential for encouraging virtuous circles of increased citizen reporting as agencies' capacity to respond grows. It also underscores one of the lessons from research on "targeted transparency," which emphasizes the importance of embedding information disclosure and access in potential users' everyday routines, in order to inform decision-making and potential collective action (Fung, Graham, & Weil, 2007). Yet the limited institutional responsiveness achieved thus far by ICT-enabled citizen voice platforms suggests that perhaps the concept of embedded feedback should also be applied at the governmental "receiving end." Indeed, while "targeted transparency" usually refers to disclosure of relevant information to citizens, perhaps "targeted citizen feedback" is needed to help deliver information to government program managers in ways that embed it in official decision-making processes (as in the case of MajiVoice, where citizen complaints are immediately attached to government work orders that can be tracked through the system).

FINAL PROPOSITION FOR DISCUSSION

To conclude, the empirical evidence available so far about the degree to which voice can trigger teeth indicates that service delivery user feedback has so far been most relevant where it increases the capacity of policymakers and senior managers to respond. It appears that dedicated ICT-enabled voice platforms, with a few exceptions, have yet to influence their willingness. Where senior managers are already committed to learning from feedback and are using it to bolster their capacity to get agencies to respond, ICT platforms can make a big difference. In that sense, ICT can make a technical contribution to a policy problem that to some degree has already been addressed.

In summary, ICT platforms can bolster upwards accountability if they link citizen voice to policymakers' capacity to see and respond to service delivery problems. This matters when policymakers already care. If the challenge is how to get policymakers to care in the first place, then the question is how ICT platforms can bolster downwards accountability by enabling the collective action needed to give citizen voice some bite.

REFERENCES

Bayern, J. (2015) Investigating the Impact of Open Data Initiatives: The Cases of Kenya, Uganda and the Philippines, Washington: World Bank, January

Bayern, J. (2015) Citizen Feedback Survey Report, Washington: World Bank, January

Berdou, E. and Abreu-Lopes, C. (2016) The Case of UNICEF's U-Report (Uganda): Final Report to the Evaluation Framework for Digital Citizen Engagement, Digital Engagement Evaluation Team document, World Bank

Belcher, M. and Abreu-Lopes, C. (2016) MajiVoice Kenya: Better Complaint Management at Public Utilities, forthcoming Digital Engagement Evaluation Team document, Washington: World Bank

Bhatti, Z.K., Zall Kusek, J. and Verheijen, T. (2015) Logged On: Smart Government Solutions from South Asia, Washington: World Bank

Bott, M. and Young, G. (2012) The Role of Crowdsourcing for Better Governance in International Development, Praxis: The Fletcher Journal of Human Security 2(1), 47-70

Brockmyer, B. and Fox, J.
(2015) Assessing the Evidence: The Effectiveness and Impact of Public Governance-Oriented Multi-Stakeholder Initiatives, London: Transparency and Accountability Initiative, http://www.transparency-initiative.org/reports/assessing-the-evidence-the-effectiveness-and-impact-of-public-governance-oriented-multi-stakeholder-initiatives (accessed 6 October 2015)

Carothers, T. and Brechenmacher, S. (2014) Accountability, Transparency, Participation and Inclusion: A New Development Consensus? Washington: Carnegie Endowment for International Peace, http://carnegieendowment.org/files/new_development_consensus.pdf (accessed 6 October 2015)

Cornwall, A. (2002) Beneficiary, Consumer, Citizen: Perspectives on Participation for Poverty Reduction, SIDA Studies No. 2, Stockholm: Swedish International Development Agency

Dhiratara, A. and Gibran Sisunan, M.M. (n.d.) LAPOR! Layanan Aspirasi dan Pengaduan Online Rakyat, presentation

Diecker, J. and Galan, M. (2014) "'Creating' a Public Sphere in Cyberspace: The Case of the EU," in Carayannis, E.G., Campbell, D.F. and Efthymiopoulos, M.P. (eds) Cyber-Development, Cyber-Democracy and Cyber-Defense, New York: Springer

Downs, A. (1957) An Economic Theory of Democracy, New York: Harper and Row

Fox, J. (2007) The Uncertain Relationship between Transparency and Accountability, Development in Practice, 17(4), 663-671

Fox, J. (2014) Social Accountability: What Does the Evidence Really Say? GPSA Working Paper No. 1, Washington: World Bank Global Partnership for Social Accountability programme

Fung, A., Graham, M. and Weil, D. (2007) Full Disclosure: The Perils and Promise of Transparency, Cambridge: Cambridge University Press

Fung, A., Gilman, H.R. and Shkabatur, J. (2013) Six Models for the Internet + Politics, International Studies Review 15(1), 30-47, doi:10.1111/misr.12028

Gaventa, J. and McGee, R. (2013) The Impact of Transparency and Accountability Initiatives, Development Policy Review 31(S1), s3-s28

Georgieva Andonovska, E. (2014) The Karnataka Beneficiary Verification System (BVS) – A Case Study, World Bank, December

Gigler, S. and Bailur, S. (eds) (2014) Closing the Feedback Loop: Can Technology Close the Accountability Gap? Washington: World Bank

Grandvoinnet, H., Aslam, G. and Raha, S. (2015) Opening the Black Box: The Contextual Drivers of Social Accountability, Washington: World Bank

Haikin, M. (2015) Impact of Online Voting on Participatory Budgeting in Brazil, Digital Engagement Evaluation Team, World Bank

Joshi, A. (2014) Reading the Local Context: A Causal Chain Approach to Social Accountability, IDS Bulletin 45(5), 23-35

LAPOR (2014) Facts Sheet, Jakarta: LAPOR

Lee, P. and Schaefer, M. (2014) Making Mobile Feedback Programs Work: Lessons from Designing an ICT Tool with Local Communities, Washington: World Bank

Lieberman, E., Posner, D. and Tsai, L. (2014) Does Information Lead to More Active Citizenship? Evidence from an Education Intervention in Rural Kenya, World Development 60(1), 69-83

Lindstedt, C. and Naurin, D. (2010) Transparency is Not Enough: Making Transparency Effective in Reducing Corruption, International Political Science Review, 31(3), 301-322

Lohmann, S. (2000) Collective Action Cascades: An Informational Rationale for the Power in Numbers, Journal of Economic Surveys, 14(5), 655-684

Madon, S.
(2014) "Information Tools for Improving Accountability in Primary Health Care: Learning from the Case of Karnataka," in Gigler, S. and Bailur, S. (eds) Closing the Feedback Loop: Can Technology Close the Accountability Gap? Washington: World Bank

Masud, M.O. (2015) Calling Citizens, Improving the State: Pakistan's Citizen Feedback Monitoring Program, 2008–2014, Princeton University, Innovations for Successful Societies, http://successfulsocieties.princeton.edu/publications/calling-public-empower-state-pakistan (accessed 13 October 2015)

Mellon, J., Peixoto, T. and Sjoberg, F.M. (2015) The Crowd Never Lies? Evaluating the Quality of Crowd-Sourced Data in Uganda, Digital Engagement Evaluation Team, World Bank, unpublished

Peixoto, T. (2013) "The Uncertain Relationship Between Open Data and Accountability: A Response to Yu and Robinson's 'The New Ambiguity of "Open Government,"'" UCLA Law Review Discourse 60, 200-248

Prieto-Martín, P., de Marcos, L. and Martínez, J.J. (2011) The e-(R)evolution Will Not Be Funded, European Journal of ePractice 15

Ranganathan, M. (2012) "Reengineering Citizenship: Municipal Reforms and the Politics of 'e-Grievance Redressal' in Karnataka's Cities," in Desai, R. and Sanyal, R. (eds) Urbanizing Citizenship: Contested Spaces in Indian Cities, Thousand Oaks and New Delhi: Sage

Sjoberg, F.M., Mellon, J. and Peixoto, T. (2017) The Effect of Bureaucratic Responsiveness on Citizen Participation, Public Administration Review, 77(4)

Spada, P., Mellon, J., Peixoto, T. and Sjoberg, F.M. (2016) Effects of the Internet on Participation: Study of a Public Policy Referendum in Brazil, Journal of Information Technology & Politics, 13(3)

Susha, I. and Grönlund, Å. (2014) Context Clues for the Stall of the Citizens' Initiative: Lessons for Opening Up E-participation Development Practice, Government Information Quarterly, 31(3), 454-465

UNICEF (2012) U-report Application Revolutionizes Social Mobilization, Empowering Ugandan Youth, http://www.unicef.org/infobycountry/uganda_62001.html (accessed 13 October 2015)

World Bank (2014a) Strategic Framework for Mainstreaming Citizen Engagement in World Bank Group Operations, Washington, DC: World Bank, https://openknowledge.worldbank.org/handle/10986/21113 (accessed 13 October 2015)

World Bank (2014b) Survey Report: Citizen Feedback Monitoring Program, Washington: World Bank

World Bank (2015) 'Digital Analysis', conducted by the World Bank's Digital Evaluation Team, unpublished

Chapter 2
The Case of UNICEF's U-Report Uganda

Evangelia Berdou, Institute of Development Studies, Brighton, UK
Claudia Abreu Lopes, POLIS, University of Cambridge, UK
With Fredrik M. Sjoberg, Digital Engagement Evaluation Team, World Bank
Jonathan Mellon, Digital Engagement Evaluation Team, World Bank

EXECUTIVE SUMMARY

Background to the study

This chapter examines UNICEF's Ugandan U-Report project, an innovative crowdsourcing SMS platform that seeks to amplify the voices of the youth. The study sought to address the following questions: Who are the U-Reporters? Why do U-Reporters participate and what practices and assumptions underlie their participation? What change does U-Report bring about for participants and decision-makers? And finally, how does the data collected through U-Report compare to those yielded through more systematic research?
Methods

The study used a multi-method approach to data collection and analysis, with triangulated qualitative and quantitative data. Two surveys examined the profile of U-Reporters, the first involving 5,693 reporters sampled according to their level of activity and the second targeting the entire U-Report population (N=286,800). The results of the two surveys were compared to the results from a nationally representative household survey that targeted 1,188 households across Uganda, and then compared to recent relevant datasets from the Ugandan Bureau of Statistics (UBOS). Twenty individual, semi-structured interviews with a sample of U-Reporters stratified across level of activity, location, and gender provided the primary data for examining motivations, expectations, and assumptions. Analysis of U-Report system data provided additional insights into how people participate. The difference that U-Report makes in the lives of U-Reporters and in the decision-making processes of Ugandan Members of Parliament (MPs)—allegedly one of U-Report's main policy audiences—was examined through face-to-face interviews with U-Reporters and a short telephone survey with Ugandan MPs (N=95). Finally, a content analysis of eight mini-case studies of targeted U-Report interventions allowed us to draw some additional, tentative conclusions with regard to the platform's ability to inform and influence.

A comparison of certain aspects of the data yielded through U-Report with those from both traditional and online surveys, such as rate of response and number of valid answers, allowed us to highlight more clearly the strengths and weaknesses of perception-based crowdsourced data. Through analysis of current practice and lessons from participatory research, recommendations are provided about how the platform might continue to build on its strengths while improving its ability to represent a wide swath of the Ugandan youth and deepening its engagement.

Key findings

Who are the U-Reporters?

Our analysis indicates that the majority of U-Reporters are young (between 15-29 years old), male, well-educated, and relatively well-dispersed across the territory of Uganda. The average age of a U-Reporter is 25, with male and female average ages being essentially the same. The youngest U-Reporter, as reported by the users themselves, is 6 and the oldest 75. In terms of education, 19.5 percent have completed secondary school.

Generally speaking, U-Reporters are active in all parts and districts of the country. The analysis of system data indicates that the districts with the most U-Reporters are Kampala (16,751), Wakiso (11,766), and Gulu (10,549).

Of the U-Reporters that are employed, 17.7 percent reported that they hold jobs in the government sector and 29.9 percent in the private sector. When it comes to political participation, slightly more than one-fifth of all users attend four or more meetings a year. Around a third of all users have contacted their members of parliament to talk about an important problem, and an equal proportion of users have expressed their views one to three times in the last year.

In understanding these findings, one should bear in mind the potentially aspirational character of some of these data, especially in relation to education and political participation.
How and why do U-Reporters participate and what practices and assumptions underlie their participation?

Our analysis of system data indicates that the frequency of participation in U-Report varies significantly, with at least half of U-Reporters replying to at most one in five incoming questions and one in five registered reporters never having responded to a poll. This analysis also showed that high-frequency contributors (those who generally respond to between 40-100 percent of the questions) are predominantly male, late joiners of U-Report, and younger than the average U-Reporter. These high-frequency contributors also tend to send more unsolicited messages, i.e., messages that are not responses to a poll, and appear to be more willing to share basic demographic information about themselves.
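The kind of frequency analysis described above can be pictured with a minimal sketch that computes per-user response rates from poll logs and buckets users into activity bands. This is not the study's analysis code; the poll log is a hypothetical stand-in, with only the 40 percent high-frequency cut-off taken from the text.

from collections import Counter

# Hypothetical poll log: one record per (user_id, poll_id, responded) triple.
poll_log = [
    ("u1", "p1", True), ("u1", "p2", False), ("u1", "p3", False),
    ("u2", "p1", True), ("u2", "p2", True), ("u2", "p3", True),
    ("u3", "p1", False), ("u3", "p2", False), ("u3", "p3", False),
]

asked = Counter()
answered = Counter()
for user, _poll, responded in poll_log:
    asked[user] += 1
    answered[user] += responded  # True counts as 1, False as 0

def band(rate: float) -> str:
    # Activity bands discussed in the text; the 40 percent threshold
    # comes from the chapter, the labels are illustrative.
    if rate == 0.0:
        return "never responded"
    if rate >= 0.4:
        return "high-frequency (40-100 percent of questions)"
    return "occasional (under 40 percent)"

for user in sorted(asked):
    rate = answered[user] / asked[user]
    print(f"{user}: {rate:.0%} -> {band(rate)}")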
Data from the survey of U-Reporters and the qualitative interviews indicate that U-Reporters appreciated both the opportunity that U-Report provided for them to voice their views and the information that was shared through the platform on issues around health and education. The majority of the interviewees understood the basic purpose of U-Report (to share views and access critical information), but were unclear on how the information is used and the difference that it made on the ground.

Seven out of the twenty interviewees consistently discussed questions with friends and family before sending their responses, indicating that the U-Report Uganda program instigates dialogue around the issues discussed. Almost all of them expressed the wish to obtain more feedback from the platform, and a significant number of them expressed the desire to connect with other reporters offline. The importance of the collective aspects of crowdsourcing was one of the most important themes emerging from the nineteen interviews. These findings point to the need to reconsider some basic assumptions with regard to the nature of crowdsourced data, such as the assumption that each text message corresponds to an individual voice.

What change does U-Report bring about for participants and decision-makers?

53.4 percent of surveyed U-Reporters indicated that U-Report has made some or many changes in their district. Twenty-one of the twenty-seven surveyed MPs were aware of U-Report, with some of them having used it in the past. The evidence from the examination of the mini case studies is mixed. The analysis indicates that U-Report has been successful in surfacing emerging problems, sharing critical information, and obtaining a first-level view of people's opinions and priorities. What was less clear was the extent to which the information that it provided was useful for different stakeholders, and the degree to which it informed policy and practice.

How do U-Reporters compare to the wider Ugandan population?

In accordance with UNICEF's goal, U-Reporters represent the younger cohorts of the Ugandan population. In a population with an almost even gender split, the platform, like similar initiatives, appears to be significantly more popular with men than with women, with 70.5 percent of surveyed U-Reporters being male. A closer examination of the geographical distribution of U-Reporters reveals that they are over-represented in the Northern region and under-represented in the Western and Eastern regions.

On the whole, U-Reporters appear to be much better educated than their peers in the wider Ugandan population. Of the household survey respondents aged between 18-34, 15.1 percent indicated that they have a university-level education, versus 47.1 percent of U-Reporters. Of all the U-Reporters surveyed, 15.2 percent work in agriculture, which our household survey indicated as the dominant occupation for most Ugandans (62 percent). At the same time, U-Reporters were more likely to reach out to their elected representatives than household survey respondents were: only 12.8 percent of respondents from the household survey indicated that they contacted an MP in the last year (one to three times).

In terms of technology use, U-Reporters are much more technically savvy than their peers in the general population. Over half of the respondents between the ages of 18-34 in the household survey did not know how to text. Internet access and use was also largely problematic for the vast majority of household respondents: 62.9 percent did not know what the Internet was, 21 percent knew what the Internet was but did not have Internet access, and only 16.1 percent were able to use the Internet (easily or with difficulty).

How does the data collected through U-Report compare to those yielded through more systematic surveys?

Compared to traditional surveys, U-Report offered great value for money for each individual response, but much less so for a series of multiple questions. In the U-Report Uganda research survey (which asked demographic questions, not the issue-based questions the platform is intended for), the rate of response fell consistently with each added question.

Our analysis suggests that U-Report is a cost-effective way to quickly assess what the better-educated and more tech-literate part of the population thinks about an issue. Current technical limitations, combined with the lack of fundamental demographic data for a great proportion of its contributors, however, raise some concerns about the validity of the feedback and limit considerably the types of analyses that can be performed. Additional research needs to be conducted on the limitations and opportunities offered by the SMS format and its implications for different aspects of validity. Finally, the study also pointed to the need for further study in order to understand the impact of multiple SIM cards on representation. Overall, the study findings highlight the importance of using "small" data, i.e., qualitative data, to put into context "big" data, i.e., the thousands upon thousands of entries generated through the activities of U-Report and U-Reporters.
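The comparisons made above can be systematized by contrasting sample shares with household-survey benchmarks. The sketch below is illustrative only: the shares are figures reported in this chapter (the university benchmark uses the 18-34 household cohort), while the 1.5x ratio threshold is an arbitrary choice for the example, not one used by the study.

# Minimal sketch: flag groups that are over- or under-represented among
# U-Reporters relative to a household-survey benchmark. All shares are
# percentages reported in this chapter; the threshold is illustrative.

ureport_share = {"male": 70.5, "university_education": 47.1, "works_in_agriculture": 15.2}
benchmark_share = {"male": 49.5, "university_education": 15.1, "works_in_agriculture": 62.0}

for group, sample in ureport_share.items():
    ratio = sample / benchmark_share[group]
    if ratio > 1.5:
        label = "over-represented"
    elif ratio < 1 / 1.5:
        label = "under-represented"
    else:
        label = "roughly proportional"
    print(f"{group}: {sample:.1f}% vs {benchmark_share[group]:.1f}% -> {label} (ratio {ratio:.2f})")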
Main recommendations

The study suggests four strategies to address some of the challenges identified in the analysis. First, to improve the depth of the analysis and the validity of the information, a two-level approach to membership can be adopted, with the creation of a core of trusted U-Reporters representing different segments of Ugandan youth, to whom additional resources and training are provided. Second, to increase U-Report's capacity to reach the illiterate, the tech-illiterate, and the poorest of the poor, a two-pronged approach was suggested: these groups could be indirectly engaged by the trusted core of U-Reporters, who would actively seek to obtain the views of the least-represented of their cohorts, and UNICEF could attempt to engage these groups through other technologies, such as inexpensive video recorders and cameras. Third, in order to improve analysis and do justice to the richness of the feedback captured through the platform, UNICEF could consider sharing or opening up some of its data, provided that anonymity and safety measures are in place. Lastly, UNICEF and its partners could more actively seek to understand how crowdsourcing might be blended with other forms of data, both in terms of triangulation (how might other sources of information be used to confirm perception data) and sequencing (the extent to which crowdsourced data could be used to test emerging concerns or trigger other, more in-depth research efforts).

Introduction

Crowdsourcing is a new method of data collection that relies on the increased connectivity supported by the ubiquity of mobile phones and networked computing devices, and it has attracted a lot of attention in recent years.19 In the accountability and service delivery domains, the ability to elicit almost immediate feedback from hundreds and sometimes thousands of individuals at a low cost opens up new opportunities for service providers and governments to connect better with their citizens. However, it also raises some important questions: Whose voice gets expressed through this new channel, and how does this new process affect participants and users of the information? What is the nature of the feedback collected through crowdsourcing platforms?

This chapter presents the key findings of the study on Uganda's U-Report, a long-standing crowdsourcing platform created by UNICEF and an open source community in the North and South, which seeks to amplify the voices of the youth, a part of the population that is usually under-represented in public debates. The results of the study should be of interest to researchers and practitioners working in the field of transparency and accountability, and to those more widely interested in the potential of crowdsourcing for representation and expression. This chapter contributes to the discussion on the nature of crowdsourced data and this data's place in the evidence chain by addressing four key questions: Who are the U-Reporters and how do they compare to the wider Ugandan population? Why do U-Reporters participate and what practices and assumptions underlie their participation? What change does U-Report bring about for participants and decision-makers? And finally, how does the data collected through U-Report compare to data yielded through more systematic research? Based on the findings, suggestions are made about how some of the challenges highlighted through the research may be addressed.

OVERVIEW OF U-REPORT

The earliest polls on the U-Report website date from 2010, while the platform was being piloted. Following the pilot, the platform was scaled up, allowing anyone to join through the website or by sending an SMS. The platform was officially launched in 2011 in partnership with the Ministry of Gender, Labour, and Social Development, with the support of a nationwide media campaign and the blessing of nine NGOs and Faith Based Organisations (FBOs). The ranks of U-Report Uganda soon rose to 10,000 people. The unprecedented volume of SMS sent over the mobile network purportedly had a significant impact on mobile service providers, prompting them to expand their services. By 2014, the number of U-Reporters had risen to 266,000.
In 2014, it won the UN-based World Summit Award20 as the best mobile application in the category of m-Government and Participation. U-Report is now active in many different countries, including Nigeria, Sierra Leone, Indonesia, and Mexico.21 UNICEF currently views U-Report as "a 'killer app' for communication toward achieving equitable outcomes for children and their families" (http://uni.cf/1oPuT3K).

U-Report Uganda has four goals: by developing a scalable platform that aggregates views in real time, U-Report Uganda aims to amplify the voices of the youth, who are seen to be excluded from decision-making, and thereby empower them; to create dialogue around community development with community members as the core constituents; to raise awareness and provide useful, sometimes life-saving, information around critical issues connected with UNICEF's priorities; and to use mobile as a communication device that can reduce the distance between constituents and their representatives. In another document that was shared with the authors, two additional goals were put forward: influencing the behavior of individuals and using citizen accountability to strengthen programs.

A more extensive and ambitious set of goals can be found in the Ugandan U-Report Statute, which was co-designed by youth groups. According to this account, U-Report aims to meaningfully engage youth in sharing data/information on developmental facts within their local councils/communities; to inform and initiate an appropriate response from leaders and policymakers; to support campaigns that help inform young people and the public at large; to generate meaningful reports to help government agencies and development partners in supporting communities; to inform young people and the public at large of national trends, services, and policies; to identify emergencies; and to stimulate a positive mentality and ideology among the youth for development.

In our discussions, members of the team emphasized the fact that the platform was originally conceived of as a two-way platform: its purpose from the beginning was both to solicit and to share information. Indeed, as we will see in more detail later in the study, UNICEF uses the platform to deliver important information to its members about health and education. Figure 1 summarizes the key aspects of U-Report's logic of intervention.

Amplifying the voices of the youth is not straightforward, especially in the politically sensitive context of Uganda. Although UNICEF's mission provides the starting point for the main agenda of U-Report Uganda, deciding what questions to pose can be a politically and culturally delicate exercise. The current model of governance involves frequent meetings with U-Report Uganda's lead partners to decide the platform's agenda for the coming weeks.22 In some cases, the content and format of the questions are debated across different departments of UNICEF, and in other cases the content is co-designed by Ugandan Government ministries or MPs who want to engage young people around any of the issues outlined above.
Figure 1. U-Report's logic of intervention. The figure maps each goal to its means and intended change: amplifying the voices of the youth (create a widely accessible medium, especially amongst the youth, recruit widely, and use partners to reach marginalised groups, so that Ugandan youth express their opinions on critical issues); informing public debate (results are aggregated and made available to the public through different media, so that results and issues raised are discussed in the media and amongst Ugandans); raising awareness and providing valuable information (valuable information is broadcast on health, education, and children's wellbeing); forging closer relationships between U-Reporters and decision makers (decision makers use analyses for planning and eliciting feedback, U-Reporters see the effects of reporting on the ground, and a more transparent and responsive relationship is established between U-Reporters, decision makers, and duty bearers); and supporting citizen accountability (U-Reporters send solicited and unsolicited messages about the progress/outcomes of different initiatives, UNICEF aggregates the results and, depending on the assessment, confirms positive change or asks duty bearers for steps to improve the initiative, and duty bearers adjust their strategy).

The U-Report Statute created by the UNICEF partners ensures the platform is only used for child and youth development issues and is strictly apolitical. Throughout this process UNICEF remains politically and culturally neutral in order to empower the partners to be the decision-makers and to feel a sense of ownership of the platform. Some of the compromises made in developing the questionnaires for surveys 1 and 2 of this study reflect these constraints. For instance, a question on taking part in public demonstrations, which was intended to gauge the level of political engagement, was excluded as it was clearly political and could have adverse implications for the Ugandan government.

Another interesting dimension of what it means to have Ugandan "voices heard" concerns the emerging uses of the platform. A look through U-Report's database reveals that many messages sent by U-Reporters are in fact not part of regular polls. Unsolicited messages generally fall into the following categories: requests for information; reports on what is happening in the community and appeals for urgent help; and others, such as greetings and congratulatory messages. In order to support emergency responses, UNICEF has developed an automated text classification system that classifies messages into distinct categories according to their content, such as education, employment, and violence (Melville et al., 2013). The messages are then routed to the specific UNICEF teams focusing on education, health, or child protection and forwarded through the appropriate referral pathways, which include government or civil society partner programs that are expected to take appropriate action. A similar process is adopted with information requests. For example, MildMay and Marie Stopes—organizations that specialize in issues of reproductive health—receive requests for information on issues related to topics such as HIV/AIDS, maternal health, and family planning. UNICEF has also developed information dashboards that help all involved to manage the information and keep track of the actions taken.
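A minimal sketch can make this classify-and-route logic concrete. The categories follow those named above, but the keyword rules, team names, and functions are hypothetical stand-ins; the production system relies on a trained statistical classifier (Melville et al., 2013) rather than keyword matching.

# Illustrative sketch of classifying unsolicited SMS and routing them to a
# sector team. Keyword rules and team names are hypothetical.

CATEGORY_KEYWORDS = {
    "education": ["school", "teacher", "textbook"],
    "health": ["clinic", "medicine", "hiv", "malaria"],
    "violence": ["beaten", "abuse", "fight"],
}

REFERRAL_PATHWAY = {
    "education": "UNICEF education team",
    "health": "UNICEF health team",
    "violence": "UNICEF child protection team",
    "other": "triage queue",
}

def classify(message: str) -> str:
    # Assign the first category whose keywords appear in the message.
    text = message.lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(word in text for word in keywords):
            return category
    return "other"

def route(message: str) -> str:
    # Forward the message along the referral pathway for its category.
    return REFERRAL_PATHWAY[classify(message)]

print(route("Our school has had no textbooks since January"))
# -> UNICEF education team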
Figure 2 highlights the planned and emergent uses of U-Report. The planned uses are associated with the functions of U-Report that are included in the logic of intervention. The emergent uses cover unsolicited messages.

Figure 2. Planned and emergent uses. Planned uses: weekly polls; informative broadcast messages; social monitoring. Emergent uses (from U-Reporters): appeals for help; observations; requests for information; other.

In order to inform public debate and promote the use of the platform, and in order to provide feedback to U-Reporters, UNICEF has developed a multifaceted media and communications strategy. This involves discussions of U-Report results on radio shows, regular features on U-Reporters, U-Report poll results in newspapers, a documentary about U-Report funded by UNICEF (which was aired on all the major TV channels), and U-Report newsletters for Parliamentarians and UNICEF staff where appropriate.

In terms of advocacy, the UNICEF team has recently evolved its approach from primarily seeking to engage members of the Ugandan parliament to also involving district-level civil servants and officials. This change originates from the desire to work more at a grassroots, community level, which was originally seen as the concern of UNICEF's Ugandan partners. The U-Report Uganda team is also encouraging UNICEF departments and partners who wish to use the platform to think more systematically about potential impacts. Our analysis of system data also indicated that over time the UNICEF team has opted for more targeted messages, sending them to selected subgroups (based, for example, on region or occupation) rather than to the entire population of U-Reporters, in order to increase relevance.

U-Report has over 250,000 registered reporters who have signed up to receive and respond to text messages sent through the system. As indicated in Figure 2, U-Reporters are asked to provide their opinions (e.g., how do you think you might be able to support your sisters, wives, and mothers?) and objective information (e.g., have new textbooks appeared in your classroom?) through weekly polls. Great care is given to respecting local values and sensibilities.

Results of the polls are shared with the public through various media outlets (newspapers, radio, television) and the project's website (http://www.ureport.ug). Ugandan MPs, currently the main policy users of U-Report Uganda, can send questions via SMS to the UNICEF team so they can reach out to their communities through the platform.

METHODS

The study adopted a mixed-methods approach that involved qualitative and quantitative data collection and analyses. Research results were triangulated by using different methods of data collection for specific sub-questions. For example, the motivation-to-participate theme was extensively explored through in-depth individual interviews with U-Reporters and raised in the survey of U-Reporters.

Our goal was to highlight "big picture" aspects of the platform (e.g., investigating the profile of participants) and to delve more deeply into what contributing to the platform means for participants. Our emphasis on critical analysis was guided by our desire to provide honest and constructive feedback to UNICEF and to others experimenting with crowdsourcing. Table 1 provides details on the methods and instruments for data collection.
Table 1. Instruments of data collection and sampling

Informal and formal interviews with UNICEF staff — Description: a series of individual discussions with UNICEF staff and a half-day workshop with the U-Report team in Kampala on the logic of intervention, opportunities, and challenges. Target: key U-Report personnel. Sampling: snowball sampling.

System data — Description: data analysis of user statistics. Target: all available anonymized user summary statistics from the system's launch to 20 August 2014. Comments: 275,827 entries.

U-Report survey — Description: a survey consisting of 12 questions sent out individually through the U-Report platform. Target: U-Reporters. Sampling: sample of 5,693 reporters stratified by level of activity (most frequent contributors, less frequent, etc.). Comments: response rates varied significantly for each question.

U-Report poll — Description: two questions on views/experiences regarding development issues sent out as part of the regular weekly poll. Target: U-Reporters. Sampling: entire population of 286,800 U-Reporters. Comments: 11,167 responses for the question on the condition of roads; 28,931 responses for the question on whether people paid for access to healthcare.

Household survey — Description: a household face-to-face survey using a structured questionnaire (19 questions), conducted by a professional outfit. Target: wider Ugandan population. Sampling: national representative sample of 1,185 households. Comments: questionnaire included questions posed to U-Reporters, with some additional questions on SMS literacy and awareness of U-Report.

Online RIWI survey — Description: an online survey that uses a technology that allows researchers to randomly intercept online survey respondents. Target: Ugandan Internet users. Sampling: non-probabilistic sample of 13,693 respondents. Comments: questionnaire included questions posed to U-Reporters, with some additional questions on U-Report; response rates varied significantly.

Semi-structured interviews with U-Reporters — Description: face-to-face, semi-structured individual in-depth interviews. Target: U-Reporters. Sampling: 20 U-Reporters stratified according to gender, level of activity, and location. Comments: recruited 20 U-Reporters; the final pool of interviewees under-represents low-frequency users due to difficulties in recruiting.

Ugandan MPs survey — Description: short telephone survey with Ugandan MPs on their awareness and use of U-Report. Target: 388 Ugandan MPs. Sampling: randomized sample of 95 MPs (stratified by region) and representatives of special interest groups (army, people with disabilities); the sample included all youth representatives. Comments: 27 MPs responded.

Mini case studies — Description: 9 mini case studies conducted by UNICEF on successes that U-Report has had in supporting change. Sampling: UNICEF selected cases that speak to different aspects of U-Report's logic of intervention. Comments: the case studies are informative but limited in scope; our analysis of these cases was restricted as they were a very late addition to our dataset.

FINDINGS

Who are the U-Reporters and how do they compare to the Ugandan population?

Understanding the characteristics of the U-Report population is important for three reasons. The first is representativeness: knowing how U-Reporters compare to the wider Ugandan population puts their views and ideas in context. Secondly, knowing who participates and who does not can support efforts to improve inclusiveness.23, 24 Thirdly, in terms of analysis, being able to distinguish similarities and differences in views and perceptions between different groups, for example by age, gender, and education, can provide invaluable insights to UNICEF and other users of U-Report information.

Interestingly, despite the amount of data produced by U-Report, demographic data about its contributors are not collected systematically.
When they sign up, reporters are given the choice to specify their gender, age, and the location from which they are reporting, and although some of them do, many do not. For example, in August 2014, 60 percent of U-Reporters had chosen not to declare their gender. The issue of whether or not to collect basic demographic data is not simple. There is undeniably a tension here, as in other comparable crowdsourcing initiatives, between making registration of personally identifiable information voluntary (and not denying people their say on the basis of not wanting to provide additional information) and obtaining basic demographic data to gauge representativeness and enrich analysis.

The second issue that emerged from the analysis concerns the potentially aspirational nature of some of the answers. When presented with a draft of the report, members of the U-Report team indicated that U-Reporters might be overstating their education, perhaps in the hope of employment. Our framing of the U-Report survey, which was advertised as a "World Bank survey," could have contributed to such over-reporting. However, a recent poll by U-Report on its members' levels of education also obtained results comparable to ours. The same could also be true for other data, such as reported Internet use and levels of political activity. These observations link to a lesson that emerged from this report, namely that the expectations and assumptions of contributors shape the nature, and therefore the meaning, of the data.

Basic socio-demographics

As depicted in Figure 3, U-Reporters are widely dispersed across all districts of Uganda. According to the system data, the districts with the most U-Reporters are Kampala (16,751), Wakiso (11,766), and Gulu (10,549). The other districts have fewer than 1,000 U-Reporters, with the lowest numbers observed in the districts of Buvuma (111) and Bukwa (160).

When grouped into the four major regions, the distribution of U-Reporters is biased compared to the official figures for the Ugandan population. There is an over-representation of U-Reporters in the Northern region (where 35.5 percent of U-Reporters live, compared to 20 percent of the total population of Uganda) and an under-representation in the Western region (15.7 percent of U-Reporters versus 24 percent of the Ugandan population) and the Eastern region (21.1 percent of U-Reporters versus 29.6 percent of the Ugandan population). The figures for the Central region match roughly with the official ones (27.6 percent of U-Reporters and 26.5 percent of the Ugandan population).

Among surveyed U-Reporters, more are found in Kampala (11 percent), Mbale (3.6 percent), Wakiso (6.1 percent), and Gulu (3.5 percent). When grouped into the four major regions, the U-Report survey is biased relative to the general population—there is an over-representation in the Northern region (26.8 percent of the sample) and an under-representation of the Western (20.3 percent) and Eastern regions (23.7 percent)—but it is not as biased as the system data. In reviewing the write-up of the study, UNICEF suggested that this disparity is due to having prioritized U-Reporters' recruitment in the Northern region, a conflict-affected area of the country. In order to facilitate participation in the north, the platform also introduced the use of Luo.
Figure 3. U-Reporters' location based on system data. The figure maps the number of registered U-Reporters in each Ugandan district, ranging from 16,751 in Kampala, 11,766 in Wakiso, and 10,549 in Gulu down to 160 in Bukwa and 111 in Buvuma.

Figure 4 depicts the age groups of U-Reporters compared with those presented in the 2012 Ugandan Population and Health Survey (referred to as "Population" in the figure).25 As can be seen, the majority of U-Reporters (41.8 percent out of 3,188 responses) are between 20 and 24 years of age. In fact, a very high proportion of respondents (71 percent) are in their twenties, which means that UNICEF is indeed reaching its targeted population, young Ugandans. The results of the household survey and the system data corroborate these findings. The household survey placed the majority of respondents between the ages of 15 and 34 (54.3 ± 2.9 percent), roughly corresponding to the age groups of the general population, but still statistically lower than the U-Reporter system data (89.3 percent).26 Despite small gaps, system data correspond very well with those of the U-Reporter survey, with 45.9 ± 1.3 percent of U-Reporters falling between the ages of 20 and 24, and 25.7 ± 1.3 percent in the 25 to 29 age group. RIWI results are also biased with regard to age, with 87.4 ± 1.6 percent of the contributors younger than 35.

Similarly, all data sources point to U-Report's bias with regard to gender. Figure 5 presents the gender distribution across three surveys and system data. Both the household survey conducted for this study and the Ugandan National Household Survey from 2009/10 (indicated as "Population" in the figure) reveal an even gender divide for the wider population. In sharp contrast, U-Report survey data show that 7 out of 10 U-Reporters (out of 2,101 responses) are male. As with age, system data largely support the picture emerging from the U-Report survey. According to system data, 64.1 percent of U-Reporters are male, accounting for 71.4 percent of the total messages sent.

Another factor that differentiates U-Reporters from the Ugandan population as a whole, as well as from their peers, is education.27 As shown in Figure 6, most U-Reporters are highly educated, with 47.1 percent (1,756 out of a total of 3,731 responses) reporting that they have a university-level education. According to a poll conducted by U-Report on October 29, 2015 (n=30,759), 43 percent of U-Reporters have attended university.
Another 19.5 percent have graduated from, or attended, high school. The data from the household survey reveal an altogether different picture. Here, the majority of people have at most attended primary (52.7 percent) or secondary school (27.9 percent), with only 49 (4.3 percent) out of 1,128 respondents reporting that they have a university education. The number of people that attended high school (5.1 percent) was equally low. The examination of the levels of education of the younger respondents in the household survey shows that 13.2 percent of those aged 18-34 have attended high school, with 15.1 percent reporting that they have a university-level education. So, although education is somewhat connected to age in the general population, U-Reporters across all age groups are more educated than their peers in the wider Ugandan population.

Figure 4. U-Reporter age groups compared against the 2012 Population and Health Survey (U-Report survey versus population shares across age bands from 15-19 to 55+).

Figure 5. Gender distribution across survey and system data (female/male shares for the U-Report survey, system data, household survey, and population).

Figure 6. Education across U-Reporters and the general population (U-Reporter survey versus household survey, from no schooling to university).

Significant differences between U-Reporters who opted into this research study and the Ugandan population can also be found in the way that they make their living.
As shown in Figure 7, whereas the predominant occupation in the household survey was agriculture (62 percent), almost half of the U-Reporters (47.5 percent out of 3,139 responses) hold jobs in the government and private companies (in line with their higher level of education). Amongst RIWI respondents, 39.7 percent were employed in the private sector and 7.2 percent in the government. A high proportion of respondents in all three surveys indicated "Other" as an employment category. In the case of U-Report, "Other" included mainly cases of "Unemployed" and "Student," the latter being especially relevant given the target audience of U-Report. In the face-to-face survey, many people included "Housewife" and owning a small business under "Other." When we compare U-Reporters against the 18-34 group from the household survey, we see that U-Reporters are disproportionately involved in the government and private sector.

The merits of regular employment of U-Reporters are placed in context when considering the difficulties reported by many of the household respondents in making ends meet. More than half of the household survey respondents reported that they had been without food (58.9 percent), clean water (50.0 percent), or medicines (65.0 percent) in the six months leading up to the survey date (July/August 2014). According to the household survey, older respondents are more likely to have gone without medicines (35 percent or over), food (45 percent or over), or clean water (65 percent or over). Less educated respondents (with no formal or only primary school education) were more likely to have gone without food, and those who work in agriculture are more likely to have gone without food, clean water, and medicines. RIWI respondents appear to be relatively affluent, with only one in four belonging to households that had to go without food, medicine, or clean water in the six months leading up to the survey.

Political Participation

An interesting picture also emerges when attention is turned to political participation. In Figure 8, we compare the number of community meetings that U-Reporters and household survey respondents attended in the preceding year. As we can see, despite their lower level of education, average Ugandans are significantly more politically active than the better-educated U-Reporters. The data also show that the proportion of U-Reporters who attend ten or more community meetings a year (7.6 percent) is slightly greater than the one reported in the household survey (4.4 ± 2.9 percent).28

This dimension of political participation, the willingness to partake in local processes of decision-making, does not translate into a readiness to contact those higher in power. As indicated in Figure 9, U-Reporters are much more likely to contact their members of parliament about an important problem, or in order to express their views (31.1 percent out of a total of 1,681 responses), than the average Ugandan (12.8 percent from the household survey). Household survey respondents were also reluctant to talk to their local leaders about poor public services (health services, roads, schools) (57.7 percent), albeit not to the same degree that they were reluctant to contact their MPs (84.5 percent).

The relative reluctance of household respondents to contact their elected representatives might not only be a result of decreased confidence.
When asked whether public officials cared "about people like them," most household survey respondents (58.2 percent) replied in the negative.

Everyday experiences: technology use and access to health care services and roads

The third part of the survey focused on issues that underlie some of the day-to-day experiences of respondents: access to, and use of, mobile phones and the Internet, roads, and health care services.

As depicted in Figure 10, half of the household respondents (49.5 percent) said that they don't know how to send SMS messages. Unsurprisingly, knowing how to text was positively associated with levels of education [X2(10) = 334.20, p<.001], negatively with age [X2(12) = 460.56, p<.001], and also associated with gender, with men more likely to know how to send SMS [X2(2) = 7.85, p<0.01]. Those who could more easily send messages tended to be between 18 and 34 years old (53 percent of those in this age group). Texting was also easier for those having attended secondary school (61.5 percent know how to text) and university (79.6 percent know how to text). Interestingly, 16.3 percent of university students/graduates reported not being able to send a text message, suggesting perhaps a gap between general and technological literacy and/or a lack of interest in learning how to text. The most confident texters were high-school students (84.2 percent know how to text) and technical students/graduates (91.8 percent know how to text).

Knowledge of and access to the Internet was also highly problematic for the vast majority of household survey respondents. As shown in Figure 11, the majority of the respondents don't know what the Internet is (62.9 percent), 21 percent of the sample know what the Internet is but don't have Internet access, and only 16.1 percent are able to use the Internet (easily or with difficulty). Women, respondents who work in agriculture or have gone without food, and respondents educated up to high school are less likely to know what the Internet is. Amongst household respondents, 22.8 percent of those with high school education and 36.7 percent of those with technical school education know how to use the Internet but had trouble accessing it. In contrast, U-Reporters are much more confident Internet users, with 74.9 percent reporting being able to use the Internet (with or without difficulty). Although age is associated with Internet usage in both the household and U-Reporter surveys, youth from the household survey (18-34 years of age) are less likely to use the Internet than U-Reporters in the corresponding age group.

Figure 10.  Texting (percent of household survey respondents)

Can you send SMS messages?
Yes, quite easily          38.3
Yes, but with difficulty   12.2
No                         49.5

Figure 11.  Internet use (percent)

                                   U-Report survey   Household survey
Use the Internet easily            43.4              9.9
Use the Internet with difficulty   31.5              6.2
No Internet access                 25.1              21.0
Don't know what the Internet is    0.0               62.9
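Associations like the ones reported above (texting ability against education, age, and gender) come from chi-square tests on cross-tabulated survey counts. As a minimal illustrative sketch, the snippet below runs such a test on an invented education-by-texting contingency table; the counts and category groupings are hypothetical stand-ins, not the study's microdata.

import numpy as np
from scipy.stats import chi2_contingency

# Rows: education levels; columns: texting ability
# ("easily", "with difficulty", "cannot text"). Counts are invented.
observed = np.array([
    [ 40,  35, 420],   # no school / primary
    [210,  60, 130],   # secondary
    [160,  25,  15],   # high school / technical
    [ 90,  12,  11],   # university
])

chi2, p, dof, expected = chi2_contingency(observed)
print(f"X2({dof}) = {chi2:.2f}, p = {p:.4g}")  # same form as reported in the text

With the real cross-tabulations, the same call yields the statistics quoted above, such as X2(10) = 334.20.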
Figure 12.  Paying for services/medicine at a government facility (percent)

       U-Reporter poll   Household survey
Yes    49.0              33.3
No     51.0              66.7

Figure 13.  Number of times poor roads prevented daily routine in the last six months (percent)

Times        U-Report survey   Household survey
0            16.7              50.5
1 to 3       42.8              27.2
4 to 6       24.4              13.3
7 to 9       5.9               2.8
10 to 19     7.6               3.4
20 or more   2.6               2.7

Most of the household survey respondents (66.7 percent) indicated that they were not asked to pay for medicines or a service in a health clinic. However, despite the mandate for universal access, 33.3 percent did pay. Those with technical school or university education are less likely to be asked to pay for medicines or services. As indicated in Figure 12, the proportion of U-Reporters stating that they have been asked to pay for seeing a doctor or for medicine (49 percent) is significantly larger than that of the nationally representative sample (33.3 ± 2.9 percent).

For half of the household survey respondents (50.5 percent), poor roads were not a problem in the six months leading up to the survey (see Figure 13). When poor roads were a problem, it was typically up to four times. Women, those with a university degree, and those working in civil society are less likely to say that a poor road prevented them from travelling. U-Reporters seem to have more difficulties in using roads, perhaps because their occupations required them to travel more. Only 16.7 percent of U-Reporters (out of 11,167 answers) reported no disruption to their everyday routine as a result of impassable or problematic roads, and 42.8 percent said that this had happened from one to three times in the last six months.

WHY DO U-REPORTERS REPORT AND WHAT PRACTICES AND ASSUMPTIONS UNDERLIE PARTICIPATION?

Patterns of Participation

Very little attention is usually given to the level of activity that crowdsourcing sustains over time. Its use in emergencies means that, more often than not, the expectation is that people will send a couple of messages at most. It is therefore very useful to frame our understanding of who the U-Reporters are and what drives them against their level of activity. Figure 14 provides a breakdown of U-Reporters according to the proportion of messages to which they have responded.

Figure 14.  Breakdown of U-Reporters according to response rate (percent)

Type of U-Reporter    Share
Non-respondents       22.7
One-off respondents   8.7
Less than 20%         55.5
20-39%                10.8
40-59%                1.9
60% or over           0.4

Out of a population of 275,826 U-Reporters, 2.3 percent responded to at least 40 percent of the questions posed to them, 55.5 percent responded to up to 20 percent of questions (at least one question), and 10.8 percent responded to between 20 and 39 percent of the questions. Of the registered U-Reporters, 22.7 percent have never responded to a poll and 8.7 percent responded only once.

Another fact revealed by the analysis of system data is that 0.2 percent of respondents (577) send out more messages than questions they receive. This is because the platform allows participants to text messages both in response to a question and by sending any message they want (see section 2.1).
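Under the hood, a breakdown like Figure 14 is a simple aggregation over the poll logs. The sketch below shows one way to compute it with pandas, assuming a hypothetical table with one row per U-Reporter recording questions sent and answers given; the column names, thresholds, and toy values are illustrative, not the real system data schema.

import pandas as pd

# Hypothetical per-reporter log (invented values)
log = pd.DataFrame({
    "reporter_id":    [1, 2, 3, 4, 5, 6],
    "questions_sent": [50, 50, 40, 40, 30, 20],
    "answered":       [0, 1, 6, 18, 25, 14],
})

rate = log["answered"] / log["questions_sent"]
buckets = pd.cut(
    rate,
    bins=[-0.001, 0.0, 0.2, 0.4, 0.6, 1.0],
    labels=["non-respondent", "up to 20%", "20-39%", "40-59%", "60% or over"],
).astype(object)
# Reporters with exactly one reply are reported separately in Figure 14
buckets[log["answered"] == 1] = "one-off respondent"
print(buckets.value_counts(normalize=True).mul(100).round(1))

Run over the full register of 275,826 U-Reporters, the same grouping would reproduce the shares shown in Figure 14.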
As shown in Figure 15, the analysis also indicates that U-Reporters who are more responsive to polls also send, on average, the most unsolicited messages. System data also indicate that high-frequency reporters tend to share basic demographic information about themselves more frequently. Both these findings (the propensity to send unsolicited messages and to share basic demographic information) point to the existence of a core group of contributors who see themselves more as proactive citizen reporters than as responsive data points.

Figure 15.  Average number of unsolicited messages, by response rate

Response rate         Average unsolicited messages
Non-respondents       2.90
One-off respondents   3.91
Up to 20%             6.31
20-39%                8.63
40-59%                10.43
60% and over          14.57

Moreover, when tested in a linear regression, the effects of gender, age, and joining date on the response rate are all statistically significant at p<.001. Male and younger U-Reporters, and those who joined more recently, tend to answer more questions. The distribution of youth age sub-groups and gender in the more active part of U-Report is, therefore, quite skewed.
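The regression reported above can be sketched as an ordinary least squares model of individual response rates on gender, age, and time since joining. The variable names and toy values below are hypothetical; with the full system data, df would hold one row per registered U-Reporter.

import pandas as pd
import statsmodels.formula.api as smf

# One row per U-Reporter (invented values for illustration)
df = pd.DataFrame({
    "response_rate":      [0.05, 0.30, 0.12, 0.45, 0.02, 0.20, 0.60, 0.08],
    "male":               [1, 1, 0, 1, 0, 0, 1, 0],
    "age":                [24, 19, 27, 21, 30, 22, 18, 26],
    "days_since_joining": [300, 90, 700, 120, 1000, 450, 60, 850],
})

model = smf.ols("response_rate ~ male + age + days_since_joining", data=df).fit()
print(model.params)   # fitted coefficients give the direction and size of each effect
print(model.pvalues)  # with the real data, all three effects were significant at p<.001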
Another issue that deserves further investigation, and which might have significant implications for representation, concerns phone sharing and the use of multiple SIM cards. Thirteen percent of the polled U-Reporters indicated that it is possible that someone other than themselves replied to a U-Report poll. Out of a total of 2,995 U-Reporters, 60.3 percent indicated that they have two SIM cards, and 13.8 percent have three. Mobile phone sharing and multiple SIM ownership occur frequently in other countries in the global south, especially among the poor.29 The interviews highlighted another interesting practice: three interviewees indicated that they answer polls using different SIM cards. One indicated that he uses both his SIM cards to send out information, and another, who misunderstood the purpose of the platform, indicated that he replied to polls by giving different answers on his three SIM cards. Although these findings are not conclusive, they challenge the notion that each mobile phone number corresponds to an individual, and they point to the need to carefully consider how membership is defined and verified.

Motivations and expectations

Table 2 presents the results of the question in the U-Report survey about what motivates reporters. As we can see, the majority of respondents indicated that the platform provides them with the means of expressing their opinion and, to a lesser extent, of finding out important information.

Table 2.  U-Reporters' motivations

The reason I'm a U-Reporter is:         Freq    Percent
It's a way to find out important info   724     22.2
It's a way to voice my opinion          2,534   77.8
Total                                   3,258   100.0

The interview findings corroborate this. Almost all the interviewees emphasized the fact that U-Report helps them stay up to date with developments in their community. Interviewees seemed to particularly appreciate alerts about immunizations, child days,30 and obtaining birth certificates, alerts which they tended to share with others. Five out of twenty interviewees mentioned the importance of the platform being "free" without being prompted by the interviewer.

Consistent with the answers provided in the survey, most interviewees described U-Report as an information gathering and sharing tool, a way of getting information from the community to those in power. Only one interviewee mentioned accountability directly. U-Reporters were less clear on what happens after they send in their responses and on how the results of the polls are used. Only four said the results are shared with the government or MPs, or get passed on to "coordinators." U-Report is not branded as a UNICEF tool, in an effort to ensure that U-Reporters and partners feel a sense of ownership around the platform. As such, only one interviewee was aware of UNICEF's involvement in the platform. Three respondents indicated that they would like to know more about the platform's agenda. This is how one of them put it:

They put together the information, compile, and get what they want and send us the feedback. They do something as when they send feedback they mention change. I would like to know the reason why things are carried out [...] We were told that it was a platform where you get asked questions and you answer, but we were not told why. (MO, 16.07.2015)

MO was informed about U-Report in a scout meeting about m-trac (http://www.mtrac.ug), another crowdsourcing tool developed by UNICEF and the Ministry of Health that tracks health-related information. Unlike many interviewees, who joined U-Report Uganda after learning about it in the media or through a friend, she learned about the system in a context that should have allowed her to absorb more information about the platform.

No one explicitly raised issues of confidentiality and anonymity in the survey. Two possible reasons for this are a lack of understanding of what anonymity means in the context of the platform and/or that U-Reporters attribute little importance to confidentiality. Put more simply, people might either not know and/or not care about confidentiality. What may reinforce this lack of interest is the fact that most questions are non-political and that there are no well-known cases of U-Reporters getting in trouble with the government or local authorities. This, of course, still means that strict ethical rules with regard to safeguarding privacy should be maintained.

Does this lack of a more in-depth understanding of the deeper purpose and function of U-Report reduce the efficacy of reporters, and consequently of U-Report itself? This question is of prime importance in the context of crowdsourcing initiatives, where the drive for numbers and the ease of signing up may run counter to participants' understanding of how the information that they provide is being used and by whom.

Connecting with others: collective dimensions of reporting and personalized feedback

Crowdsourcing, in its current form, is an individualized experience: messages are sent from the initiator of the initiative to the targeted population on a one-to-one basis. Access to the aggregate results usually provides the sole means of connection to other participants, and collective action is supported through advocacy on the basis of the results, not through connecting with other individuals directly. The interview findings highlighted the limitations of this model and its underlying assumptions.

The desire to connect to others in various ways was the most significant emergent theme from our in-depth discussions with the twenty interviewees. The first way in which this desire was expressed was through the sharing of alerts and messages distributed by U-Report with the broader community.
This was especially the case for the seven teachers, who would share the messages communicated through U-Report with children and their parents. This function of U-Reporters as data-sharing points within the community is consistent with the view of U-Report as an information dissemination and sensitization channel highlighted in the previous section. Sometimes this kind of activity brings its own challenges. This is how RM, a 20-year-old from Kampala, talked about his experience of being a U-Report ambassador:

The topics that are sent to me are shared with my family first then people outside. Sometimes I talk to parents about not hitting their child but they despise me. I tell them that if they continue I will take them to the police but it ends there as I cannot take them to the police. Some believe in me some do not. If the next day I find him sitting on his own I give him what I have. (RM, 18.07.2014)

The second way in which the desire for more connection was expressed was through the importance ascribed to meeting offline, face-to-face. Only one of the interviewees had attended an event organized by UNICEF to bring together U-Reporters, but all of them expressed the desire to meet with others. Several of the interviewees also suggested that U-Report would benefit from having a physical presence within the community.

The third, and one of the most surprising, findings that relates to this theme was the habit of consulting with others before replying. Seven out of twenty interviewees indicated that they tended to discuss the information asked for by U-Report with friends and family before they sent in their reply. This is how AN, a 23-year-old woman from Iganga, explains this:

When I get the questions I speak to my friends and we come with one view then I reply. For example when they ask if there are drugs in the health centers, I ask my friends at work, where I stay and ask them how many times they have been and if there are drugs and then reply. I do this with all the questions. I do this both my numbers (Warid and MTN) because it is free. I say the same thing on both lines. When I talk to my friends we discuss the issue and pick the major one then pick one. We discuss based on the free time we have, we may start with five people then end with three. (AN, 17.07.2014)

The discussion of U-Report questions amongst friends and family supports UNICEF's goals of engaging the wider Ugandan society and provides a strong indication of U-Report having a role in shaping public debate. However, from an analytical perspective, this practice poses two important challenges. Firstly, it could have an effect on answers to sensitive questions; secondly, for more demanding analysis, it could affect correlations between individual attributes and answers.

The fourth way in which this desire for connection was expressed in the interviews relates to feedback. Thirteen interviewees indicated that they would like to get more feedback about the results from the platform. Most of them indicated that they would prefer to receive this through SMS, as they lacked access to the Internet. Only one was aware that U-Report had a Facebook page where results get published. None of the interviewees, even those with access to the Internet, was aware of the existence of the U-Report website where the results of the polls are published regularly.
In line with the theme of a desire for greater connection, many interviewees greatly valued the messages that UNICEF sends reporting on the replies of people from other communities and parts of the country.

What change does U-Report bring about? Participant views and experiences of impact

How do participants perceive the impact of U-Report, and how do they talk about it? Table 3 presents the results of the U-Report survey. It shows that opinions are divided almost equally, with 53.4 percent of respondents indicating that U-Report has led to some changes in their district and 46.6 percent saying that they are not certain about U-Report's impact.

Table 3.  Reported change

U-Report has led to:                            Freq    Percent
Many changes in my district                     359     15.8
Some changes in my district                     852     37.6
It's not clear what changes U-Report has made   1,055   46.6
Total                                           2,266   100.0

The qualitative findings elaborate this high-level picture by helping to clarify what changes U-Reporters see as resulting from U-Report activities. The first type of change that was highlighted resulted from U-Report's activity as an information dissemination channel. As mentioned previously, people found alerts about immunization, child days, and the issuing of birth certificates particularly valuable. Another type of change concerned U-Report's educational function. As one interviewee put it: "It makes a difference by educating people about what is happening and teaching them about issues and what is right." (NK, 17.07.2014)

Five people specifically mentioned U-Report's contribution to raising awareness about people with disabilities, with one of them indicating that she no longer fears those who are disabled and "no longer runs from the person with elephantiasis and the other person who has lost his fingers and now greets them" (JM, 15.07.2014). Another suggested that "children are now in school because of the messages" (EN, 17.07.2014). Only two out of the twenty interviewees stated that they did not see U-Report having any results.31

Like EN, however, the majority of the interviewees who talked about shifts in practice on the ground talked about them in general terms. Many formed tenuous links between U-Report and the indicated changes, even when prompted to elaborate (e.g., how do you know this?). One interviewee suggested that there has been a drop in the number of children who were dropping out of school, "which is because U-Report sent a message about it and then spoken to girls to stop them dropping out" (HK, 15.07.2014). Another one suggested that U-Report has made laws that prevent the transmission of diseases, and a third said that:

Sometimes they ask us about the health centers and we find there is some improvement in supplies. They also sometimes ask us about people with disabilities and how we can speak to their parents and there is some improvement which is due to U-Report. Now most people sleep under mosquito nets because of sensitization. (MO, 16.07.2014)

Further research would be required to verify the extent of the perceived change and whether the actions prompting it could be linked with U-Report and UNICEF actions, which in turn might be a result of a bigger set of interventions.

Overall, we found interviewees to be guarded in their comments. This might be a result of the framing and setting of the interviews, as the study was positioned as a World Bank study and the interviews in Kampala were conducted in UNICEF offices.
These conditions might have amplified the power imbalance between interviewers and interviewees.

Awareness and use of U-Report by Members of Parliament

Ugandan members of parliament perform legislative and oversight roles and pass the national budget. In addition to representing the views of their constituents and holding consultative meetings to update them on the activities of the government, they are also responsible for mobilizing their constituents to participate in the formulation and implementation of development programs initiated by the government and other actors.32

U-Report works with Ugandan MPs via the Uganda Parliamentary Forum for Children (UPFC). The UPFC has worked to ensure that all MPs can receive messages via U-Report and has trained over seventy MPs on the opportunities supported by the platform. One way in which U-Report works in this area is by forwarding questions from MPs to the U-Reporters in their constituency areas. The replies are then given to the MPs, who can use them in their deliberations. The U-Report team also indicated that MPs are alerted to the feelings and views of U-Reporters across the country on a specific issue, and MPs' views are then solicited on what can be done. The UPFC often uses this information in advocacy materials and meetings to discuss upcoming issues. U-Report can provide a breakdown of responses of U-Reporters by district for every poll.

As indicated previously, Ugandan MPs have been considered a key audience for U-Report. This section highlights the main findings from our short interviews with twenty-seven Ugandan MPs. As we can see from Table 4, twenty-two out of the twenty-six MPs who replied to the question indicated that they knew about U-Report.

Table 4.  Awareness of U-Report by MPs

Are you aware of U-Report?   Freq   Percent
No                           4      14.8
Yes                          21     77.8
Yes (but not much)           1      3.7
Does not want to answer      1      3.7
Total                        27     100.0

Figure 16 shows what MPs said when asked what they knew about U-Report and how they would describe it. As we can see, the majority (ten) considered U-Report primarily a means to elicit feedback from their constituents.

Figure 16.  What do you know about U-Report and how do you describe it? (MPs' descriptions)

Constituents give feedback
Constituents ask questions
Gather information from people
Give information to people/youth
Platform for youth

Those MPs who said that they were aware of the platform were asked how they used it. In the context of the survey, use was defined in quite broad terms and covered actions ranging from using the platform to reach constituents and raise awareness to accessing the results of the platform. Five out of the fourteen MPs who answered the question pointed out that they had engaged with the platform in the past, but their interest had fluctuated over time. The rest indicated that they had never made any use of U-Report.33

In response to the question of whether their use of U-Report had affected their views or actions, five MPs responded in the affirmative, with two of them indicating precisely the way in which U-Report changed their ideas. The first talked about the effect of being more in touch with the challenges that communities, and youth in particular, face on the ground: "Yes it is useful to me as I get to know the problems affecting my community and to know about the national issues affecting youth/health which he can then feedback to the community." The second talked about the importance of getting in data about children with disabilities:
"Yes, for advocacy in Parliament and in the constituency to help collect info on the number of children with disabilities to identify allowances and the challenges affecting children with disabilities and to help provide lunch to primary students."

Four MPs raised concerns about U-Report's capacity to reach people in remote areas, the level of IT literacy that it requires, and the limitations of short questions.

MINI CASE STUDIES ON U-REPORT INTERVENTIONS

Ministry of Agriculture & World Bank: Banana Bacterial Wilt

"[In 2013] bacterial wilt infection was spreading throughout Uganda banana crops, adversely affecting primary agricultural export and presenting food security issues. In March that year U-Report was used to inform over 50,000 citizens about the infection, visualize incidence and spread of this devastating disease, disseminate description of symptoms as well as new treatment and management options, all at a cost of about three cents per person." (MAWBBW study, doc A, p. 1)

The specific question asked by U-Report was: "Do you know any farmers whose banana plantations or crops are infected with banana bacterial wilt disease? YES or NO." This information was to be used by the Ministry of Agriculture to identify epidemiological centers and to provide targeted agricultural extension services. Six months after the initial poll, U-Reporters were sent this follow-up question: "Have you noticed an increase in efforts of government & agricultural officials to eradicate Banana Wilt disease in your sub-county in the last 6 months? YES/NO." Their responses were split evenly, with 40 percent replying "Yes," 42 percent replying "No," and 18 percent "Other." U-Report does not usually provide more detailed analysis of "Other."

In this scenario, it is uncertain to what extent U-Report played a role in raising awareness and supporting the efforts of the Ministry to take action. Interviews with ministerial staff would provide more information about how the data was used. It would also be interesting to know whether there were other forces at play at the time (e.g., reports in the media, farmers contacting their MPs directly about the disease) that might have prompted the Ministry to take action.

World Bank UPPET Study

"Uganda's Ministry of Education faced difficulties understanding the impact of new textbooks and science kits on students and teachers in public secondary school. U-Report was used to identify parent, student, and teacher beneficiaries of the program and engaged them on a month-long dialogue on use and impression of new textbooks and science kits." (U-Report-Educ-casestudy-WB, p. 1). According to U-Report: "5,000 beneficiaries from 1,583 schools collectively provided 31,187 feedback over a month. The feedback was used to improve the education service delivery system" (ibid).

According to the twenty-page-long report on the intervention, the purpose of the UPPET/U-Report intervention was to "create a more targeted and vibrant two-way communication with each group (PARENTs, TEACHERs, and STUDENTs), get feedback separately and have profiles created identifying key demographic information of each U-Report in the category" (UPPET case study, doc B, p. 2). The intervention included many innovative elements, one of which involved the process of identifying the selected sub-groups.
A more detailed presentation of this methodology would be of great value to those interested in crowdsourcing. What is more difficult to substantiate is the idea that U-Report instituted a direct dialogue, a two-way communication between students, parents, and teachers, and also provided the means for increasing transparency and accountability. In particular, the study does not give details about how these ideas (dialogue, two-way communication) worked in practice. One could imagine how the results might start a conversation between parents, students, and teachers on the best way to get access to and use the free textbooks and science kits, and also on the challenges around this, but it is not clear whether this or something similar happened. Similarly unclear is whether any actions were taken following the publication of the results.

UNICEF Emergency Preparedness Case Study

In late 2014, UNICEF conducted a study on the appropriateness of its emergency preparedness strategies in Uganda as a means of providing more accountability to affected populations. U-Reporters were asked to share their thoughts on what emergencies were most pressing to them, what they were doing to prepare for these emergencies, and what assistance would help them to better prepare. Three questions were posed to U-Reporters: What emergencies are you concerned about? What are you doing to prepare for the emergency that you told us about? What would help you to prepare better for the emergency you face? Questions were open "to minimize response bias" (UNICEF EMOPS, Humanitarian Policy Section and UNICEF Uganda, Jan 2015, p. 1), and responses were aggregated using phrase and word analysis.

Similarly to most U-Reporter polls, the survey elicited a large number of responses: 9,000 people responded to the first question, 4,329 to the first and second questions, and 2,888 to all three questions. However, as in the previous case, the study does not clarify how the results were used by UNICEF.

A particularly interesting aspect of this case study concerns the level of the analysis of the data. The results presented in the report were aggregated at a very high level, without distinctions being made on the basis of gender, age, or location. Although it is interesting to know that people valued "education and sensitization, knowledge" in preparing for a crisis, these abstract categories do not allow one to know what respondents mean by them. It would be interesting to learn more about how other departments within UNICEF made use of this information. Did they find such a high level of aggregation useful? Did they conduct their own more detailed analysis? In our experience, planning and action usually necessitate more disaggregated analysis.
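The "phrase and word analysis" used to aggregate these open responses can be approximated with simple token counting, and the disaggregation argued for above is then one grouping operation away. In the sketch below, the responses and the demographic fields attached to them are invented for illustration; the actual coding scheme used by UNICEF is not described in the case study.

from collections import Counter
import re

# Hypothetical open-text responses with attached demographics
responses = [
    {"gender": "F", "region": "Northern", "text": "Drought and lack of clean water"},
    {"gender": "M", "region": "Northern", "text": "Drought, crop failure"},
    {"gender": "M", "region": "Central",  "text": "Floods near the river"},
]

def tokens(text):
    return re.findall(r"[a-z']+", text.lower())

# High-level aggregation, as in the UNICEF report ...
overall = Counter(t for r in responses for t in tokens(r["text"]))
# ... versus the same counts broken down by gender and region
by_group = {}
for r in responses:
    key = (r["gender"], r["region"])
    by_group.setdefault(key, Counter()).update(tokens(r["text"]))

print(overall.most_common(5))
for key, counts in by_group.items():
    print(key, counts.most_common(3))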
MoGLSD Youth Venture Capital Fund

The Ministry of Gender officially requested UNICEF Uganda to support the identification of U-Reporters wishing to take advantage of an opportunity to obtain a loan or credit for their business through a Youth Venture Capital Fund. Two messages were sent through the platform: the first asked whether U-Reporters had access to credit, and the second verified whether they had access to credit for their business. The platform was also used to identify and recruit U-Reporters for business clinics organized by the Ministry. According to the report, "On average over 90 percent of the youths that confirmed their availability actually showed up and participated fully in the two day clinic." (PEP, Interim Report, July 2014, p. 2). This is an important result for U-Report. Another important outcome, not documented in the study but mentioned in our discussions with the U-Report team, was the platform's role in adjusting the criteria for the fund's eligibility. This ensued after a number of U-Reporters complained that the criteria were unfair.

Nodding Disease Case Study

Three thousand Ugandan children have been diagnosed with nodding disease, whose cause remains unknown. In 2012, UNICEF received over three hundred unsolicited SMS calling for action, asking for advice, and expressing general fear and confusion surrounding the disease. UNICEF used U-Report to provide advice on how to diagnose the disease and the right course of treatment. A communication strategy was developed which used U-Report to understand the public's desire for information in the north. U-Report would provide information to U-Reporters on the key facts regarding the disease to a) educate and b) let the public know the Ministry of Health and the international community were engaged; and to understand whether the information was useful to the community and what additional assistance might be needed.

With regard to the first goal, 90 percent of fourteen reporters indicated that they would like to receive more information about the disease (case study Nodding Disease in Uganda, p. 4). Subsequently, six messages were sent to this group on symptoms, possible outcomes of the disease, and treatment. Sixty percent of the reporters indicated that they found the information useful. Further research is needed to better understand what people mean by "useful," and how they perceive its validity, especially with regard to other sources (e.g., media, traditional healers, community elders). Thirty-four percent of U-Reporters raised additional questions and requested more information, especially about symptoms.

Ebola Case Study

Following an outbreak of the Ebola virus in certain parts of Uganda in 2012, U-Report started receiving unsolicited messages requesting info on the progress of the disease, how to guard against it, and possible incidents. The platform was subsequently used to share information on symptoms and prevention methods, answer individual queries, and evaluate the efficacy of information sharing (Ebola case study, p. 3). U-Report then disseminated information on symptoms, spread, and prevention. Sixty-seven percent of reporters replied that they found this information useful. Some reporters indicated that they would like to continue to receive information about the virus from U-Report, and others indicated that they had already heard this information in the media.

This case illustrates two important aspects of U-Report. The first concerns the importance of unsolicited messages in guiding the priorities of the platform. The ability to respond appropriately to emergent issues is a clear advantage of crowdsourcing that U-Report has leveraged. The second concerns the place of U-Report in the wider information landscape of Uganda, and of U-Reporters in particular, especially with regard to disseminating critical information.
A more thorough investigation of the role of U-Report as an information sharing platform, especially for emergencies, would need to take account of how it complements (or makes up for the lack of) other information sources.

Family Health Days

"Family Health Days (FHDs) are part of a UNICEF Uganda's effort to provide key health services to women, young mothers, children, girls, boys and adult males across select districts that have experienced high levels of infant mortality" (U-Report Final Summary Report, Family Health Days, p. 1). The platform was used to raise awareness of FHDs, increase participation, collect feedback to improve service delivery, and provide logistical support with UNICEF staff and DPOs.

On the question of whether they had heard of FHDs, 827 out of 3,446 U-Reporters responded that they had not, despite extensive earlier efforts made to raise awareness at church and on the radio. In the follow-up, about a week later, 2,361 respondents out of 12,132 participants indicated that they had taken part in the FHD over the weekend. As the study indicates, the number of people who reported that they had attended the health day was greater than the number who reported that they were aware of the initiative in the previous week. However, the direct link between becoming aware of and attending an FHD, which is drawn in the case study, is not straightforward. For one thing, we don't know how many of the people who answered in the negative in the first poll actually attended the FHDs. For another, we cannot be sure that lack of awareness is the only reason why people might not attend FHDs; for example, people might not be able to take part because the event is too far away from where they are.

Indeed, in response to a question about whether people had any issue or problem accessing the FHD services, U-Reporters mentioned location and travel as two main barriers to participation. Although the case study refers to incidents where U-Reporters identified broken printers (essential for printing the birth certificates), misappropriation of funds, and lack of services in certain areas, it is unclear how these incidents were addressed and whether the feedback was used to guide more general improvements in the way in which FHDs are offered. If and how the feedback helped UNICEF staff and DPOs logistically is also unclear from the case study.

Post International Development Agenda

In an effort to strengthen the Ugandan youth perspective in discussions on the Rio+20 negotiations, U-Report was used to raise awareness about Rio+20, ask U-Reporters about key issues, and update U-Reporters about the meeting's outcomes. Results were forwarded to the Ugandan government, and one U-Reporter was selected to represent U-Reporters in Rio. According to the case study (Rio+20 Case study final), over 2,000 people responded to the alert about Rio+20 and around 500 reporters asked for information. U-Report was used to provide feedback to the Uganda delegation in Rio on a series of issues, such as whether U-Reporters considered that the Millennium Development Goals (MDGs) had been achieved.

HOW DOES THE DATA COLLECTED THROUGH U-REPORT COMPARE TO THAT OBTAINED BY TRADITIONAL AND ONLINE SURVEYS?

It should be noted from the outset that the SMS technology of the U-Report program was never meant to yield complex survey data.
However, the comparison with more traditional survey methods is useful in drawing out clearly some of the key strengths and limitations of SMS-based crowdsourcing when compared with, for instance, face-to-face household surveys.

In principle, crowdsourcing supports real-time analysis, and this is one of the great benefits of platforms like U-Report. An interesting question that emerges in the case of real-time analysis is when a poll should be regarded as closed, especially in the case of non-emergencies. In the U-Report survey, responses to our questions continued to trickle in after we obtained the dataset.

Table 5 summarizes some of the key costs of the three surveys in the form of ratios. The baseline is provided by the household survey (X).34, 35, 36

Table 5.  Key survey aspects

                                U-Reporter survey   U-Reporter poll   Household survey   RIWI
Questions asked                 12                  2                 19                 17
Sample size                     5,693               286,800           1,188              13,693
Total of respondents for most   3,731 (65.6%)       15,967 (roads)                       2,884
answered question                                   26,859 (health)
Total cost (USD)                X*0.30              X*0.24            X                  X*0.34

One way to look at these data is in terms of value for money per reply. U-Report brings more value for individual replies: for someone wishing to obtain a one-off response from a population with the characteristics of U-Reporters, U-Report is more cost-efficient than other instruments for data collection. Another way to look at this information is with regard to value for money for a series of related questions. The following analysis explores this perspective.

Figure 17 presents the response rates for the questions that were posed across the three surveys. As we can see, these differ significantly. Whereas the household survey had an average response rate of 97.3 percent, U-Report had a rate of 49.4 percent. Starting with the baseline of the 2,884 self-selecting respondents, RIWI fared better than U-Report, with an average response rate of 60.4 percent. However, on several questions U-Report did much better than RIWI. More U-Reporters, for example, responded to the questions on occupation and Internet usage than RIWI respondents. The relatively low responses of U-Reporters to questions on gender and age, on the other hand, might be because these questions have been raised by U-Report so many times in the past. Further analysis and experimentation are required in order to identify the factors that may increase response rates amongst U-Reporters.

Figure 18 provides another perspective on data quality by comparing the rate of completion for interviews and the entire battery of questions asked of U-Reporters and RIWI respondents in terms of valid data (that is, data that do not include erroneous entries). Of sampled U-Reporters, 49.8 percent completed five to ten questions, with a steady drop-off as a result of each added question. Only 2.1 percent of people responded to all twelve questions that were asked. RIWI did substantively better in this regard, with 29.5 percent of respondents providing an answer to all fifteen questions in the questionnaire. The U-Report team is aware of this weakness, which is why they do not pose more than three to four questions in a row on a specific topic.37
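The completion analysis behind Figure 18 can be reproduced from an answer matrix: count each respondent's valid answers, then compute the share of respondents reaching at least k answers. The sketch below uses an invented three-respondent matrix in place of the real survey data, with None marking missing or erroneous entries.

import pandas as pd

# Rows: respondents; columns: the 12 survey questions (invented answers)
answers = pd.DataFrame([
    ["a", "b", None, "d"] + [None] * 8,
    ["a", "b", "c", "d", "e", "f"] + [None] * 6,
    ["a"] + [None] * 11,
])

valid = answers.notna().sum(axis=1)   # valid answers per respondent
for k in range(0, 13):
    share = (valid >= k).mean() * 100
    print(f"at least {k:2d} valid answers: {share:5.1f}%")

Applied to the actual U-Report and RIWI datasets, this yields the drop-off curves plotted in Figure 18.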
The design of the two modes of survey collection is a key factor in interpreting these differences. RIWI respondents are exposed to the whole questionnaire instantly and were able to click through most of the answers. U-Reporters received questions over a period of two to three weeks, during different times of the day, and they had to take the time to read a long message on a tiny screen and type in the answer. A technology like USSD, which allows researchers to pose multiple questions in one go, might be significant in this regard.38

Figure 17.  Response rates across the three surveys (percent)

Question                        Household survey   RIWI    U-Report survey   U-Report poll 1   U-Report poll 2
Age                             100.0              100.0   36.9              54.0              45.6
Gender                          98.8               100.0   32.8              53.4              58.9
Education                       95.2               69.3    65.6              n/a               n/a
District/Region                 100.0              75.9    48.3              76.8              76.7
Occupation                      97.9               35.7    55.2              n/a               n/a
Internet use                    97.7               35.3    52.5              n/a               n/a
Community meetings attendance   96.0               35.3    42.1              n/a               n/a

[Figure 18: Response rates across three surveys taking into account number of valid responses. Three panels (household survey, RIWI, U-Report survey) plot the response rate against the number of valid answers per respondent, from 0 to 20.]

As this analysis indicates, U-Report offers great value for money for one-off questions; but, like other crowdsourcing platforms, it is less suited to obtaining data on multi-dimensional variables (i.e., variables that are measured with more than one question) and, in general, less suited to questionnaires that involve series of questions.

The process of SMS polling poses some additional challenges. The limited character space (160 characters) restricts the kinds of questions that can be asked. The short space also precludes offering clarifications or examples to respondents, as can be done in paper or online questionnaires. This can be a challenge, especially when people are asked to provide ideas about complex issues that might be perceived differently across different socio-economic groups and cultures. In section 7, some ideas are presented about how some of these challenges can be addressed.
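One practical guardrail implied by this constraint is to check, before a poll goes out, that the question plus its answer instructions fits in a single 160-character SMS. The helper below is a minimal sketch (160 characters is the standard single-message limit for the basic GSM alphabet; real segmentation rules are more involved), shown against the banana wilt question quoted earlier.

def fits_single_sms(question: str, instructions: str = " Reply YES or NO") -> bool:
    # True if the full outgoing message fits in one standard SMS segment
    return len(question + instructions) <= 160

q = ("Do you know any farmers whose banana plantations or crops are "
     "infected with banana bacterial wilt disease?")
print(len(q + " Reply YES or NO"), fits_single_sms(q))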
DISCUSSION

U-Report and inclusiveness

Through U-Report, UNICEF aims to provide an opportunity for young people to speak out on issues they care about and to receive information important to Ugandan youth, who are considered a key driver of change in Ugandan society. How well, then, does it achieve this goal?

The results of the study show that UNICEF has done remarkably well in recruiting young Ugandans. In terms of geographical distribution, U-Reporters appear to be over-represented in Uganda's Northern region and under-represented in the Western and Eastern regions. At the same time, our analysis indicates that U-Reporters are disproportionately male and are significantly better educated than average Ugandan youth and the average Ugandan citizen. In contrast to the wider Ugandan population, U-Reporters who have finished their schooling have salaried jobs and are competent Internet and mobile phone users. While texting was also quite common amongst U-Reporters' better-educated younger counterparts in the wider population, access to and use of the Internet remains an unattainable good for the majority of Ugandans, including the wider Ugandan youth population. High-frequency U-Report contributors also tend to be predominantly male and younger than the rest of U-Reporters, which can introduce additional biases in representation.

The picture that emerges with regard to political participation is complex: U-Reporters are more confident than the average Ugandan as far as approaching their elected representatives is concerned.

Concerning gender inclusiveness, the findings are consistent with the literature.39 Other studies have shown that there is a gender gap in mobile ownership (Gillwald et al., 2010), that mobile-phone users in Rwanda are disproportionally male (Blumestock and Eagle, 2012), and that men are more likely to own a mobile phone than women in Nigeria, Ghana, Senegal, and Uganda, where 77 percent of men versus 54 percent of women own a mobile phone (Poushter, 2015).

Age and education have also previously been shown to be factors. In Uganda, Kenya, Tanzania, South Africa, Senegal, and Ghana, more young people (generally below 35) than older people own a mobile phone (Poushter, 2015). In Rwanda, mobile phone users were found to be better educated than average Rwandans (Blumestock and Eagle, 2012). Ninety-three percent of Ugandans with secondary or higher education own a mobile phone, as opposed to 61 percent of those with less education (Poushter, 2015). Location and language (which was not examined in this survey) have also been shown to have a significant impact. There is a significant difference in mobile phone ownership across the urban-rural divide, with only 50 percent of rural households owning a mobile phone (May, 2012), and three-quarters of Ugandans who speak English own a mobile phone, whereas only half of those with no English skills own one (Poushter, 2015). As a rule of thumb, the more complicated the use (from voice, to texting, to use of services such as mobile money), the more gender and education matter (Zainudeen and Ratnadiwakara, 2011).

Deeper disparities emerge if the wider information landscape is taken into account. A large household survey on information and communication technology (ICT) usage in four African countries showed that very few even relatively affluent households owned a computer or had an Internet connection (May, 2012). The majority of poor households lacked access not only to computers, landlines, and the Internet, but also to comparatively well-established and accessible technologies such as radio. The urban-rural divide is also reflected in access to Internet cafes, with only a few cafes available in rural areas. Lastly, broadband access remains highly problematic for most of Africa, both in terms of availability and of pricing (ibid). Even mobile Internet access puts many restrictions on access, offering users lower levels of functionality and content availability than regular access (Napoli and Obar, 2014). All these findings indicate the persistence of warped geographies of access.
Even with the rates of mobile phone and Internet penetration increasing steadily, important efforts have to be made to ensure that the more deprived groups within a society are represented. UNICEF indicated that the organic, socially driven recruitment of U-Reporters means that district-by-district recruitment cannot be controlled in the same way as in a household survey. They suggested that, should U-Report ever intend to be representative, they would randomly select a sample of U-Reporters with nationally representative characteristics. Although re-weighting could address some of the biases with regard to representativeness, it is not without limitations.
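As a minimal sketch of that re-weighting idea: post-stratification assigns each demographic cell of the U-Reporter sample a weight equal to its population share divided by its sample share. The cell shares below are illustrative stand-ins, loosely inspired by the age and gender skews in Figures 4 and 5, not the study's actual estimates.

import pandas as pd

sample_share = pd.Series({        # share of U-Reporters in each cell (illustrative)
    ("M", "15-24"): 0.38, ("F", "15-24"): 0.16,
    ("M", "25-34"): 0.27, ("F", "25-34"): 0.12,
    ("M", "35+"):   0.05, ("F", "35+"):   0.02,
})
population_share = pd.Series({    # share of the population in each cell (illustrative)
    ("M", "15-24"): 0.17, ("F", "15-24"): 0.18,
    ("M", "25-34"): 0.12, ("F", "25-34"): 0.13,
    ("M", "35+"):   0.19, ("F", "35+"):   0.21,
})

weights = population_share / sample_share
# Thin cells (e.g., older women) get very large weights, which inflates
# variance; this is one of the limitations alluded to above.
print(weights.round(2))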
Participation: Unexpected practices and varied meanings

The kind and depth of engagement (the "how" and the "why" of participation) constitute two other questions examined in this report. The analysis of system data pointed to a 7,000-strong core of dedicated U-Reporters who regularly send unsolicited messages, are more willing to provide additional information about themselves, and consistently reply to poll questions; 55.5 percent of U-Reporters have responded to between one and five questions, one in five registered members has never responded to a poll, and 8 percent have responded only once.

U-Report survey respondents said they valued the opportunities for expressing their opinion and accessing important information afforded to them by the platform. A significant finding from the interviews is that U-Report's use as a tool for information sharing is more highly valued and better understood than its potential for influencing policy and making a difference on the ground. This might be because the information-sharing benefits of the platform were more readily visible than its policy dimensions. This hypothesis is supported by the fact that the majority of interviewees understood the basic function of the platform but were unclear about how the results are used and for what purpose, and why certain questions were raised in the first place. In contrast, surveyed U-Reporters indicated that they most valued the opportunity offered by the platform to voice their opinions. Further research is needed to account for this difference. A possible hypothesis is that high-frequency U-Reporters are likely to value voicing their opinion as the main advantage of U-Report, while less frequent reporters are more appreciative of its informational character.

The interview findings highlighted the need to reconsider the individualized character of crowdsourcing. Peer pressure and peer consultation are not usually associated with crowdsourcing, where each message is meant to represent one voice, a unique perspective, which, when added up, provides a bigger picture. However, one of the stronger emergent findings of the research is the re-socialization of the information shared in U-Report: interviewees consulted with friends and family regularly before texting their reply. Further research is needed to understand the factors that influence such a practice: are women, for instance, more likely to defer to men's opinions? Do people seek to consult more often on factual questions (for instance, "Is your health center well-stocked with anti-malaria drugs?") than on questions asking for their opinions? How do these dynamics shift when sensitive questions are raised?

Two additional issues that surfaced during the interviews concerned the desire for more feedback and for connecting with other U-Reporters offline. The importance of these collective dimensions of participation remains unacknowledged in many initiatives. The predominant view is that new data collection platforms create direct links between citizens and decision-makers, creating immediate feedback loops between "beneficiaries" and "accountability bearers." This point speaks to the heart of a tension between crowdsourcing as a direct channel of communication between citizens and decision-makers, and the way in which reforms and social mobilization have been shown to happen: often through a collective process of mobilization, representation, and bargaining that existing implementations of crowdsourcing seem to preclude (Kabeer, 2005; Newell and Wheeler, 2006). Our discussions with the UNICEF team helped surface some of these issues. Indeed, one of their priorities over the coming years is to deepen U-Reporters' engagement with the platform.

U-Report as an agent of change

When asked whether U-Report has caused any change in their district, respondents were split almost evenly, with 46.6 percent indicating that it is not clear what changes U-Report has caused and 53.4 percent indicating that U-Report has caused some or many changes in their district. Although interviewees greatly appreciated receiving alerts about the issuing of birth certificates and immunization, they had difficulty identifying other types of changes that U-Report may have supported. This complements the finding that many interviewees were unclear about how the information that they provided was to be used.

The majority of Ugandan MPs who responded to our short survey were aware of the platform, and more than one third indicated that they had used it occasionally. This part of the research only began to scratch the surface of how U-Report's usefulness for policy and planning purposes could be improved. The evidence from the examination of the mini case studies is mixed. The analysis indicates that U-Report has been successful in surfacing emerging problems, sharing critical information, and obtaining a first-level view of people's opinions and priorities. What is less clear is the extent to which the information provided was useful for respondents, and the degree to which it informed policy and practices.

It is very difficult to draw a direct link from individual actions or behaviors to policy outcomes (see, for example, Gaventa and Barrett, 2010). Intermediate outcomes, such as the strengthening of a sense of citizenship and the strengthening of alliances and networks, may contribute greatly toward strengthening transparency and accountability. One of the challenges of translating U-Report feedback into action is that it is not always clear what kind of action the results warrant and by whom. Targeted transparency, the process through which designers and consumers of information work together to identify and develop information streams that tie into specific policy processes, has been indicated as a key factor for the success of transparency systems (Fung et al., 2013). UNICEF is beginning to head in this direction by switching its focus from the national to the local level.
One of the questions that emerged from the qualitative findings is whether a better understanding of how the platform works, how the agenda is formulated, and how results are used could improve the efficacy of the platform. There are two important challenges here. The first is that there seems to be tension between the drive for greater numbers and the investment that needs to be made in order to properly explain to people how the platform works and what results they should expect. The second challenge—common to most feedback platforms—has to do with managing expectations. It is possible that the more people expect from a platform, the greater the chance that they will try to over- or under-report depending on the perceived gains.

THE CHARACTER OF CROWDSOURCED DATA

There is still much that we don't know about crowdsourcing and perception-based data. This study contributes to the current debate by offering some findings around the nature of the feedback obtained through decentralized two-way text messaging. One important finding concerns the way in which U-Reporters and individual responses are counted. Findings on the sharing of mobile phones and the swapping of SIM cards point to the need for such practices to be taken into account when presenting results. Even though in the case of Uganda these practices were relatively rare, they may have a cumulative effect. They may also be more prevalent in other countries where the platform is being rolled out.

Our examination of the three surveys in terms of data quality yielded some important lessons about the strengths and weaknesses of each approach. U-Report provided great value for money for individual responses, but less so for a series of twelve consecutive research-focused questions. With an overall response rate of 49.4 percent, the U-Report survey is comparable with many online surveys (Nulty, 2008),40 and the RIWI survey (64.5 percent) did significantly better than the average online survey (Nulty, 2008).

From a surveying perspective, the weakest metric for U-Report was the degree to which respondents replied to all the questions that were sent to them. Only 2.1 percent of U-Reporters responded to all twelve questions that made up the U-Reporter survey. In this case, the fact that the U-Report platform was not intended to support lengthy questionnaires counted against it. This suggests a limitation of SMS-based crowdsourcing in supporting more demanding forms of inquiry that include, for example, multiple indicators. The constraints of U-Report were also apparent in the restrictions that the 160-character limit places on question formulation. It is widely recognized that the process of filling in a questionnaire often involves a discussion between interviewer and interviewee to clarify what a question means and also to probe. The character restriction of texting may amplify this challenge, as researchers cannot elaborate on aspects of the question that may not be self-evident.41

LIMITATIONS AND FURTHER RESEARCH

The study did not include a counterfactual. The views of people who have opted to de-register from U-Report, and of those who knew about it but decided not to join, are not reflected in the study. One set of stakeholders that was not interviewed, and that might bring another valuable perspective, is UNICEF staff members who use the platform.
Non-English-speaking users of U-Report were also not part of the study. In terms of analysis, there were many opportunities for more in-depth work that were not pursued due to time constraints. These included an in-depth analysis of location data and their correlation with regional and district-level socio-economic indicators, and a content analysis of the questions posed by UNICEF, including their response rates, to identify which types of content attracted more engagement. We also did not conduct a more thorough examination of the way that incoming messages are coded and aggregated, or ground the results of the analysis more thoroughly in the youth, media, and political culture of Uganda. Additionally, a more detailed examination of specific interventions initiated and supported by the platform could have been pursued. Finally, contribution profiles of interviewees, created from system data, could have been used as prompts in the interview process and to triangulate claims and assertions. An analysis of material from media reports of U-Report, provided by UNICEF, could have contributed to this chapter, but was not included due to time constraints.

In terms of broader themes, future research could address issues of governance, agency, and control at the higher levels of decision-making around the platform. Future research could also examine more deeply the emergent importance of perception data and their limitations in terms of validity and representativeness. An interesting question that has emerged from this analysis concerns the degree to which more detailed knowledge about the purpose and function of the platform is necessary for accurate reporting.

This chapter has indicated that many of the assumptions that underlie much of the thinking behind crowdsourcing, such as that each message represents an individual's opinion, need to be revised, and that the use of multiple SIM cards might have affected representation and analysis. Both these issues need further unpacking. Another issue that we only began to scratch the surface of concerns analysis. Current analysis of U-Report data is fairly basic. Another set of questions could focus on the level of disaggregation that is needed for the data to be useful to different stakeholders.

RECOMMENDATIONS

The evidence presented in this chapter confirms the innovative character of U-Report and its potential for amplifying the voices of Ugandan youth. The platform clearly has the ability to engage a significant number of dedicated contributors, to highlight emerging issues, to support debate, and to disseminate critical information. However, there is significant scope to improve whose voices are being heard, the way they are being heard, and how the data generated are used, and to rethink the role of crowdsourcing in supporting policy and practice.

Improving the quality of the feedback

We begin by considering how U-Report can improve existing feedback, as some of these ideas form the basis of further suggestions. One strategy to deepen the analysis and increase the validity of the information is the creation of a stacked, two-level membership. This would involve the creation of a trusted group of U-Reporters spread across different socio-economic groups, locations, education levels, and genders.
The information that second-level members provided would be weighted differently from that of first-level members, and they could be contacted to verify emerging reports and incidents. The second-level group would be supported through training and would act as ambassadors of the platform in their districts.

Some vetted U-Reporters might also be encouraged to document different aspects of their lives and their communities using low-cost tablets and cheap video cameras with which they would be provided. This could provide invaluable insights into what it means to be a youth in Uganda and could complement the feedback generated through the platform. Participatory research can provide invaluable insights and ideas on how best to organize such an effort. For example, a recent initiative called Participate (Institute of Development Studies, http://participate2015.org), aimed at increasing the representation of the poor in the post-2015 agenda, put cameras in the hands of the poorest to tell their own stories. Additional quality checks might also be required to determine more precisely the extent of the use of multiple SIMs for reporting, and to decide how to treat cases of people using multiple SIMs to reply.

Improving representativeness and engagement

How can U-Report improve representativeness? UNICEF sends out messages in three languages (English, Luo, and Luganda), and the new version of the system being used in Nigeria, based on the RapidPro software, makes multi-language support easier. This is an important step in ensuring inclusiveness. Although the institution of the two-level membership described above might help overcome some of the biases (especially those introduced by high-frequency contributors), such a step would not be enough to address the deeper disparities perpetuated by mobile phone ownership and texting capacity. Therefore, a stronger strategy to target specific sub-groups (illiterate youth, youth without English skills, youth without tech skills, youth living in remote areas) needs to be adopted.

Two ways of engagement are suggested: direct and indirect. Indirect engagement would occur through the vetted, second-level U-Reporters. The U-Report team might ask these trusted U-Reporters to contact youth from the targeted sub-groups and elicit their opinions. Direct engagement of the targeted sub-groups would occur through the use of technologies other than SMS. The new version of the system supports Interactive Voice Response (IVR), which should increase U-Report's reach among the less literate and those with disabilities. In addition, low-cost video cameras could be given to youth at the margins to capture their views. This step would lead away from the usually quantifiable, real-time character of the feedback generated through the platform, but the richness of the data might be worth it. The organization of meetings of U-Reporters at a district level could also help to foster a stronger sense of community and agency amongst U-Reporters.

DATA SHARING AND FEEDBACK

There is great scope for further analyzing U-Report data in ways that more concretely support improved planning, transparency, and accountability. To do justice to the richness of feedback and the contributions of the U-Reporters, UNICEF could consider opening up some of its data.
Developer camps on how best to analyze and visualize the information for different stakeholders could be organized with UNICEF staff, UNICEF's Ugandan partners, and the broader Ugandan civil society. This could support the use of U-Report for planning, monitoring, and collective action. Innovations and ideas emerging from these camps could then be adopted in the roll-out of U-Report in other countries.

SITUATING CROWDSOURCING IN A BROADER EVIDENCE CHAIN

Crowdsourcing can be part of a multi-step strategy for data collection and analysis, and its usefulness might increase by considering how it might be blended with other sources of information. There are two important aspects of such a multi-step design to consider: sequencing and triangulation. Sequencing refers to the order in which different processes, tools, and approaches are mobilized to progressively develop an accurate understanding of what is happening and why. Triangulation42 refers to the process that enables the cross-checking of results by combining several perspectives in a systematic investigation. Four types of triangulation have been distinguished: a) data triangulation, which entails the use of several sampling strategies to ensure that data are gathered across different slices of time and social situations; b) investigator triangulation, which refers to the use of more than one researcher in the field to collect and interpret the data for the same sample; c) methodological triangulation, which entails the use of more than one method to collect data; and d) theoretical triangulation, which refers to the use of different theoretical perspectives to make sense of the data (Denzin, 1970). The strategies suggested thus far could help support all these types of triangulation.

In short, if SMS-based crowdsourcing is to be used to represent a greater, more representative proportion of the Ugandan youth, an approach to engagement needs to be adopted which targets specific under-represented groups and varies the technologies of communication through which people express their opinions. The collection of more systematic demographic data (at least for a portion of the U-Report population), the support for more collective modes of engagement, and the sharing or opening up of some of its existing data would further add to the strengths of U-Report.

BIBLIOGRAPHY

Blumenstock, J. E., and Eagle, N. (2012). 'Divided We Call: Disparities in Access and Use of Mobile Phones in Rwanda.' Information Technologies and International Development 8(2): 1-16.
Denzin, N. K. (1970). The Research Act in Sociology: A Theoretical Introduction to Sociological Methods. Chicago: Aldine.
Gaventa, J., and Barrett, G. (2010). 'So What Difference Does It Make? Mapping the Outcomes of Citizen Engagement.' Citizenship, Participation and Accountability Research Centre, IDS.
Gillwald, A., Milek, A., and Stork, C. (2010). 'Gender Assessment of ICT Access and Usage in Africa: Towards Evidence-Based ICT Policy and Regulation.' Policy Paper no. 5, vol. 1.
Guest, G., MacQueen, K. M., and Namey, E. E. (2011). Applied Thematic Analysis. London: Sage.
Kabeer, N. (ed.) (2005). Inclusive Citizenship: Meanings and Expressions. London: Zed Books.
Mellon, J., Peixoto, T., and Sjoberg, F. M. (2015). The Crowd Never Lies?
Evaluating the Quality of Crowd-Sourced Data in Uganda. Digital Engagement Evaluation Team, World Bank, unpublished.
Melville, P. (2013). 'Amplifying the Voice of the Youth in Africa via Text Analytics.' KDD'13, August 11-14, 2013, Chicago, Illinois, USA.
Newell, P., and Wheeler, J. (eds.) (2006). Rights, Resources and the Politics of Accountability. London: Zed Books.
Partnership on Measuring ICT and Development, ITU (2013). 'Stocktaking and Assessment of Measuring ICT and Gender.' Background paper for the 11th World Telecommunication/ICT Indicators Symposium. Available at: http://ow.ly/3xKKLj (last accessed 15.04.2015).
Poushter, J. (2015). 'Cell Phones in Africa: Communication Lifeline.' Pew Internet Research. Available at: http://ow.ly/3xK1j9 (last accessed 14.04.2015).
Zainudeen, A., and Ratnadiwakara, D. (2011). 'Are the Poor Stuck in Voice? Conditions for Adoption of More-Than-Voice Mobile Services.' Information Technologies and International Development 7(3): 45-59.

Chapter 3
MajiVoice Kenya — Better Complaint Management at Public Utilities

Martin Belcher, Aptivate, UK
Dr Claudia Abreu Lopes, University of Cambridge, UK
with Fredrik M. Sjoberg, Digital Engagement Evaluation Team, World Bank
and Jonathan Mellon, Digital Engagement Evaluation Team, World Bank

EXECUTIVE SUMMARY

This chapter sets out to investigate the impact of an ICT-mediated beneficiary feedback system, specifically water utility customer complaint handling in Kenya using the "MajiVoice" system, and the associated issues in terms of utility service provision, organizational process management, and organizational responsiveness to beneficiary feedback. In particular, this evaluation seeks to accomplish four tasks. First, to investigate the extent to which digital feedback mechanisms (SMS, USSD, email, online portals—desktop and mobile—and social networking tools) are being used. Second, to explore the impact of digital feedback mechanisms on the propensity of beneficiaries (customers) to provide feedback and of service providers (water companies) to respond to that feedback. Third, to explore the effect of digital feedback mechanisms on participants' attitudes to water service provision and supply (customers and water company staff). Finally, to understand the adoption of MajiVoice and the associated management of organizational processes by a water utility.

The evaluation has been conducted using a mixed-methods approach, combining a telephone customer survey (n=1,064), a paper-based staff survey (n=106), semi-structured interviews, documentary analysis, and statistical analysis of system data (n=57,809). The channels used for submission of feedback, based on system data, are as follows:

Channel                  Count    Percent
Email                       61        0.1
Letters                     76        0.1
Mobile Web                 430        0.7
Online Portal              522        0.9
Over the Counter        43,761       75.7
SMS                        449        0.8
Social Network Site          6        0.0
Telephone Call          12,245       21.2
USSD                       259        0.4
Total                   57,809      100.0

On the question of how much feedback had been received via each digital feedback mechanism, we found that the vast majority of complaints made (96.9 percent) are not received through digital feedback channels. Most of the feedback is submitted over-the-counter (75.7 percent) or through a telephone call (21.2 percent).
Digital feedback channels receive only a small proportion of all the feedback provided through the MajiVoice system—just 3.1 percent. All digital channels received at least some usage, and usage of some of those digital channels (the online portal and mobile web platforms) is growing at a significantly faster rate than the non-digital channels.

In investigating where the feedback comes from, as we would expect, it is the younger, more educated Kenyans who use digital channels more frequently. The feedback comes from across the water utility's customer base. Males dominate complainants (65.2 percent) across all channels. There is no statistically significant association between gender and the use of digital/non-digital channels. Male and female complainants equally use the over-the-counter, telephone hotline, and online portal channels. The digital channels of SMS and email are used mostly by male complainants. The digital channels of mobile web and USSD are more balanced in terms of gender usage. Digital channels are most heavily used by those between 20 and 44 years of age. There is a clear positive association between education level and the use of digital channels.

We also investigated whether the ability to use digital feedback mechanisms is the deciding factor in providing feedback. We found that survey respondents themselves do not consider the mode of complaint submission to be a decisive factor in complaints being submitted. Alternative channels would have been used if any of the ones chosen had not been available.

People providing feedback are generally satisfied or very satisfied (49 percent) with the experience. When analyzed by channel, people are more satisfied when they submit feedback over-the-counter (58.1 percent) or by email (50 percent) and more dissatisfied when they submit feedback by the online portal (42.9 percent), USSD (42.9 percent), or mobile web (40.3 percent). Currently, the available data do not provide a firm insight into this variation in satisfaction across channels; a number of hypotheses are presented.

Across all channels, most customers who have submitted a complaint feel that it has been taken seriously by the water company (60.1 percent across all channels). Across digital channels, customers report slight variations in how seriously their complaint has been taken by the water company (56.2 percent online portal, 58.3 percent SMS, 59.7 percent mobile web, 62.5 percent USSD, 90 percent email). Across all channels, the process of providing feedback is perceived as easy or very easy (80.5 percent). Similarly, across all channels, the water utility is resolving the majority of customers' complaints satisfactorily (59.5 percent).

Our findings on uptake and implementation included discovering that MajiVoice has been successfully taken up and integrated by the water utility, as indicated by over three quarters of all staff being registered to use the system and 90 percent of surveyed staff reporting daily usage. During the first year after MajiVoice's launch, the number of complaints recorded increased almost tenfold (from approximately 400 to 3,794 per month more recently), the percentage of complaints closed steadily increased, and average turnaround times halved (from over seventy-one days in the first six months of operation to approximately thirty-two days in the second six months).
Overall, we found that MajiVoice has been operational within the Nairobi City Water and Sewerage Company (NCWSC) for just over a year and provides a successful solution for receiving and handling customer feedback. It has resulted in significantly improved complaint handling and, when combined with a supportive regulatory environment, it contributes to better accountability of the NCWSC to its customers/beneficiaries. The wider water utility regulatory framework is an important factor influencing the successful implementation and management of MajiVoice, by providing a range of incentives and mechanisms. There is significant potential for business efficiency improvements associated with the management data that the adoption of MajiVoice makes possible, but these have not been examined or utilized in much detail to date.

In this case, MajiVoice has been used first and foremost as a complaint management, handling, and tracking system, with digital channel communication functionality as an important additional set of features. It provides an effective technical solution in both regards. Its reuse in similar contexts provides realistic opportunities to replicate the successes seen in Kenya.

INTRODUCTION

This chapter sets out to investigate the impact of technology in a beneficiary feedback system,43 specifically, water utility customer complaint handling in Kenya (MajiVoice), and the associated issues in terms of utility service provision and its responsiveness to beneficiary feedback. In particular, this evaluation looks to investigate the extent to which digital feedback mechanisms are being used; to explore the impact of digital feedback mechanisms on the propensity of beneficiaries (customers) to provide feedback and of service providers (water companies) to respond to that feedback; and to explore the effect of digital feedback mechanisms on participants' attitudes to water service provision and supply (customers and water company staff).

It should be noted that this evaluation is of limited scope and should be considered a rapid mini-evaluation, following best practices as recommended in the DCE evaluation framework but undertaken primarily to test certain aspects of the framework, not to undertake a complete external program evaluation study.

BACKGROUND

The World Bank's Global Water Practice supports client countries in improving access to safe water and sanitation services, especially among the poor. To achieve this aim, lending operations are combined with technical assistance. In Kenya, the Water Practice has targeted customer care quality at some of the largest public utilities by developing MajiVoice—a feedback and complaint management software platform that targets benefits to consumers, utilities, and the national sector regulator (WASREB).

Within the first year of deployment at the Nairobi City Water and Sewerage Company (NCWSC)—Kenya's largest water utility and the first to adopt MajiVoice—the number of complaints recorded rose dramatically, increasing almost tenfold; resolution rates climbed markedly, from 46 percent to 94 percent; and the average resolution time was halved. MajiVoice has since been rolled out to additional utilities, and WASREB is actively monitoring performance.
MajiVoice has the potential to be scaled up to other sectors and countries, further facilitated by its public domain license status.44

Context

Kenya is rapidly urbanizing, and its Water Service Providers (WSPs) are straining to keep up with population growth and economic development. The population served by public utilities more than doubled to approximately nine million between 1990 and 2012, but demographic growth was so fast that this translated into only a 3 percent increase in access to piped water.45 In view of this rapid population growth and ever-increasing demand for safe and affordable water, many WSPs are struggling to ensure high-quality services. Customer complaints are frequent. In 2009, an African utility performance assessment found complaint rates that were five to ten times higher than in developed countries.46

Dealing with complaints efficiently is vital, not only to maintain customer satisfaction, but also to identify and resolve issues such as leaks, water quality problems, or incorrect bills that directly impact service standards and revenue. Responsiveness to citizens is central to sector accountability and service quality.

While Kenya's water sector has a well-defined accountability framework with a strong regulator and clear performance targets, it lacked an effective tool to facilitate the submission, management, and monitoring of complaints. As a result, not only were recorded complaints relatively few, but complaint processing by WSPs was inefficient. Moreover, the regulator had no reliable, timely access to complaint statistics, and thus no basis for the enforcement of existing standards. As IMPACT, the regulator's annual performance report on service providers, noted in 2013: "customer complaints handling [procedures have not been] submitted" for any of the water providers, although their "development [. . .] is mandatory under Clause 7.2 of the Licence."47

MajiVoice was developed to fill this gap with an integrated solution that increases accountability pressure from below by facilitating the submission and tracking of complaints by customers, reinforces monitoring from above by giving the regulator better data access, and equips public utilities with a modern complaint management tool to react productively to these pressures by processing customer issues more efficiently.

MajiVoice system goals

The MajiVoice system is aimed at improving WSP accountability and performance in two key areas. Firstly, by providing a robust complaint submission mechanism that enables two-way communication between the citizen and water providers via text messages (SMS, USSD) and through the Internet (desktop, mobile, email, and social media). Secondly, by providing a comprehensive complaint handling and management back-end platform for use by water providers to manage and track complaint and feedback data.

The MajiVoice software platform provides an effective mechanism for registering, handling, and tracking complaints received through any medium (for example, over-the-counter or via telephone).48 MajiVoice, therefore, provides water providers with a complete solution for receiving and effectively managing customer complaints and feedback.

Box 1. A Regulatory Perspective on MajiVoice

The MajiVoice complaint processing software benefits from an existing legal and regulatory structure, which it can plug into and operationalize to maximum effect.
In Kenya, Water Service Providers (WSPs) are contracted through Service Provision Agreements (SPAs), which set performance targets, including mandatory turnaround times for complaints. MajiVoice can track and report against these standards, which provides WSPs with an incentive to address complaints, as violations of SPAs can trigger legal action by the regulator (WASREB). The development and roll-out of MajiVoice itself also had strong backing in existing legislation: the 2002 Water Act specifically mandated the national regulator to put in place and monitor procedures for handling complaints made against utilities (Sections 47c and 47f). A citizen who submits and tracks a complaint through MajiVoice can thus do so knowing that WSPs have committed to its timely resolution and are addressing it under clear mandates and supervision from the regulator. While full compliance remains an ambitious challenge, the existing legal structure embeds MajiVoice and makes it a regulatory instrument rather than just a voluntary tool.

Using MajiVoice, customers can use their mobile phones or computers, or go in person to a WSP office, to share their comments, concerns, feedback, and complaints on service delivery with those water providers and, where necessary, receive timely feedback on how those issues are being addressed. The aim is to improve efficiency, accountability, responsiveness, and transparency.

How MajiVoice works

MajiVoice was developed entirely in Kenya through collaboration between the World Bank's Water and Sanitation Program, the sector regulator (WASREB), and water service providers. After a pilot in Nairobi in mid-2013, MajiVoice was formally launched in March 2014 and has since been rolled out to water service providers in Nairobi, Nakuru, Mathira, and Nanyuki. Within a year, the system thus went from a small pilot to processing all customer feedback from over 500,000 accounts across four water companies.49

The backbone of the system is web-based task management software that allows utility staff to receive, process, and resolve consumer-submitted complaints following clear, guided workflows with an intuitive interface. Each complaint is tracked, and each staff action logged, with automatic alerts if set timelines are exceeded. The NCWSC MajiVoice database has received over 60,000 individual complaints in little more than a year, with over half a million logged staff actions in response. Utility management benefits from detailed, always up-to-date statistics that break down complaint processing performance and reveal bottlenecks by region, department, and even individual staff member.

To achieve efficiency gains such as faster turnaround times and comprehensive performance monitoring, broad uptake of MajiVoice by all staff involved in processing complaints—not just the customer-facing service agents—was critical. In Nairobi, the number of staff using MajiVoice on a regular (at least weekly) basis rose from fewer than 100 in mid-2013 to over 400 in the last quarter of 2014.50

The cloud-based MajiVoice system can be accessed from any Internet-enabled device, be it a work computer or a smartphone. Its light design ensures smooth operation even on slow Internet connections. A mobile-optimized version facilitates work on small devices.
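As a rough illustration of the tracking-and-alert logic just described, the sketch below models a complaint record with logged staff actions and an overdue check. The field names and the thirty-day deadline are illustrative assumptions, not MajiVoice's actual schema.

    from dataclasses import dataclass, field
    from datetime import datetime, timedelta
    from typing import Optional

    @dataclass
    class Complaint:
        ref: str                      # unique reference number sent to the consumer
        channel: str                  # e.g., "SMS", "USSD", "over-the-counter"
        opened: datetime
        deadline_days: int = 30       # mandated turnaround time (assumed value)
        actions: list = field(default_factory=list)   # logged staff actions
        closed: Optional[datetime] = None

        def log_action(self, staff_id: str, note: str) -> None:
            """Record a staff action against the complaint."""
            self.actions.append((datetime.now(), staff_id, note))

        def overdue(self, now: datetime) -> bool:
            """True if the complaint is still open past the mandated turnaround."""
            due = self.opened + timedelta(days=self.deadline_days)
            return self.closed is None and now > due

An alert sweep would then simply flag every open record for which overdue(now) returns true, which is the automatic-alert behavior the chapter attributes to the system.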
The advantages of MajiVoice's openness extend to consumers, who can submit complaints using a dedicated SMS shortcode (15444), a USSD shortcode (*624#), through the Internet, or by traditional channels such as walk-in service centers or the utility hotline, in which case customer care agents input the complaint into MajiVoice. The hotline is free, and SMS and Internet submissions are possible for KES 1 or less (< $0.02), thus giving poorer consumers multiple ways to avoid time- and cost-intensive personal service center visits. For each complaint, moreover, a unique reference number is sent to the consumer's phone for free, which can then be used to query the exact complaint status by SMS, USSD, or the Internet, or to follow up with customer care agents.

Improved accountability between customers and their service provider is reinforced by better regulatory supervision, made possible by the reporting and management data available through the MajiVoice system. The Water Services Regulatory Board of Kenya (WASREB),51 which sets rules and enforces standards, has its own MajiVoice dashboard through which it can monitor aggregate statistics across all participating utilities. In particular, information relevant to compliance with Service Provision Agreements is shared, such as service providers' ability to respond to consumer complaints within agreed turnaround times. WASREB can thus use MajiVoice as a regulatory monitoring tool and has already formally engaged utilities on the basis of MajiVoice statistics (e.g., to press for the resolution of overdue complaints).

Figure 1. MajiVoice Staff Interface

Figure 2. Result of online status check by consumer

In summary, MajiVoice aims to increase accountability pressure on water service providers from below by facilitating the submission and tracking of complaints by customers. That is reinforced by monitoring from above, by giving the regulator better access to complaint data. It equips water service providers with a modern complaint management tool to react productively to these pressures from above and below, by processing customer issues more efficiently within a supportive regulatory environment. MajiVoice is both the feedback channel and the feedback handling mechanism.

Figure 3. MajiVoice accountability chain

ABOUT THE EVALUATION

In addition to the DCE evaluation framework, there are two key factors providing the context and informing the research objectives and research questions for this evaluation:

1. The World Bank (Kenya Office, Water): to understand the progress of MajiVoice against its original terms of reference and examine its impact on Water Service Providers in terms of service delivery improvements.

2. The World Bank (DCE Evaluation Framework commissioning team): to understand the role that digital feedback channels played in a client/service-delivery-focused beneficiary feedback system, to help inform the framework's development, and to provide a representative sample of digital citizen engagement projects to support its development.

Research themes and research questions

There are three key themes, each with associated research questions, that are explored in this evaluation.
The first is investigating digital feedback mechanism usage: how much feedback has been received via each digital feedback mechanism, where that feedback comes from, who provides this feedback (and how they compare to the general population), and how the use of digital feedback mechanisms compares to non-digital feedback mechanisms. The second theme examined here is the impact of the mechanisms on the propensity to provide feedback: whether the ability to use digital feedback mechanisms is the deciding factor in providing feedback, and how people providing feedback rate their experience (across all channels). The third theme explores the effect of the digital feedback process on attitudes, perceptions, and performance (with respect to providers and receivers of feedback, at the organizational and individual levels). This asks attitudinal questions such as: has providing feedback been worthwhile? Has the feedback/complaint been addressed? And has water service provision and responsiveness improved as a result of the feedback mechanisms?

We have also looked at some specific World Bank Kenya Office (Water) program objectives, to understand whether MajiVoice has been successfully adopted and has led to any significant changes: in patterns of usage of the system at the water utilities over time, in complaint submission patterns by water utility customers, in the closure rate over time and by complaint category, and in complaint resolution time.

Methodology

The evaluation in this chapter has been conducted using a mixed-methods approach, combining telephone and paper-based surveys, semi-structured interviews, and documentary analysis of existing system data and documentation. The key data collected are summarized in Table 1. The analysis of the data sources is mostly straightforward reporting of results and comparisons. In certain cases, more detailed analysis has been undertaken, and details on sample sizes and confidence intervals are provided.

The statistical analyses consisted of frequencies and percentages, two-way tables, and line and bar graphs to present the results of the user and staff samples. The percentages of the user survey can be generalized to the population of users with a maximum margin of error of ±3 percent (for a 95 percent confidence level). This implies that if two percentages differ by 6 percent or more, the difference between them is statistically significant. The percentages of the staff survey can be generalized to the population of staff with a maximum margin of error of ±8.7 percent (for a 95 percent confidence level). This likewise implies that if two percentages differ by 17.4 percent or more, the difference between them is statistically significant. Associations between variables were tested using the chi-square test (for categorical variables). Linear and binary logistic regressions were used to test associations between dichotomous response variables and explanatory variables, controlling for other variables. For testing statistical hypotheses, we assumed significance if p < .05.
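The quoted margins of error match the standard 95 percent half-width for a proportion with a finite population correction. The sketch below reproduces them, assuming a worst-case proportion of 0.5 and the sample and population sizes reported in this chapter (1,064 of roughly 16,000 complainants; 106 of 668 registered staff users).

    import math

    def margin_of_error(n, N, p=0.5, z=1.96):
        """95% half-width for a proportion, with finite population correction."""
        se = math.sqrt(p * (1 - p) / n)        # standard error, simple random sample
        fpc = math.sqrt((N - n) / (N - 1))     # finite population correction
        return z * se * fpc

    # Complainant survey: n = 1,064 from a population of roughly 16,000
    print(f"users: +/-{margin_of_error(1064, 16000):.3f}")   # ~0.029, i.e., the reported +/-3%
    # Staff survey: n = 106 from 668 registered users
    print(f"staff: +/-{margin_of_error(106, 668):.3f}")      # ~0.087, i.e., the reported +/-8.7%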
Table 1. Research methods used

Complainant survey
  Description: A telephone survey of customers that have recently submitted a complaint to NCWSC (so the complaint details are included within the MajiVoice system).
  Target: People that have submitted a complaint to NCWSC in the last four months (complainants).
  Sampling: Random sample of complaints from the last four months, with over-sampling of complaints submitted through digital channels. Target of at least 500 completed surveys.
  Comments: Delivered by enumerators calling complainants directly and entering results into an online survey tool. Due to the high return rate, an increased sample size was achieved. High level of positive responses to the survey request.

NCWSC staff survey
  Description: A paper-based survey distributed to selected NCWSC staff. Data input into an online survey tool for collation and analysis.
  Target: NCWSC staff from across the company that are registered users of the MajiVoice system.
  Sampling: Stratified sample of MajiVoice users drawn from across all aspects of the company (staff roles, job types, and geographical coverage). Target of at least 100 completed surveys (representing approximately 25% of regular users).
  Comments: Distributed and completed on paper, then returned to enumerators for input into the online survey tool. High level of survey completion.

Semi-structured interviews
  Description: Semi-structured interviews with selected NCWSC staff.
  Target: Key individuals and roles.
  Sampling: Limited discussions with NCWSC to solicit targeted feedback.
  Comments: Limited number of interviews.

System data analysis
  Description: Quantitative analysis of MajiVoice system and transactional data.
  Target: All available system data from launch to date.
  Sampling: 57,809 complaints and associated transactional (handling) records.
  Comments: Export of all transactional data associated with all complaints registered by the MajiVoice system; anonymized individual customer ID data.

Summary of data and results

The tables below summarize the key data collected and analyzed as part of the evaluation.

Table 2. MajiVoice system data summary

           Population    Analyzed
Total          57,809      57,809
%                100%        100%

This population comprises all complaints received by NCWSC via all channels during the period August 2013 to November 2014. All of these complaints were registered within the MajiVoice system, either by the complainant directly (automatically, via the submission of the complaint using a digital channel) or via NCWSC staff entering the complaint on behalf of the customer (when dealing with complaints received over-the-counter, by telephone, or by post).

Table 3. NCWSC complainant survey data summary

         Population   Called   Responses   Refusals   Did not answer / number did not work
Total       ~16,000    1,056         540        167                                    349
%                        100%     51.13%     15.81%                                 33.05%

This population is people that submitted a complaint to NCWSC in the four months leading up to December 2014 (complainants). They represent a subset of the MajiVoice system data population.

Table 4. MajiVoice staff survey data summary

         Population   Surveys distributed   Completed and returned   Not returned
Total           668                   120                      106             14
%                                    100%                      88%            12%

This population is made up of NCWSC staff from across the company that are registered users of the MajiVoice system.
Sampling strategies

NCWSC complainant survey: The sampling frame was all persons who had submitted complaints via all channels (non-digital and digital) to NCWSC in the four months leading up to the evaluation, to ensure an adequate chance that the persons would remember details about their complaint. This meant a potential population size of approximately 16,000. From this population we drew a sample of 1,056 persons, split equally between digitally submitted and non-digitally submitted (over-the-counter, telephone, and post) complaints.

Originally, we had planned to stratify by additional criteria (complaint subject: billing/non-billing; complaint time on system: beyond/within mandated turnaround time), but due to the higher completion rates and speed of initial data collection, we stratified only by complaint submission channel. The sample size ended up being large enough to get adequate representation in terms of turnaround times and billing/non-billing without stratifying.

The complainant survey was undertaken by four enumerators calling complainants over ten days. The results and responses from all calls were entered by the enumerators directly into an online survey tool (SurveyMonkey)52 for functionality (enforced logic), collation, and preliminary summary analysis.

MajiVoice user staff survey: The paper-based staff survey was distributed to 120 NCWSC staff, about twenty in each geographic zone covered by the company. These staff members were chosen randomly from MajiVoice user lists, and stratified by department and region to try to ensure a response from each major department in each region. In a departure from a purely randomized approach, we also specifically selected some known supervisors to participate.

No direct incentives were given to complete the surveys (which were also anonymous—names and staff IDs were not asked for, and we delivered an envelope with each survey for its confidential return). However, we worked closely with the customer relations department, and their local staff distributed and collected the surveys. Active MajiVoice users (defined as users that have logged into the system in the previous seven days) are drawn from NCWSC staff and average just under 400 people per week. The total number of registered users for NCWSC is 668. The completed paper surveys were returned and transcribed into an online survey tool (SurveyMonkey) by the enumerators.

MajiVoice system data: A complete system data analysis was undertaken, covering all complaint records held in the MajiVoice system from its launch in August 2013 until January 5, 2015. A total of 57,809 complaints were logged on the system during this period. This includes all complaints logged by the system (via all digital and non-digital channels), which should equate to all complaints received by the water company during this period. Each complaint is a record of an individual complaint received by the water company. Complaints received through the digital channels are entered automatically onto the system; complaints received in person or via telephone are entered onto the system by the receiving officer at the water company. The system is thus both a complaints handling system and a complaints submission and receipt system. The system data were obtained from system queries and direct exports of raw data. The complaint data consisted of 57,809 records, with at least sixteen data points associated with each record; the raw data files in .csv format were approximately 63MB in size.
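To indicate the kind of processing this involved, the sketch below tabulates such an export by submission channel and by month. The file name and column names are illustrative assumptions, not the actual MajiVoice export schema.

    import pandas as pd

    # File name and column names ("channel", "opened_at") are illustrative
    # assumptions, not the actual MajiVoice export schema.
    df = pd.read_csv("majivoice_complaints_export.csv", parse_dates=["opened_at"])

    # Complaint counts and shares by submission channel (cf. the channel
    # breakdown in the executive summary).
    counts = df["channel"].value_counts()
    print(pd.DataFrame({"count": counts,
                        "percent": (counts / len(df) * 100).round(1)}))

    # Monthly volumes per channel: the basis for the growth-rate
    # comparisons reported in the findings below.
    monthly = df.groupby(["channel", pd.Grouper(key="opened_at", freq="M")]).size()
    print(monthly.unstack("channel").tail())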
FINDINGS

Theme 1: Investigating digital feedback mechanism usage

We examined two key research questions in this regard. First, how much feedback has been received via each digital feedback mechanism? Second, how does the use of digital feedback mechanisms compare to non-digital feedback mechanisms?

The vast majority of complaints made are not received through digital feedback channels. There were 57,809 unique complaint tickets. Most of the feedback (96.9 percent) is submitted over-the-counter (75.7 percent) or through a telephone call (21.2 percent). All digital channels combined accounted for the remaining feedback (3.1 percent). All digital channels received at least some usage (mobile web, online portal, SMS, USSD, email, and social network sites).53

Figure 4. Complaints by channel (system data)

The main focus of complaints relates to individual service delivery issues (e.g., my water services bill, my meter reading). A much smaller proportion of complaints can be considered public complaints about issues such as water leaks and corruption. Some of these smaller complaint categories are so small in terms of numbers submitted as to not be statistically relevant (e.g., refunds, corruption, vandalism/theft, and illegal connections). The main reasons for feedback are billing (27.7 percent), meter reading (16.7 percent), sewer blockage (12.5 percent), no water (11.4 percent), and water leaks (10.7 percent).

People who submitted complaints preferred to do so over-the-counter, except for reporting corruption and vandalism/theft (online portal, though note the small number of occurrences); illegal connections, major bursts, new connections, water leaks, and water quality (telephone call); secondary tickets (mobile web); and general issues (SMS). For a full breakdown of channel and complaint category, see Figure 6.

Figure 5. Category of complaint (system data)

The rate of increase in the number of complaints from June 2013 to December 2014 differs among channels. Although the volume of complaints made through non-digital channels (over-the-counter and telephone calls) has largely exceeded the volume made through digital channels, the volume of complaints made through the online portal (digital) is increasing at a steeper rate (monthly average growth of 31.9 percent in the second half of 2014) than telephone calls (monthly average growth of 14.5 percent in the second half of 2014) and the other digital channels. If this growth continues, the proportion of complaints received via digital channels is expected to increase, narrowing the gap in usage between digital and non-digital channels.
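To make the narrowing-gap reasoning concrete, the sketch below compounds the two reported average monthly growth rates. The growth rates are those reported above; the starting monthly volumes are hypothetical assumptions, since the chapter reports cumulative rather than monthly figures.

    # Illustrative extrapolation only: growth rates are those reported above;
    # the starting monthly volumes are hypothetical assumptions.
    counter, portal = 3000.0, 50.0        # assumed complaints per month
    for month in range(1, 25):
        counter *= 1.101                  # over-the-counter: +10.1% per month
        portal *= 1.319                   # online portal:    +31.9% per month
        if month % 6 == 0:
            share = portal / (portal + counter) * 100
            print(f"month {month:2d}: portal share of monthly volume {share:4.1f}%")

Under these assumptions the portal's share of monthly volume passes one half only after roughly two years, which is consistent with the expectation below that over-the-counter will remain the main channel in the near term.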
Figure 6. Channel submission and complaint category mapping

However, there is sustained growth in complaints received over-the-counter (monthly average growth of 10.1 percent in the second half of 2014), suggesting that this will remain the main channel for feedback. The following two figures provide insight into how these changes in channel usage patterns are developing over time, separately for non-digital (Figure 7) and digital channels (Figure 8). Note the difference in Y-axis scale.

Figure 7. Cumulative increase in the volume of complaints through non-digital channels (August 2013 to November 2014)

Figure 8. Cumulative increase in the volume of complaints through digital channels (August 2013 to November 2014)

The following research questions are also relevant to this theme: where does that feedback come from? And who provides this feedback (and how do they compare to the general population)? There is no demographic breakdown of NCWSC customers available in general, but the complainant survey provides various demographic indicators (gender, age, and education level), with a further breakdown by complaint channel, which offers some insights into who is providing feedback.

Several key things emerge. With regard to gender, males dominate complainants (65.2 percent)54 across all channels (see Figure 9). Male and female complainants equally use the over-the-counter, telephone hotline, and online portal channels.
Figure 9. Gender breakdown of complainants by channel

There is no statistically significant association between gender and the use of digital versus non-digital channels, but there are notable differences between individual digital channels (see Figure 9). The digital channels of SMS and email are dominated by male complainants (more than 90 percent of SMS and email complainants are men). The digital channels of mobile web and USSD are more balanced in terms of gender usage (55 to 60 percent men).

Digital channels tend to be used more by younger people, particularly those between 20 and 44 years old (see Figure 10). There is a negative and statistically significant association between age and the use of digital (vs. non-digital) channels, as tested by a binary logistic regression (b = -0.04, p < .01, controlling for education). Digital channels also tend to be used more by better-educated users (see Figure 11). There is a positive and statistically significant association between education and the use of digital (vs. non-digital) channels, as tested by a binary logistic regression (b = 0.48, p < .01, controlling for age).

Some important variations in digital channel usage appear here. Figure 9 depicts the gender breakdown for each of the digital and non-digital channels (letters and social network sites were excluded because n=1). In general, there are more male complainants (65.2 percent) than female complainants (34.8 percent), and this gender imbalance is reflected across all channels. Although the sample sizes are too small for accurate generalizations to the user population, Figure 9 suggests that SMS (men=12, women=1) and email (men=11, women=1) are used almost exclusively by men. The less gender-biased digital channels are USSD (men=15, women=12) and mobile web (men=49, women=33). The non-digital channels mainly used by men are over-the-counter (men=121, women=65) and the telephone hotline (men=31, women=19).

Figure 10. Age breakdown of complainants by channel

Figure 11. Education level of complainants by channel and gender

Digital channels (mobile web, online portal, USSD) tend to be used by younger people (Figure 10).
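The associations above come from binary logistic regressions of digital (vs. non-digital) channel choice on age and education. The sketch below reproduces the form of that test on synthetic data; the data frame, variable coding, and seeded coefficients are illustrative assumptions, not the survey's actual data.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Synthetic stand-in for the complainant survey; coefficients are seeded
    # near the reported values (age b = -0.04, education b = 0.48) purely so
    # the output is recognizable. This is not the study's data.
    rng = np.random.default_rng(0)
    n = 1000
    survey = pd.DataFrame({
        "age": rng.integers(18, 75, size=n),
        "education": rng.integers(0, 6, size=n),   # ordinal education level
    })
    logit_p = -1.0 - 0.04 * (survey["age"] - 35) + 0.48 * (survey["education"] - 3)
    survey["digital"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

    # Binary logistic regression: digital channel use on age, controlling
    # for education (and vice versa).
    model = smf.logit("digital ~ age + education", data=survey).fit()
    print(model.params)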
Because in this sample women are more educated [χ²(4) = 12.1, p = 0.01], the effects of education and gender on channel usage may be confounded. When education is analyzed separately for male and female complainants, it becomes apparent that USSD, mobile web, the telephone hotline, and the online portal are used by people with higher levels of education, while over-the-counter and SMS are used by people with lower levels of education (Figure 11), irrespective of gender.

Theme 2: Digital feedback mechanisms and the propensity to provide feedback

The key research questions in this theme are: is the ability to use digital feedback mechanisms the deciding factor in providing feedback? And how do people providing feedback rate their experience (across all channels)?

The complainant survey data suggest that the digital complaint channels are not the deciding factor in whether or not complaints are submitted. The majority (77.2 percent) said they would have complained in another way had the channel they used not been available.

Figure 12. Would complaints have been submitted using other channels?

Figure 13. Satisfaction with NCWSC service provision by type of channel used

Figure 14. Satisfaction with NCWSC service provision by type of channel used (higher-educated users)

Figure 15. Ease of complaint submission

Figure 16. Ease of complaint submission by channel used
The dominance of non-digital channels has already been noted (Figures 4 and 7), as has the seeming preference for certain channels for certain types of complaint (Figure 8). So overall, the ability to use digital channels is an important factor only in a limited number of circumstances (the reporting of corruption and vandalism/theft), where privacy and anonymity seem to be influencing factors.

How do people submitting their complaints rate their experience? Positively, would seem to be the conclusion. People providing feedback are generally satisfied or very satisfied (49 percent) rather than dissatisfied or very dissatisfied (35.6 percent); the remaining 15.4 percent are neutral. However, when analyzed by channel, people are more satisfied when they submit feedback over the counter (58.1 percent are satisfied or very satisfied) or by email (50 percent are satisfied or very satisfied), and more dissatisfied when they submit feedback through USSD (42.9 percent are dissatisfied or very dissatisfied), the online portal (42.9 percent), or mobile web (40.2 percent).

Figure 17. Is feedback taken seriously, by channel of complaint ("Do you feel the feedback you provided has been taken seriously by Nairobi Water Company?"; yes / don't know / no).

Figure 18. Is feedback taken seriously, by category of complaint (same survey question as Figure 17).

However, there is no association between levels of satisfaction with the service and type of channel (digital vs. non-digital) when controlled for gender, age, and education. More educated people tend to be more unsatisfied with the service in general (b = -0.14, p < .05), irrespective of the channel used.

Figure 19. Satisfaction with complaint resolution by category of complaint ("Has Nairobi Water resolved your complaint satisfactorily?"; percent not resolved / percent resolved to satisfaction): major burst 20.00/80.00; water leak 20.83/79.17; meter problems 27.27/72.73; sewer blockage 28.12/71.88; faulty meters 28.57/71.43; meter reading 35.09/64.91; customer care 36.36/63.64; parallel accounts 40.00/60.00; no water 41.56/58.44; reconnections/disconnections 43.75/56.25; general 45.45/54.55; billing 49.35/50.65; account termination 50.00/50.00.
There is no effect of gender or age on levels of satisfaction. Because more educated people make more use of USSD, mobile web, and the online portal, and more educated people tend to be less satisfied with the service provided (irrespective of the channel), the low levels of satisfaction with these channels could be due to the characteristics of their users, particularly their education. However, even when the analysis is restricted to people with the highest level of education, they too tend to be more satisfied with complaints delivered over the counter and by telephone call, and less satisfied with mobile web and the online portal. Possibly people expect higher standards of service when using digital channels, or maybe they feel more satisfied with human interaction, which can provide a more personalized service and immediate feedback.

Figure 20. Satisfaction with NCWSC service provision: very satisfied 7.29 percent; satisfied 41.67 percent; neither satisfied nor dissatisfied 15.42 percent; dissatisfied 28.75 percent; very dissatisfied 6.88 percent.

Figure 21. How seriously are complaints taken? Yes, I feel Nairobi Water Company took my complaint seriously: 60.14 percent; no, I feel Nairobi Water Company did not take my complaint seriously: 33.49 percent; I don't know / no opinion: 6.38 percent.

Figure 22. Has MajiVoice made a positive difference to customers and the practice of NCWSC? Staff answering yes, by role category: administration 98.08 percent; management 100 percent; supervisory 100 percent.

The process of providing feedback is perceived as easy or very easy (80.5 percent), but the dominance of over-the-counter use needs to be considered (Figure 4). When ease of submission is broken down by channel, the digital channels compare well to the non-digital channels (Figure 16). There does not seem to be any significant barrier to complaint submission caused by digital technology use; the functionality and usability of the digital complaint systems should be viewed positively in this regard, as they, by themselves, are not barriers to use. In fact, when compared to the over-the-counter and telephone hotline channels, the digital channels outperform in terms of their ease of use.

Table 5. Average unique weekly log-ins by staff users:
Third quarter 2013: 89
Fourth quarter 2013: 279
First quarter 2014: 335
Second quarter 2014: 375
Third quarter 2014: 394
Fourth quarter 2014: 385

Table 6. Regularity of system login ("How often do you usually log into MajiVoice for your work?"):
Daily: 96 (90.6%)
A few times per week: 7 (6.6%)
Never: 2 (1.9%)
No answer: 1 (0.9%)
Total: 106 (100.0%)

Most customers who have submitted a complaint feel that it has been taken seriously by the water company (60.1 percent). The categories in which customers most feel that their complaints were taken seriously are faulty meters, meter problems, customer care, and water leaks. More users who submit their complaints through email feel that their feedback was taken seriously (90 percent, n=10) compared with complaints submitted through the online portal (56.2 percent, n=177). Figures 17 and 18 provide some insights into this.
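The "no association once controls are added" result can be checked in the same way as the earlier channel-choice regression. A sketch under the same assumptions (hypothetical columns; satisfaction collapsed to a binary indicator for simplicity rather than modeled on the full five-point scale):

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical extract: satisfaction coded 1-5, gender coded 0/1.
df = pd.read_csv("complainant_survey.csv")

df["satisfied"] = (df["satisfaction"] >= 4).astype(int)  # satisfied or very satisfied
df["digital"] = df["channel"].isin(
    {"sms", "email", "ussd", "mobile_web", "online_portal"}
).astype(int)

# Satisfaction against channel type, controlling for gender, age, and education.
# In the study, the channel-type coefficient is not significant once controls
# are included, while education is negative (b = -0.14, p < .05).
X = sm.add_constant(df[["digital", "gender", "age", "education_level"]])
print(sm.Logit(df["satisfied"], X).fit().summary())
```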
The categories where customers are more satisfied with the feedback process are major burst, water leak, meter problems, and sewer blockage. Unsurprisingly, that view is more likely to be held by customers who have had their complaint resolved and closed.

Theme 3: Effects on attitudes, perceptions, and performance

The key research questions in this theme are: has providing feedback been worthwhile? Has the feedback/complaint been addressed? And has water service provision and responsiveness improved as a result of the feedback mechanisms?

Figure 23. Monthly complaints received on MajiVoice, June 2013 to December 2014 (monthly count and running sum). Monthly volumes rose from 131 in June 2013 to roughly 4,000 per month by late 2014, for a cumulative total of 57,380 complaints.

Figure 24. Complaint closure rate and average resolution time by complaint category, period 1 (September 1, 2013 to February 28, 2014):

Category | Total received | Total closed | Average resolution time
No Water | 1956 | 1949 (99.64%) | 16 days 18 hours 47 minutes 29 seconds
Water Quality | 32 | 31 (96.88%) | 38 days 14 hours 36 minutes 49 seconds
Water Leak | 1636 | 1636 (100.00%) | 8 days 19 hours 7 minutes 49 seconds
Sewer Blockage | 2283 | 2280 (99.87%) | 7 days 1 hour 27 minutes 53 seconds
Billing | 5487 | 5464 (99.58%) | 129 days 9 hours 7 minutes 19 seconds
Vandalism/Theft | 11 | 11 (100.00%) | 63 days 22 hours 20 minutes 27 seconds
Meter Problems | 200 | 200 (100.00%) | 32 days 6 hours 10 minutes 42 seconds
Corruption | 5 | 5 (100.00%) | 60 days 17 hours 57 minutes 32 seconds
Customer Care | 231 | 230 (99.57%) | 57 days 15 hours 48 minutes 18 seconds
General | 293 | 292 (99.66%) | 30 days 20 hours 39 minutes 22 seconds
Major Burst | 258 | 257 (99.61%) | 19 days 23 hours 19 minutes 56 seconds
New Connection | 7 | 7 (100.00%) | 103 days 15 hours 44 minutes 11 seconds
Account Termination | 100 | 100 (100.00%) | 120 days 6 hours 25 minutes 36 seconds
Stolen Meter | 22 | 22 (100.00%) | 38 days 20 minutes 11 seconds
Illegal Connection | 3 | 3 (100.00%) | 113 days 11 hours 2 minutes 6 seconds
Meter Reading | 3844 | 3834 (99.74%) | 113 days 18 hours 35 minutes 36 seconds
Faulty Meters | 438 | 437 (99.77%) | 60 days 23 hours 57 minutes 60 seconds
Parallel Accounts | 136 | 134 (98.53%) | 141 days 3 hours 28 minutes 46 seconds
Deposit Refunds | 3 | 3 (100.00%) | 120 days 9 hours 21 minutes 32 seconds
Non Reflected Payments | 7 | 7 (100.00%) | 59 days 3 hours 28 minutes 45 seconds
Reconnections/Disconnections | 1113 | 1113 (100.00%) | 11 days 4 hours 28 minutes 45 seconds
Incorrect Account Details | 59 | 59 (100.00%) | 57 days 9 hours 54 minutes 14 seconds
Contracting | 808 | 808 (100.00%) | 32 days 21 hours 54 minutes 30 seconds
Total | 18,932 | 18,882 | 71 days 5 hours 42 minutes 30 seconds
Figure 25. Complaint closure rate and average resolution time by complaint category, period 2 (March 1, 2014 to August 31, 2014):

Category | Total received | Total closed | Average resolution time
No Water | 2444 | 2430 (99.43%) | 10 days 12 hours 49 minutes 36 seconds
Water Quality | 65 | 64 (98.46%) | 13 days 10 hours 1 minute 48 seconds
Water Leak | 2732 | 2729 (99.89%) | 6 days 50 minutes 24 seconds
Sewer Blockage | 2766 | 2756 (99.64%) | 5 days 17 hours 34 minutes 22 seconds
Billing | 6011 | 5907 (98.97%) | 62 days 11 hours 15 minutes 20 seconds
Vandalism/Theft | 8 | 8 (100.00%) | 24 days 7 hours 42 minutes 57 seconds
Meter Problems | 269 | 268 (99.63%) | 34 days 12 hours 11 minutes 45 seconds
Secondary Ticket | 12 | 11 (91.67%) | 18 days 14 hours 45 minutes 12 seconds
Customer Care | 767 | 766 (99.87%) | 21 days 12 hours 15 minutes 20 seconds
General | 64 | 62 (96.88%) | 17 days 9 hours 46 minutes 3 seconds
Major Burst | 335 | 332 (99.10%) | 11 days 4 hours 47 minutes
New Connection | 81 | 81 (100.00%) | 9 days 17 hours 12 minutes 9 seconds
Account Termination | 341 | 333 (97.65%) | 27 days 6 hours 11 minutes 54 seconds
Stolen Meter | 66 | 66 (100.00%) | 22 days 16 hours 23 minutes 11 seconds
Illegal Connection | 13 | 13 (100.00%) | 50 days 20 hours 31 minutes 43 seconds
Meter Reading | 3704 | 3641 (98.30%) | 56 days 17 hours 56 minutes 9 seconds
Faulty Meters | 492 | 491 (99.80%) | 25 days 17 hours 44 minutes 33 seconds
Parallel Accounts | 150 | 141 (94.00%) | 66 days 6 hours 58 minutes 8 seconds
Deposit Refunds | 3 | 3 (100.00%) | 24 days 17 hours 25 minutes 49 seconds
Non Reflected Payments | 52 | 51 (98.08%) | 40 days 3 hours 32 minutes 46 seconds
Reconnections/Disconnections | 1986 | 1986 (100.00%) | 8 days 1 hour 3 minutes 29 seconds
Incorrect Account Details | 54 | 54 (100.00%) | 26 days 1 hour 16 minutes 25 seconds
Contracting | 137 | 136 (99.27%) | 25 days 14 hours 46 minutes 57 seconds
Total | 22,552 | 22,329 | 32 days 8 hours 20 minutes 21 seconds

NCWSC are resolving the majority of customers' complaints satisfactorily: 59.5 percent of the customers sampled think that NCWSC resolved their complaints satisfactorily. A similar number (60.1 percent) felt that their complaints had been taken seriously. While both of these figures are positive, they still leave a large proportion of complaints that are not resolved satisfactorily or that customers feel have not been taken seriously.

How do staff view the contribution of MajiVoice to the improvements in water service provision and responsiveness? A very clear majority of staff (97.9 percent), irrespective of their role, think that MajiVoice has made a positive difference to customers and to the practice of NCWSC.

Theme 4: Understanding successful adoption

The key research question in this theme was: has MajiVoice resulted in any significant changes in patterns of usage of the system at the water utilities over time, in complaint submission patterns by water utility customers, in closure rates over time and by complaint category, or in complaint resolution times?

Since the launch of the system in the third quarter of 2013, there has been a steady increase in the number of registered NCWSC staff users on the system, accompanied by an increase in how regularly those users use the system (see Tables 5 and 6).
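The closure rates and average resolution times in Figures 24 and 25 are simple aggregates that can be reproduced from a ticket export. A minimal sketch, assuming a hypothetical CSV with category, opened_at, and closed_at columns (closed_at empty for tickets still open):

```python
import pandas as pd

# Hypothetical MajiVoice ticket export; file and column names are illustrative.
tickets = pd.read_csv(
    "majivoice_tickets.csv", parse_dates=["opened_at", "closed_at"]
)

summary = (
    tickets.assign(resolution=tickets["closed_at"] - tickets["opened_at"])
    .groupby("category")
    .agg(
        total_received=("category", "size"),
        total_closed=("resolution", "count"),   # count skips open tickets (NaT)
        avg_resolution=("resolution", "mean"),  # mean over closed tickets only
    )
)
summary["closure_rate_pct"] = (
    100 * summary["total_closed"] / summary["total_received"]
).round(2)
print(summary.sort_values("avg_resolution"))
```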
In addition, through the staff survey, staff report regular usage of MajiVoice, and it has clearly become an essential part of their daily workflows, with over 90 percent of surveyed staff accessing the system on a daily basis.

The issue of causation should be considered here. It is probably fair to say that the availability of MajiVoice is not, by itself, the cause of the uptake in usage of MajiVoice by members of NCWSC staff. Rather, it is that availability combined with the organizational commitment to adopting the software and to designing and supporting its use with a set of complaint handling processes and strategies. It is likely that this integrated approach to the MajiVoice system is the key to its rapid uptake and success.

During the first year after MajiVoice's launch, the number of complaints recorded increased almost tenfold (see Figure 23), the percentage closed steadily increased, and average complaint closure times halved (see Figures 24 and 25). These metrics confirm a significant positive effect of MajiVoice on the business processes of NCWSC, in particular its complaint handling processes.

A baseline study carried out at the beginning of the project indicated that, prior to MajiVoice, approximately 400 complaints per month were formally recorded at NCWSC.55 This figure was quickly exceeded in MajiVoice, with an average of 1,529 complaints per month recorded in the first six months, rising to 3,794 per month more recently. This represents an almost tenfold rise in recorded complaints. In Nakuru, complaints recorded in MajiVoice have averaged 442 per month since the roll-out in October 2014.

However, it should be noted that the lack of comprehensive complaint tracking prior to MajiVoice makes earlier statistics unreliable. The reported increase is likely due in part to better recording rather than simply to increased complaint volumes.

Regarding complaint closure times, handling times have uniformly improved, in that nearly all complaints are being dealt with and closed in significantly quicker turnaround times. It is interesting to note that certain categories of complaints continue to have much longer handling times than others. In particular, complaints associated with billing and account status (broadly, revenue-related complaints) have resolution times nearly twice as long as the average complaint; see the Billing and Parallel Accounts categories in Figures 24 and 25.

CONCLUSIONS

Evaluation research questions

Theme 1: Investigate digital feedback mechanism usage

How much feedback has been received via each digital feedback mechanism? Digital feedback channels receive only a small proportion of all the feedback provided through the MajiVoice system: just 3.1 percent. Usage of some of those digital channels (the online portal and mobile web platforms) is growing at a significantly faster rate than usage of non-digital channels.

Where does that feedback come from? The feedback comes from across the NCWSC customer base. The main focus of complaints relates to individual service delivery issues (e.g., my water services bill, my meter reading). A much smaller proportion of complaints can be considered public complaints about issues such as water leaks and corruption.

Who provides this feedback (and how do they compare to the general population)?
And: how does the use of digital feedback mechanisms compare to non-digital feedback mechanisms? Males dominate complainants (65.2 percent) across all channels. Male and female complainants equally use the over-the-counter, telephone hotline, and online portal channels. There is no statistically significant association between gender and use of digital/non-digital channels, but there are differences of note between the digital channels. The digital channels SMS and email are most used by male complainants (more than 90 percent of email complainants are men), while mobile web and USSD are more balanced in terms of gender usage (55–60 percent of their users are men). Digital channels tend to be used more by younger people, particularly those between 20 and 44 years old, and also by more educated users.

Theme 2: Explore the impact of the mechanisms on the propensity to provide feedback

Is the ability to use digital feedback mechanisms the deciding factor in providing feedback? Digital complaint channels are not the deciding factor in complaints being submitted.

How do people providing feedback rate their experience (across all channels)? People providing feedback are generally satisfied or very satisfied (49 percent) rather than dissatisfied or very dissatisfied (35.6 percent); the remaining 15.4 percent are neutral. When analyzed by channel, people are more satisfied when they submit feedback over the counter (58.1 percent) or by email (50 percent), and more dissatisfied when they submit feedback by online portal (42.9 percent), USSD (42.9 percent), or mobile web (40.3 percent).

Theme 3: Explore the effect of the digital feedback process on attitudes, perceptions, and performance (providers and receivers of feedback) at an organizational and individual level

Has providing feedback been worthwhile? Most customers that have submitted a complaint feel that it has been taken seriously by the water company (60.1 percent). The process of providing feedback is perceived as easy to very easy (80.5 percent). The dominance of over-the-counter feedback channel use needs to be considered.

Has the feedback/complaint been addressed? NCWSC are resolving the majority of customers' complaints satisfactorily; 59.5 percent of the customers sampled think that NCWSC resolved their complaints satisfactorily.

Has water service provision and responsiveness improved as a result of the feedback mechanisms? During the first year after MajiVoice's launch, the number of complaints recorded increased almost tenfold. The percentage of complaints successfully closed has steadily increased, and average turnaround times have halved.

Theme 4: Understanding successful adoption of MajiVoice

Regarding changing patterns of system usage at the water utilities over time, MajiVoice has become an essential part of NCWSC staff daily workflows, with over 90 percent of surveyed staff accessing the system on a daily basis.

Regarding complaint submission patterns by water utility customers, NCWSC is handling, and successfully resolving, a near tenfold increase in complaints.56 NCWSC now has an effective complaint handling mechanism (process and system) that provides easy performance management across the entire complaint handling process.
Regarding closure rates over time and by complaint category, average complaint closure times have halved over the first year of MajiVoice being used by NCWSC. Complaint handling times have uniformly improved across all categories, although complaints associated with billing and account status have resolution times nearly twice as long as the average complaint.

Through the five lenses of the DCE evaluation framework

Logic

The MajiVoice system is a combination of a digital complaint submission system and a digitized complaint handling system. Combined, these allow the water utilities using the system to receive feedback from customers and members of the public via digital feedback channels and then to manage that feedback. In addition, the system provides a framework for managing feedback received by traditional methods (letter, over the counter, and telephone call). The end result is a much more effective feedback management system.

The MajiVoice system and accompanying processes work for both digital and non-digital channels, and so in this sense the logic of the system is valid: it expands the range of channels that can be used to provide feedback and provides a much more rigorous, automated, and functional complaint handling system. Currently, the level of usage of digital feedback channels is low but growing.

If the logic of the system were to replace or discourage non-digital complaint channels, then with current patterns of use, questions would need to be asked about the appropriateness of the system. But that is not the objective of MajiVoice. The improvements to service delivery and complaint handling are significant, and so in terms of business efficiency and accountability to customers (beneficiaries) the MajiVoice system should be considered a significant improvement on what has been done in this regard previously in Kenya. The logic in this regard is sound.

Ultimately, ICT-mediated feedback plays only a minor role in the current installation of MajiVoice. While digital feedback channels are a key component of the system, their usage forms a very minor part of the total feedback received. The important role that ICT does play is much more centered on the management of organizational processes. In this sense, MajiVoice is less a success in terms of digital participation or digitally mediated feedback ("e-participation") and more a success in terms of digitally enhanced service delivery with regulatory-enforcement opportunities ("e-government").

Control

This lens is perhaps the least relevant when evaluating MajiVoice. The issues of who controls and who influences the digital engagement processes around this beneficiary feedback system are not particularly relevant. MajiVoice exists primarily to allow customers of the water utilities in Kenya to provide feedback, and for those utilities to manage that feedback process more effectively, with an upward accountability role played by the water services regulator. There is no control in terms of restrictions on providing feedback per se, although of course there are potential technical barriers to using the digital channels (our research shows high levels of satisfaction with the ease of use of the digital channels, so barriers in this regard seem to be minimal).
There is the potential barrier of the cost of using the digital channels, but those costs are also minimal to zero, depending on the channel used. In this regard, if cost is a barrier, alternative no-cost channels exist that can be selected instead.

Participation

There are clear differences in digital channel usage compared to non-digital channels. Broadly speaking, better-educated, younger people are using the digital channels to provide feedback. The existence of non-digital channels, and their massively dominant position in terms of usage by all categories of people, means that the digital channels are not significant enablers of, or barriers to, use. If current trends continue, digital feedback channels are set to grow more rapidly than non-digital channels (true across all demographics), and so in time participation should be re-examined.

Technology

Reported satisfaction and ease of use of MajiVoice, and in particular of the digital feedback channels, is encouraging. The tools, reporting process, and technical implementation all seem to be performing well.

The complaint management framework that the MajiVoice system provides should be considered a key success of the program, as should the wider regulatory framework in which it operates. Together these have produced a manageable, auditable, and targeted feedback handling system that the water service provider in question has the right incentives to use and to adopt effectively. The growth in complaints and the improvements in resolution times are all significant outcomes. This technology for handling feedback and complaints, combined with a regulatory environment that supports such complaints being taken seriously, should be considered highly successful.

Whether it is the MajiVoice technology, the regulatory framework, or the change management processes adopted by NCWSC that is the dominant factor in this success is not clear; the evidence is difficult to disentangle. It is probably fair to say that a combination of these factors is contributing to the adoption and uptake of the more effective beneficiary feedback system.

No significant beneficiary-facing barriers have been identified as technology related. The biggest "barrier" to use is attitudinal: the beneficiary preference for traditional complaint mechanisms, with a very dominant preference for over-the-counter complaints.

The availability of the MajiVoice software as an open source tool set is encouraging, as re-use and uptake in other contexts is possible.

Difference

The adoption of the MajiVoice software and its use as a complaint handling back-end platform has resulted in a tenfold increase in complaint submissions (perhaps better thought of as a tenfold increase in complaint submission and recording). Those complaints are generally being resolved to the satisfaction of complainants. The performance of the water utility using MajiVoice has significantly improved in this regard.

From the evaluation research undertaken here, it is not possible to draw any further conclusions regarding the impact of MajiVoice on actual water service delivery.
LIMITATIONS AND FURTHER RESEARCH

This was a rapid, limited-scope evaluation; it therefore has not explored areas of additional potential interest (e.g., the effect of improved complaint handling on water service provision and access to water, a more detailed and robust investigation into the causality of some of the apparent service-level improvements identified, and the role of digital confirmation interactions in complainant satisfaction and interaction decisions). The lack of extensive baseline data for complaints, complaint resolution, and accompanying satisfaction levels has also limited the conclusions that can be drawn. This evaluation will allow those areas to be looked at in more detail: an initial baseline now exists, and the experience of undertaking this evaluation will help inform any further roll-outs of MajiVoice in more areas, with more water utilities, in different settings, or with different service providers.

The role of digital confirmation

An aspect of the digital channels that is felt to be important in practice, but that isn't immediately obvious from the research undertaken or data collected,57 is the role of the confirmation SMS messages that the MajiVoice system sends to every customer making a complaint, no matter what submission channel was used. While no more than around 3 percent of customers submit their complaints through digital channels, nearly 100 percent get a return SMS. The only exceptions are when there is an SMS outage, when complainants provide wrong numbers while submitting over the counter, or when they use phone numbers that are not for their own phones. The initial message thanks the customer for his/her feedback and contains the ticket number. The closure message alerts the customer to the complaint resolution and requests a report if the closure was not satisfactory.

There is potential evidence that this automated customer communication and sending of ticket numbers itself improves customer satisfaction. Using the customer survey data to try to understand what influences customer satisfaction with NCWSC, and whether customers feel their complaints were taken seriously, some basic regression analysis indicated that, along with resolution duration and whether or not the ticket was closed, whether a confirmation SMS was (consciously) received was a significant determinant.

Further work in this area would be worth undertaking: the frequency, timing, and contents of such follow-up digital communications could be important influencers of satisfaction and beneficiary feedback interaction in a system such as MajiVoice.

CHAPTER 4
IMPACT OF ONLINE VOTING ON PARTICIPATORY BUDGETING IN BRAZIL

Matt Haikin, Aptivate, UK
With Fredrik M Sjoberg, Digital Engagement Evaluation Team, World Bank
Jonathan Mellon, Digital Engagement Evaluation Team, World Bank

February 2015

EXECUTIVE SUMMARY

This report explores the difference made by the inclusion of remote, ICT-mediated voting in the state-wide participatory budgeting process in Rio Grande do Sul, Brazil: the Sistema Estadual de Participação Popular e Cidadã (the "state system of public citizen participation"), referred to widely as simply the Sistema.
The overarching objective of the evaluation was to explore how remote, ICT-mediated voting in the participatory budgeting process in Rio Grande do Sul has impacted participation rates (turnout), inclusiveness, and the way in which online and in-person voters engage throughout the process.

This report is timely, as the Sistema in Rio Grande do Sul is one of the few cases of participatory budgeting to be scaled beyond the municipal level, and the Sistema emerged from the Porto Alegre model that is well known around the world as an exemplar of participatory budgeting.

Key findings

Does online voting affect the level of turnout? The evaluation identified at minimum a 12.2 percent increase in voter turnout directly attributable to online voting.

Do online and in-person voters have different demographics? The online voting population is younger, more educated, higher-earning, and somewhat more male than its offline counterpart, and includes a lower proportion of non-white citizens.

Do online and in-person voters engage in different ways? People chose to vote online mainly on the basis of its convenience: Internet access was not a major factor, but lack of awareness of the option to vote online was. Online voters are slightly more likely to have voted in previous years and somewhat more likely to have been involved in wider political activities than in-person voters.

Do online and in-person voters vote differently, and does this affect spending? The winning demand differed between online and offline voters in 18 of the 28 COREDES, and in two of these, the online vote can be shown to have changed the final outcome. Given that the online voting population is younger, better-educated, and higher-earning, this has the potential to create a tension between the redistributive and social justice goals of participatory budgeting and the democratic goals of widening representation.

In addition to the primary findings, a number of secondary findings helped illuminate the process further. For example, a lack of clear and widely shared goals for the Sistema potentially leads to online and offline voting channels that each support different interpretations of the goals of participatory budgeting. The research also revealed that opportunities for manipulation of votes and voters exist (mainly in the offline vote), and this manipulation was observed directly by the evaluation team in a number of locations. Overall, transparency and oversight of the process are unclear. Finally, although the budget allocated to participatory budgeting has seen a modest increase in real terms since 2008, as a percentage of overall state spending it remains significantly lower than at any time since 2007.

RECOMMENDATIONS

The findings are sufficiently firm to support a number of recommendations that, it is hoped, could be used by those in charge of the Sistema to improve it and to make it more inclusive and adaptive, or by others looking to the Sistema as a model for replication or learning. These recommendations span the deliberative, voting, and budget-control stages.

First, improved communication and better promotion of the assemblies and of the voting are needed, alongside better background information that should be made available to voters and assembly attendees.
There is much room for improvement in data collection and use, including better data collection on assembly participants and on voters, both online and in-person. These data need to be made more open, along with the results. Technology can help with this process, but should also be integrated into the earlier deliberative stages. Going forward, there should be fewer opportunities for manipulation and influence of the voting, and it will be necessary to establish clear, unambiguous, and shared goals for the participatory budgeting process. Overall, the process would benefit from increased openness of the process itself, the results, and the data.

BACKGROUND

Participatory budgeting is becoming a common tool in development practice and is used in over 2,700 cities worldwide. Research suggests that PB can have a significant impact on development goals such as reduced infant mortality (Goncalves, 2009). Claims such as "PB has promoted a redistributive development model while improving budgetary planning and efficiency" (Schneider & Goldfrank, 2002, p. iii) are made for its transformative impact. This means PB is coming under increased scrutiny and debate at the same time that it is becoming accepted as a standard global tool of poverty reduction (Goldfrank, 2007, 2012, 2014).

The city-wide Porto Alegre model of participatory budgeting has been studied and copied around the world. The scaled-up model, commonly known as the Sistema, which is being used at state level in Rio Grande do Sul, has won awards.58 It has been featured in The Economist, and it is even guiding the Brazilian government's development of a national system for participation. The state government is also aiming to share its successful experiences across Brazil and worldwide (Goldfrank, 2014; ODTA, 2014; Spada et al., 2016).

Given this context, it seems likely that the Rio Grande do Sul Sistema may become a global model of best practice for people considering implementing participatory budgeting at scales above the city level. This makes it timely and important to understand not just the successes of the program, but also the areas where it could be improved, the elements that are specific to the local context, and, most importantly for this evaluation report, the way in which technology has been incorporated and how this may have affected or been affected by the program's wider goals. Goldfrank states that "if successful, the Sistema will likely become an exemplar . . . if unsuccessful, critics will claim it doesn't work except at a local level" (Goldfrank, 2014, p. 2).

To that end, this evaluation is based on three surveys undertaken from Porto Alegre in May 2014 while the voting stage of the 2014 participatory budgeting process was taking place. The surveys allowed similar questions to be posed to online voters, in-person voters, and non-voters through an online survey offered to all online voters, a face-to-face exit poll conducted at a sample of fifty polling stations around the metropolitan Porto Alegre region, and a mobile phone survey (using random digit dialing) conducted over the three days immediately after the close of the voting.
The results from these surveys are supplemented by a literature review, selective semi-structured interviews with key individuals in Porto Alegre, online surveys of a selection of the staff involved in delivering the in-person voting, surveys of the enumerators conducting the face-to-face surveys, and observation by those enumerators.

Following the work initiated by the World Bank's Digital Engagement Evaluation Team in 2012 (see Spada et al., 2016), the overarching objective of this evaluation was to explore how remote, ICT-mediated voting in the participatory budgeting process in Rio Grande do Sul has impacted participation rates (turnout), inclusiveness, and the way in which online and in-person voters engage throughout the process.

Within this macro objective, specific evaluation questions have been designed with a focus on the impact of technology on the process and on the wider context into which this technology fits. The primary evaluation questions the report explores are:

• Does online voting affect the level of turnout?
• Do online and in-person voters have different demographics?
• Do online and in-person voters engage in different ways?
• Do online and in-person voters vote differently and does this affect spending?

A number of secondary questions also emerged while scoping the evaluation through the perspectives suggested by the five lenses for evaluating DCE. These primary and secondary evaluation questions are mapped in Table 1 below against both the five lenses and the three stages of the participatory budgeting process.

Table 1. Evaluation questions mapped against lenses and PB stage (stages: deliberative stages; the vote; control of budget/process):

Logic: Are the goals clear and appropriate?
Control: Who controls the PB process and total budget?
Participation: Do online/in-person voters engage in different ways? (deliberative stages); Does online voting affect the level of turnout? Do online and in-person voters have different demographics? Do online/in-person voters engage in different ways? (the vote)
Technology: Are the online/offline processes open to manipulation or undue influence? (the vote); What transparency and oversight exists? (control of budget/process)
Difference: Do online and in-person voters vote differently, and does this affect spending? (the vote)

It is hoped that the findings in this report will help researchers to explore the Sistema more deeply, and help practitioners to adapt the process to their local context and to implement PB in such a way as to avoid some of the pitfalls that are, perhaps, present in the current model in Rio Grande do Sul. Finally, as suggested by the Participatory Budgeting Unit in the United Kingdom (PB-Unit, 2009), rather than stating whether including technology in the process is "good" or "bad," this report aims to analyze its potential benefits and risks in order to help others when integrating technology into participatory budgeting processes.

ABOUT PARTICIPATORY BUDGETING

Explanations of participatory budgeting (PB) vary, but widely accepted sources describe a regular and repeated process by which citizens can make binding decisions over part of a governmental budget, typically through an annual cycle of decentralized public meetings where citizens and government debate, deliberate, and vote on projects and priorities to be included in the upcoming year's budget and subsequently implemented (Goldfrank, 2007; Baiocchi & Ganuza, 2014; Pateman, 2012).
PB is generally held to have started in Porto Alegre in 1989.59 It then spread to over 250 cities in Brazil (with some implementations scaled up to state level) through the 1990s. Its popularity has continued: forms of PB are being implemented globally, including in most of Latin America (over 250 examples), in other countries in the Global South, across Europe (over 300 examples), and in the United States. In total, over 2,700 cities worldwide are implementing PB of one form or another (Serageldin et al., 2005; Goldfrank, 2007; Baiocchi & Ganuza, 2014; Pateman, 2012).

Initially a radical and leftist project led by the Workers' Party in Brazil, PB has evolved into a mainstream policy adopted by governments across the political spectrum, with advocates including both the World Bank and Hugo Chavez (Baiocchi & Ganuza, 2014).

The ICT context: online technologies and the Sistema

Information and communication technologies (ICTs) have played a role in democratic processes since at least the early 1990s, and experiments in the inclusion of ICTs within participatory budgeting have been taking place in Brazil since the late 1990s. Since 2006, some Brazilian cities have experimented with more complex uses of ICT, with online or hybrid offline/online PB processes being used in Belo Horizonte and Recife. This "e-participatory budgeting" has since been adopted by other cities around the world (Spada et al., 2016).

In Rio Grande do Sul itself, ICTs have played a role in PB since its introduction in 1999. Originally, there was an ICT application built to support the municipal, thematic, and regional assemblies that also aided advisors in preparing and monitoring the budgets and investment plans. These tools were adapted in 2003 to match the new Consulta Popular and, for the first time, allowed voting through the Internet. Following the most recent change to the Sistema, ICTs now offer a more limited system and are used primarily to manage the ballot boxes or bags ("urnas"), to manage online and mobile voting,60 and to aid in the consolidation of results (Procergs, 2013).

The ICT systems are developed and managed by the state's digital office and PROCERGS,61 which, between them, also operate other digital participatory systems in the state. These include four major systems: the Governador Pergunta ("Governor Asks"), wherein the government asks citizens for feedback on specific topics; the Governador Responde ("Governor Answers"), wherein citizens can send questions to the Governor; the Governo Escuta ("Government Listens"), wherein public hearings are transmitted online; and De Olho nas Obras ("Eye on the Works"), which allows the public to monitor the progress of public works and projects.62

The current open-source software supporting the Sistema is hosted across multiple servers on a secure infrastructure and consists of four modules: management, voting, counting, and results. The management module supports the management of the urnas for the in-person voting, the voting lists, and other documentation. The voting module is the online and mobile voting platform itself. The counting module consolidates the in-person and online results. Finally, the results module accommodates the adjustments required by law and generates the results for incorporation into the budget.
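As a concrete illustration of the counting module's core task, consolidating per-demand tallies from the urnas and the online platform, consider the following sketch. The data shapes and function are hypothetical; this is not PROCERGS code.

```python
from collections import Counter

def consolidate_results(in_person: Counter, online: Counter) -> Counter:
    """Merge per-demand vote tallies from the two voting channels."""
    return in_person + online

# Hypothetical tallies keyed by demand ID for a single COREDES region.
in_person = Counter({"demand-01": 5200, "demand-02": 3100, "demand-03": 900})
online = Counter({"demand-01": 1400, "demand-02": 2600, "demand-04": 350})

totals = consolidate_results(in_person, online)
print(totals.most_common())  # ranked demands, as handed to the results module
```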
As well as a front end allowing real-time monitoring of the voting, the system also integrates with the "Citizen Login" system, providing single sign-on access to the state's digital services (Procergs, 2013).

PARTICIPATORY BUDGETING IN RIO GRANDE DO SUL

Rio Grande do Sul (RS) is a state in the south of Brazil that covers over 100,000 square kilometers. It has a population of over eleven million; its capital city is Porto Alegre. Participatory budgeting at the state level was first introduced into RS in 1999 by one of the key figures in its introduction in Porto Alegre a decade earlier: Governor Olivio Dutra of the Workers' Party. The city-level model used in Porto Alegre was modified to suit the much larger area and number of participants in the state-level RS process, and to include the regional development councils. When the Workers' Party lost power in 2003, this process was replaced with the Consulta Popular, and when it regained power in 2011, another new form of PB, known as the Sistema Estadual de Participação Popular e Cidadã,63 was introduced (Spada et al., 2016; Goldfrank, 2007; Schneider & Goldfrank, 2002).

Designed explicitly to include all prior forms of PB, the Sistema Estadual de Participação Popular e Cidadã goes beyond budgeting to include a wider set of participatory systems that integrate with multi-year participatory planning at the state level (PPA 2012–15), to reinforce civil society oversight of the execution of the budget, to have a more complex ballot, and, critically, to make voting a separate stage of the process that is open to the entire population, not just those who attend the public assemblies (Spada et al., 2016; Goldfrank, 2012, 2014). This voting stage is specifically referred to as the Votação de Prioridades do Orçamento 2015,64 or more commonly simply the Votação de Prioridades.

This Sistema Estadual de Participação Popular e Cidadã, commonly referred to simply as the Sistema, is the process considered throughout this report. The model in RS combines deliberative and representative methods to enable citizens to decide which investments should be funded, allocating a portion of the state budget according to a popular vote. The Sistema is managed by a combination of the state government department for Planning, Management, and Citizen Participation (SEPLAG) and the regional development councils, known as the COREDES.65 The budgetary cycle (ciclo orçamentário) takes place annually and consists of six stages:66 budget allocation, regional hearings, municipal assemblies, regional fora, voting, and defining the budget.

Budget allocation: technically outside the participatory process, but a necessary stage before the start of the annual cycle. A portion of the state budget is agreed for the participatory voting across the state (in 2014 this was R$192 million, roughly 0.3 percent of the total planned spend for 2015). This budget is then allocated across the COREDES regions according to the population and the number of municipalities in each COREDES, and against an indicator of social development that takes into account health, education, sanitation, etc. (the IDESE figure).

Regional hearings [March/April]: the annual cycle begins with regional hearings in each COREDES region.
Although mostly attended by government, COREDES, and invited civil society groups, these are public hearings that discuss the previous year's budget and investment progress. They allow the COREDES to present their annual vision for development in their region. The hearings vote for ten thematic areas67 for their region (proposals for investment in the following stages of the process can only be within these thematic areas), and they elect a regional coordinating group comprising three representatives each from government, the COREDES, and the public/civil society.

Municipal assemblies [May/June]: public assemblies are held at the municipal level68 throughout the state. These are public debates where proposals for investment are brought and deliberated upon, resulting in up to ten proposals with specific allocated values or costs to be taken to the regional fora. There are also elections for delegates to attend the regional fora, with the number of delegates proportional to the number of citizens attending the assembly.

Regional fora [June]: one forum is held per COREDES region, attended by the municipal delegates as well as COREDES representatives and regional directors of the multi-year investment plan (PPA). These attendees vote for between ten and twenty of the demands brought from the municipalities within the region, and they establish up to five regional priorities (from the ten thematic areas chosen at the regional hearings). All demands undergo a feasibility analysis by a SEPLAG technical committee, and non-feasible demands are excluded. The final set of demands and priorities makes up the ballot for the voting stage.69 The regional fora also elect two delegates to attend the State Forum.

Voting (Votação de Prioridades) [July]: voting on the specific investment demands and the thematic areas is open to the entire population of the state. Voting takes place on one day in person and then over three days online. Voters can choose up to four of the demands and two of the regional priority areas.70

Defining the budget [July]: in the final stage, the State Forum, comprising the government and the delegates elected in the regional fora, uses the results of the vote to define the budget for the coming year. The first-choice demand in the voting for each region will always receive funding. If there is remaining budget to cover additional demands, they will also receive funding in the order of voting preference, until the budget limit is reached. The State Forum also works with each COREDES to monitor the implementation of the projects.

As the voting stage is the primary focus of this report, its details are discussed in depth here. This information is from the official documents governing the process (although observation while in Brazil suggests that some of these rules are not implemented). First, face-to-face polling locations for the state should be provided and disseminated ten days before the vote, ensuring citizens know where to go.71 Voters may vote only once, under penalty of law. In-person voters must sign the attendance list and show proof of ID before they may vote. After voting closes, each urna72 should be sealed and a declaration signed showing the hours the polling station was open and the total number of voters. The declaration should be accompanied by the attendance list.
If there is more than a 2 percent discrepancy between the number of voters on the attendance list and the number of ballots in the urna, the results of the urna must not be included in the count of votes. Municipal coordinators are to collect the urnas and count the ballots in pre-designated and publicized locations no more than twenty-four hours after the close of the polls, with minutes taken that record the count and detail any issues or discrepancies.

The online voting platform can be accessed from PCs or mobile phones and is available twenty-four hours a day throughout the three-day voting period. On polling day, computers with Internet access should be made available in public spaces to allow citizens to use the online voting if they wish.

For clarity, throughout this chapter the Sistema process will be simplified into three core stages. First, the deliberative stages, which include the regional hearings, municipal assemblies, and regional fora preceding the vote. Second, the voting itself. And third, the process of controlling the budget, which includes what happens after the vote, as well as the initial setting of the regional budgets before the process begins.

GOALS OF PARTICIPATORY BUDGETING

Although PB started as a radical project of the political left, explicitly concerned with issues of power, exclusion, and the marginalization of poor communities (and with redistribution), it has since been embraced by governments across the political spectrum and by the mainstream development community. It is now often discussed in the context of democracy, governance, and efficiency, with goals commonly stated as including the improvement of government performance, the enhancement of the quality of democratic participation, and the democratization and increased transparency of public spending.

The different sets of goals attributable to participatory budgeting have been adopted by those across the ideological spectrum. The World Bank itself apparently embraces goals ranging across these different interpretations, including ". . . educating, engaging and empowering citizens"; ". . . giving marginalised and excluded groups the opportunity to have their voices heard and influence public decision-making"; ". . . increasing the voice of ordinary citizens and the most vulnerable groups"; ". . . re-direct public investments towards basic service in poor neighbourhoods"; ". . . empower vulnerable groups to increase their voice in budget decisions"; ". . . inclusion of economically and politically weak sectors"; and ". . . the ultimate desired goal is the reduction of poverty" (Goldfrank, 2012, pp. 5–6), as well as less radical goals such as the localizing of responsibilities to the community level, the devolution of authority, and helping to weed out government expenditure and program inefficiencies. Participatory budgeting can thus be seen as both a democratic innovation and a vehicle for empowerment, social justice, and redistribution.
Perhaps the most useful analysis combining these differing viewpoints is that of Wampler, who suggests that the interaction of four principles should be central to any analysis of PB:

• Voice: active citizen participation
• Vote: increased citizen authority
• Social justice: reallocation of resources
• Oversight: increased transparency

Wampler goes on to explain how these principles present themselves in participatory budgeting processes (Wampler, 2012); see Table 2. These principles can be a useful tool in evaluating whether a specific instance of PB has been designed and implemented to produce social change, or is instead being used as a technical tool for the efficient delivery of government services and improvements within the status quo (Wampler, 2012; Goldfrank, 2012).

In the case of Rio Grande do Sul, the goals of the current system are unclear. As the Sistema was introduced under the Workers' Party by Tarso Genro, who had been involved with participatory budgeting since its earliest days in Porto Alegre, it seems likely that it is continuing in the more radical footsteps of its beginnings. This view is certainly shared by some of those interviewed, who refer to the history of participatory budgeting in the state as ". . . a tool to build citizenship"; ". . . how to speak in public"; ". . . learning to organize a group, to mobilize people to vote" (Tarson Nunez, 29/5/2014); and ". . . to help poor people who don't have access to their governments"; ". . . to open the state to these poor people"; and "to change the priorities to include basic services" (Adalmir Marquetti, 30/5/2014). The state government of RS has also made statements referring to the goals of the Sistema in terms of democratic goals, including ". . . diminishing the distance between citizens and institutions, deepening citizen participation, consolidating democratization" (Goldfrank, 2014, p. 10), and it has made explicit its intention to integrate participatory, deliberative, and representative institutions (Spada et al., 2016).

It is not clear whether the newest model of participatory budgeting in RS explicitly includes the earlier and wider goals of participatory budgeting in addition to its goals around democratic innovation. The goals behind the introduction of online voting are also unclear. Two goals that are clearly stated are "massificação" (massification), or reaching a wider audience, and to "facilitate the process" in the form of immediate results such as cost effectiveness (Paulo Coelho, 21/8/2014). Whether there are any wider ambitions for the use of technology, or whether it is seen as simply an additional voting channel, is not clear.

Although the goals of the Sistema and its use of technology are not clearly stated, this evaluation takes the approach that the Sistema should be evaluated in the context of both the limited information on its explicitly stated goals and the widely accepted goals of PB as both a tool for democratic inclusiveness and as a tool for empowerment and social justice.73

Table 2. Participatory budgeting principles (Wampler, 2012):

Voice: Public and deliberative meetings introduce new voices into the political arena, providing access to citizens who have traditionally been unable to access political power.
Vote: Moving beyond consultation, citizens become "state-sanctioned decision makers" empowered to make binding decisions over state resources.
for the purposes of this chapter, the goals of PB in Rio Grande do Sul are interpreted as these contextualized variations on Wampler’s four principles:

• Engagement: increasing citizen participation
• Inclusion: of excluded/marginalized groups
• Redistribution: effective and efficient allocation of resources toward poor areas/interests
• Oversight: a transparent and accountable process

METHODOLOGY

The evaluation was conducted using a mixed-methods approach, combining online and face-to-face surveys, semi-structured interviews, direct observation of the voting process, and analysis of existing data, government documents, and literature. The key data collected are summarized below. The analysis of the data sources is mainly straightforward reporting of results and comparisons; in certain cases, more detailed analysis has been undertaken, and details on sample sizes and confidence intervals are included.

The variety of data sources used captures the views and behavior of online voters, in-person voters, and non-voters. Alongside the quantitative methods, qualitative interviews make it possible to further test and interpret findings. Finally, the use of multiple data sources for each evaluation question mitigates the gaps and limitations of any individual method. Tables 3 and 4 go into more depth on the data collection tools and on how the resulting data have been used to answer the evaluation questions.

Table 3.  Overview of data collection tools.

Online voter survey
  Description: After voting, all online voters received a prompt to complete a survey for the World Bank. The prompt was integrated into the voting platform by PROCERGS.
  Target: All online voters.
  Sampling: Shown to 100% of the online voting population.
  Comments: 22 questions on demographics, political activity, Internet use, and questions related to the vote and the participatory budgeting process itself.

In-person ‘exit poll’ survey75
  Description: 50 enumerators conducted paper-based surveys at polling locations in 18 municipalities. Undertaken by local partner NRM Estatistica.
  Target: A representative sample of in-person voters.
  Sampling: For logistical reasons, survey in the greater Porto Alegre area only.
  Comments: Included a reduced set of 12 questions from the online survey, plus some demographic fields to be completed by the enumerators.

IVR survey76
  Description: Random-dial numbers around the state to reach non-voters, conducted over the three days following the close of the voting. Conducted by global IVR company VOTO Mobile.
  Target: A representative sample of the general population of the state.
  Sampling: Random, but limited to people with mobile telephones / fixed lines.
  Comments: Included 18 questions similar to those in the other surveys (although the logic and drop-offs mean most respondents answered 5–10 questions).

Voting observation
  Description: A team of three spent a full day observing the in-person voting.
  Target: Ten polling locations in three municipalities in Porto Alegre.
  Sampling: Central city and peri-urban locations sampled.
  Comments: No rural locations, or anywhere outside the Porto Alegre region, were observed.
Interviews
  Description: Face-to-face interviews by the evaluation team.
  Target: Key stakeholders and local subject matter experts.
  Sampling: Convenience sampling.

Existing datasets
  Description: Wide-ranging data on the Sistema and other Rio Grande do Sul data.
  Comments: 2010 census data, previous years’ Sistema and survey data, budget information, and IDESE indicators.

Stakeholder questionnaire
  Description: Follow-up online questions sent to people involved in delivery of the voting.
  Target: Stakeholders involved in the delivery of the Sistema, including government, COREDES, CARs, and civil society.
  Sampling: Convenience sampling based on availability of email addresses.
  Comments: To seek wider stakeholder views on a range of items from the interviews and observations.

Enumerator questionnaire
  Description: Follow-up questions to the exit poll enumerators.
  Target: All enumerators.
  Comments: This allowed for wider corroboration of evaluator observations.

Table 4.  Mapping of data collection to evaluation questions.

Does online voting affect the level of turnout?
  Data: Online voting data; in-person voting data; IVR data; historic voting data; census data.
  Observations / methods: Historic comparison data difficult to find.

Do online and in-person voters have different demographics?
  Data: Online survey; face-to-face survey; IVR data; census data.
  Observations / methods: Face-to-face data is Porto Alegre only, making some comparisons difficult.

Do online/in-person voters engage in different ways?
  Data: Online survey; IVR data; face-to-face survey; interviews; literature review.
  Observations / methods: Limited data available on earlier stages. Face-to-face data are Porto Alegre only, making some comparisons difficult.

Do online and in-person voters vote differently and does this affect spending?
  Data: Online voting results; in-person voting results; literature review.
Are the PB goals clear and appropriate?
  Data: Interviews; documentation; follow-up surveys.
  Observations / methods: Difficult to get clear responses from key stakeholders.

Are the online/offline processes open to manipulation or undue influence?
  Data: Observations; interviews; follow-up questionnaires.
  Observations / methods: Sample size of observations very small, so conclusions not reliable.

What transparency exists over online and offline results?
  Data: Interviews; documentation; follow-up surveys.

Who controls the process and total budget?
  Data: Interviews; literature review.
  Observations / methods: Difficult to get clear responses from key stakeholders.

Table 5.  Completion rates for surveys.

Online survey: 33,758 responses; 219,771 refusals (86.7%);77 29,453 incompletes (87.2%)
Face-to-face ‘exit poll’: 1,923 responses; 91 refusals (4.5%); 166 incompletes (8.6%)
IVR survey: 2,173 responses (1,373 non-voters); 38,016 refusals (94.6%);78 1,247 incompletes (57.3%)
Enumerator questionnaire: 16 responses
Stakeholder questionnaire: 35 responses

ABOUT THE DATA

Table 5 summarizes the key data collected. The other significant source is the voting data provided by the state of Rio Grande do Sul. This includes anonymized records of each online vote cast (timestamp, IP address, municipality, vote choice, etc.) and offline data aggregated at the municipal level (n=497). The online data has been aggregated to the municipal level to allow for comparisons between the two datasets.

It is important to note that while the online and IVR surveys were conducted across the entire state, the face-to-face survey was conducted in the greater Porto Alegre region only. To address this, wherever online and offline results are compared in this chapter, the online results have been limited to those from the greater Porto Alegre region (although the Porto Alegre results and the state-wide results were broadly similar on all questions).

Sampling strategies

The online survey was shown to 100 percent of voters, so there is no explicit sampling beyond potential self-selection bias.

For the face-to-face survey, a sampling strategy was chosen that sought to give each voter in the greater Porto Alegre area an equal probability of being surveyed. We randomly sampled (with replacement) the census enumeration areas to which to send fifty enumerators, with the probability of a particular enumeration area being selected proportional to its population. Based on this random assignment, enumerators visited polling stations in the eighteen municipalities listed in Table 6, where the second column shows the total number of enumerators assigned to each municipality.

Table 6.  Enumerators sent to each surveyed municipality.

Porto Alegre 13
Novo Hamburgo 6
Canoas 5
Eldorado do Sul 3
Estância Velha 3
Nova Hartz 3
Santo Antônio da Patrulha 3
Gravataí 2
Montenegro 2
São Leopoldo 2
Alvorada 1
Arroio dos Ratos 1
Glorinha 1
Guaíba 1
Parobé 1
Sapucaia do Sul 1
Taquara 1
Viamão 1

The IVR survey was conducted using random-digit dialing, which randomly constructed RS-area phone numbers and called them. This gives each RS-area number an equal probability of being sampled. While this should theoretically lead to a representative sample of RS citizens, respondents without phones will not be represented, and those with more than one number will be oversampled. Additionally, there is likely to be bias introduced by the high refusal rate on the IVR calls.
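As an illustration of the face-to-face design, a minimal sketch of the sampling draw, using hypothetical area data rather than the actual census frame, might look as follows:

```python
# Minimal sketch of probability-proportional-to-size sampling with
# replacement, as described above. The area data here are hypothetical.
import random

enumeration_areas = {  # area id -> population
    "area_001": 12_400,
    "area_002": 3_100,
    "area_003": 8_750,
}

def assign_enumerators(areas: dict, n_enumerators: int) -> list:
    """Draw one area per enumerator, with replacement, so that each
    area's chance of selection is proportional to its population."""
    ids = list(areas)
    weights = [areas[a] for a in ids]
    return random.choices(ids, weights=weights, k=n_enumerators)

# Fifty draws, mirroring the fifty enumerators in the evaluation.
print(assign_enumerators(enumeration_areas, 50))
```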
Representativeness of the data

The face-to-face survey poses few issues in terms of representativeness, as probability sampling was employed and the non-response and drop-out rates were very low. It is not possible to conclude reliably whether the online survey generated a sample that is representative of the online voters, as demographic details of the voting population are not captured. However, the self-selection of respondents, combined with a high level of non-response and incomplete answers, could raise questions about the representativeness of the online sample. Analysis of the IVR data indicates that the sample is not representative of the general population, being biased toward people who are younger and who have more education.

The maximum margins of error (95 percent) are ±0.5 percent for the online survey, ±2.2 percent for the face-to-face survey, and ±2.6 percent for the IVR survey. When generalizing percentages from the sample to the population (online, offline, and non-voters), the margins of error should be subtracted from and added to the figures to find the confidence interval for the population.

Data quality

For the three methods—online survey, face-to-face survey, and IVR—the quality of data is given by the percentage of refusals and the percentage of incomplete questionnaires, including blank and non-valid answers. The data quality of the IVR sample was assessed through inspection of the sampling bias, i.e., the difference between the sample statistics and the population parameters on key socio-demographic variables (gender, age, education, and income). IVR was more prone to error when extrapolating results from the sample to the respective population (the general population), mainly due to the extremely high non-response rate (94.6 percent).78 The quality of the sample from the online survey is also affected by a comparatively high non-response rate (86.7 percent) and the highest level of drop-outs/incompletes (87.2 percent).79 The face-to-face survey produced robust results, allowing generalizations to the population of in-person voters (but only for Porto Alegre).

Statistical analysis and methods

The statistical analyses consisted of frequencies/percentages, two-way tables, and line/bar graphs to summarize the results of the samples. Confidence intervals were used to refer to the population figures (at a 95 percent level). Associations between variables were tested using the chi-square test (for categorical variables) and Pearson correlations (for scalar variables). For testing the effects of particular socio-demographic variables on online/in-person voting, controlling for other variables, we used binary logistic regression. For statistical hypothesis testing, we assumed significance if p<.05.
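As a cross-check on the margins of error reported above, the following minimal sketch (not the evaluation's own code) reproduces the three maxima, assuming the standard worst-case formula for a proportion (p = 0.5) at the 95 percent level, and assuming the IVR figure is based on its 1,373 non-voter respondents:

```python
# Worst-case margin of error for an estimated proportion, as used to
# report the +/-0.5, +/-2.2 and +/-2.6 percent figures above.
import math

Z_95 = 1.96  # two-sided critical value at the 95 percent level

def max_margin_of_error(n: int) -> float:
    """Margin of error at p = 0.5 (the worst case) for sample size n."""
    return Z_95 * math.sqrt(0.25 / n)

for name, n in [("online survey", 33_758),
                ("face-to-face exit poll", 1_923),
                ("IVR survey (non-voters)", 1_373)]:
    # Prints approximately 0.5, 2.2 and 2.6 percentage points.
    print(f"{name}: +/-{100 * max_margin_of_error(n):.1f} points")
```

A 95 percent confidence interval for any reported percentage is then that percentage plus or minus the relevant margin.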
Data Summary

In sum, the range of data collection methods, the sampling strategies, and the number of respondents involved, coupled with the triangulation of different sources for each evaluation question (see Tables 3 and 4), allow us to draw reliable conclusions about the evaluation questions set out above.

FINDINGS AND ANALYSIS

This section presents key findings from the surveys undertaken and from data gathered from state government documents and the 2010 census. In places, these findings are supplemented with results from selected interviews with key stakeholders.

Firstly, the primary evaluation questions set out in Section 3.1 are considered:

• Does online voting affect the level of turnout?
• Do online and in-person voters have different demographics?
• Do online and in-person voters engage in different ways?
• Do online and in-person voters vote differently and does this affect spending?

These findings focus primarily on the voting stage of the PB process and the impact of online voting on it. The section then goes on to consider the secondary evaluation questions:

• Are the PB goals clear and appropriate?
• Are the online/offline processes open to manipulation or undue influence?
• What transparency and oversight exists?
• Who controls the PB process and total budget?

Figure 1.  Voter turnout online and offline, 2005–14 (line graph of votes per year by mode: Internet and in-person).

Figure 2.  2014: reasons for not voting (%) (bar chart; categories: did not know about it, did not have time, did not know how to vote, not important to me, other reason).

DOES ONLINE VOTING AFFECT THE LEVEL OF TURNOUT?

Using data from online and in-person voting, the IVR survey, historic voting, and the 2010 census, this section considers the extent to which overall turnout in the participatory budgeting vote is affected by the introduction of online voting.

Table 7.  2014 voter turnout and voting method in RS.

                        Nº of citizens   % voting-age population   % of voters
Population              11,164,043
Voting-age population   8,645,435
Online voters           255,751          2.9                       19.3
In-person voters        1,059,842        12.3                      80.7
Total voters            1,315,593        15.2                      100

Despite slight dips in 2007 and 2008, the number of people voting either online or in person has risen significantly, from 674,075 in 2005 to 1,315,593 in 2014.

In 2014, 15.2 percent (1,315,593 people) of the total voting-age population voted and 84.8 percent did not vote. Of the 1,373 non-voters who responded to the IVR survey, 77.6 percent reported that their reason for not voting was that they simply did not know about the vote.

Online vs. in-person voting

Since the introduction of the new Sistema model of participatory budgeting in 2011, the online vote has increased steadily:

• 10.2 percent of the vote (114,571 voters) in 2011
• 12 percent of the vote (124,211 voters) in 2012
• 19.3 percent of the vote (255,751 voters) in 2014

Figure 3.  2014 online and in-person voter turnout by COREDES82 (bar chart of Internet and in-person (Presencial) votes for each of the twenty-eight COREDES).
Figure 4.  2014 voter turnout by municipality (density plots of total turnout, proportion of voters online, turnout online, and turnout offline, by municipality).

Figure 5.  Voting channel used in previous year (%) (bar chart comparing offline (POA) and online (POA) voters across in person, online, and did not vote).

The split between online and in-person voting varies widely across the state: some of the twenty-eight COREDES had just over 10 percent of their turnout voting online, while in others over 50 percent voted online. When this is broken down further into the 497 individual municipalities that make up the 28 COREDES, the differences become even more marked, as shown in the Figure 4 histograms. The online vote percentage ranges from 0.51 percent in Parobe and 0.64 percent in Butia, to 99.64 percent in Centenario and 98.71 percent in Planalto. In the majority of municipalities, the online voting percentage is under 25 percent. It is also interesting to note that there appears to be a significant channel shift toward online voting, with 18 percent of online voters reporting that they voted in person the previous year.

Of the online voters, 39.9 percent stated they might have voted offline, while 63.1 percent said they would probably not have voted if online voting were not available. This 63.1 percent group (around 160,000 voters) will be referred to as “online only” voters in the rest of the analysis.

Discussion

Three main themes arise from these findings on the impact of online voting on turnout:

• Non-voters and awareness of the vote
• Geographic variations in turnout and in the portion of voting that took place online
• The increase in turnout and how much of it is attributable to online voting

Non-voters and awareness of the vote

The IVR survey found a clear indication that the majority of people who did not vote (77.6 percent of respondents) were not aware of the vote at all (see Figure 2).
If this low level of awareness is representative of the level of awareness in the general population,83 it would mean that only approximately 22 percent of the voting-age population (around 1,900,000 people) were aware of the vote in 2014. Given that 1,315,593 people voted, this would mean that over 60 percent of the 1,900,000 people who were aware of the vote actually voted. While these estimates are not reliable, they are indicative and point toward the potential increase in turnout if the process were better promoted.

While awareness does not directly translate into voting, lack of awareness is a significant factor limiting both online and in-person voter turnout. This has also been found by others (Sampaio & Peixoto, 2014), was commented upon in interviews (Ricardo Almeida, 29/5/2014, and Tarson Nunez, 29/5/2014), and was corroborated by the enumerator survey, which found that most in-person voters appeared to be unaware of the vote before they were approached.

There was also some evidence that in-person voting turnout was affected by the organization of the polling locations. The in-person vote is organized in a decentralized manner through the COREDES and their local administrative bodies, the Centros Administrativos Regionais (CARs). Although the rules state that polling locations across the state are to be agreed upon and publicized well in advance of the vote, the evaluation team and enumerators found that this varied significantly. In some COREDES a list of polling locations was provided a few days before the vote. However, in most of the municipalities observed, the information available in advance was vague (“most schools and health centres”) and often turned out to be inaccurate. The number of polling stations available also appeared to vary based on local staffing availability as much as on population size. Given this variation, it seems likely that the availability of in-person voting also had some impact on the variations in turnout noted above.

This suggests that increasing and improving the promotion of, and information about, the vote could be expected to create a significant increase in both online and offline turnout.

Geographic variations

The variation in online turnout between municipalities, from 0.51 percent to 99.64 percent as shown in Figure 4, is even larger than the 0.5 percent to 47 percent range found by Goldfrank in the 2011 process (Goldfrank, 2014).

Again, much of this variation may be due to differences in promotion and citizen awareness across regions and municipalities. For example, when the 2014 range presented in Figure 4 was discussed with Paulo Coelho of SEPLAG, he suggested that in Parobe (0.51 percent online voting) the local leadership is highly active in stimulating turnout for in-person voting, while in the COREDES of Medio Alto Uruguai, in which Planalto (98.71 percent online voting) sits, there has been a gradual process over the last three years of replacing in-person with online voting. This is corroborated by comparing the recent change in online voting in Medio Alto Uruguai to a COREDES with no such plans in place (Figure 6).

In Porto Alegre, where the evaluation team actively looked for evidence of promotion, there was a Facebook campaign, Twitter activity, and a small number of posters on buses, but overall little promotion or communication was observed before or during the 2014 vote.
The low level of promotion and communication was corroborated by anecdotal comments from the Gabinete Digital and from interviewees.

Figure 6.  Online/offline turnout across two COREDES, 2005–14 (line graphs: in Médio Alto Uruguai the online vote grew to 28,750 while the in-person vote fell from 47,772 to 15,228; in Metropolitano do Delta do Jacuí the in-person vote grew from 25,712 to 142,238, with 17,319 voting online in 2014).

So, while local and regional plans are clearly influencing the levels of online and offline turnout, without a much more thorough examination of these activities it is difficult to draw conclusions about the regional variations in voting-channel preference.

Increased turnout

In 2014, 19.3 percent of total voters voted online (255,751 people, or 2.9 percent of the voting-age population of the state). Not all of these can be assumed to be part of an increased turnout; a portion would have voted in person if online voting were not available.

In the 2012 vote, the World Bank found an 8.2 percent increase in turnout that could be directly attributed to online voting (Spada et al., 2016). Following the same calculations, this evaluation shows an even greater increase of 12.2 percent in 2014 directly attributable to online voting. This is based on the number of voters who specifically said they would not have voted if online voting were unavailable, as shown below.

It seems likely, however, that the real increase due to online voting is higher than this. In addition to those voters who reported a strong preference for voting online, there will have been other voters who voted online due to other factors and who would have been unable to vote were online voting unavailable. In particular, some voters will have taken advantage of the fact that online voting was available around the clock for three days, whereas the in-person vote was available on only one day, generally between 10 a.m. and 4 p.m., and in many of the observed locations an actual polling station proved difficult to find.84
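A plausible reconstruction of this calculation (an inference on our part, as the chapter does not spell it out) multiplies the share of online voters who said they would probably not have voted otherwise by the online share of the total vote:

\[
0.631 \times 19.3\% \approx 12.2\%
\]

Equivalently, roughly 0.631 × 255,751 ≈ 161,000 of the 1,315,593 people who voted in 2014 were “online only” voters.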
Summary

Overall turnout has increased since 2005, and the number of online voters has increased steadily and significantly since the introduction of the new Sistema model of participatory budgeting in 2011. This evaluation indicates that a 12.2 percent increase in voter turnout is directly attributable to online voting.

Despite this, the 15.2 percent voting-age turnout could reasonably be expected to increase much further, and the findings indicate that a lack of effective promotion and communication is a significant factor currently restricting this growth. With only around a quarter of the voting-age population aware of the vote, increasing and improving the promotion of information to citizens could be expected to create a significant increase in both online and offline turnout.

DO ONLINE AND IN-PERSON VOTERS HAVE DIFFERENT DEMOGRAPHICS?

This section considers to what extent gender, age, ethnicity, income, and education are associated with voting channel. It reports the findings from the online, face-to-face, and IVR surveys, along with comparisons to the 2010 census data. The face-to-face survey data are from Porto Alegre only, so the online survey findings are restricted to respondents in the greater Porto Alegre region to allow comparison between online and offline responses.

In order to understand which socio-demographic characteristics differentiate online from in-person voters, a binary logistic regression was run with:

• gender, age, education, and income as explanatory variables
• binary offline (0) vs. online (1) voting as the response variable

In Figures 7 to 11, the following categories are used:

• Online (POA): the results of the online survey, limited to respondents in the greater Porto Alegre area
• Offline (POA): the results of the face-to-face survey conducted across the greater Porto Alegre area
• Population (POA): IBGE 2010 survey results for the greater Porto Alegre area
• Online (RGS): the full results of the online survey, including respondents from the entire state of Rio Grande do Sul
• Population (RGS): IBGE 2010 survey results for the entire state of Rio Grande do Sul

Figure 7.  Gender of voters (bar chart of male/female shares for the five categories above).

There is not a large difference in the gender split across the different voting audiences. Women are over-represented among offline voters (58.9 percent), while they made up 52.4 percent of online voters in the Porto Alegre region (and almost exactly 50 percent across the state). These figures compare to a slightly higher proportion of women in the general population (53.6 percent in Porto Alegre and 51.3 percent across the state).

Figure 8.  Age of voters (bar chart of shares aged 16–39 and 40+ for the five categories above).

Online voters appear to be younger than in-person voters: 38.1 percent of online voters are aged 16–29, compared with only 24.2 percent of in-person voters, while only 19.4 percent of online voters are over 50, compared to 34.4 percent of in-person voters.

It is difficult to compare this to the population data, as the census categories appear to be limited to under-40 and 40-plus, but within these broad categories the age profile of online voters appears representative of the population as a whole: 62.8 percent of online voters are under forty across the state, compared to 60 percent in the general population, and 60.1 percent of online voters in Porto Alegre are under 40, compared to 58.9 percent of the city population.

The offline audience, however, shows a bias toward older age groups, with only 45.2 percent of in-person voters in Porto Alegre being under 40 and 54.8 percent being 40 or over.

Figure 9.  Ethnicity of voters85 (bar chart across the five categories above; response options: Amarela, Branca, Indigena, Negra, Parda, NA).

The offline survey reports a significantly higher share of black (13.8 percent) and mixed-race (9 percent) voters than the online survey (3 percent and 5.1 percent, respectively). However, as 21.4 percent of the online survey respondents did not answer the ethnicity question, no comparisons can be made to the general population.
The survey results and the census data do not differ significantly between Porto Alegre and the entire state of Rio Grande do Sul, with the exception of income: 24.2 percent of people report monthly earnings of over R$6,000 (~US$2,600) in Porto Alegre, compared to 17.7 percent in Rio Grande do Sul as a whole.

There are significantly more offline voters with no education (2.4 percent) or only basic education (28.9 percent) than there are among the online voters for the Porto Alegre region (where the percentages are 0.3 and 2.1, respectively). The figures are only slightly higher for the online audience across the whole state (0.5 percent and 3.9 percent). Similarly, college graduates (52.4 percent) and those with masters or doctoral degrees (9.2 percent) are better represented among Porto Alegre online voters than among those who voted offline (19.6 percent and 1.1 percent, respectively).

Figure 10.  Monthly income of voters (bar chart for online voters (POA), offline voters (POA), online voters (RGS), and non-voters; bands: < R$750, R$750–R$1,500, R$1,500–R$6,000, > R$6,000, DK/NA).

Figure 11.  Education of voters (bar chart for online (POA), offline (POA), online (RGS), and non-voters; levels: none, basic, high school, college/degree, masters/PhD).

Correlation of voting method and socio-economic indicator

Figures 12 and 13 map online voter turnout and the Índice de Desenvolvimento Socioeconômico (IDESE)86 by municipality. Municipalities with higher percentages of online voting tend to be clustered in the north of the state and around the capital, Porto Alegre. The municipalities near Porto Alegre show much higher IDESE indicators, while those in the north of the state are more varied but still appear higher than those in other regions.

Figure 12.  Map showing online turnout level for municipalities (shading from 0.51 percent to 99.64 percent online).

Figure 13.  Map showing the IDESE socio-economic indicator for municipalities (shading from 0.521 to 0.848).

Exploring this data further, the percentage of online voting (out of the total number of voters) and the IDESE indicator of the level of socio-economic development in each municipality are plotted in the Figure 14 scatter plot; the Pearson correlation between them is r(487)=0.15, p<.001. The correlation is positive, indicating that voters are more likely to vote online in municipalities with better socio-economic indicators. The association is weak, although statistically significant.87

Figure 14.  Municipality-level online voter turnout vs. socio-economic indicator (scatter plot of percentage voting online against the socio-economic indicator).

Discussion

These findings show a clear difference in demographics between online voters and in-person voters.

• Gender: women are over-represented in the in-person voting, with 58.9 percent of in-person voters being women, compared to 50 percent in the online vote and 53.6 percent in the general population.

• Age: using the population statistics for the wider RS region as a proxy for the state population,88 and adjusting to compare against the voting-age population only, online voters over-represent the age groups 16–29 (38.1 percent online against 31.3 percent offline) and 30–39 (24.6 percent online against 19.9 percent offline) and under-represent those 50 and over (19.4 percent online against 30.1 percent offline).
• Ethnicity: restricting the results to exclude those who did not answer, and comparing those who reported their ethnicity as white (76 percent of in-person voters and 89 percent of online voters) against the city-wide population (82 percent white), suggests that while the offline audience slightly over-represents non-white minorities, the online audience slightly over-represents white voters.89

• Income: the very poor are significantly under-represented among online voters: just 18 percent of online voters report earning less than R$1,500 (~US$660), compared to 38 percent of in-person voters.

• Education: the online vote virtually excludes those with the lowest education levels; only 4.3 percent of online voters had no or only basic education, compared to 30 percent of in-person voters.

Regression testing demonstrates that gender, age, education, and income significantly predict online vs. in-person voting. The results suggest that online voters are significantly more likely to be male (b=-0.44, p<.001), younger (b=-0.30, p<.001), more educated (b=1.34, p<.001), and on higher incomes (b=0.23, p<.001) than in-person voters.
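For readers unfamiliar with the method, here is a minimal sketch, fitted on synthetic data rather than the evaluation's dataset, of such a binary logistic regression using statsmodels:

```python
# Illustrative only: fits online (1) vs. in-person (0) voting on
# gender, age, education and income, using synthetic data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5_000
female = rng.integers(0, 2, n)      # 1 = female
age = rng.integers(16, 90, n)
education = rng.integers(0, 5, n)   # 0 = none ... 4 = masters/PhD
income = rng.integers(0, 4, n)      # ordinal income band

# Synthetic response loosely echoing the reported pattern: online
# voting more likely for younger, more educated, higher-income men.
logit = -0.4 * female - 0.03 * (age - 40) + 0.8 * education + 0.2 * income - 1.5
online = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([female, age, education, income]))
result = sm.Logit(online, X).fit(disp=False)
print(result.summary(xname=["const", "female", "age", "education", "income"]))
```

A negative coefficient on the gender variable here corresponds to the reported finding that online voters are more likely to be male.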
Summary

A clear difference between the online and in-person voters was found. The online voting population is younger, more educated, higher earning, and somewhat more male than its offline counterpart, and includes a lower proportion of non-white citizens, while the percentage of online voters is lower in municipalities with lower levels of socio-economic development.

DO ONLINE/IN-PERSON VOTERS ENGAGE IN DIFFERENT WAYS?

This section draws on data from the online, IVR, and face-to-face surveys—with the face-to-face survey data from Porto Alegre only—and on direct observations made by the enumerators. It considers what influences voters’ preferences for voting online or offline, their engagement in the deliberative stages of the PB process and in wider political activity, and the potential for increasing online voting.

The majority of those who voted online—67.7 percent—cited convenience as their principal reason. The reasons for voting offline were more varied, with 35.8 percent citing convenience as the principal reason they voted in person, 21.5 percent citing a lack of awareness of the online voting, and 36.6 percent citing simply a preference.90

Figure 15.  Reasons for choice of voting channel (%), online (POA) vs. offline (POA) (categories: access, awareness, convenience, preference).

Internet access does not appear to be a significant factor: only 6.1 percent of in-person voters cited lack of Internet access as the reason they voted offline, and 12 percent of online voters said it was not possible for them to reach a physical voting location. Looking at reported Internet usage, half of the non-voters surveyed use the Internet daily.

Figure 16.  Internet usage (never/rarely, monthly, weekly, daily) for offline (POA), online (POA), and non-voters.

An additional insight into the choice of voting channel comes from the direct observations of the enumerators. They noted that the staff managing the polling stations (including police, public service workers, and volunteers associated with the COREDES or local civil society groups) were often observed actively encouraging passers-by to vote. This involved literally stopping people who walked past, asking them if they were aware of the vote, explaining the Sistema and the voting process to them, and then convincing them to take part. While at least one polling station did have a queue of people who appeared to be proactively seeking out a place to vote, this appeared to be the exception, not the norm. This perception was shared by most of the enumerators, twelve out of sixteen of whom reported that most or all of the voters they saw were unaware of the vote prior to being approached by the staff running the polling station.

While this is similar to many referenda and other forms of public engagement, it is markedly different from the online voting, where, although there were active Facebook and Twitter campaigns, the same level of opportunistic engagement with the public is not possible. This suggests that the two populations (online and offline voters) may be distinct groups, and comparisons between them should be treated tentatively.

Figure 17.  Other political activities engaged in by voters (%), for offline (POA), online (POA), and “online only” voters (categories: community meetings, state online services, demonstrations, political party meetings, none).

Moving now to the evidence on voters’ wider political activity: online voters appear to be more engaged in every political activity asked about, with only 46.8 percent of online voters (compared to 53.6 percent of in-person voters) not engaging in any political activities whatsoever. A particularly marked difference is evident for demonstrations, with 26.3 percent of online voters reporting having participated in some form of protest or manifestação, compared to just 9.7 percent of in-person voters.

Discussion

People chose to vote online mainly on the basis of convenience, and chose to vote offline mainly due to preference, convenience, and a lack of awareness of the online option. Ability to access the Internet was not found to be a significant influence.

There is some suggestion that voting online is a less engaged activity than voting offline: that online voters are somehow more opportunistic, less likely to have been involved in the earlier stages, and less politically active than in-person voters. For example, during his interview, Tarson Nunez questioned whether voting online is “any different to voting in Big Brother.” The World Bank’s 2012 survey likewise found that nine out of ten online voters said they had not participated in discussions about the vote beforehand (Spada et al., 2016).
However, the wider survey results suggest that there is virtually no difference between the likelihood of online and in-person voters having been involved in the earlier deliberative stages of the PB process, with 87.6 percent of online and 87.5 percent of in-person voters reporting not having attended any assembly meetings prior to voting.

If anything, it appears online voters might be slightly more involved: 58 percent reported having voted in a previous year’s budgeting vote, compared to only 26 percent of in-person voters. Also, state-wide, 27.7 percent of online voters said it was “probable” and 9.8 percent “very probable” that they would attend in-person meetings about the budget next year; in the POA area the figures were 27.4 percent “probable” and 10.3 percent “very probable” (although we do not have a comparable figure for in-person voters).

If online voters have an appetite for more engagement in the deliberative stages of the PB process, more thought should be given to how to engage them more widely while still taking advantage of their preferred online channel. This is explored further in the recommendations in Section 7.

Looking at the broader political activities of the two groups also does not support the idea that online voters are less political: online voters are more likely than in-person voters to have been involved in all of the political activities asked about.

Considering the nature of the two voting channels is helpful at this point. To vote online, people must make a conscious choice that they wish to vote, or deliberately click on a link (e.g., on Facebook or Twitter). One can therefore assume that nearly 100 percent of online voters have at least some interest in participating. The in-person vote, however, includes both people who actively chose to participate and vote and a large number of people who were unaware of the vote until someone stopped them in the street, at which point they completed a ballot. It would be interesting to explore the degree to which these differences simply reflect the ways people are encouraged to vote and the point at which they become aware of the process. For now, it seems that there is no difference between the way online and in-person voters engage that cannot be explained by another factor. The choice of technology should perhaps be considered simply a different channel, not something fundamentally different.

These findings also confirm previous research by the World Bank (Spada et al., 2016) suggesting that “online only” voters (i.e., those who would not have voted at all had they not voted online) are slightly less politically engaged than online voters as a whole. They go further, however, and show that even these “online only” voters are as politically engaged as, or more than, the in-person voters in all activities except attendance at party political meetings. Still, given the distinct natures of the two voting groups (proactive online, and a combination of proactive and opportunistic offline), this finding should be treated with caution. The responses from non-voters in the IVR survey are broadly similar to those of the in-person voters.
These results have been further analyzed to see whether the different demographic profiles of the online and in-person voters might be skewing them, but the findings are consistent: within every age group and income/education bracket, the online voters report higher levels across all political activities.

Summary

People chose to vote online mainly on the basis of convenience, and chose to vote offline mainly due to preference, convenience, and a lack of awareness of the online option. Ability to access the Internet was not found to be a significant influence.

With regard to engagement in PB processes, the wider survey results show that online voters are slightly more likely than in-person voters to have voted in previous years’ budgeting votes, but there is virtually no difference in the likelihood of online and in-person voters having been involved in the earlier deliberative stages of the PB process.

As for involvement in wider political activities, online voters are somewhat more likely to have been involved than in-person voters, and within every age group and income/education bracket the online voters report higher levels across all political activities.

DO ONLINE AND IN-PERSON VOTERS VOTE DIFFERENTLY AND DOES THIS AFFECT SPENDING?

This section discusses how people voted and what impact, if any, this had on budget decisions. It is based on the findings from the online, IVR, and face-to-face surveys; the face-to-face survey data are from Porto Alegre only.

Table 8 shows the frequency with which demands under each of the thematic areas were chosen in the online and offline ballots.
Table 8.  Voting results (state-level).

Category                                                  Offline (%)   Online (%)
Citizenship, Justice, Human Rights & Policies for Women   6.2           1.5
Combating Traffic Violence                                0.3           <0.1
Digital Culture & Inclusion                               2.0           4.9
Economic Development                                      3.8           16.5
Rural Development                                         8.6           28.3
Social Development and Poverty Eradication                0.8           0.1
Basic, Vocational and Technical Education                 9.3           2.5
College Education                                         1.6           0.5
Sport, Leisure & Tourism                                  4.6           0.8
Housing, Urban Development and Sanitation                 7.2           2
Infrastructure and Logistics                              6.9           <0.1
Irrigation                                                1.6           0.1
Environment and Water Resources                           3.4           0.1
Local and Regional Planning                               0.3           <0.1
Health                                                    25.4          36.6
Public Safety and Civil Defense                           17.9          6.1

There is significant variation in a number of categories, most notably (see Table 8) Desenvolvimento Econômico and Desenvolvimento Rural (Economic Development and Rural Development), which were chosen by 16.5 percent and 28.3 percent of online voters respectively, but by only 3.8 percent and 8.6 percent of in-person voters. Segurança Pública e Defesa Civil (Public Safety and Civil Defense) shows the opposite pattern: it was chosen by only 6.1 percent of online voters but by 17.9 percent of in-person voters.

It is important to understand whether these preferences actually affected the outcomes:

• In eighteen of the twenty-eight COREDES, the first-place demand was the same in both the online and offline results.
• In ten COREDES, the demands chosen by online and in-person voters were different.
• In eight of these ten, the difference was not large enough to change the overall result.
• In the other two COREDES, the demand in the online vote differed from the in-person vote by enough to change the final outcome (a counterfactual check of the kind sketched below).

In Hortênsias COREDES, the winning demand was the purchase of eight vehicles for the police and fire brigade; discounting the online vote, the winning demand would have been a program for family agribusinesses. In Fronteira Oeste COREDES, the winning demand was a budget for regionalization and pooled resources (the construction of a regional hospital); without the online vote, the winning demand would have been toward localization and fragmentation of regional resources (vehicles or equipment for eight different hospitals).
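The check behind these statements can be made concrete with a minimal sketch; the data and function names below are hypothetical, not the evaluation's code:

```python
# Hypothetical illustration: does removing the online votes change
# the winning demand in a COREDES?
def winner(tallies: dict) -> str:
    """Demand with the most votes."""
    return max(tallies, key=tallies.get)

def online_vote_flips_result(offline: dict, online: dict) -> bool:
    combined = {d: offline.get(d, 0) + online.get(d, 0)
                for d in set(offline) | set(online)}
    return winner(combined) != winner(offline)

# Toy numbers loosely echoing the Hortênsias case: the online vote
# tips the result from one demand to another.
offline = {"family agribusiness": 5_200, "police/fire vehicles": 4_900}
online = {"family agribusiness": 300, "police/fire vehicles": 1_400}
print(online_vote_flips_result(offline, online))  # True
```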
Discussion

One of the main goals of participatory budgeting is to distribute a budget in favor of the needs of the poor (e.g., Wampler, 2012). That pro-poor distribution is possible has been demonstrated by experiences in which participatory budgeting successfully changed spending in ways that favored poorer areas and produced impacts on development outcomes, such as maternal and child health (Avritzer, 2010; Goncalves, 2009).

The large differences in online voter turnout across geographic areas are particularly significant when taken together with the demographic differences between online and in-person voters (presented in Section 4.2). If a particular area has a high percentage of online voting, and the online voters are generally younger, richer, and better educated, then the final results for that area may be skewed in favor of the desires or needs of the middle-class population rather than reflecting the needs of poorer communities.

Given that there were at least two COREDES where the online vote demonstrably did change the final result, this poses a particular challenge to the social justice goals of participatory budgeting and highlights the tension between its redistributive goals and the democratic goal of widening representation.

It also highlights the importance of clear goals as to whom the Sistema aims to represent: should it reflect the demands of the population as a whole, or the needs of poor and marginalized groups across the state?

The potential consequences are summarized by one interviewee: “Online you build scale but lose quality and move the balance back to the middle class. Participatory budgeting is a way of giving voice to the poor—if you level everyone you tip the balance again against the poor” (Tarson Nunez, 29/5/2014).

Summary

The first-place demand was the same in both the online and offline results in eighteen of the twenty-eight COREDES. It was different, but not by enough to change the overall result, in eight COREDES. In the remaining two COREDES, the demand in the online vote changed the final outcome. This poses a particular challenge to the social justice goals of participatory budgeting and highlights the tension between its redistributive goals and the democratic goal of widening representation.

SECONDARY EVALUATION QUESTIONS AND ADDITIONAL FINDINGS

The report now shifts to the secondary evaluation questions related to the voting and to control of the budget and the PB stages. It uses data from the interviews, documentation, and follow-up surveys to explore the goals, the risk of inappropriate influence or manipulation, transparency, and control of the budget and process.

Are the goals of the state’s participatory budgeting process clear and appropriate?

In Section 2.4, the goals of participatory budgeting were discussed as both a democratic innovation and a vehicle for empowerment, social justice, and redistribution. While it remains unclear whether the state-level participatory budgeting of Rio Grande do Sul shares these core goals of participatory budgeting in general, it is clear from its statements that this new state-level process also includes goals around a new form of democratic inclusiveness.

The lack of clearly stated and shared goals is a potential problem for the evaluation and also for the success of the program itself. It could mean that the online and in-person voting do not seek to meet the same goals.

For example, it was apparent from observing the voting that many of the people running the polling stations had strong views on which ballot options should be voted for and would actively seek to persuade voters to support them. If the vote is considered in the same way as a typical vote for a political representative, then this influence is clearly unfair and could prejudice the results. However, if we consider participatory budgeting as a tool to increase the representation of marginalized groups and help them organize, then the groups that organize most effectively are best able to influence the outcome, and it is arguable that this is in fact a desirable outcome.

So, the decentralized nature of the in-person voting and the major role played by civil society could be seen as in line with Wampler’s PB goals of voice, vote, and social justice—explicitly giving poor and marginalized communities an opportunity to voice their needs and seek to redistribute spending accordingly. The online vote, however, is centralized and controlled by the state itself and, as such, has no direct involvement from civil society or other groups representing the poor. And if we consider the influence of organized state actors, such as the police, it is difficult to attribute any goals to participatory budgeting under which the manipulation of voting by the police could be seen as a positive outcome. This means the two channels of voting are more than just different channels: they have different drivers and different levels of influence, and they might therefore even target or appeal to different groups of voters.

Considering the demographic differences and the potential for the online vote to change the outcome (see Section 4.4), this is an important aspect to consider. For online and in-person voting to genuinely be just different channels, the processes and goals that lie behind each should be aligned.
Depending on which set of goals is more important, this could mean reducing the role of local civil society in the in-person voting or enabling a bigger role for civil society in the online vote.

Are the voting processes open to manipulation or undue influence?

This is an aspect of participatory budgeting that is not often explored in depth, and concerns about the integrity of the voting process are not often aired (Spada et al., 2016). Three main problems might occur: people voting multiple times, people’s voting choices being taken or influenced by others, and direct or indirect manipulation of the voting or results.

Multiple votes are not possible in the online system, which has checks in place to prevent someone voting online more than once. There does not, however, appear to be any system in place to prevent someone voting both online and offline, or voting offline multiple times in multiple places.

In-person voting requires the voter to show photo ID before being given a ballot, so it would be difficult for someone to vote on behalf of another person. Voting on behalf of another person is possible online, but the online voting platform requires a secure login using a unique voter ID, or the new state citizen login, before someone can vote. In discussion with the Gabinete Digital, however, it became apparent that someone’s voter ID could be discovered relatively easily by entering some personal details into another government website. Remote voting is also always open to the potential for voter coercion, as there is no independent third party present to monitor it.91 These types of voting abuse could only be exploited on a local and individualized basis—for example, a family member voting on behalf of, or coercing, another family member. So, while important, they are unlikely to significantly alter the final outcome of the vote.

In-person voting has two much larger issues that may directly affect the results. The first is the organized influencing of the vote by the decentralized groups running it, including civil society organizations, the military, police, fire brigade, and so on. Direct observation of the voting stations provided interesting anecdotal evidence. In some locations, local services (health workers, firefighters, teachers, etc.) were out en masse to demonstrate what the previous year’s budget had paid for and to encourage voters to vote for the demands that would support their work. Less positively, at some of the polling stations operated by the police, manipulation of the process was observed. This included instances where citizens completed the attendance sheet but did not complete a ballot (leaving that ballot “free” for the polling station staff to complete), occasions where the staff would effectively tell the voter which boxes to tick, and instances where the staff simply completed the ballot on behalf of the voter (on one occasion the voter then complained and was given a second ballot to complete herself). It is unclear how widespread these practices might be, but they were corroborated by four of the sixteen enumerators in the Porto Alegre area, who reported observing people trying to influence voters’ choices (three of these reported it occurring frequently).

The police also feature in the second factor that could affect the final outcome of the vote: manipulation of voter choices and results.
As the observations made in this evaluation were limited, they cannot be used to judge whether such manipulation actually changed the final outcome of the vote anywhere in the state. However, there were sufficient examples to raise this as a significant concern that warrants both further investigation and changes to the process to stop it occurring in future years. Significantly, the category that these actions would have been seeking to influence (Segurança Pública e Defesa Civil, effectively supplementary funding for the police) reflects the demands that received significantly more votes offline than online. While inconclusive, this would be consistent with this type of influence over the results being widespread across the state.

What transparency and oversight exists?

The oversight of the voting process itself is hard to comment on, as it was not directly observed and is not well documented. The results were made available to the public within a few days of the completion of the voting, but it is not clear to what degree, if any, the public and/or civil society played a role in monitoring the voting, the counting, and the results. The more interesting aspect of transparency and oversight relates to what happens after the vote. Are the projects that are voted for actually delivered? Is the money actually spent (and if so, is it spent on the things the people voted for)?

There is some suggestion that this data is not readily available: Sergio Baierle, when interviewed, reported a widespread feeling that some projects are not executed and that some figures are manipulated. Goldfrank (2014, p.16) supports this last statement, confirming that "data that used to be easily accessible on the Internet no longer exists," covering not only which projects have been approved but also whether they have been executed. However, considering the same systems, Tiago Peixoto found that civil society oversight had been strengthened, not weakened (Peixoto, 2008).

A number of new initiatives, such as the transparency map, are coming online, and it is hard to know whether the apparent lack of data is a deliberate decision by the government to avoid transparency and oversight, or simply the result of a transition period in which the data migrates to these new platforms and people learn to use them. In any case, it is clear that, whether it is already happening or not, "technology could revolutionise the transparency of the budget" (Sergio Baierle, 30/5/2014); this recommendation is made in Section 7, though it may already be underway.

Who controls the overall budget and process?

This section considers the level of control that exists within the participatory budgeting process itself, as well as the control of the budget that sits outside of the process and happens before it begins.

The first of these is straightforward but critical. The vote is binding. The government is legally committed to spending the budget as dictated by its citizens (Spada et al., 2016; Peixoto, 2008). In this sense, the state participatory budgeting sits as high as can reasonably be expected on Arnstein's famous ladder of participation (Arnstein, 1969).

Figure 18. Budget for participatory processes 2000–2015 (participatory budget in R$ millions, plotted annually on a scale from R$0 to R$350 million).92
However, this is only true of the limited budget that is channeled through the participatory process. Over the period covered by Figure 18, overall state spending in RS increased dramatically, from less than R$9,000 million in 2000 to nearly R$60,000 million planned for 2015. While total state spending includes extraneous factors such as inter-departmental transfers, these do not change the fact that the percentage of state spending allocated to participatory control has reduced significantly over this period.

Two other important and related factors also sit outside of direct democratic control: the overall percentage of the state budget that goes through this participatory process, and the mechanism by which different regions within the state are allocated different portions of this budget.

Figure 19. Participatory budget as % of total state spending, 2000–2015 (plotted annually on a scale from 0.0% to 2.5%).

The total budget going through the Sistema is relatively small, is lower than it was fifteen years ago, and is declining as a percentage of state spending. This reduces the potential of participatory budgeting to achieve its social justice and redistributive goals. If the state's spending priorities are elsewhere, as suggested by Baierle ("all the key investments in the past 4 years have been in downtown as part of a gentrification process"), this could reduce the potential developmental impact of the Sistema.
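To make the scale of this decline concrete, here is a rough back-of-the-envelope calculation using only the spending totals quoted above (the participatory budget values themselves are shown only qualitatively in Figures 18 and 19). Writing \(T_t\) for total state spending and \(PB_t\) for the participatory budget in year \(t\), the participatory share is \(s_t = PB_t / T_t\), so

\[
\frac{s_{2015}}{s_{2000}} \;=\; \frac{PB_{2015}}{PB_{2000}} \cdot \frac{T_{2000}}{T_{2015}} \;\approx\; \frac{PB_{2015}}{PB_{2000}} \cdot \frac{9{,}000}{60{,}000} \;=\; 0.15 \cdot \frac{PB_{2015}}{PB_{2000}}.
\]

In other words, even if the participatory budget had been held constant in nominal terms, its share of state spending would have fallen to roughly 15 percent of its 2000 level; any growth in the participatory budget short of a more than six-fold increase still leaves its share shrinking.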
The role and influence of the deliberative stages

A consistent and recurring theme raised by all interviewees was the importance of the deliberative assemblies as a vehicle of empowerment for individual citizens from poor communities. Participatory budgeting offers opportunities for citizens to understand the budgeting process and, through long-term involvement, to learn from their mistakes. It can be said to create spaces in which people who are traditionally unable to access power may be able to thrive (Baiocchi & Ganuza, 2014; Serageldin et al., 2005). During interviews with members of the state government and local participatory budgeting experts, interesting anecdotal accounts emerged:

I interviewed a woman who was a grassroots leader and champion of women's rights . . . with participatory budgeting she learned practical ways to think about the future . . . she learned and grew and went on to establish a highly successful environmental and recycling centre which employed and empowered women experiencing domestic violence (Tarson Nunez, 29/5/2014).

Someone told me about participatory budgeting as a citizenship exercise . . . "the first time I went to a meeting I didn't speak, the third year I spoke, now I have learned that I can speak and I understand the process. Now I know that the government HAS to listen to me" (Adalmir Marquetti, 30/5/2014).

Unfortunately, no data is available on the demographics of those who attend the assemblies, so no discussion of their representativeness of the population, or of marginalized groups, is possible. However, some of the interviewees share an assumption that people from poor and excluded populations are in fact over-represented at the assemblies:

People who go to the meetings are usually the poor, the middle classes won't go (Adalmir Marquetti, 30/5/2014)

Meetings tend to include more people over forty and more struggling families who need help representing their urgent needs (Sergio Baierle, 30/5/2014)

Rich people won't contribute in a public forum in front of someone who doesn't have sewerage (Tarson Nunez, 29/5/2014)

Even if this is correct, and the assemblies do include more representatives from poor communities, the additional benefits offered by the deliberative process only exist for the roughly 5 percent of participants93 who engage in the entire process, not the 95 percent who are only involved at the voting stage (irrespective of whether they choose to vote online or offline).

Attendance at these face-to-face meetings appears to have dropped from between 179,209 and 378,340 in the first version of state-level participatory budgeting from 1999–2002 (Goldfrank & Schneider, 2006) to an estimated 79,000 people in 2014.94 Taking the 2014 attendance as an average across the 497 municipalities, this would give around 158 citizens per assembly (a worked check follows at the end of this discussion). However, the distribution is not even. In Vale do Cai (a COREDES for which exact attendance figures were available), the number of attendees ranged from 9 in Linha Nova to 240 in Montenegro, although accounting for population sizes the range was actually from 0.2 percent (27 attendees in Feliz, population 12,992) to 2.3 percent (163 attendees in Salvador do Sul, population 7,182). While the percentage is relatively small everywhere, this range is significant.95

Attendance at the earlier public hearings, where no decisions are made about the upcoming budget, is more consistent and higher, ranging from 35 attendees (in São José do Sul) to 293 (in Montenegro), and from 0.3 percent of the population (in Feliz) to 4.3 percent (in São Vendelino). The attendance figures here seem to be more consistent and less related to population size because these hearings are primarily aimed at local government staff and invited civil society representatives.

Unfortunately, no data could be obtained regarding the historic attendance figures or the demographic make-up of the face-to-face meetings. Given that assembly attendance is lower than it was between 1999 and 2002, and taking into account the range in attendees across different locations, these potential additional benefits are not reaching all parts of the state equally, and are reaching a smaller audience than they may once have reached.

It is clear that, for all these deliberative stages, the key difference is not related to the role of technology. While it is arguable that those who vote but do not attend the earlier assemblies may be missing out on additional benefits of the participatory budgeting process, there is no evidence to suggest that people who vote online are any less interested in being involved in these assemblies than those who vote offline. What appears more relevant than the technology is the change in the design of the participatory budgeting process, from one where final budgetary decisions are made during the face-to-face meetings to one with a distinct and separate voting stage to decide on budget allocation. The channel used to vote does not appear to be a factor.
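As an arithmetic check on the attendance figures above (all numbers are taken from the text):

\[
\frac{79{,}000}{497} \approx 158.9 \text{ attendees per municipality}, \qquad
\frac{27}{12{,}992} \approx 0.21\% \ (\text{Feliz}), \qquad
\frac{163}{7{,}182} \approx 2.27\% \ (\text{Salvador do Sul}).
\]

The per-population figures show how a municipality with far fewer attendees in absolute terms can nonetheless participate at roughly ten times the rate of a larger one.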
Goldfrank suggests that this new model sacrifices the deliberative, face-to-face aspects of the process in the interests of increasing the number of participants, giving a broader but less intense form of participatory budgeting (Goldfrank, 2014). Online voting certainly helps to enable this broader level of inclusion by increasing the potential pool of voters and making it easier to reach a wider audience. However, there is no evidence that online voters have less interest in the deliberative aspects (see Section 4.3), despite the declining numbers of attendees. The reasons for this decline remain unclear, but direct observation suggests that lack of awareness is at least partially responsible. It would be entirely possible, within the new model, to communicate and promote earlier and to a wider audience, thereby increasing the level of involvement not just in the voting stage, but in every stage of the process, including the deliberative assemblies.

CONCLUSIONS

Does online voting affect the level of turnout? The evaluation identified voters who stated positively that they would not have voted if online voting had been unavailable. This indicates that, at minimum, a 12.2 percent increase in voter turnout is directly attributable to online voting. With only around a quarter of the voting-age population being aware of the vote, increasing and improving the promotion of, and information about, the vote could be expected to create a significant increase in both online and offline turnout.

Do online and in-person voters have different demographics? A clear difference between the online and in-person voters was found. The online voting population is younger, more educated, and higher earning. It is somewhat more male than its offline counterpart and includes a lower proportion of non-white citizens. The percentage of online voters is lower in municipalities with lower levels of socio-economic development.

Do online and in-person voters engage in different ways? People chose to vote online mainly on the basis of its convenience. Internet access was not a major factor, but lack of awareness of the option to vote online was. With regard to engagement in PB processes, the wider survey results show that online voters are slightly more likely than in-person voters to have voted in previous years' budgeting votes, but that there is virtually no difference in the likelihood of online and in-person voters having been involved in earlier deliberative stages of the PB process. For involvement in wider political activities, online voters are somewhat more likely to have been involved than in-person voters. Within every age group and income/education bracket, the online voters report higher levels of involvement across all political activities.

Do online and in-person voters vote differently, and does this affect spending? The first-place demand was the same in both the online and offline results in eighteen of the twenty-eight COREDES; in ten COREDES the results differed, but not by a sufficient amount to change the overall outcome. In two COREDES the demand in the online vote changed the final outcome.
Given that the online voting population is younger, better educated, and higher earning, this has the potential to create a tension between the redistributive and social justice goals of participatory budgeting and the democratic goal of widening representation.

Are the PB goals clear and appropriate? In Rio Grande do Sul, the goals of the current Sistema are not well defined. This lack of clearly stated goals appears to have led to a situation where the centralized, state-controlled online voting and the decentralized in-person voting (where civil society plays a larger role) may not always seek to meet the same goals, with the online and offline voting channels each supporting different interpretations of the goals of participatory budgeting. This means that the two channels of voting are more than just different channels. They have different drivers and different levels of influence and, therefore, they might even target or appeal to different groups of voters.

Are the online/offline processes open to manipulation or undue influence? While multiple votes are not possible on the online system, on a local, small scale it is possible for people to vote both online and in person, to vote in person several times in different places, or to vote on behalf of or influence another person. None of these is likely to significantly alter the outcome of the vote. Of more concern is the possible influence of civil society organizations and state services (the military, police, fire brigade, etc.) on in-person voters through demonstrations at polling stations, telling the voter which boxes to tick, or simply completing the ballot on behalf of the voter. Some incidents of results manipulation were also observed. However, while there were sufficient examples to raise this as a significant concern, the limited observation made during the evaluation cannot demonstrate whether any of these issues altered the outcome of the vote.

What transparency and oversight exists? It is not clear to what degree, if any, the public and/or civil society played a role in monitoring the voting, counting, results, and data. Information about the projects and budgets implemented does not seem to be readily available. As the oversight of the voting process itself was not directly observed and is not well documented, it is not possible to comment on this in depth. However, there is some sense that new transparency initiatives are being put in place and that oversight is being strengthened.

Who controls the PB process and total budget? The government is legally committed to spending the budget as dictated by its citizens, but only within the limited budget that is channeled through the Sistema participatory process. The budget allocated to this participatory budgeting has seen a modest increase in real terms since 2008, but as a percentage of overall state spending it remains significantly lower than at any time since 2007. The percentage of the overall budget allocated to participatory control is decided outside of the participatory process, as is the mechanism by which different regions within the state are allocated different portions of this budget.

DCE SUMMARY: LOOKING THROUGH THE 5 LENSES

In Section 1, five lenses were introduced that form part of the World Bank's guide for evaluating DCE (World Bank, 2016).
In addition to offering useful perspectives while scoping out and designing an evaluation, these lenses can be useful in presenting evaluation results in a manner that aids comparison across different DCE evaluations. To that end, the conclusions of this report are considered with respect to these five lenses below:

Logic

• The goals of the Sistema are not clearly defined and, therefore, different stakeholders may have different assumptions about what the process is for. In particular, there appears to be some confusion over whether the Sistema aims simply to provide a new and innovative form of democratic governance or whether it shares the wider goals of participatory budgeting: voice, vote, social justice, and oversight.

Control

• Participation does not include the stage of setting the budgets or allocating them to different regions within the state.
• Data on voting results is available quickly and easily.
• Other data, on actual investment spending, is not easily and readily available to the public.

Participation

• The majority of the state's population seem not to be aware of the Sistema or the vote, necessarily limiting the level of participation that can be expected.
• The introduction of the online vote has been responsible for at least a 12 percent increase in turnout, but the percentage of online voters is not consistent across the state.
• A large majority of voters have not been engaged in the earlier deliberative stages of the process (this applies equally to online and in-person voters).
• Many online voters stated a preference to be involved in deliberative stages in the future.
• The online voting population is younger, richer, and better educated than the in-person voting population.

Technology

• The online voting platform seems reliable and well managed.
• The online voting seems less open to manipulation or undue influence than the in-person vote.
• The in-person vote is open to widespread, organized influencing of voters by special interest groups representing key segments of the population.

Difference

• Citizens are having an impact on the spending decisions of their state government.
• The level of the state budget going through the Sistema is low and declining.
• It is unclear whether spending is being allocated in such a way as to redistribute it to poorer areas.
• The online vote has changed the final results in at least two regions. Given the demographics of the online voters, this needs serious consideration.

LIMITATIONS AND FURTHER RESEARCH

State and city comparisons

For logistical reasons, the face-to-face survey was necessarily limited to just the greater Porto Alegre area, whereas the online survey and IVR survey covered the entire state. This limits the comparisons that can be made and may limit the ability to extrapolate more widely from some of these findings. These types of surveys also invariably suffer from some degree of self-reporting bias, which has not been accounted for here.

Validity of IVR data

The IVR survey was intended to act as a proxy for the general state population, to allow for comparisons between this group and those who participated in the voting process. However, concerns over the representativeness of the IVR sample have limited the scope of its use to just one finding (awareness of the voting), where the results were so significant as to outweigh the possible bias in the respondent demographics.
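A simple sensitivity bound illustrates why this one finding survives the sampling concerns (a rough illustration only, combining the roughly 25 percent awareness figure from the conclusions above with the 10–20 point IVR discrepancy discussed in the endnotes): if the measured awareness rate is \(\hat{p} \approx 25\%\) and the IVR sample misstates the true rate by at most \(b\) percentage points, then the true rate satisfies

\[
p \in [\hat{p} - b,\ \hat{p} + b] \approx [5\%,\ 45\%] \quad \text{for } b = 20,
\]

so even under the most generous correction, awareness remains well below half the population and the headline conclusion, that most of the state is unaware of the vote, stands.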
Further research

If the necessary data could be obtained, there are a number of areas where further research and analysis could prove informative. Some of these are highlighted in Table 9 below.

Table 9. Suggestions for additional research.

• Attendance trends for deliberative stages: Explore the historic changes in turnout at the earlier face-to-face meetings, and the current turnout and demographics compared to the online voters. It would be particularly interesting to follow a number of individuals through the entire process.
• Delivery of Sistema-funded projects: Are all the successfully voted projects actually delivered and, if not, what does the delivery rate look like compared to other, non-Sistema projects in the state?
• Explore high variance in online/offline turnout in different COREDES: Are there more COREDES than Médio Alto Uruguai that are actively seeking to move their voting online? What are the intended and unintended impacts of this, and how does this compare to other COREDES without such plans?
• Better understanding of reasons for non-voting: Further and deeper exploration of the speculation that much of the population doesn't vote simply because they are unaware of the voting.

RECOMMENDATIONS

Deliberative stages

Improved communication and promotion of the assemblies

Many voters stated a desire to be involved in the earlier deliberative stages but were unaware of them. Earlier and more widespread communication, online and offline, would enable more of the population to choose to engage.

Data collection on assembly participants

Ensuring that the decentralized organizations that manage the assemblies collect basic demographic data on participants would help ensure that the deliberative stages reflect the goals of the Sistema (whether that is representing the general population or explicitly targeting poor and marginalized communities). This will require changes to the process and support and training for the local organizers, and will benefit from simple mobile data collection tools that minimize the impact of the monitoring on the participants.

Exploration of a role for technology in the earlier deliberative stages

The state of Rio Grande do Sul is already considering ways to expand the use of technology, as shown in this statement:

The idea is to advance in the use of technology in every step of the process, be it to help in the initial moments of the assemblies or further on in the process. There is no official agenda, but it is being built with the help of the World Bank. With resources from the bank, we want to hire an agency for the development of a digital platform for the integration of government data (participation portal), like it was done with the digital office. And also a module for evaluation of public services (Paulo Coelho, 21/8/2014).

In order to scale the assemblies and to meet the apparent preference of the large number of voters who would like to engage earlier but prefer the online channel, exploring blended online/offline approaches to deliberation is suggested; such approaches might "allow these processes to scale without losing the important deliberative aspect" (Ricardo Almeida, 29/5/2014).

There are many ways in which this could be explored, taking lessons from the e-learning sector ("blended learning"):
face-to-face sessions could be filmed or live-streamed with interaction from remote participants; online discussions could run in parallel with face-to-face discussions; or, as many communities already practice, face-to-face discussions could be prefaced by sharing information online and continued online afterwards.

Deliberation is a complex and long-term issue, and any explorations should seek not to replace the offline fora ("the focus should be the offline," Ricardo Almeida, 29/5/2014), but to supplement them and allow those who were not present to also benefit from the deliberations. Recording the sessions and making them available would allow online voters to understand the discussions that took place. Utilizing more advanced technology to live-stream these meetings could allow remote participants to engage more fully, although this would potentially limit remote engagement to those with the money to access reliable, high-bandwidth Internet connections.

This report does not seek to make specific recommendations as to the nature of this blended model of deliberation, but simply to draw attention to the potential of technology in this space and to suggest that experimentation in this area could be a useful way forward in scaling the deliberative stages of the Sistema.

The vote

Improved communication and promotion of the vote

The majority of the state population, apparently, are unaware of the vote itself. A more concerted, widespread communication campaign could help to address this, in particular by ensuring the vote is publicized on television news and in the newspapers rather than relying on the decentralized civil society organizations and paid online advertising. This could be done both offline and online, wherever the technology is appropriate.

Data collection on online and in-person voters

The surveys undertaken for this report were relatively inexpensive and could easily be integrated into the annual process. If a good, standardized exit poll were conducted online and offline each year and compared to wider population statistics, the state would be able to assess whether the inevitable increase in online participation is simply a channel shift or whether it is shifting the voting population toward a younger and richer demographic. Whether this shift is seen as a problem or not depends on the goals ascribed to the Sistema, but it seems vital to at least monitor it.
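To illustrate how lightweight such annual monitoring could be, below is a minimal sketch of the comparison step, assuming exit-poll responses are exported to a CSV file. The file name and column names (channel, age_band, income_band, race) are hypothetical illustrations, not part of any existing Sistema system.

```python
from collections import Counter
import csv

def shares(rows, channel, field):
    """Fraction of respondents in each category of `field` for one voting channel."""
    counts = Counter(row[field] for row in rows if row["channel"] == channel)
    total = sum(counts.values())
    return {category: n / total for category, n in counts.items()} if total else {}

# Hypothetical export: one row per respondent, e.g.
# channel,age_band,income_band,race
# online,16-24,high,branca
with open("exit_poll_2015.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

for field in ("age_band", "income_band", "race"):
    online = shares(rows, "online", field)
    in_person = shares(rows, "in_person", field)
    for category in sorted(set(online) | set(in_person)):
        gap = online.get(category, 0.0) - in_person.get(category, 0.0)
        # A gap that grows year on year for young or high-income categories
        # would indicate a demographic shift rather than a simple channel shift.
        print(f"{field}/{category}: online minus in-person share = {gap:+.1%}")
```

Repeating the same calculation against census shares for the voting-age population would show how far each channel diverges from the state as a whole.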
Better background information available to voters

At the time of voting, a voter who was not involved in the earlier deliberative stages could benefit from more information about the choices on the ballot and the deliberation behind them. Ensuring this information is easily available online and linked to the voting platform would increase the level of informed voting among online voters. It would then be important to seek a way to achieve the same result through the decentralized offline model, so that in-person voters can see the same information.

Reduce opportunities for manipulation and influence of the voting

The state could investigate and better understand how widespread the kind of electoral manipulation and undue influence observed by the evaluation team actually is. If it is replicated state-wide, better monitoring and amended processes must be put in place to prevent it from happening in future years if the Sistema is to continue to be seen as a valid and fair system.

At the same time, it would be helpful to understand the level of influence over the vote that is exerted by those staffing the polling stations. Whether this is problematic or not depends on whether the Sistema sees the mobilization and organizing of civil society as a valid goal. A more in-depth understanding of exactly who the staff are (how many are government employees, police, civil society leaders, volunteers, etc.) would be a critical element in understanding the fine line between encouraging communities to organize around common issues and allowing vested interests to exert undue influence over the results.

Control of the process / budget

Clear goals

The goals of the Sistema should be discussed and shared publicly, so that all stakeholders and participants are aware of, for example, whether social justice and redistribution are goals of the process or not. The success of the Sistema cannot be monitored or evaluated reliably without this.

Open data and results

Information on the assemblies, voting results, processes, and investment spending should all be made available to the general public in an easily digestible format, both online and in a manner that ensures people without access to the Internet can also access it. The raw (anonymized) data should also be made available in a useful format so that independent civil society or other groups can undertake their own analyses to ensure the process remains fair and continues to achieve its stated goals.

The conclusions and recommendations above offer insights that, it is hoped, will be useful both for those responsible for the future of the Sistema in Rio Grande do Sul and for anyone elsewhere in the world using technology within participatory budgeting processes who is looking to the Sistema as a model of best practice. There is much to learn from the existing and ongoing successes of the Sistema, but also much that can be improved, and some salient warnings that should be considered as the role of technology inevitably increases in coming years. The authors would welcome the opportunity to engage in further discussions or to support developments and experiments in this field.

REFERENCES

Arnstein, S. (1969) "A Ladder of Citizen Participation," Journal of the American Institute of Planners, 35(4), pp. 216–224.

Avritzer, L. (2010) "Living under a Democracy: Participation and its Impact on the Living Conditions of the Poor," Latin American Research Review, Special Issue.

Baiocchi, G. and Ganuza, E. (2014) "Participatory Budgeting as if Emancipation Mattered," Politics & Society, 42(1), pp. 29–50.

Baker, R. (2011) "Interactive Voice Response Surveys," Alert! Marketing Research, September 2011.

Goldfrank, B. (2007) "Lessons from Latin America's Experience with Participatory Budgeting," in Shah, A. (ed.), Participatory Budgeting, The World Bank, pp. 91–126.

Goldfrank, B. (2012) "The World Bank and the Globalization of Participatory Budgeting," Journal of Public Deliberation, 8(2).

Goldfrank, B. (2014) "Participation, Distribution, and the Left: The Return of PB in Rio Grande do Sul," in A New Critical Juncture? Changing Patterns of Representation and Regime Politics in Contemporary Latin America.

Goldfrank, B.
and Schneider, A. (2006) "Competitive Institution Building: The PT and Participatory Budgeting in Rio Grande do Sul," Latin American Politics and Society, 48(3).

Goncalves, S. (2009) "Power to the People: The Effects of Participatory Budgeting on Municipal Expenditures and Infant Mortality in Brazil."

IPSOS (2009) "For the Record: Ipsos Reid and the May 12, 2009 B.C. Election," [online] Available from: http://www.ipsos-na.com/news-polls/pressrelease.aspx?id=4391 (Accessed 13 February 2015).

ODTA (2014) "Technology Drives Citizen Participation and Feedback in Rio Grande do Sul, Brazil," Open Development Technology Alliance.

Pateman, C. (2012) "Participatory Democracy Revisited," APSA Presidential Address, pp. 7–19.

PB-Unit (2009) "The Role of New Technology in Participatory Budgeting," London, UK: The PB Unit.

Peixoto, T. (2008) "e-Participatory Budgeting: e-Democracy from Theory to Success," e-Working Papers 2008, e-Democracy Centre.

Procergs (2013) Votação Online das Prioridades do Orçamento 2014: Relatório de Atividades.

Sampaio, R. and Peixoto, T. (2014) "Electronic Participatory Budgeting: False Dilemmas and True Complexities," in Hope for Democracy: 25 Years of Participatory Budgeting Worldwide, pp. 413–425.

Schneider, A. and Goldfrank, B. (2002) Budgets and Ballots in Brazil: Participatory Budgeting from the City to the State.

SEPLAG (2013) Regimento Interno do Processo de Participação Popular e Cidadã para Elaboração do Orçamento Estadual 2015.

SEPLAG (2014) Ciclo Orçamentário Estadual 2014/2015.

Serageldin, M., Driscoll, J., Miguel, L. M. S., Valenzuela, L., Bravo, C., Solloso, E., Solá-Morales, C. and Watkin, T. (2005) "Assessment of Participatory Budgeting in Brazil," prepared for the Inter-American Development Bank.

Spada, P., Mellon, J., Peixoto, T. and Sjoberg, F. (2016) "Effects of the Internet on Participation: Study of a Public Policy Referendum in Brazil," Journal of Information Technology & Politics, 13(3).

Wampler, B. (2012) "Participatory Budgeting: Core Principles and Key Impacts," Journal of Public Deliberation, 8(2).

World Bank Group (2016) Evaluating Digital Citizen Engagement: A Practical Guide. Washington, DC: World Bank.

A number of websites have also been extremely useful in gathering background information on the Sistema, or in obtaining datasets used for comparative analysis.
These are listed below:

• Gabinete Digital (http://gabinetedigital.rs.gov.br/)
• PROCERGS (http://www.procergs.com.br/)
• IBGE (http://www.ibge.gov.br)
• Consulta Popular (http://www.consultapopular.org.br/)
• Sistema Estadual de Participação Popular e Cidadã (http://www.participa.rs.gov.br/)
• Votação de Prioridades Orçamento 2015 (https://vota.rs.gov.br)
• Transparência RS (http://www.transparencia.rs.gov.br)
• Mapa de Transparência (http://www.mapa.rs.gov.br/)
• SEPLAG (http://www1.seplag.rs.gov.br)
• ATLAS Socioeconomico Rio Grande do Sul (http://www.scp.rs.gov.br/atlas/default.asp)

List of acronyms

CARs: Centros Administrativos Regionais, the system of local administrative bodies within each of the 28 regions of Rio Grande do Sul
COREDES: The 28 regions of Rio Grande do Sul, further divided into 497 municipalities
DCE: Digital citizen engagement
ICT: Information and communication technologies, e.g., the Internet and mobile phones
IDESE: Índice de Desenvolvimento Socioeconômico, aggregated from indicators relating to income, education, and health
IVR: Interactive voice response, an automated telephony system that interacts with callers, gathers information, and routes calls to the appropriate recipient
PB: Participatory budgeting
POA: Porto Alegre, the capital city of Rio Grande do Sul
PROCERGS: An independent company which is a "mixed economy" public–private joint venture
PPA: Plano Plurianual, the multi-year state participatory planning process
RS: Rio Grande do Sul, a state in the south of Brazil
SEPLAG: The Rio Grande do Sul department of Planning, Management and Citizen Participation
SEI: Socio-economic indicator

Endnotes

INTRODUCTION

1. For more details on the lenses used in the field evaluations, see "Evaluating Digital Citizen Engagement: A Practical Guide" (World Bank Group 2016).
2. The nine variables examined are disclosure of feedback, disclosure of service provider response, proactive listening, voicing modality (individual or collective), accountability directionality (upwards or downwards), combined offline action, driver of the initiative (civil society, government, or donor), partnership with service provider, and level of government (national, sub-national, and local).
3. See https://youtu.be/g4fGB5mQ_gE
4. See http://www.huffingtonpost.com/lex-paulson/three-text-messages_b_3761643.html
5. Mostly through unprompted messages that are sent by some U-Reporters.
6. See, for instance, Kang & Maity (2012) and Dodson et al. (2013).
7. As a side note, the evaluation provides equally sobering numbers on the potential of the Internet in the Ugandan context: 63 percent of respondents said they did not know what the Internet is.
8. For instance, most U-Reporters interviewed emphasized that U-Report helps them stay up to date with developments in their community.
9. See Peixoto and Fox (2016).
10. See Welle et al. 2015.
11. For a rich account of the use of ICTs in PB processes, see Gilman 2016.
12. For the effects of online voting on PB turnout, also see Spada et al. 2016.

CHAPTER 1

13. This also included an international platform, Change.org. The data analysis in that case referred to a total of 132 countries (World Bank 2014b).
14. Making All Voices Count is supported by DFID, USAID, Sida, and Omidyar Network.
15.
The current enthusiasm (among development stakeholders and the media) over the potential of technology in citizen participation in the developing world is reminiscent of the wave of optimism surrounding such initiatives in Europe over the past decade, despite the significantly less favorable conditions of developing countries. Even in Europe, with generous funding and a more favorable institutional and technological context, most experiences present limited results at best (see, for instance, Prieto-Martin et al., 2011; Susha and Gronlund, 2014; Diecker and Galan, 2014).
16. Note that this widely assumed causal mechanism does not distinguish explicitly between two different kinds of accountability: preventative (reforms that make future transgressions more transparent) and reactive (answerability and the possibility of sanctions).
17. In the case of MajiVoice, degrees of responsiveness can be explained by the modality of contracts between government and service providers (renewable upon performance) as well as the creation of an oversight structure to monitor government response. For details, see Belcher and Lopes (2015).
18. For empirical evidence of the effect of government responsiveness on levels of citizen participation, see Sjoberg et al. (2017).

CHAPTER 2

19. In methodological terms, however, some could argue that crowdsourcing is merely data collection based on self-reporting, where technology's role is merely that of lowering the transaction costs of reporting that data.
20. The WSA, which was launched as part of the United Nations Summit on the Information Society in 2003, is viewed as one of the most important global competitions in m-Content and creativity.
21. A complete list can be found at http://www.ureport.in.
22. The partners at the time of the data collection for the study (July 2014) included: the Scouts of Uganda, Marie Stopes Uganda (a reproductive health organization), the Uganda Muslim Council, the Uganda Catholic Church, the Church of Uganda, the Girls Education Movement, the Rwenzori Information Network (RICNET), the Battery Operated Systems for Community Outreach (BOSCO), Mildmay Uganda, and BRAC life skills and microfinance.
23. Questions were sent out at the end of July/beginning of August 2014, but results were calculated in September 2014. Polls usually remain open for some time after they are fielded. This accounts for the difference from the U-Reporters' population in system data.
24. See Section 7 for suggestions.
25. The margin of error is at most ±1.3 at a 95 percent confidence level.
26. For age groups 20–24, 25–29, and 30–34.
27. Ugandan education is structured as follows: (1) pre-primary, 3 years; (2) primary school, 7 years; (3) secondary/ordinary-level school, 4 years; (4) high school, 2 years; and (5) university education. At each level (except pre-primary) there is a national exam to qualify for the next level. At the end of each level, especially after ordinary level, students can opt to join technical colleges.
28. The total number of responses to this question in the U-Report survey was 2,394.
29. A 2013 ITU report from the Partnership on Measuring ICT and Development quotes academic research indicating that in South Africa it was usual for people to own four SIM cards, with some users in Uganda owning up to seven SIM cards (see Partnership on Measuring ICT and Development, ITU (2013),
"Stocktaking and Assessment of Measuring ICT and Gender," background paper for the 11th World Telecommunication/ICT Indicators Symposium, available for download at: http://ow.ly/3xKKLj, last accessed 15.04.2015).
30. Child days are a health-promoting strategy that allows UNICEF to deliver basic health care services to eligible children in their communities.
31. An important factor when examining perceptions of impact is the role of the media. Wide media coverage of a project might considerably influence how people perceive an intervention and its outcomes.
32. For a more detailed description of Ugandan MPs' responsibilities, see http://ow.ly/3xK8zM, last accessed 14.04.2015.
33. This is in the region of 15,000–20,000 USD.
34. Some of the drivers of RIWI costs are: Internet penetration and incidence rates (the closer the targeted population is to the general Internet-using population, the lower the cost), the complexity of the questionnaire, and time constraints on obtaining the data.
35. Includes an observation made by the interviewer on the roof materials.
36. UNICEF suggested that the switch to the RapidPro platform, which is used in newer implementations of U-Report in other countries, has increased response rates.
37. USSD, which stands for Unstructured Supplementary Service Data, is "a protocol used by GSM cellular telephones to communicate with the service provider's computers" (Wikipedia, http://ow.ly/3xKDbf, last accessed 17.04.2015).
38. There are some interesting exceptions, however. For instance, Gillwald et al. (2010) indicated that more women than men owned a mobile phone in South Africa and Cameroon.
39. Response rates for online surveys vary significantly depending on the target audience, the complexity of the questionnaire, and the communication strategy. In general, employee surveys elicit a 50–70 percent response rate and customer surveys 20–50 percent.
40. This is not to claim that face-to-face questionnaires are fool-proof as, e.g., interviewers might also introduce their own biases.
41. Like training and in-depth analysis, triangulation is resource intensive.

CHAPTER 3

42. In this case the beneficiaries are the customers of the water utility companies in question. The "beneficiary feedback system" could equally be described as a "customer complaint system."
43. MajiVoice has been released under an open-source Lesser General Public License (LGPL) and is thus available under LGPL terms for adoption and adaptation without license fees or permissions. It can be downloaded from: https://github.com/CustomerFeedback
44. WHO/UNICEF, Joint Monitoring Program.
45. Lafferty, A. and Lauer, W., Benchmarking: Performance Indicators for Water and Wastewater Utilities: Survey Data and Analysis Report, American Water Works Association, 2005, pp. 73–78; sum of median technical and customer complaints for water (6.1+5.9=12); for African values: Water Operators Partnerships, Africa Utility Performance Assessment, 2009, p. 107 (note that due to substandard reporting systems at many African utilities, these may be underestimates); for more comparison figures, see also: Australian Government, National Water Commission, Australian National Performance Report 2006–2007: Urban Water Utilities, p. 25.
46. Water Services Regulatory Board of Kenya's IMPACT Report 6, 2013, p. 87.
47.
It should be noted that MajiVoice was primarily designed as a digital-channel feedback mechanism (i.e., receiving and managing complaints and feedback received via digital channels such as SMS and email). However, the design of the system allows complaints received from any channel to be entered into the system by the water service provider, and once that has been done, the back-end complaint management infrastructure allows all complaints to be dealt with in a uniform and consistent manner.
48. Water Services Regulatory Board of Kenya's IMPACT Report 7, 2014, Table 4.4, p. 25.
49. MajiVoice system statistics; unique users per week (see Table 6). This implies that approximately 20 percent of NWC staff are regular users of the system.
50. The Water Services Regulatory Board (WASREB) is a non-commercial state corporation established in March 2003 as part of the comprehensive reforms in the water sector. The mandate of the institution is to oversee the implementation of policies and strategies relating to the provision of water and sewerage services. WASREB sets rules and enforces standards that guide the sector towards ensuring that consumers are protected and have access to efficient, affordable, and sustainable services. WASREB works with industry, community representatives, and groups.
51. SurveyMonkey was chosen based on required features and functionality, familiarity, ease of use, and cost. A range of similar survey/questionnaire tools are available. Allowing enumerators to complete individual surveys during telephone calls required reliable and reasonable Internet connectivity speeds; use of such a tool would not have been viable had such connectivity not been available.
52. This may just reflect the gender breakdown of account holders at NCWSC. Due to the lack of account holder data in this regard, it is not possible to check this.
53. Water and Sanitation Programme, MajiVoice Baseline Report, 2012.
54. It should be noted that the interpretation of what comprises a "successful resolution" of a complaint may differ between the complainant and the water company. For example, a complaint about a bill being too high that is adjudged to be in fact correct may leave the customer unsatisfied, but the complaint will still be marked as successfully resolved. It is not clear how prevalent this kind of situation is, but complaints not resolved in favour of the complainant are an inherent complexity of any complaint resolution process.
55. Personal correspondence (March 2015) with Maximilian Leo Hirn, World Bank lead on MajiVoice in Kenya.

CHAPTER 4

56. Including the UN Public Service Awards for Latin America in 2013, and various national prizes within Brazil.
57. Although there is debate over whether Porto Alegre was the first city to implement participatory budgeting, it was certainly amongst the first and has gone on to become the most famous and high-profile example.
58. Voting via SMS was also introduced in 2012, but was not continued in subsequent years.
59. An independent company which is a "mixed economy" public–private joint venture whose largest shareholder is the state government of Rio Grande do Sul.
60. Translates into English as "state system of popular citizen participation."
61. "Voting the priorities for the 2015 budget."
62. The COREDES are composed of representatives from local councils, universities, and civil society organizations. They were introduced in 1994 to allow civil society organizations to influence development plans.
63.
Process information comes from interviews, government websites, and two official documents (SEPLAG, 2013, 2014).
64. These are selected from sixteen state-wide themes, which were established as part of the multi-year PPA.
65. Rio Grande do Sul is split into 28 COREDES regions, each of which is further divided into municipalities, of which there are 497 across the state.
66. Note that while the four demands have a specific budget allocated, the two thematic choices are "a novelty in the process" (Paulo Coelho, 21/8/2014) and do not ensure any specific budget is allocated. They simply ensure that discussion with the relevant government bodies takes place and reflect the desires of the population. This second part is therefore perhaps better considered as a consultation piggybacking on the budgeting process, rather than as a core part of the participatory budget itself.
67. In practice this seems to involve polling locations being placed mostly in areas with high levels of pedestrian traffic (shopping centers, health centers, etc.) and being staffed by people who actively approach all passers-by to encourage them to vote.
68. An urna is the sealed box, bag, or other container into which voting ballots are put by voters.
69. The range of interpretations of the Sistema goals seems to be shared more widely. In a follow-up online survey, thirty-five stakeholders (COREDES staff and others involved in delivering the decentralized process) answered questions about their perception of the program's goals. Thirty of them cited democracy as a goal, but eight of the thirty-five also cited social justice, redistribution, or inclusion of poor and marginalized communities, and seven also cited empowerment of excluded individuals.
70. The face-to-face survey needed to be short so that people were willing to complete it, limiting the breadth of questions that could be asked. This was an expensive method of collecting data, further complicated by the decentralized process, which meant that the location of polling stations was not known in advance.
71. Interactive voice response (IVR) surveys are automated voice calls that use short pre-recorded questions and can either record audio or interpret keypad presses as answers. IVR is capable of offering different sets of questions depending on the answers given. The IVR survey was seen as a low-cost, experimental method using a relatively new technology. There are some concerns over its potential effectiveness, as well as legal and ethical concerns over its use in certain countries.
72. As all online voters were offered a prompt to complete the survey, refusals are taken simply as the total number of online voters minus the number who completed the survey.
73. This number reflects the fact that IVRs make an extremely high number of calls that are unanswered or that connect but do not go on to complete a survey (these could be refusals, poor connections, or calls that only reached an answering service).
74. IVR is a relatively young technology and its use in different scenarios is still being explored. There is some suggestion that, while it can achieve extremely high response rates for inbound surveys or when calling primed and expecting audiences, response rates for outbound surveys to wider populations can be as low as 1 percent (Baker, 2011).
75.
Online exit polls remain relatively rare and, although this response rate appears extremely low compared to the face-to-face survey, it is similar to at least one other online exit poll, in Canada, which achieved online response rates of ~13 percent (IPSOS, 2009).
76. This is the 2013 estimate taken from the IBGE website. The latest official figures are from the 2010 census, when the population was listed as 10,693,929.
77. This estimate is made by calculating the percentage of citizens aged sixteen or over in the 2010 census and applying the same ratio to the 2013 population estimates.
78. The graph shows these labels in Portuguese as Internet (online) and Presencial (in-person).
79. Given the non-representative nature of the IVR survey this is not necessarily a valid assumption, but given the extremely high percentage reporting "did not know about it," even adjusting for a 10–20 percent discrepancy between the IVR sample and the general population, the results would be broadly similar.
80. When considering scaling the vote, it is of course important to note that extending the time the online vote is available is almost free to do, whereas spreading a decentralized and labor-intensive in-person voting model across multiple days would be expensive and logistically challenging.
81. To allow for comparison with the wider population, this survey used the controversial but official IBGE census categories: Amarela ("yellow" or Asian); Branca (White); Indigena (Indigenous); Negra/Preto (Black); and Parda ("brown" or mixed-race).
82. Aggregated from indicators relating to income, education, and health.
83. The correlation is likely to have been higher; it has been altered in recent years by the fact that at least one COREDES (Médio Alto Uruguai) has been actively seeking to move all of its voting online, where most others have not (as described in Section 4.2).
84. Available census data for RS/POA is broken down only by under and over age forty. These results could be amended once more accurate state and city census data are obtained.
85. This analysis may not be valid, particularly if the "no answer" group contains a high proportion of non-white voters, as seems likely.
86. This question was asked as "I wanted to" or "I prefer to" vote online or offline, and was intended as a catch-all for those people who have a clear preference but whose preference does not fall into one of the other options. There is some potential here for overlap with the convenience response.
87. This level of security is similar to equivalent systems elsewhere in the world, including the European Citizens' Initiative.
88. An estimated 79,000 people attend the municipal assemblies, compared to the roughly 1.3 million who take part in the final vote on the budget priorities. Across the 497 municipalities in the state, that is an average of 158 citizens per municipality, a very small sample that is highly unlikely to be representative of the wider populace.
89. Exact figures are available in some COREDES but do not appear to be in others. Figures for the period from 2003 to 2013 were not available.
90. If municipal-level indicators of poverty can be found, it will be interesting to compare these against the level of involvement in different municipalities.