The Effect of Government Responsiveness on Future Political Participation

WORKING PAPER #1

Fredrik M. Sjoberg | Data-Pop Alliance, Digital Engagement Evaluation Team (DEET)
Jonathan Mellon | Oxford University, Digital Engagement Evaluation Team (DEET)
Tiago Peixoto | The World Bank, Digital Engagement Evaluation Team (DEET)

Electronic copy available at: http://ssrn.com/abstract=2570898

This paper is a product of the World Bank's Digital Engagement Evaluation Team (DEET) at the World Bank's Governance Global Practice. The Digital Engagement Working Paper Series disseminates the findings of work in progress to encourage the exchange of ideas about issues at the intersection of technology and citizen engagement. The papers carry the names of the authors and should be cited accordingly. The findings, interpretations, and conclusions expressed in this paper are entirely those of the authors. They do not necessarily represent the views of the International Bank for Reconstruction and Development/World Bank and its affiliated organizations, or those of the Executive Directors of the World Bank or the governments they represent.

Copyright Digital Engagement Evaluation Team. This work is licensed under a Creative Commons Attribution 4.0 International License.

The Effect of Government Responsiveness on Future Political Participation

Fredrik M. Sjoberg, Data-Pop Alliance
Jonathan Mellon, Oxford University
Tiago Peixoto, The World Bank

Thursday, February 26, 2015
Words: 6,908

Abstract

What effect does government responsiveness have on political participation? Since the 1940s political scientists have used attitudinal measures of perceived efficacy to explain participation. More recent work has focused on underlying genetic factors that condition citizen engagement.
We develop a 'Calculus of Participation' that incorporates objective efficacy – the extent to which an individual's participation actually has an impact – and test the model against behavioral data from FixMyStreet.com (n=399,364). We find that a successful first experience using FixMyStreet.com (e.g., reporting a pothole and having it fixed) is associated with a 54 percent increase in the probability of an individual submitting a second report. We also show that the experience of government responsiveness to the first report submitted has predictive power over all future report submissions. The findings highlight the importance of government responsiveness for fostering an active citizenry, while demonstrating the value of incidentally collected data to examine participatory behavior at the individual level.

Classification: JEL: P16 – Political Economy; JEL: H76 – State and Local Government: Other Expenditure Categories; JEL: H54 – Infrastructures; Other Public Investment and Capital Stock.

Keywords: Government Responsiveness, Participation, Fix My Street, ICT, and Crowdsourcing.

* Note that author order is determined using bounded randomization. All authors contributed in equal shares.

Acknowledgements: We would like to thank Tom Steinberg, Struan Donald, and Paul Lenz at My Society for providing the data and valuable comments. This paper was presented at the American Political Science Association Annual Conference 2014 in Washington D.C. We would also like to thank Amy Chamberlain and Josh Kalla for editorial assistance.

Introduction

Why do some participate in politics and others do not? It can be argued that individuals have an underlying propensity to participate, due to socialization (Tam Cho 1999) or genetic factors (Fowler, Baker, and Dawes 2008). However, incentives also matter (Rosenstone and Hansen 1993), as do social dynamics (Gerber, Green, and Larimer 2008) and institutional design (Blais 2006; Smets and Van Ham 2013).
The literature on efficacy has suggested that the extent to which citizens feel that government is responsive to them affects their participation (Finkel 1985; Abramson and Aldrich 1982). To date, the literature has focused primarily on the relationship between subjective perceptions of efficacy and citizens' levels of participation. Yet, there has been relatively little analysis of whether it is merely these subjective perceptions of external efficacy that matter or whether a citizen's objective efficacy – how much they can actually affect government – is relevant as well.1

The effect of objective efficacy on participation is difficult to examine in the context of traditional forms of participation such as voting, as the consensus is that one individual's vote is almost certain to make no difference to the outcome of the election (Green and Shapiro 1994; Bendor et al. 2011). From this perspective, the impact of efficacy on voting is essentially a study of deluded voters. However, other forms of participation, particularly at a more local level, can have a more direct impact on the way government is run. Local governments can listen and react directly to the concerns of individual voters. While the outcomes of these processes may not be considered part of high politics by some, they play a key role in providing greater opportunities for democratic participation and social inclusion (Mill 1991; Pratchett 2004).

The participatory democracy (i.e. non-electoral) literature has frequently assumed that levels of participation are intrinsically linked to system responsiveness: the more responsive government is, the more likely citizens are to participate. For instance, looking at democratic innovations such as Neighborhood Councils in Chicago and Panchayats in West Bengal, Fung and Wright suggest that the sustained levels of engagement seen in these initiatives are related to citizens' degree of "empowerment", that is, their capacity to effectively influence government actions that are relevant to them (Fung and Wright 2001). Similarly, participatory budgeting scholars often argue that citizens' willingness to participate is largely dependent on governments' ability to respond to their demands (Abers 2001; Wampler 2010; Gret and Sintomer 2005). Yet, to date, the evidence to support these seemingly obvious assumptions remains anecdotal at best.

The most direct example of objective efficacy is a direct government response to an act of participation. In this paper we focus on a new type of non-electoral participation: submission of online reports on local problems through the online platform Fix My Street. This platform allows citizens in the United Kingdom to report micro-local problems via a website that displays the complaint online for everyone to view, but also, importantly, automatically forwards the complaint to the local authorities. Local authorities can thus engage citizens with updates about specific complaints. As a result of this direct response, we can observe a voter's objective efficacy (whether their participation, through submission of a report, resulted in their problem being fixed) as well as their subsequent engagement with the system. Here we focus on explaining continued participation with the platform beyond the first report submission.

1 Traditionally in political science a distinction has been made between a sense of external efficacy, the belief that government will be responsive to attempts to influence it, and a sense of internal efficacy, the belief that one is competent to understand politics and therefore participate in politics. We consider both of these traditional categories as subsets of subjective efficacy. The concept of 'objective efficacy' used throughout the paper refers not to the belief that one can make a difference but instead to whether an individual actually can make a difference.
This allows us to ignore factors that are constant, such as socialization levels and genetics. The question is simple: what is the effect of having a first reported problem fixed (government responsiveness) on future participation?

Building on the existing literature we present a simple calculus of participation model, inspired by the classic turnout model. We then present the unique data source, system data from the online platform Fix My Street (UK), which has records of over three hundred thousand acts of participation. Since this is the first time the data has been used in an academic study, we describe it in detail. In the analysis section we estimate the effect of government responsiveness – objective efficacy – on continued participation in the platform using regression modeling. We conclude by highlighting the implications for future research and policy work.

Efficacy, Government Responsiveness, and Participation

Does political efficacy increase participation? This would appear to be a question that was answered some time ago, with Finkel (1985) showing the causal effects of internal and external efficacy on political participation and earlier studies showing a strong cross-sectional relationship between efficacy and participation (Campbell et al. 1960). However, these studies focused on the sense of efficacy rather than the actual extent to which an individual citizen can make a difference.2 To distinguish a sense of political efficacy from political efficacy itself, we refer to a citizen's actual (rather than perceived) ability to make a difference as their objective efficacy (contrasted with their subjective efficacy).

While it might seem obvious that greater objective efficacy would increase participation, the extensive literature on the paradox of voting provides a strong counterpoint.
On the one hand, substantial proportions of voters consistently turn out to vote in elections, despite there being an extremely small probability of a single vote mattering (Green and Shapiro 1994). On the other hand, studies of turnout have suggested that increased closeness of a race predicts somewhat higher turnout (Blais 2000; Norris 2002; Shachar and Nalebuff 1999). Since closeness increases the probability of a vote mattering, this finding is consistent with our hypothesis on objective efficacy (although it is still inconsistent with voters accurately conducting a calculus of participation).3 It must be noted that these studies are not an ideal test of objective efficacy, as closeness of the race is also strongly associated with greater party mobilization, salience in the media, and political interest (Cox and Munger 1989; Fauvelle-Aymar and François 2006; Stratmann 2005).

Work by Lassen and Serritzlew (2011) finds a causal effect of jurisdiction size on internal political efficacy, whereby voters have lower efficacy in larger jurisdictions. This is consistent with a mechanism in which voters update their attitudes in response to changing objective circumstances. However, the study does not directly test the mechanisms or any subsequent effect on political participation. Existing research on efficacy also gives reason to question whether objective external efficacy should matter for participation. Both internal and external efficacy have been shown to be stable constructs that do not vary greatly over time (McPherson, Welch, and Clark 1977), and both internal and external efficacy appear to be attitudes that are created through early socialization (Easton and Dennis 1967; Dudley and Gitelson 2002; Lyons 1970) and civic education (Pasek et al. 2008).

2 Efficacy is an individual-level attitudinal variable, of either internal (input) or external (output) form, see (Craig 1979).
3 To put this another way, a close race merely changes the decision to vote from completely irrational to slightly less irrational, at least in terms of private benefits.

The Calculus of Participation

In this section we lay out a simple model of the decision facing a potential reporter of specific local problems, the type of participation that the Fix My Street (FMS) platform enables. The payoff from participation is derived from the possibility of a local public good being provided, as captured by a problem report being updated to a 'fix.' Continued engagement with FMS is motivated by outcomes in terms of fixes to the micro-local infrastructure. As with all action oriented towards public goods, there is a potential free-rider problem, where it might not be rational for individuals to participate when they can benefit from others' participation. However, with FMS, benefits may be targeted enough that it is individually rational to submit a report anyway.4

The intention here is not to formalize this further, but rather to spell out the components of the model. The basic model for the 'calculus of participation'5 is given by:

R = P × B – C

where R is the reward gained from participating, in essence a proxy for the probability that a citizen will participate, P is the probability of participation having an impact, B is the utility benefit of participating, and C stands for the costs of participation in terms of time and effort.6 An individual will participate if P × B > C, i.e. if the benefits of participation exceed the costs.

These models are largely considered to have failed in predicting turnout because P should be so low for most forms of political participation that even very small costs (C) will easily make the decision to vote irrational. In the type of participation considered here, however, the benefits are more targeted and tangible and the costs are extremely low, since reporting is done online and only takes a few minutes. More importantly, there is no reason to expect B or C to change dramatically over time.7

The main focus in this study is the effect of government responsiveness on participants, i.e. those that have already submitted their first report. Following Bendor et al. 2011, we consider P to be iteratively updated depending on the individual's experience of participation. An individual updates their belief about P based on information about the responsiveness of their local government. A good experience with FMS – that is, seeing a reported problem fixed – will either increase P or reinforce P, if prior beliefs about P were very high (see Figure 1).

4 This is contrary to Olson's prediction about self-interested individuals not participating in voting (Olson 1971). However, FMS is related to a relatively targeted benefit, unlike the national-level policy decisions at stake in voting.
5 This model is essentially equivalent to the 'infamous' calculus of voting (Downs 1967), but not including the D term introduced by (Riker and Ordeshook 1968).
6 The model treats voters [participants] as Bayesian prospective decision-makers (forward-looking and future-oriented), though with imperfect information (Achen 2002).
7 For the time being, we remain agnostic about which factors may be part of B, allowing that, as well as personal benefits, factors such as altruism might be relevant. We also make the simplifying assumption that the council's behavior is exogenous and that problems are not fixed because of strategic political considerations.

Figure 1. Calculus of Participation – Iterative Updating of Perceived Efficacy Based on Government Responsiveness.
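To make the mechanism concrete, the decision rule R = P × B – C and the iterative updating of P can be sketched in a few lines of Python. The Beta-Bernoulli updating scheme and all numbers below are illustrative assumptions for exposition, not the paper's estimation procedure:

```python
# Minimal sketch of the calculus of participation, R = P * B - C.
# All quantities are hypothetical; B and C are assumed constant over time.

def participates(p: float, b: float, c: float) -> bool:
    """An individual participates if the expected benefit P*B exceeds the cost C."""
    return p * b > c

class Reporter:
    """One simple way to operationalize Bayesian updating of P:
    a Beta-Bernoulli scheme where each fix / non-fix updates the belief."""

    def __init__(self, prior_successes=1.0, prior_failures=1.0):
        self.successes = prior_successes  # Beta prior pseudo-counts
        self.failures = prior_failures

    @property
    def p(self) -> float:
        # Posterior mean belief about P, the probability a report has an impact.
        return self.successes / (self.successes + self.failures)

    def observe(self, fixed: bool) -> None:
        if fixed:
            self.successes += 1  # a fix increases (or reinforces) P
        else:
            self.failures += 1   # an unfixed report lowers P

r = Reporter()
print(participates(r.p, b=10.0, c=1.0))  # True: 0.5 * 10 > 1
r.observe(fixed=False)
r.observe(fixed=False)
print(round(r.p, 2))  # 0.25: belief about P falls after two unfixed reports
```

Under this sketch a run of unfixed reports eventually pushes P × B below C, at which point the individual stops participating, which is the intuition behind the hypotheses that follow.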
This model predicts that individual variation in objective efficacy – whether a problem gets fixed or not – will have an impact on the subjective assessment of external efficacy, and in turn have an effect on future participation.8

Hypotheses

From the theory presented above we derive the following hypothesis:

H1 A positive experience of government responsiveness – objective efficacy – will increase the probability of future participation.

The main hypothesis can be further specified to apply both to the causal effect on the probability of submitting a second report (H1.1) and the effect on total future report submission (H1.2), given that the first time a report is submitted can be considered a formative experience. It is also useful to consider whether an experience of low objective efficacy (failing to have a problem fixed) reduces an individual's propensity to participate again.

8 Remember that objective efficacy is defined as how much individuals can actually affect government, as contrasted with how much they think they can affect it.

Democratic Innovation: Fix My Street

While the calculus of participation clearly cannot explain voting, other forms of political participation, such as reporting a problem to the local authority, may have the potential to provide sufficiently high P values to justify participation. With these forms of participation, the contact with government can potentially lead to the resolution of the citizen's problem. This paper observes one example of such participation and analyzes the observed efficacy of the act of participation and individuals' subsequent participation to answer the question of whether objective efficacy impacts participation. To do this, we exploit a unique dataset of 399,364 reports submitted by 154,957 unique individuals through the Fix My Street (FMS) platform from February 28, 2007 to February 12, 2013.
Although the definition of political participation is contested, with arguments over whether involuntary or violent acts should be considered examples of participation, almost all definitions agree that an act of political participation aims at influencing policy and decision-making (Verba, Schlozman, and Brady 1995). Participation via FMS clearly falls within this definition, given that a report aims to influence the distribution of public goods and provision of public services (e.g. whether a road is fixed in a neighborhood or not). Fix My Street differs from some other forms of mass participation in that it does not require collective action for an act of participation to achieve its objective (unlike, for instance, voting). This makes an individual's attribution of efficacy clearer and provides a cleaner test of the effect of objective efficacy on subsequent participation.

Background & History of Fix My Street

Fix My Street is a web-based platform that allows users to report – via PC and smartphones9 – physical problems with local infrastructure or public services that can be fixed by local authorities. Launched in the United Kingdom in February 2007 by the UK charity MySociety, the platform was funded by the Department for Constitutional Affairs Innovations Fund.10 In 2010, FMS was closely integrated with The Guardian newspaper's "Guardian Local Project."

On the FMS platform, individual users can submit reports about tangible (physical) problems in the local community, such as potholes, broken streetlights and graffiti. Report submission is a simple process taking on average only a few minutes to complete. First, a user enters a UK postcode, a street name, or uses the 'locate automatically' function. A map is then shown covering the area of interest. The user then clicks the map to indicate the specific location of the problem and enters a subject line, a short description, a category (e.g. pothole, streetlight), and optionally attaches a picture.
Once a report is submitted, FMS automatically forwards the report to the relevant local authority, either directly to their Customer Relationship Management system or to an e-mail address provided by the local authority. Local authorities can respond to these reports via the platform by indicating when the problem is fixed. Other users of the platform can also indicate whether the problem has been fixed. A report submitter will be informed separately via e-mail if the problem has been reported as fixed by a third party, be it a local authority or another user of the platform. After 28 days, an automatic follow-up e-mail is sent to the user if the problem has not yet been fixed. At this point, the user has the option to indicate whether they want to receive further e-mails with status notifications from FMS.

Since its launch, FMS has attracted the attention of the international media, the development community and scholars from numerous other fields (King and Brown 2007). Built on open source code, FMS has been replicated in a number of countries such as Sweden, Australia, Malaysia and Georgia. Furthermore, the FMS model has inspired a number of similar web-based citizen reporting initiatives, such as Vecino Inteligente in Chile, I Change My City in India, and SeeClickFix in the United States. Despite the proliferation of solutions similar to FMS in both developed and developing countries, the understanding of citizen engagement dynamics mediated by these platforms remains extremely limited.

9 In 2008 an FMS app was developed to enable users to report problems via iPhone, and since then volunteers have developed apps for both Nokia and Android.
10 Development started in September 2006 and the platform was launched in February 2007. The development costs were £6,660 and the computer script consisted of 15,670 lines of code (incl. markup), both modest numbers (Escher, 2011).
Similarly, little research has tapped into the potential of incidentally collected data to shed light on participatory behavior, particularly at the individual level. The FMS data and the analyses carried out are described in the following sections.

Data Description

We obtained raw platform data directly from MySociety. The full dataset includes 399,364 individual reports in time-series long format with a unique user id and a time variable. There are 154,957 unique users in the data set. The analysis is conducted on a wide-format dataset with each unique user on a separate row and a set of variables related to the n-th report submitted by that user. The long-format data contain the following variables: user id of the report submitter, user id of the fix reporter (if applicable), report category (self-selected from a drop-down menu), title of report, body text, timestamp, and a dummy for whether a photo was attached.

The mean number of reports submitted per user is 2.58, while the median is one.11 Uptake has been increasing steadily, reaching 106,601 submitted reports in 2013. The most common report categories are potholes (23.5%), roads/highways (10.5%) and street lighting (10.1%). Figure 2 illustrates how problem category frequencies have developed over time. Only 11.7% of the reports come with an attached picture of the problem.

11 With one user submitting as many as 2,108 reports. MySociety confirmed that this user's reports are genuine and that 'someone's just very diligent' (personal correspondence, 2014).

Figure 2. Top FMS Problem Categories by Year.

On average it takes 66 days for a report to be classified as 'fixed'. The mean fix time varies depending on who reports the fix. For fixes marked by the original reporter the average time is 57 days, for fixes marked by the council the average is 26 days, and for reports marked as fixed by other users the average is 109 days.12 As would be expected, different problem categories are associated with different fix rates.
For instance, problems with streetlights have a relatively high fix rate of 50%, while problems such as dog fouling have much lower fix rates (20%). In terms of fixes, a total of 159,539 (39.9%) problems have been reported as fixed, whether by the council (11.0% of all fixes), by report submitters themselves (79.9%), or by other users (9.0%).

12 Note that there are only five local authorities that have reported problems to be fixed.

Statistical Modeling

Here we estimate two models: (1) exploring the effect of having the first report fixed on submitting a second report;13 and (2) the effect of having the first report fixed on all future reporting. The first is a binary logistic regression model focusing on the probability of submitting a second report in a specified future window (35 to 365 days after the first report):

logit(πi) = α + β1Xi1 + … + βkXik + εi [1]

where we model the logit of the probability πi of submitting a second report for each user (i = 1, …, n). The explanatory variable, Xi1, is a dummy indicating the fix status of the first report. The subscript k indicates the number of independent variables or regressors. The estimator here is Maximum Likelihood Estimation (MLE). Note that the fix status of a problem reported to FMS cannot be taken as an indication of the problem being fixed, but rather as an indication of someone reporting the problem to be fixed. There is currently no way for us to verify the accuracy of either the original report or the fix status provided by the platform.

The second model is a negative binomial regression, where we model the total number of reports submitted in the same future window as for the logistic regression model. We use a negative binomial model to account for the over-dispersion of the counts:

log(yi) = α + β1Xi1 + … + βkXik + εi [2]

where yi is the number of reports submitted between n and 365 days after the original report.
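To illustrate how a specification like [1] is estimated by maximum likelihood, the following self-contained Python sketch simulates users from a logit model with a single fix-status dummy and recovers the parameters by gradient ascent on the Bernoulli log-likelihood. The data, coefficient values, and fitting routine are illustrative assumptions (the controls and council dummies are omitted; in practice one would use a statistics package rather than hand-rolled optimization):

```python
import math
import random

# Toy illustration of Model [1]: logit(pi_i) = alpha + beta * Fixed_i,
# fitted by maximum likelihood via gradient ascent. Synthetic data only.
random.seed(1)

def simulate(n=2000, alpha=-2.0, beta=0.5):
    """Simulate (fixed, second_report) pairs from the logit model."""
    rows = []
    for _ in range(n):
        fixed = 1.0 if random.random() < 0.3 else 0.0   # assume 30% of first reports fixed
        p = 1.0 / (1.0 + math.exp(-(alpha + beta * fixed)))
        second = 1.0 if random.random() < p else 0.0    # second report submitted?
        rows.append((fixed, second))
    return rows

def fit_logit(rows, lr=1.0, steps=800):
    """Maximize the Bernoulli log-likelihood by averaged gradient ascent (MLE)."""
    a = b = 0.0
    n = len(rows)
    for _ in range(steps):
        grad_a = grad_b = 0.0
        for x, y in rows:
            p = 1.0 / (1.0 + math.exp(-(a + b * x)))
            grad_a += y - p        # d loglik / d alpha
            grad_b += (y - p) * x  # d loglik / d beta
        a += lr * grad_a / n
        b += lr * grad_b / n
    return a, b

a_hat, b_hat = fit_logit(simulate())
print(a_hat, b_hat)  # estimates should land near the true values (-2.0, 0.5)
```

The same design extends to the negative binomial specification [2] by swapping the Bernoulli likelihood for a negative binomial one; the log-likelihood for the logit case is concave, which is why simple gradient ascent converges reliably here.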
Note that we are not estimating the effect of government responsiveness on participation in the general population, but rather among a subset of people that have already participated by submitting a first report.

13 As will be seen, we actually have two separate versions of this first model: the naive model and the main model (see below).

Analysis

We take the unit of analysis to be the individual FMS user. In particular, we focus on whether the user's first report predicts 1) any subsequent participation (H1.1) and 2) their long-term participation with FMS (H1.2). Here we present two models:

1. Short-Term Model: A logistic regression across the full dataset including fixes reported by the original user or any other user. This predicts a user sending at least one further report more than 35 days after their original report, dependent on the first reported problem having been fixed within 35 days. The sample excludes any users who reported their first problem as fixed. This means that only users whose problem was not fixed, or who had it reported fixed by someone else, are included (H1.1).

2. Long-Term Model: A negative binomial regression on the same subsample as in (1), predicting total reports after 35 days dependent on the first problem having been fixed within 35 days (H1.2).

The logic of each of the models is to look at the predictive power of having a problem reportedly fixed on future participation. Before modeling this relationship we can simply examine the bivariate relationship using a bar chart. Figure 3 shows that the difference in the percentage of users who submit a second report via FMS, depending on whether or not their first report was fixed, is 29.7 percent (or 8.1 percentage points).

Figure 3. Bar Chart of the Percentage of Users Who Submit a Second Report, Among Those Whose First Problem Was Fixed and Those Whose First Problem Was Not Fixed.
* Note: Full data set (excluding 2014 reports).

The bivariate analysis is problematic for several reasons.
Firstly, a user who visits the FMS website and actively follows up on their first report by indicating that the report was fixed is clearly a more active participant. It is therefore not surprising that such an individual is more likely to submit a second report. Secondly, it is possible that the second report is submitted before the first report is indicated as fixed. This would preclude a causal relationship between the status of the first report and the submission of a second report.

Endogeneity in Reporting Problems as Fixed

The first issue mentioned above is addressed by focusing only on the sample of users that did not report their own problem as fixed. In this sense, to eliminate the selection effect in the model, we restrict the data to exclude any first reports that were marked as fixed by the same user who reported the problem. To reiterate, FMS tracks which reports have been fixed and which are still outstanding. This information is supplied by users themselves, so it cannot be entirely separated from participation more generally. Reporting a problem as fixed is itself a form of participation. If we see that a user has not marked a problem as fixed, this can either mean that the problem has not actually been fixed or that the problem has been fixed and the user has not updated the problem's status. By excluding reports that the user marked as fixed and only including reports that were marked as fixed by another user, we avoid contaminating the measurement of a user's participation with their own participation in the form of indicating a fix. In order to provide a sensible 'control group' for those who had their problems marked as fixed by others, we restrict the sample to include only problems reported in councils that had previously had another problem marked as fixed by another user.
As a result, we are comparing problems that were marked as fixed by other users with problems that at least had a chance of being marked as fixed by other users. To check whether the endogeneity is actually present, we also run a model on the full sample to compare the estimates to those on the restricted sample.

Cut-offs in the Models

The second issue, whereby the effect may occur prior to the cause, is addressed by choosing a cut-off where a fix counts if, and only if, it takes place before the cut-off, and subsequent reports are counted if, and only if, they take place after the cut-off. We then test the robustness of the model using different cut-offs. At the default 35-day cut-off, 28,723 reports were marked as fixed. This excludes 13,251 reports that were sent by users whose problem was marked as fixed after 35 days. Our estimates can therefore be considered conservative, since a second report submitted at, say, day 40 and preceded by a 'fix by others' on day 38 is coded as a 'no fix' in the data. This means that we are underestimating the effect due to the creation of an arbitrary cut-off that excludes all subsequent fixes. Using the restricted dataset (only problems marked as fixed by another user), at the default 35-day cut-off there are 3,655 reports sent by users whose first report was marked as fixed by another user. This excludes 4,134 reports that were sent by users whose problem was marked as fixed by another user after 35 days.

Revised Explanatory Variable

The main explanatory variable here is a dummy for whether the first report that a user submitted was marked as fixed within 35 days by another user (this user could be the council or another citizen). This is our key explanatory variable that captures whether a user's complaint was fixed in a timely manner, thereby demonstrating their objective efficacy.
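The coding rules above – treatment only if fixed by another user before the cut-off, outcomes only counted after the cut-off, and self-reported fixes excluded – can be sketched as a small Python routine. The records below are hypothetical, days are simplified to integers, and the additional council-level sample restriction described in the text is omitted for brevity:

```python
# Hypothetical report records: (user, day_submitted, fixer, day_fixed),
# where fixer is "self", "other", or None. Not real FMS data.
reports = [
    ("u1", 0,  "other", 20), ("u1", 50, None,  None),  # fixed by other < 35d, reports again
    ("u2", 0,  None,    None),                          # never fixed, never returns
    ("u3", 0,  "self",  10), ("u3", 40, None,  None),  # self-reported fix -> excluded
    ("u4", 0,  "other", 60), ("u4", 90, None,  None),  # fix by other, but after cut-off
]

CUTOFF = 35  # days; the paper's default, varied from 5 to 60 for robustness

def code_user(first, later):
    """Return (treated, any_later_report) for a user's first report, or None if excluded."""
    _, t0, fixer, t_fix = first
    if fixer == "self":
        return None  # drop users who marked their own first problem as fixed
    # Treatment: fixed by another user, and only if the fix precedes the cut-off.
    treated = fixer == "other" and t_fix is not None and t_fix - t0 <= CUTOFF
    # Outcome: subsequent reports count only if they come after the cut-off.
    any_later = any(t > t0 + CUTOFF for _, t, _, _ in later)
    return treated, any_later

by_user = {}
for rec in sorted(reports, key=lambda r: r[1]):  # order each user's reports by time
    by_user.setdefault(rec[0], []).append(rec)

coded = {u: code_user(rs[0], rs[1:]) for u, rs in by_user.items()}
coded = {u: v for u, v in coded.items() if v is not None}
print(coded)  # {'u1': (True, True), 'u2': (False, False), 'u4': (False, True)}
```

Note how u4 is coded as untreated even though the problem was eventually fixed: the fix arrived after the cut-off, which is exactly the conservative coding the text describes.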
The cut-off is arbitrary and represents a time by which nearly two-thirds of fixes are reported.14 For robustness we test different values ranging from 5 to 60 days.

The distribution of fix times for users' first reports is shown in Figure 4. The spike around the email reminder from FMS exists in the user fix data but is not present when fixes are marked by other users. This spike is the result of an automatically generated reminder that is sent out 28 days after the report is submitted. This constitutes an encouragement to participate for those that have not yet reported that the problem has been fixed. This means that there may be a compositional difference between those users who mark a problem as fixed before and after this message is sent out. This reminder email further complicates the use of the unrestricted sample (where fixes are marked by the users themselves).

14 47% of fixes in the reduced data (3,655 cases) take place before 35 days.

Figure 4. Average Time It Takes for a First Report to Be Marked as Fixed, for All Reports and Other Fixes.
* Note: Full data set of first reports (excluding 2014 reports). Time cut-off at 100 days.

Control Variables

In the regression model, we include the following control variables: date of first report, a dummy for whether the first report included a photo, and local authority dummies. Including the date of the first report submission is important since the FMS platform has changed over time in terms of engagement with councils, user uptake and design of the website. For instance, the average time it takes for a problem to be marked as fixed has been steadily declining (see Figure 8, in the appendix). Given that there are various time trends in the data (see statistical appendix), it is important to account for such trends
The second control variable is designed to capture how engaged the user was originally, i.e. whether or not they took the time to take and upload a photo. A potential factor that might bias estimates of the effect of government responsiveness on future participation is the quality of a user's report. In this sense, more engaged or conscientious individuals may tend to submit higher quality first reports and therefore tend to get a more positive government response. Conscientious people are also more likely to participate in general, so a spurious correlation would be generated between future participation and a positive response to an individual's first report. By controlling for indicators of initial engagement, we can reduce this potential bias. Finally, we also include dummy variables for each council in the model to reduce a source of variation that could otherwise introduce a confounding factor into the model. The reason here is that while third-party fixes should not be correlated with the user's tendency to participate, these fixes will be correlated with the pool of other available users who can report fixes and their tendencies to participate.

Results

Table 1 shows the different participation models: the short-term model that predicts whether users submit any further report, the long-term model that predicts the total number of reports a user eventually submits, and the naïve model that does not restrict the sample (in order to test whether this endogeneity is present).

Table 1. Different Participation Models: Logistic Regression and Negative Binomial Regression Coefficients.
                         Short-Term Model     Long-Term Model      Naïve Model
Model type               Logit                Negative binomial    Logit
Sample                   Restricted sample    Restricted sample    Full sample
(Intercept)              -2.816 (0.369)***    -0.543 (0.192)**     -2.350 (0.285)***
Fixed within 35 days      0.532 (0.049)***     0.685 (0.022)***     0.733 (0.018)***
Date (days)               0.010 (0.008)***    -0.030 (0.004)***    -0.005 (0.006)
Has a photo               0.172 (0.040)***     0.456 (0.018)***     0.187 (0.031)***
n                        78,666               78,666               112,940

* Note: Standard errors in parentheses. Significance levels * p < 0.05 ** p < 0.01 *** p < 0.001. All models include council-level dummy variables and category variables.

Short-Term Model

The short-term model in Table 1 shows the estimates from the logistic regression with a 35-day cut-off. The effect of government responsiveness is strong and positive.15 It shows that the fixed status of the first report is positively associated with the submission of a future report. Users whose first interaction with FMS was more recent are more likely to have submitted a subsequent report. Also, users who submitted a photo with their first report are much more likely to submit future reports, suggesting that provision of a photo is an effective proxy for an individual's underlying propensity to participate. As well as estimating the effect of government responsiveness on any future participation, we also look at the main model's robustness to different cut-offs. Figure 5 shows the marginal effect of government responsiveness on future participation at cut-offs from 5 to 60 days. The size of the marginal effect of having a problem fixed is around 6 percentage points across all cut-offs from 15 to 60 days.16 This effect must be considered in the context of only 11.3 percent of users sending another report (35 days or more after their first report).
This means that the effect of having a problem fixed translates into around a 54 percent increase in the probability of submitting an additional report compared to respondents whose first report is not fixed. As previously mentioned, this estimate is conservative, meaning the true effect is likely to be higher still.

15 These results are robust to the following specifications: running the model separately for every year FMS has been in operation; including fixed effects at the council level; excluding reports marked as fixed by other users; including self-reported fixes as fixed (the coefficient presents almost no change); and including self-reported fixes as unfixed (the coefficient gets smaller, as we would expect, but is still strongly positive).
16 The effect is somewhat smaller with a 5 or 10-day cut-off, which is most likely due to the small sample size and the contamination mechanism explained in the unrestricted model.

Figure 5. Marginal Effects In The Short-Term Participation Model (Restricted Sample).
* Note: all models include controls for photo provision, council dummies and date of first report.

Testing for Endogeneity

To test whether the sample restriction was necessary to avoid endogeneity, we look at the naïve model, which is run across the unrestricted sample of 112,940 first reports. Figure 6 shows that the magnitude of the effect of responsiveness increases substantially in this unrestricted sample compared with the restricted sample (0.73 versus 0.53).

Figure 6. Marginal Effects Plot Of The ‘Full Sample’ Short-Term Participation Model.
* Note: all models include controls for photo provision, council dummies and date of first report.

In Figure 6 we observe that the probability of submitting a second report is higher if the first report is fixed and that this is consistent across all cut-offs. However, the magnitude of the increased probability varies substantially across cut-offs.
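As a back-of-the-envelope check on how the logit coefficient maps onto these marginal effects (a sketch, not the paper's exact computation, which averages predicted probabilities over the covariate distribution; here we simply anchor the baseline at the reported 11.3 percent):

```python
import math

def inv_logit(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

beta_fixed = 0.532   # 'Fixed within 35 days', short-term model in Table 1
p_no_fix = 0.113     # reported share submitting another report when not fixed

# Linear predictor implied by the baseline probability, then shifted by
# the responsiveness coefficient.
eta = math.log(p_no_fix / (1.0 - p_no_fix))
p_fix = inv_logit(eta + beta_fixed)

marginal_pp = 100.0 * (p_fix - p_no_fix)              # percentage points
relative_pct = 100.0 * (p_fix - p_no_fix) / p_no_fix  # relative increase
```

These rough figures are in the same range as the roughly 6-percentage-point marginal effect and 54 percent relative increase reported in the text.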
For cut-offs from 5 to 25 days, the marginal effect of a reported fix increases steadily. However, there is a substantial drop-off between 25 and 30 days. The drop-off is most likely related to the 28-day reminder. The group of respondents whose problem was marked as fixed at 25 days consists entirely of people who proactively visited FMS to mark their problem as fixed. By contrast, the majority of those respondents whose problems are marked as fixed at 30 days are people who marked their problem as fixed only when prompted to do so by a reminder email. Since being proactive is also likely to predict further participation, the larger effect prior to 28 days is likely to be a selection effect. Consequently, the marginal effects after 28 days are probably closer to the true effect size, as they should be less influenced by the selection effect.

The increase in the size of the marginal effect at the beginning is likely to be due to the changing composition of the control group. At 5 days, the control group consists of everyone who will never have their problem fixed and everyone who will have their problem fixed sometime after 5 days. Since very few problems are fixed within 5 days, the majority of users whose problem is eventually fixed are in the control group. This would tend to dilute the effect of having a problem fixed. By 20 days, a lower proportion of those users who will ever have their problem fixed are in the control group. Consequently, the dilution effect reduces over time as the control group contains fewer people who will have their problem fixed. Choosing a cut-off after 28 days appears to reduce the endogeneity of reporting a problem as fixed. However, the magnitude of the effect remains higher than in the restricted-sample model, suggesting that some endogeneity is present even after 28 days.
This is not surprising: even marking a problem as fixed after having been reminded to do so is an indication of being participatory, and it is still plausible that a certain proportion of people whose problem is fixed do not take action to mark it as such, even after they are prompted to do so. Overall, the results of the naïve model strongly indicate that endogeneity is present and imply that the restricted sample is a better measure of the true effect of responsiveness.

The long-term impact of initial success in participation

To assess whether there is a long-term impact of initial success on future participation, we model the total count of reports submitted by a user between 35 days and a year after they submit their first report. The long-term model in Table 1 shows that the success of the first report does have a significant effect on encouraging future participation. The negative binomial model results differ from the main model in terms of the relative importance of initial success and underlying motivation. In the short-term model, the estimate for the first report including a photo is substantially smaller than the estimate for the first report being marked as fixed. However, in the long-term model, the estimate for having a problem fixed within 35 days is smaller than the photo estimate. This suggests that long-term participation is driven more by factors related to an individual's underlying participation propensity than by their initial experience with participation.

Alternative explanations

There are several possible alternative explanations for the results outlined here. One explanation may be that those users who submit high-quality and constructive reports are more likely to get the council to fix the problem, or at least to report back that the problem has been fixed. However, we do control for one key indicator of report quality – attaching a photo.
This does significantly predict future participation but does not greatly reduce the magnitude of the effect of a fix. Future work should focus on incorporating further indicators of report quality, such as the tone of a report. Another potential explanation is that some participants are willing to submit reports about more minor problems that are both more common and easier to fix. We tested whether this mechanism was present by including the detailed category of problem (from a list of 187) as dummy variables in our short-term fix model (rather than the 6 dummy variables we used in the main model). However, the inclusion of all these variables barely changed the responsiveness parameter (0.532 to 0.501). While there are other aspects of problems that are important, such a small impact of the category suggests that this mechanism is not driving our results.

Conclusions

The analysis presented above consistently shows that government responsiveness is positively associated with future participation via Fix My Street in the United Kingdom. While we cannot estimate causal effects per se, we have attempted to eliminate the most likely sources of endogeneity, and the evidence so far is entirely consistent with the hypothesis that objective efficacy affects future participation in this type of activity. We show both a short-term and a long-term model, contrasting the effect of the first experience on sending any subsequent reports with its effect on the total number of reports a user sends. The short-term model suggests that users whose first reported problem was fixed are 54 percent more likely to send at least one more report. The long-term model indicates that there is a small effect of the first report's success on the total number of reports that a user eventually submits. The literature on political participation has been dominated by the study of electoral participation.
The rational model originally developed to help understand the decision to turn out to vote has been widely questioned in the empirical literature. However, different modes of participation are associated with different considerations, and the type of participation considered here might be more amenable to a Downsian decision-theoretic framework. Participation in FMS is associated with extremely low costs and observable, targeted benefits. Getting a pothole in front of one's house fixed should be less susceptible to the behaviors and social dynamics expected in voting (e.g. free-riding, herding). This being the case, we might say that participation in FMS is over-determined. The paradox might rather be why so few people participate given that the benefits are so clear and the costs are so low. Abstention could be explained by a lack of awareness about the opportunity to report problems, the lack of problems to report on, or, as we argue, by subjective beliefs about external efficacy based on bad experiences with objective efficacy.

Our key finding is that objective efficacy (i.e. how much an individual can make an actual difference) appears to have a substantial effect on continued participation. This finding suggests that the calculus of participation may be an appropriate way of thinking about certain types of participation where the benefits and probability of success are easily observed. Yet government responsiveness does not fully determine whether a user continues participating. The majority of users who report a second time do so independently of whether a previous report has been addressed or not.
In total, 11.3 percent of FMS users whose problem was not reported as fixed by anyone within 35 days still submit another report within a year of their first submission, as opposed to 17.4 percent of FMS users whose first submission was successful.17 Furthermore, when it comes to sustained participation in the long term, our findings show that the effect of government responsiveness is smaller. This should not be surprising given the many documented instances of political participation where an individual's objective efficacy is virtually zero. Overall, these findings call for an understanding of participation as a multidimensional phenomenon in which government responsiveness, while an important predictor of future participation, is certainly not the only one.

17 Based on the predicted probability of submitting a second report, holding all variables at their means and fix at false.

Our findings also call for a rethinking of subjective efficacy. Much of the literature has tended to suggest that internal and external efficacy are generally long-term, stable attributes that are partially the result of socialization. However, if we assume that differences in objective efficacy affect participation through subjective efficacy, then at least some form of subjective efficacy can be changed by government responsiveness. This suggests that the stable nature of subjective efficacy measures within individuals may owe more to the fairly constant objective situation individuals face (mature democracies do not tend to change radically in the degree to which an individual can affect outcomes) than to the attitude being unchangeable.

Without further data it is not possible to assess whether individuals whose problems are fixed interpret this as proving their internal efficacy (how competent they are to participate) or their external efficacy (how likely the system is to respond to their action).
But it seems likely that at least one of these changes in response to the objective signal from the local government. Future research should examine whether this change is domain-specific (‘I now believe that my actions will have an effect on getting the council to fix potholes’) or general (‘I now believe that the political system will be more responsive to my actions’).

The model in this paper focuses only on the user's first experience and its impact on any subsequent participation. We chose this stage so that the decisions across different users are comparable, and because the majority of users submit just one report, meaning this is where the majority of dropout occurs. However, there are also ‘super-users’ who submit many reports, and it is important that future work looks at the factors that influence their initial and continued participation.

Responsiveness could also potentially affect total participation in two further ways. First, responsiveness is likely to affect existing users' recruitment of new participants through word of mouth. Second, people deciding whether to submit a first report may base their decision partially on their perceived chance of success, which will be affected by the experience of other nearby users (communicated either through being told directly or by looking at the success of reports on the FMS website). Future work should examine both of these mechanisms linking government responsiveness to future participation.

This paper demonstrates the value of using incidentally collected data to examine citizen behavior. By using these records, we directly observe behavior rather than relying on self-reported survey measures, which have consistently been shown to suffer from poor recollection and social desirability bias. We also obtain accurate information about the timing of the observed actions, which would be impossible if relying solely on respondents' own recollections.
An additional advantage of incidentally collected data is that the only cost involved is the time taken to collate it from the existing databases. Finally, the incidentally collected data gives us full sample coverage – it is a census of all FMS users – and is therefore not subject to non-response bias. Nonetheless, we believe that future work will benefit from combining data sources such as these with attitudinal data on respondents, which can help to further examine the mechanisms through which objective efficacy affects future participation.

The previous focus on subjective over objective indicators of efficacy is not just a matter of measurement. Fundamentally, the question is whether getting people to participate in politics requires making them feel empowered or actually giving them power. This paper suggests that giving individuals power and genuine efficacy can encourage greater participation.

Appendices

Summary Statistics

Table 2. Summary Statistics For Fix My Street Data.

            Fix times (days)   Report includes photo   Problem fixed
n           159,539            399,364                 399,364
mean        65.69              0.12                    0.40
median      28.40
sd          148.57
min         -0.01
max         2,516.77
range       2,516.78
skew        5.84
kurtosis    41.10

* Note: Data as of February 2014.

Figure 7. Fix My Street Reporting Per Weekday.

Fixes Summary Statistics

Figure 8. Mean And Median Days For A Problem To Be Fixed Over Time (Excluding 2013 And 2014) For First Reports.
* Note: Full data set (excluding 2013 and 2014 reports).

Robustness tests

Differential impact of government responsiveness depending on prior participation

It is plausible that government responsiveness might have a differential effect on those users who are already highly participatory and those who are not as participatory.
It could be that the effect is weakened among users who are already participatory because they do not need further inducement, or that the effect is stronger among these users because the underlying participatory attribute and government responsiveness reinforce one another beyond the simple sum of the two effects. To test these propositions, we add an interaction term to the restricted-sample logistic regression model, interacting the presence of a photo with the problem being fixed within 35 days. Table 3 shows that the interaction term is very close to zero and does not reach statistical significance. This does not provide support for the claim that there is a differential effect of government responsiveness between those who are already participatory and those who are less participatory.

Table 3. Logistic Regression Predicting At Least One Report After 35 Days (35-Day Cut-Off And Restricted Sample).

                                  Estimate   Std. Error
(Intercept)                       -2.809     0.369 ***
Fixed within 35 days               0.551     0.051 ***
Date (days)                        0.000     0.000 ***
Has a photo                        0.189     0.041 ***
Has a photo * fixed in 35 days    -0.216     0.152

* Note: Significance levels * p < 0.05 ** p < 0.01 *** p < 0.001. All models include council-level dummy variables and category variables.

References

Abers, Rebecca. 2001. “Practicing Radical Democracy: Lessons from Brazil.” disP-The Planning Review 37 (147): 32–38.
Abramson, Paul R., and John H. Aldrich. 1982. “The Decline of Electoral Participation in America.” The American Political Science Review, 502–21.
Bendor, Jonathan, Daniel Diermeier, David A. Siegel, and Michael M. Ting. 2011. A Behavioral Theory of Elections. Princeton University Press.
Blais, André. 2000. To Vote or Not to Vote?: The Merits and Limits of Rational Choice Theory. University of Pittsburgh Press.
———. 2006. “What Affects Voter Turnout?” Annual Review of Political Science 9: 111–25.
Campbell, Angus, Philip E. Converse, Warren E. Miller, and Donald E. Stokes. 1960. The American Voter.
Cox, Gary W., and Michael C. Munger. 1989. “Closeness, Expenditures, and Turnout in the 1982 U.S. House Elections.” The American Political Science Review 83 (1): 217–31.
Craig, Stephen C. 1979. “Efficacy, Trust, and Political Behavior: An Attempt to Resolve a Lingering Conceptual Dilemma.” American Politics Research 7 (2): 225–39.
Downs, Anthony. 1967. “A Realistic Look at the Final Payoffs from Urban Data Systems.”
Dudley, Robert L., and Alan R. Gitelson. 2002. “Political Literacy, Civic Education, and Civic Engagement: A Return to Political Socialization?” Applied Developmental Science 6 (4): 175–82.
Easton, David, and Jack Dennis. 1967. “The Child’s Acquisition of Regime Norms: Political Efficacy.” The American Political Science Review, 25–38.
Fauvelle-Aymar, Christine, and Abel François. 2006. “The Impact of Closeness on Turnout: An Empirical Relation Based on a Study of a Two-Round Ballot.” Public Choice 127 (3-4): 461–83.
Finkel, Steven E. 1985. “Reciprocal Effects of Participation and Political Efficacy: A Panel Analysis.” American Journal of Political Science 29 (4): 891–913.
Fowler, James H., Laura A. Baker, and Christopher T. Dawes. 2008. “Genetic Variation in Political Participation.” American Political Science Review 102 (02): 233–48.
Fung, Archon, and Erik Olin Wright. 2001. “Deepening Democracy: Innovations in Empowered Participatory Governance.” Politics and Society 29 (1): 5–42.
Gerber, Alan S., Donald P. Green, and Christopher W. Larimer. 2008. “Social Pressure and Voter Turnout: Evidence from a Large-Scale Field Experiment.” American Political Science Review 102 (01): 33–48.
Green, Donald P., and Ian Shapiro. 1994. Pathologies of Rational Choice Theory: A Critique of Applications in Political Science. Cambridge University Press.
Gret, Marion, and Yves Sintomer. 2005. The Porto Alegre Experiment: Learning Lessons for Better Democracy. Zed Books.
King, Stephen F., and Paul Brown. 2007.
“Fix My Street or Else: Using the Internet to Voice Local Public Service Concerns.” In Proceedings of the 1st International Conference on Theory and Practice of Electronic Governance, 72–80.
Lyons, Schley R. 1970. “The Political Socialization of Ghetto Children: Efficacy and Cynicism.” The Journal of Politics 32 (02): 288–304.
McPherson, J. Miller, Susan Welch, and Cal Clark. 1977. “The Stability and Reliability of Political Efficacy: Using Path Analysis to Test Alternative Models.” The American Political Science Review, 509–21.
Mill, John Stuart. 1991. “Considerations on Representative Government.”
Norris, Pippa. 2002. Democratic Phoenix: Reinventing Political Activism. Cambridge University Press.
Olson, Mancur. 1971. “The Logic of Collective Action: Public Goods and the Theory of Groups.”
Pasek, Josh, Lauren Feldman, Daniel Romer, and Kathleen Hall Jamieson. 2008. “Schools as Incubators of Democratic Participation: Building Long-Term Political Efficacy with Civic Education.” Applied Developmental Science 12 (1): 26–37.
Pratchett, Lawrence. 2004. “Local Autonomy, Local Democracy and the ‘New Localism.’” Political Studies 52 (2): 358–75.
Riker, William H., and Peter C. Ordeshook. 1968. “A Theory of the Calculus of Voting.” The American Political Science Review, 25–42.
Rosenstone, Steven, and John M. Hansen. 1993. “Mobilization, Participation and Democracy in America.”
Shachar, Ron, and Barry Nalebuff. 1999. “Follow the Leader: Theory and Evidence on Political Participation.” American Economic Review, 525–47.
Smets, Kaat, and Carolien Van Ham. 2013. “The Embarrassment of Riches? A Meta-Analysis of Individual-Level Research on Voter Turnout.” Electoral Studies 32 (2): 344–59.
Stratmann, Thomas. 2005. “Some Talk: Money in Politics. A (Partial) Review of the Literature.” In Policy Challenges and Political Responses, 135–56.
Tam Cho, Wendy K. 1999.
“Naturalization, Socialization, Participation: Immigrants and (Non-)Voting.” The Journal of Politics 61 (04): 1140–55.
Verba, Sidney, Kay Lehman Schlozman, and Henry E. Brady. 1995. Voice and Equality: Civic Voluntarism in American Politics. Vol. 4. Cambridge University Press.
Wampler, Brian. 2010. Participatory Budgeting in Brazil: Contestation, Cooperation, and Accountability. Penn State Press.