American Economic Review 2017, 107(6): 1535–1563
https://doi.org/10.1257/aer.20140774

Report Cards: The Impact of Providing School and Child Test Scores on Educational Markets†

By Tahir Andrabi, Jishnu Das, and Asim Ijaz Khwaja*

We study the impact of providing school report cards with test scores on subsequent test scores, prices, and enrollment in markets with multiple public and private providers. A randomly selected half of our sample villages (markets) received report cards. This increased test scores by 0.11 standard deviations, decreased private school fees by 17 percent, and increased primary enrollment by 4.5 percent. Heterogeneity in the treatment impact by initial school test scores is consistent with canonical models of asymmetric information. Information provision facilitates better comparisons across providers, and improves market efficiency and child welfare through higher test scores, higher enrollment, and lower fees. (JEL D83, H75, I21, I28, O15, O18)

It is a widely held belief that providing information to citizens is a powerful tool for improving public services. This view is particularly prevalent in the education sector, where advocates claim that informing parents about school performance is key to improving school quality (World Bank 2004; Hoxby 2002). The empirical evidence on the impact of information provision on quality, however, is mixed. Depending on the setting, the extent to which the information was bundled with other accountability measures, and the type of response that was studied, the impact of information can range from zero to highly positive. Worryingly, high-stakes information can also create incentives for manipulation through the selection of more desirable consumers (Dranove et al. 2003) or through cheating and direct manipulation (Jacob and Levitt 2003; Figlio and Getzler 2006).

* Andrabi: Pomona College, 425 N College Avenue, Claremont, CA 91711 (e-mail: tandrabi@pomona.edu); Das: Development Research Group, World Bank, 1818 H Street NW, Washington, DC 20433 (e-mail: jdas1@worldbank.org); Khwaja: Harvard Kennedy School, 79 JFK Street, Cambridge, MA 02138 (e-mail: akhwaja@hks.harvard.edu). The data used in this paper are part of the multiyear LEAPS project. We are grateful to Khurram Ali, Sarah Bishop, Alexandra Cirone, Ina Ganguli, Sean Lewis-Faupel, Emily Myers, Paul Novosad, Niharika Singh, and Tristan Zajonc for research assistance and to the LEAPS team for all their hard work. We are also grateful to Abhijit Banerjee, Prashant Bharadwaj, Pinar Dogan, Quy-Toan Do, Matthew Gentzkow, Justine Hastings, Robert Jensen, Maciej Kotowski, Nolan Miller, Rohini Pande, Doug Staiger, Tara Viswanath, and seminar participants at Boston University, BREAD (Duke), Brown, Chicago, Cornell, Dartmouth, NBER, Stanford, Syracuse University, UC Berkeley, UCLA, University of Illinois, University of Wisconsin at Madison, and Yale for comments. All errors are our own. This paper was funded through grants from the KCP trust funds and the South Asia Human Development Group at The World Bank. The findings, interpretations, and conclusions expressed in this paper are entirely those of the authors. They do not necessarily represent the view of the World Bank, its executive directors, or the countries they represent. The authors declare that they have no relevant or material financial interests that relate to the research described in this paper.
† Go to https://doi.org/10.1257/aer.20140774 to visit the article page for additional materials and author disclosure statement(s).

This paper contributes to the literature by studying the experimental impact of providing information in the presence of both a public sector and a (competitive) private market for schooling in a low-income country. This is important for two reasons. First, such market settings in education are increasingly common for many low-income countries.1 Second, in canonical models of asymmetric information, prices adjust endogenously to mitigate the adverse impacts of poor information. Market-determined prices therefore allow us to assess predictions derived from such models and better understand the impact of information provision in these complex but realistic environments.

1 In India and Pakistan, 40 percent of primary enrollment is in private schools (ASER India 2012; ASER Pakistan 2012); across low-income countries, it increased from 11 to 22 percent between 1990 and 2010 (Baum et al. 2014).

We analyze data from a market-level experiment that increased information exogenously in 56 of 112 Pakistani villages through the dissemination of report cards with school- and child-level test scores. These report cards, given to both households and schools in treatment villages, contained the test scores of children and the mean test scores of all schools in the village. Our sampled villages contain both public and private schools, with an average of 7.3 schools per village. Further, each village can be regarded as an island economy: we can confirm in the data that children rarely attend schools outside the village. Combined with limited central regulation, this implies that each village is its own schooling market with private school prices and quality determined locally. Since the village is also our unit of treatment, we are able to study the average impact of information on the schooling market as a whole as well as the heterogeneous impact on particular schools. To our knowledge, this is the first experiment in education on the impact of information where both the treatment and the outcome measures are at the level of the market, rather than the school or the child.

We first confirm that parental knowledge improved as a result of the intervention. Perceptions of school quality became better aligned with school test scores in treatment compared to control villages. We then demonstrate the impact of information on educational outcomes. First, learning improved. In treatment villages, average test scores increased by 0.11 standard deviations, reflecting an additional gain of 42 percent over the test score increase in control villages. Second, (private) school fees declined in treatment villages by 17 percent relative to schools in control villages. Third, overall enrollment among primary-age children rose by 3 percentage points in treatment villages. Fourth, private schools with low baseline test scores were more likely to shut down in treatment villages, with their students shifting into alternate schooling options.2 This range of impacts is substantial relative to a variety of (typically costlier) educational interventions in other low-income environments (McEwan 2015).

2 The fee and school closure results are for private schools only as public schools do not charge fees and rarely shut down.

The observed decline in prices in treatment villages, which may seem counterintuitive as test scores improved, is consistent with existing models of optimal pricing and quality choice in markets with asymmetric information (Wolinsky 1983; Shapiro 1983; Milgrom and Roberts 1986).
These models recognize that in the absence of third-party information, consumers receive partially informative signals of firm (school) quality,3 and firms can use costly investments to locate at different points in the quality spectrum, resulting in separating equilibria. Such equilibria are supported by increasing mark-ups for higher quality schools. When information improves, such as through the provision of report cards, the mark-up declines, with greater declines for higher quality schools, reducing the price-quality gradient.

3 In our context, parents rely on informal monitoring, the schools' own tests, and their own assessments of child performance to judge school quality. Our measure of quality is test scores in English, mathematics, and Urdu; we discuss the rationale and limitations of this in the conclusion.

The final quality distribution will depend on the (parental) valuation of school quality in the population, as schools trade off the relative benefits of distorting quality choice versus coping with lower demand at the higher price induced by a separating equilibrium. Under plausible assumptions on the distribution of valuations, it can be shown that with better information, quality will increase among initially low-quality schools, but such responses will be muted (and may even be negative) among initially high-quality schools.

Baseline and experimental evidence suggest that schools were initially in a separating equilibrium: schools' baseline test scores were highly correlated with both their baseline price and households' perception of school quality, even after accounting for village fixed effects and a set of parental attributes. Given that parents (correctly) update their beliefs as a result of the intervention, we can directly test the price and test score predictions of the model by exploring heterogeneity in impacts by baseline school test scores. We find support for both. The price-test score gradient declines in treatment villages, driven by greater price declines for initially high scoring private schools. We also find that the test scores of children in initially low scoring private schools rose by 0.31 standard deviations relative to the control, while those in initially high scoring schools did not change.

Finally, we also find a test score gain of 0.09 standard deviations for the average child in public schools in response to the intervention. Although public schools face few market or administrative disciplining mechanisms, social (nonprice) disciplining actions among the community may alter teacher behavior and quality in these schools. We provide evidence consistent with such a channel by demonstrating a significant increase in interactions between parents and schools in treatment villages following the distribution of report cards.

In terms of the channels driving these impacts, data from household surveys show little change in mean household investments of time and money in children, apart from a significant increase in parent-school interactions. Instead, the combination of test score and price changes suggests that schools altered their investment as a consequence of the report cards. Using detailed school surveys, we do find a modest increase in teacher qualifications in public schools and an increase in the time spent on schoolwork at initially low scoring private schools.
Further, test score gains and price declines were higher among private schools in more competitive market settings, suggesting that the cross-school comparison enabled by the report cards created greater pressure to perform for these schools.

To situate our contribution, it is useful to think of existing studies as falling into two broad groups (for a review, see Dranove and Jin 2010). One group provides experimental results in settings where prices are administratively determined and school-level responses are unlikely in the short term. Banerjee et al. (2010) and Hastings and Weinstein (2008) assess experimentally whether information leads to consumers demanding better services from public providers. Banerjee et al. (2010) do not find any impact in India when only village-level information is given to parents about the performance of their children. In contrast to our study, their intervention did not provide scores for each school, limiting the comparability across schools.4 Hastings and Weinstein (2008) show that providing parents with school rankings leads them to change their declared choice toward higher scoring schools when such schools are nearby, leading to higher test scores. Their main focus is to assess whether household nominations are responsive to information about school test scores.5 Our paper builds on this work by allowing for a richer set of comparisons and responses, both among households and schools, when there is an improvement in the quality signal and schools can adjust prices in response.

4 Banerjee et al. (2010) do, however, find increases in test scores when information is bundled with a teaching intervention, suggesting, as we also find, that engaging teachers may be another important element of impacting learning.
5 Further afield, Bjorkman and Svensson (2009) show that information bundled with additional accountability measures lowers child mortality in Uganda.

A second group of studies examines similar (price-setting) market settings but using nonexperimental approaches. Camargo et al. (2014) and Mizala and Urquiola (2013) use a regression discontinuity design where information is revealed for some schools that pass a threshold; in both these cases, information is only partially revealed. Mizala and Urquiola (2013), for example, study an environment where there is already extensive test score information on all schools and parents receive an extra signal on some schools and no signal on others. They find little further impact of this program on enrollments or prices, but rightly caution that they cannot capture the effect of new information in markets since their comparison is not across markets with information on all schools in one market and no information in others. Camargo et al. (2014) again use a regression discontinuity design for Brazil and find similar results to ours: large gains for initially low performing private schools and smaller gains for initially high performing private schools. Finally, Jin and Leslie (2003) study the impact of hygiene report cards for restaurants and report similar impacts, with an increase in the quality of initially low performing restaurants and an increase in restaurant revenue in response to a positive hygiene grade. An important difference between Jin and Leslie (2003) and our setting is that prior to the arrival of information, restaurants are in a pooling equilibrium whereby revenue is unresponsive to (changes in) hygiene; in this context, the arrival of information increases the sensitivity of revenue to the reported grade.
It is nevertheless noteworthy that the pattern of responses in Camargo et al. (2014) and Jin and Leslie (2003) is similar to ours.

In short, we show that prices and quality are key components of how markets react when information improves, and that the heterogeneous patterns of price changes are consistent with the predictions of a model of asymmetric information. These insights inform a more nuanced understanding of the impact of informational provision in markets with multiple (public and private) providers and how impact may vary based on the preexisting informational environment. Information provision in our setting improves consumer welfare by lowering mark-ups and inducing lower quality schools to improve quality. Public schools respond positively by raising quality, and overall village enrollment increases.

The remainder of the paper is structured as follows. Section I provides details on the data, the context, and the report card intervention. Section II describes the conceptual and empirical framework. Section III presents the findings. Section IV discusses these results further and concludes.

I. Data, Context, and Intervention

Private schooling has increased dramatically in low-income countries, from an 11 percent market share in 1990 to 22 percent in 2010 (Baum et al. 2014). In Pakistan, the setting for this study, the number of private schools increased sharply from 3,800 in 1983 to 47,000 in 2005; such schools currently account for 40 percent of all primary school enrollment. These private schools are coeducational and instruct children in English, mathematics, and Urdu using a curriculum and textbooks similar to those in public schools. In contrast to public schools, however, private schools face little government oversight or regulation and operate in (de facto) lightly regulated markets with no administrative guidance on pricing. Sixty percent of the rural population in the province we study resided in a village with at least one private school in 2001, and villages typically have multiple (public and private) schools. Thus, parents face substantial school choice. We designed our study around the particular opportunities and challenges represented by this increasingly common choice-rich environment.

A. Data

The data come from the Learning and Education Achievement in Punjab Schools Project (LEAPS), a multiyear study of education in Pakistan. For the LEAPS project, we randomly sampled 112 villages across 3 districts in the Punjab province, Pakistan's largest, with a population of 70 million in 2010. The list frame for the random sample was all villages with at least one private school in 2001, therefore excluding villages with no private schools at all. Using a household census of schooling choices, we verified that these villages were effectively closed markets, with children attending the schools in the village and school populations drawn from children in the village.
We included all schools in these villages that offered primary education in our sample, resulting in a total sample of 823 public and private schools. Online Appendix I.A provides further details on the sample and a discussion of what we mean by closed markets.

The average village in our sample therefore has 7.3 schools, 4.4 (sex-segregated) public and the remaining 2.9 (coeducational) private. Parents can enroll their children in any school of their choice, as long as the public school (if chosen) is sex appropriate. Therefore the number of schools a given child is eligible for is similar across public and private schools. In practice, the location patterns of public and private schools imply that in most cases effective choice is between a single public school and multiple private schools. This is because public schools tend to locate on the outskirts of the village while private schools are closer to the densely populated village center (see online Appendix I.A, Figure 1) and because there is a strong negative effect of distance to school on enrollment (see Andrabi et al. 2007 and Burde and Linden 2013).6

6 These location patterns reflect a policy whereby land for the public school had to be given by the village, and private land is cheaper on the outskirts.

In each of these villages we conducted a series of annual surveys starting in 2004. First, we tested around 12,000 children who were in Grade 3 in the initial survey round and continued to track and test them in each subsequent round. These children were tested in each round using norm-referenced tests in English, Urdu, and mathematics, with test scores equated and standardized across years using Item Response Theory (see online Appendix I.B; a toy illustration of the equating idea appears at the end of this subsection). Second, we conducted annual surveys in all schools. These surveys contained a number of modules, including a facility survey, roster data on around 4,900 teachers, and detailed surveys for head teachers and Grade 3 teachers. Third, for all tested grades, we administered a short child questionnaire to 10 randomly selected children (6,000 children) to collect household-level information.

We also conducted surveys with parents separately from the schools. This household questionnaire, with an extended focus on educational investments, was fielded for 1,807 randomly selected households in the sample villages, stratified to oversample students eligible by age for Grade 3 (the tested grade). These three data sources allow us to triangulate self-reported data from multiple sources and investigate the role of school and household inputs. We use data from the first two rounds of the LEAPS surveys, augmented to check for longer-run effects with data from the third round. Online Appendix I.B provides further details on the content and timing of the different school- and household-based surveys.

On average, there are 631 households in a sampled village with an adult literacy rate of 37.3 percent (Table 1). Among children between the ages of 5 and 15, baseline enrollment rates (public and private) were 76.2 percent for boys and 64.8 percent for girls in 2004. Public schools enroll an average of 184 children, and private school average enrollment is 143 children; in the tested Grade 3, 20 and 14 children are enrolled on average in public and private schools, respectively. The enrolled children in Grade 3 are on average 9.7 years old and 55.7 percent are male. Finally, just over one-half of the teachers in these schools report more than a secondary education.
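For intuition on the equating mentioned above (the actual LEAPS procedure is described in online Appendix I.B), here is a toy one-parameter (Rasch) IRT fit on simulated responses. Everything below is illustrative: the data are synthetic, and in a real equating exercise the difficulties of anchor items common to both years would be held fixed so that both years' ability estimates sit on one scale.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Synthetic responses: 200 children x 10 items under the Rasch model,
# where P(correct) = sigmoid(theta_child - b_item).
n_kids, n_items = 200, 10
theta_true = rng.normal(0, 1, n_kids)
b_true = np.linspace(-1.5, 1.5, n_items)
prob = 1 / (1 + np.exp(-(theta_true[:, None] - b_true[None, :])))
resp = (rng.random((n_kids, n_items)) < prob).astype(float)

def neg_loglik(params):
    # Joint Bernoulli log-likelihood over all child-item responses.
    theta, b = params[:n_kids], params[n_kids:]
    logits = theta[:, None] - b[None, :]
    return -(resp * logits - np.log1p(np.exp(logits))).sum()

res = minimize(neg_loglik, np.zeros(n_kids + n_items), method="L-BFGS-B")
theta_hat = res.x[:n_kids]

# Standardize the estimated abilities, analogous to the paper's
# standardized test scores (mean 0, SD 1).
z = (theta_hat - theta_hat.mean()) / theta_hat.std()
print(z[:5])
```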
B. Patterns in the Baseline Data

Households spend 3–5 percent of their monthly budget on each child's schooling, with private school fees averaging about Rs 1,200 (approximately US$20) per year. Analysis of choice suggests that while parents take into account school fees and infrastructure, the distance to school remains a major determinant of their choices. For example, increasing the distance of the nearest school from the home by 500 meters (adjusting for demographics) reduces enrollment by 1.5 to 3 percentage points for boys and 9 to 11 percentage points for girls (Andrabi et al. 2007; Alderman, Orazem, and Paterno 2001; and Burde and Linden 2013), an effect that is also replicated in the specific choice of school (Carneiro, Dias, and Reis 2016). The importance of distance, documented across numerous studies, underscores why these villages are effectively closed educational markets, thereby allowing us to study market-level interventions.

Table 1—Baseline Summary Statistics

                                                        Mean       25th pctile  Median    75th pctile  SD       Observations
Panel A. Village level
Village wealth (median monthly expenditure, in rupees)  4,641.5    3,689.3      4,635.2   5,611.5      1,575.2  112
Number of households in village                         631.3      405.5        561.0     771.0        383.9    112
Percent of adults (>24) literate in village             37.3       27.1         37.3      46.0         11.9     112
Village enrollment % (all)                              70.8       61.8         75.5      82.5         16.9     112
Village enrollment % (boys)                             76.2       68.2         81.7      86.8         15.6     112
Village enrollment % (girls)                            64.8       54.0         70.7      79.8         19.7     112
Herfindahl index of schools in village                  0.194      0.143        0.177     0.233        0.076    112

Panel B. School level
Public schools
  School average test score                             −0.252     −0.679       −0.201    0.179        0.687    485
  Number of students enrolled at school (all grades)    183.7      76.0         130.0     224.0        174.7    485
  Number of students enrolled at school (grades 1–5)    99.9       40.0         80.0      141.0        79.0     483
  Grade 3 enrollment at baseline (number)               19.9       8.0          16.5      28.0         16.8     484
  Percent of teachers with more than a
    secondary education                                 0.568      0.375        0.611     0.800        0.320    485
Private schools
  School fees (rupees per year)                         1,184.4    650.0        1,060.0   1,350.0      811.5    289
  School average test score                             0.488      0.173        0.504     0.854        0.531    303
  Number of students enrolled at school (all grades)    142.7      72.0         115.0     180.0        99.4     303
  Number of students enrolled at school (grades 1–5)    73.0       37.0         58.0      94.0         51.7     302
  Grade 3 enrollment at baseline (number)               14.2       6.0          11.0      20.0         10.9     302
  Percent of teachers with more than a
    secondary education                                 0.542      0.333        0.571     0.750        0.264    285

Panel C. Child level
Child average test score                                −0.018     −0.548       0.090     0.619        0.913    12,110
Female child                                            0.443      0.000        0.000     1.000        0.497    13,735
Child age                                               9.7        9.0          9.5       10.3         1.5      13,733
Child time in school or school prep
  (minutes per day)                                     420.8      390.0        420.0     480.0        65.3     983
Child's time spent on school work, not in school
  (minutes per day)                                     96.7       60.0         60.0      120.0        61.5     982
Perception of school quality (Likert scale: 1 to 5)     3.3        3.0          3.3       3.6          0.5      619
Parents' spending on school fees (rupees)               302.5      24.0         24.0      240.0        531.3    954
Parents' education spending, other than school fees
  (rupees)                                              969.1      420.0        720.0     1,200.0      822.2    988
Parents' time spent teaching child (hours per week)     3.4        0.0          0.0       7.0          5.2      964

Notes: This table presents baseline summary statistics for outcome and control variables in the main regression tables and the online Appendix tables, as well as other background variables mentioned in the text of the paper.
Panel A displays variables at the village level. All variables have 112 observations, which is the number of villages in our sample. Panel B displays variables at the school level, separated by type of school, public or private (we do not report NGO schools because there are only 16 such schools). There are 485 public and 303 private schools in our sample in the baseline year; missing data reduce the number of observations in some cases. Panel C displays variables at the child level. These variables derive from three different sources: (i) child roster data from testing at the school (variables with greater than 12,000 observations); (ii) child and parental data from the household survey for all children in the household data that were matched to the school testing roster (variables with observations in the 900s); and (iii) household data on perceptions averaged at the school level (variable with 619 observations—we have fewer than 800 observations, the number of schools in the sample, because parents were not asked to provide perceptions for schools they did not know about and could respond with "don't know").

There are strong indications that the environment is competitive, with schools offering vertically differentiated products. Private schools locate within denser settlements in villages; the average private school has at least three other schools around it; the Herfindahl index is consistent with a competitive environment; and the median profits of Rs 14,580 for private schools are similar to the wages of a male teacher with secondary education, and therefore the appropriate option value if the entrepreneur were to shut down the school.7 Although the student population differs slightly across schools, there is little evidence that these are segmented markets, either by wealth, parental education, or social variables such as caste (Andrabi et al. 2007).

While learning levels are generally low (Andrabi et al. 2007), there is substantial variation in test scores and prices, with most of the variation across schools and within villages. Variation in test scores within village accounts for 83 percent of the total test score variation in our data. Part of this variation is driven by differences across public and private schools, but even across private schools, the interquartile range for test scores lay between −0.08 and 0.78 standard deviations in mathematics, with similar results for other subjects. Similarly, within the same village there are large differences in the prices offered by private schools. Average prices are low, with monthly fees typically lower than the daily wage rate for unskilled labor (PEIP 2000).8 The interquartile range of prices for private schools is between Rs 650 and Rs 1,350 (per year), with 45 percent of the price variation within rather than across villages.

7 The Herfindahl index is 0.20 for the sampled villages. With an average of 7 schools in every village, exactly equal enrollment shares (the most competitive scenario) imply a Herfindahl value of 0.14.
8 Low fees reflect low teachers' salaries in the private sector, which are 20–25 percent of those in the public sector. We have shown that this model relies on the availability of locally resident secondary-school-educated women in a context with limited geographic and occupational mobility for women (Andrabi, Das, and Khwaja 2013).
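To make the competition benchmark in footnote 7 concrete, here is a minimal sketch of the Herfindahl calculation; the enrollment numbers are illustrative, not drawn from the LEAPS data.

```python
# Herfindahl index: sum of squared enrollment shares across a village's schools.
# With 7 equally sized schools, HHI = 7 * (1/7)^2 = 1/7 ≈ 0.143, the
# most-competitive benchmark cited in footnote 7.

def hhi(enrollments):
    total = sum(enrollments)
    return sum((e / total) ** 2 for e in enrollments)

equal_shares = [100] * 7              # 7 schools, identical enrollment
print(round(hhi(equal_shares), 3))    # 0.143

# A hypothetical skewed village moves the index toward the sample mean
# of roughly 0.19-0.20 reported in Table 1.
skewed = [260, 150, 120, 90, 70, 60, 50]
print(round(hhi(skewed), 3))          # 0.193
```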
Test scores and fees are positively correlated at baseline. A one-standard-deviation increase in baseline test scores is associated with a 0.45-standard-deviation (Rs 369) increase in school fees (Table 2, column 1). The result is similar if we include village fixed effects and demographic characteristics, including household wealth and education. Results are also similar if we focus instead on value-added test scores (in control villages), with a one-standard-deviation increase in value-added associated with a Rs 332 increase in school fees.9 Test scores predict school fees better than infrastructure. However, infrastructure also matters, with one-standard-deviation increases in the basic and advanced infrastructure indices associated with Rs 55 and Rs 141 (0.07 and 0.17 standard deviations) higher fees, respectively (Table 2, column 2).

9 The value-added specification is relevant only to the control sample, since the treatment effect will be subsumed in villages that received the report cards. In this case, the sample size is smaller and though the coefficient on test scores is large, precision declines.

Table 2—Fee-Test Score Relationship and Impact on Perceptions

                                      Fees (Year 1)                       Perception
                                      (1)          (2)                    Year 1 (3)   Year 2 (4)
School score                          369.2        316.3                  0.216        −0.0279
                                      (95.07)      (107.2)                (0.0239)     (0.0347)
School fee                                                                0.000129     −2.26E-05
Baseline perception                                                                    0.228
                                                                                       (0.0365)
Report card                                                                            0.00798
                                                                                       (0.0364)
Report card × school score                                                             0.114
                                                                                       (0.0438)
Basic infrastructure index                         54.93
                                                   (33.08)
Extra infrastructure index                         141.1
                                                   (79.67)
Controls                              Village      Village fixed effects  Village      Village
Observations                          289          289                    610          588
R2                                    0.337        0.137                  0.116        0.315
Baseline dependent variable (mean)    1,184.360    1,184.360              3.288        3.275

Notes: This table presents results on the association between school fees and test scores, and some findings on perception of school quality. Columns 1–2 show the relationship between school characteristics and school fees for private schools; there are 303 private schools in our sample, but we have fewer observations due to missing data. The dependent variables in columns 3 and 4 are constructed by taking the average of all parental perceptions, ranked on a five-point scale, for a given school. This ensures schools are equally represented (one observation per school). Column 3 shows the correlation between school test score and parental perception in Year 1. Column 4 considers perception in Year 2 (open schools only) to see whether report cards had an impact on this. We have fewer than 800 schools in column 4 because, when calculating the average perception of a school, we restrict to only those household-school combinations where we have perceptions data for both rounds. Our results are robust to alternative restrictions (online Appendix Table III). All regressions cluster standard errors at the village level and include district fixed effects. All regressions include the baseline of the outcome variable as a control as well as, where appropriate, additional village controls (village wealth [median monthly expenditure], number of households in village, Herfindahl index of schools in village, and percent of adults [>24] literate in village). Baseline dependent variable (mean) displays the baseline mean for the sample for all outcome variables.

C. Intervention and Experimental Protocol

In 2004, we tested all children in Grade 3 in all the schools in our sample. We then experimentally allocated one-half of the villages (within district stratification) to receive report cards on child and school performance. The two-page report card reported raw test scores for the child in English, mathematics, and Urdu as well as her quintile rank across all tested children on the first page. The second page reported scores for all the schools in the village, with their quintile rank (across all schools tested in the sample) and the number of children tested. Online Appendix Figure 4 is a sample of a (translated) report card.
The report cards were delivered to schools and parents at a school meeting, which confined itself to explaining the information on the report cards and did not advocate or discuss any particular plan of action. The meetings were held in September 2004, after the summer break and prior to the next regular admission cycle in April 2005.

The timing of the report card delivery has implications for child switching behavior. While children can switch schools right after summer break (the timing of our delivery), most choose to do so when the new school year starts in April. Consequently, our timing decision may imply less switching relative to delivery before the new school year. However, the gap between information revelation and the next year's admission decisions also gave parents sufficient time to absorb the information and schools sufficient time to respond to it. From a welfare and policy point of view, it may be more desirable to give schools time to respond to information by altering their price and investing in quality, as opposed to encouraging parents to immediately exit schools with low test scores.

At the time of distribution, schools and households were explicitly informed that the exercise would be repeated a year later to ensure that educational investments would be captured in future test scores. This implied that parents and schools would be able to verify how test scores changed over the year, allowing parents to give a school more time to improve before withdrawing their children.

Online Appendix I.D provides the details of the experimental protocol, including the design, content, and delivery of the report cards, along with a discussion of the validity and the reliability of the test score measures. We also confirm that the baseline values of outcomes and control variables are balanced across the treatment and control villages; the p-value for a joint test of significance of observable village characteristics is 0.56. In terms of attrition (see online Appendix I.E), we successfully track the enrollment status with certainty for 96 percent of children between the baseline and endline years, although absenteeism leads to somewhat lower (82 percent) retesting rates. We confirm that there is no evidence of differential attrition or any compositional (demographic or baseline test score) differences between attriters in treatment versus control villages.
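As a minimal illustration of the quintile ranks printed on the cards (Section I.C above), the assignment can be sketched as follows; the data frame and column names are hypothetical placeholders, not the LEAPS testing roster.

```python
import pandas as pd

# Hypothetical child-level scores; in the actual intervention, ranks were
# computed across all tested children (child card) and across all tested
# schools (school card).
scores = pd.DataFrame({
    "child_id": range(1, 11),
    "math_score": [12, 35, 47, 51, 58, 63, 70, 78, 85, 93],
})

# pd.qcut splits the score distribution into 5 equal-sized groups,
# labeled 1 (bottom quintile) through 5 (top quintile).
scores["math_quintile"] = pd.qcut(scores["math_score"], q=5,
                                  labels=[1, 2, 3, 4, 5])
print(scores)
```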
II. Conceptual and Empirical Framework

A. Conceptual Framework

To understand how report card delivery can impact the market, we outline a standard framework of market equilibrium under asymmetric information, drawing heavily on Wolinsky (1983). The main insight is that the impact of the intervention depends on the preexisting informational environment (regarding school quality) and, more specifically, on whether schools were pooling or separating on quality, measured here as test scores, in the initial equilibrium. The theory leads to testable predictions on how the price-quality gradient changes due to treatment and, relatedly, whether we would expect a differential impact on school fees by baseline school quality. The theoretical predictions on how school quality responds to information, and whether such responses differ by initial school quality, depend on the structure of demand and are therefore more ambiguous for certain parts of the quality distribution.

Using a similar setup to Wolinsky (1983), we posit that school i's profits are

\[ \pi_i = (p_i - c(v_i))\, q_i - z, \]

which depend on the cost of producing quality, \(c(v_i)\); the price, \(p_i\); the expected sales volume, \(q_i\); and a fixed cost of entry, \(z\). There is a continuum of consumer types (parents) who each consume one unit of the good, with consumer type j's preferences given by

\[ U = u_j(\theta, v_i) - p_i, \]

where \(\theta\) is the valuation for quality. Information is modeled such that for any quality level \(v_i\), there is always a lower bound on the quality signal that the parent can receive. Therefore any signal below this lower bound fully reveals that the school cannot have produced at quality \(v_i\). Formally, parent j receives a signal of quality for school i prior to choosing a school, where the cumulative distribution of the signal is given by

\[ D(t, v) = \Pr\big(d_i^{\,j} \le t \mid v_i = v\big). \]

Assume that for every v, there is at least one t such that \(D(t, v) = 0\). Define \(t_v^{\ast}\) as the maximum t such that \(D(t, v) = 0\). That is, for every school producing at a particular quality level, there is a single scalar \(t_v^{\ast}\) such that no parent can ever receive a signal lower than \(t_v^{\ast}\) if the school produces at \(v_i = v\).

The Price Condition for Separation.—The equilibrium of quality and price determination can be derived in two steps. The first step derives the prices that can support a separating equilibrium. In the second step, given the price schedule, schools make optimal quality decisions. The basic feature of the feasible price schedule under separation implies that high-quality schools will earn a mark-up over and above the prices that would exist under full information. To see this, consider the decision process for a single school, deciding whether to produce \(v_h\) (high) or \(v_l\) (low) quality, faced with a set of \(q_i\) parents who would choose the school for sure if they knew its quality were \(v_h\). In a separating equilibrium, every quality is associated with a different price and the choice of p completely reveals the choice of v. For this separation to hold, it must be the case that the choices of p and v are incentive compatible. Suppose that a school tries to deviate by charging \(p(v_h)\) but producing \(v_l\). In this case, relative to producing \(v_h\), the school gains an amount given by \(q_i\,(c(v_h) - c(v_l))\,[1 - D(t_h^{\ast}, v_l)]\), but risks losing \(q_i\,(p(v_h) - c(v_h))\,D(t_h^{\ast}, v_l)\).
To see this, note that by producing quality \(v_l\), for every unit produced the school saves \(c(v_h) - c(v_l)\). At this new quality level, the fraction of parents who receive a signal consistent with \(v_h\) are those whose signal is greater than \(t_h^{\ast}\), that is, \(1 - D(t_h^{\ast}, v_l)\). These parents are incorrectly informed and will enroll their children in the school. In contrast, a fraction \(D(t_h^{\ast}, v_l)\) of parents will receive a signal that makes them realize that the school is not producing quality \(v_h\) and will no longer enroll in the school. This generates a loss of \(p(v_h) - c(v_h)\) from each such parent. For the separating equilibrium to hold (i.e., that such a deviation is not profitable), it must be that the gains are no greater than the loss, so that

\[ p(v_h) \;\ge\; c(v_l) + \frac{c(v_h) - c(v_l)}{D(\cdot)}, \quad \text{or,} \quad p(v_h) \;\ge\; c(v_h) + \frac{\big(c(v_h) - c(v_l)\big)\big(1 - D(\cdot)\big)}{D(\cdot)}. \]

Thus, school \(v_h\) must earn a mark-up above its marginal cost, \(c(v_h)\), to induce separation in the market.10 Note that as the precision of the signal declines and \(t_h^{\ast}\) decreases, the mark-up required to sustain separation increases. Intuitively, the mass of parents who receive an inconsistent signal when the school charges \(p(v_h)\) but produces \(v_l\) is smaller. For separation to hold, it must be that the losses from cheating are larger to compensate for the gain in the number of parents who are fooled and pay the high price for low quality. The only instrument available to increase these losses is \(p(v_h)\), and therefore, in equilibrium, the \(p(v_h)\) that can sustain a separating equilibrium must increase as the signal deteriorates. Conversely, as the information environment improves, the price mark-up in a separating equilibrium declines, a prediction that we will return to in the empirics later. Online Appendix II derives the closed-form solution for the mark-up with convex quadratic costs and shows that the mark-up exists for all schools throughout the quality distribution, but is higher for schools at higher quality levels when information is poor.

10 See Wolinsky (1983) for an equilibrium refinement that narrows the set of equilibria to prices where the inequality holds exactly.
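To illustrate this comparative static numerically, the sketch below computes the separating price floor under an assumed quadratic cost c(v) = v², treating the detection probability D(·) = D(t_h*, v_l) as a free parameter. This is a toy parameterization for intuition only, not the closed form derived in online Appendix II.

```python
# Minimum separating price: p(v_h) >= c(v_h) + (c(v_h) - c(v_l)) * (1 - D) / D.
# As the signal deteriorates, a deviation to low quality is detected less
# often (D falls), so the mark-up needed to deter it rises.

def c(v):
    return v ** 2          # assumed quadratic cost of quality

def min_separating_price(v_h, v_l, detect_prob):
    gap = c(v_h) - c(v_l)  # per-unit saving from secretly producing v_l
    return c(v_h) + gap * (1 - detect_prob) / detect_prob

v_h, v_l = 2.0, 1.0        # hypothetical high and low quality levels
for D in (0.9, 0.5, 0.2):  # probability the deviation is detected
    p = min_separating_price(v_h, v_l, D)
    print(f"D = {D:.1f}: p(v_h) >= {p:.2f}  (mark-up {p - c(v_h):.2f})")
```

With D = 0.9 the required mark-up is only 0.33; with D = 0.2 it rises to 12, mirroring the text's claim that poorer information requires a steeper price premium to sustain separation.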
Optimal Quality under Imperfect Information.—As information becomes more imprecise, the mark-up required to sustain separation between any two given quality levels increases. One possibility therefore is that the market collapses to a pooling equilibrium. At the extreme, when the information is pure noise, no amount of mark-up can induce separation because the threat to punish that sustains separation can never be realized (Akerlof 1970). For less extreme information environments, the ultimate quality distribution will depend on the structure of demand as schools trade off the relative losses of coping with lower demand at the higher incentive compatible prices versus distorting their quality choices. Online Appendix II demonstrates that quality increases with better information for initially low-quality schools, but quality changes are ambiguous for initially higher quality schools.11 Therefore, under asymmetric information we should observe price declines together with quality improvements in at least some portion of the quality distribution. Online Appendix II contrasts this with another candidate class of explanations where information is symmetric, so that report cards provide feedback on own performance.

11 A sufficient condition for these patterns is that the probability density function of quality valuations is monotonically decreasing. This is satisfied, for instance, for the family of log-concave distributions.

Public Schools.—The challenge with public schools is that they are not maximizing profits and (in Pakistan) they cannot charge fees. They also have limited local control; while school heads can argue for removals or additional staff, most staffing, pay, and promotion decisions are made at the level of the province.12 Given considerable uncertainty over the objective function and investment opportunities of public school teachers, one option is to not model the response of the public sector to the report card, but to view the public school as an outside option whose quality may be affected by the report cards, but whose price is always zero. Given that the public schools are lower quality in our data, an increase in their quality will lead (at least) low-quality private schools to adjust on the quality margin.

12 This is unlike the United States, where schools are managed by local boards, which retain considerable jurisdiction over significant school inputs (Hoxby 2000; Figlio and Hart 2014).

However, such an approach misses the possibility that the utility of teachers and principals in public schools is likely affected by their interactions with the local community. Suppose parents can complain to teachers and principals, in the manner formalized by Banerjee et al. (forthcoming). Then, similar to their model, verifiable information increases the utility cost of poor performance. While public schools cannot compensate parents for poor performance by lowering prices (which are already zero), teachers in public schools can nevertheless always increase effort, and teacher qualifications could improve, especially if there is also pressure on the principal. As in Banerjee et al. (forthcoming), the effect of information when consumers can complain therefore depends on (i) their ability to complain; (ii) the effect of such complaints on the utility of school teachers and principals; and (iii) the trade-off in the costs of improving quality versus alternative responses. In the empirical work below, we will shed further light on whether such a mechanism is important by examining how the report card intervention increased interactions between parents and schools (the complaint mechanism) and may have impacted teachers.

B. Empirical Framework

We estimate the causal effect of the report card treatment on key outcome variables, such as test scores, fees, or enrollment. We present our main results at the village level.
Our estimating equation is

\[ Y_{m2} = \alpha_d + \beta \cdot RC_m + \gamma \cdot Y_{m1} + \delta \cdot X_{m1} + \varepsilon_m, \]

where \(Y_{m2}\) is the outcome of interest, for example, average (across all children in the village) test scores from the post-intervention year (Year 2) in village m; \(RC_m\) is the treatment dummy assigned to village m; \(\alpha_d\) are district fixed effects; \(Y_{m1}\) is the baseline measurement of the outcome variable; and \(X_{m1}\) is a vector of village-level baseline controls (size, wealth, adult literacy, and Herfindahl measure of school competition). Under random assignment, \(\beta\) is an unbiased estimate of the impact on test scores associated with the report card intervention. Our preferred specification includes baseline controls to improve precision, but we also present parsimonious specifications (without any controls, and only controlling for the baseline value of the dependent variable) for completeness. We include district fixed effects in all specifications since the randomization was stratified by district.

The conceptual framework suggests that the reaction to the information will differ by the schools' baseline quality. To examine this, we also estimate models with treatment effects separately for the school's type (private or public) and baseline test score. These specifications are estimated at the school level or at the child level with standard errors clustered by village.13 A generic school-level specification is

\[ Y_{mi2} = \alpha_d + \beta_0\, RC_m + \beta_1\, GOV_{mi} + \beta_2\, HIGH_{mi1} + \beta_3\, RC_m \cdot GOV_{mi} + \beta_4\, RC_m \cdot HIGH_{mi1} + \beta_5\, RC_m \cdot GOV_{mi} \cdot HIGH_{mi1} + \gamma \cdot Y_{mi1} + \delta \cdot X_{m1} + \varepsilon_{mi}, \]

where \(Y_{mi2}\) represents the outcome of interest (such as fees or enrollment) for school i in village m in time period 2 (the post-intervention year). As before, \(RC_m\) is the treatment dummy assigned to village m; \(\alpha_d\) are district fixed effects; \(Y_{mi1}\) is the baseline of the outcome variable; and \(X_{m1}\) is the vector of baseline village-level controls. The variable \(GOV_{mi}\) is a dummy indicator for whether the school is a public school, and \(HIGH_{mi1}\) is an indicator for whether the school's baseline score was above a predefined baseline test score threshold. Where relevant, we also run analogous specifications at the child level.

13 We separate out 16 schools run by non-governmental organizations (NGOs) in the sample. We have suppressed these estimates in the specification and when we present our estimated effects since the NGO-run sample is too small for meaningful comparisons.
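A minimal sketch of the village-level specification above, using statsmodels; the file and column names (score_y2, report_card, and the controls) are hypothetical placeholders rather than the LEAPS variable names.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Village-level data: one row per village (hypothetical column names).
df = pd.read_csv("villages.csv")

# Y_m2 = alpha_d + beta*RC_m + gamma*Y_m1 + delta*X_m1 + e_m
# C(district) absorbs the district fixed effects (the randomization strata).
model = smf.ols(
    "score_y2 ~ report_card + score_y1 + n_households"
    " + wealth + adult_literacy + herfindahl + C(district)",
    data=df,
)
# Heteroskedasticity-robust standard errors, as in the village-level tables.
result = model.fit(cov_type="HC1")
print(result.params["report_card"])  # beta: the ITT effect of the report card
```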
III. Results

We start with the impact of the report card on household perceptions. We then examine the impact on school fees, test scores, and enrollment at the village level. Finally, we turn to the more specific model predictions, including heterogeneous impacts across different types of schools, and interpret them in light of the conceptual framework.

A. Impact on Perceptions

We first test whether perceptions/signals of school quality are correlated with school test scores at baseline. Table 2, column 3 finds that a one-standard-deviation increase in test scores is associated with a 0.22 (0.44 standard deviation) increase in the perception of school quality (elicited on a Likert scale from 1 = very poor to 5 = very good). This shows that parents are (somewhat) informed at baseline and is consistent with an informational environment that would sustain a separating equilibrium. This also suggests that parental perceptions likely have room for improvement and/or they potentially reflect other dimensions of quality beyond those captured by test scores.

In column 4, we test whether providing report cards leads to a stronger relationship between parental perceptions and test scores using the following regression specification:

\[ Perc_{mi2} = \alpha_d + \beta_1\, RC_m + \beta_2\, Score_{mi1} + \beta_3\, RC_m \times Score_{mi1} + \beta_4\, Perc_{mi1} + \gamma\, X_{mi1} + \varepsilon_{mi}. \]

The variable \(Perc_{mi2}\) is the average parental perception in Year 2 for school i in village m, aggregated across all households in the village who reported perceptions for school i in both rounds;14 \(Score_{mi1}\) is the baseline test score of school i; and the interaction term, which is the key object of interest, is \(RC_m \times Score_{mi1}\). We also include district fixed effects (\(\alpha_d\)), baseline average parental perception (\(Perc_{mi1}\)), and a vector of village- and school-level controls (\(X_{mi1}\)), and cluster standard errors at the village level. We indeed find that in villages that received a report card, the relationship of perceptions with test scores (controlling for baseline perceptions) is stronger: i.e., the coefficient on the interaction term (RC × Score) is 0.114 (column 4 of Table 2). This represents a substantial increase in the sensitivity of parental perceptions to test scores relative to the control villages.15

14 Online Appendix I.F discusses several alternatives for the aggregation of perception measures and confirms that our results are robust to a variety of choices.
15 Column 4 also highlights limited learning over time in the absence of report cards. With controls for baseline perception and fees, there is no relationship between baseline score and Year 2 perceptions in control villages. Column 4 also shows that while report card provision increased the sensitivity of parental perceptions to test scores, there is no overall treatment effect, suggesting that parents were not systematically over- or underestimating school quality.
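Analogously, a sketch of the school-level interaction specification above, again with hypothetical column names; standard errors are clustered at the village level, as in the text.

```python
import pandas as pd
import statsmodels.formula.api as smf

# School-level rows: average parental perception by school (hypothetical names).
schools = pd.read_csv("school_perceptions.csv")

# Perc_mi2 on RC, baseline score, their interaction, baseline perception,
# district fixed effects, and controls; "rc * score_y1" expands to the two
# main effects plus the interaction rc:score_y1 (the beta_3 of interest).
fit = smf.ols(
    "perc_y2 ~ rc * score_y1 + perc_y1 + wealth + n_households"
    " + adult_literacy + herfindahl + C(district)",
    data=schools,
).fit(cov_type="cluster", cov_kwds={"groups": schools["village_id"]})
print(fit.params["rc:score_y1"])  # change in the perception-score gradient
```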
Table 3—Fee and Test Scores: Impact on Market Outcomes

                                      Village average fees (Year 2)                      Village average test scores
                                      School report  School report      Household        Year 2       Year 3       Year 2
                                      (basic)        (weighted by       report           (basic)                   (same kids)
                                                     children)
                                      (1)            (2)                (3)              (4)          (5)          (6)

Panel A. No controls
Report card                           −288.4         −334.1             −193.9           0.128        0.140        0.129
                                      (92.58)        (107.9)            (99.97)          (0.0624)     (0.0584)     (0.0599)
Observations                          104            104                83               112          112          112
R2                                    0.336          0.473              0.259            0.328        0.292        0.399

Panel B. Baseline control only
Report card                           −191.8         −194.9             −128.2           0.107        0.122        0.103
                                      (65.18)        (55.92)            (73.46)          (0.0448)     (0.0428)     (0.0395)
Baseline                              0.750          0.799              0.780            0.710        0.648        0.719
                                      (0.104)        (0.0865)           (0.0859)         (0.0628)     (0.0742)     (0.0603)
Observations                          104            104                83               112          112          112
R2                                    0.719          0.808              0.644            0.687        0.625        0.746

Panel C. Baseline and village controls
Report card                           −187.0         −175.2             −141.7           0.114        0.123        0.109
                                      (65.91)        (62.12)            (74.35)          (0.0455)     (0.0435)     (0.0401)
Baseline                              0.764          0.842              0.742            0.706        0.644        0.718
                                      (0.104)        (0.102)            (0.0831)         (0.0624)     (0.0754)     (0.0596)
Observations                          104            104                83               112          112          112
R2                                    0.726          0.816              0.665            0.692        0.631        0.749
Baseline dependent variable (mean)    1,080.699      1,234.479          998.964          −0.032       −0.032       −0.008

Notes: This table looks at the impact of the report card on fees (columns 1–3) and test scores (columns 4–6) at the village level. The outcome variables are: Year 2 village average private school fees from school survey data—in levels (column 1); in levels and weighted by children in school (column 2); Year 2 village average private school fees, in levels, from household survey data (column 3); Year 2 village average (across all three subjects—math, English, Urdu) test scores (column 4); Year 3 village average test scores (column 5); Year 2 village-level average test score using only those kids tested in years 1 and 2 (column 6). All regressions include district fixed effects and robust standard errors. Panel A considers no additional controls; panel B includes a baseline control of the outcome variable; and panel C includes the baseline of the outcome variable and additional village controls, which are the same as in Table 2. Columns 1–3 have fewer than 112 observations due to private school closure in Year 2 and missing fee data in some villages. Column 3 has 83 villages because we only consider those villages where we can match children who attend private school from the household survey to the testing roster. Columns 4–6 are run on all 112 sample villages. Baseline dependent variable (mean) displays the baseline mean for the sample for all outcome variables.

B. Impact on Market Outcomes

We now examine the impact of report card provision on school fees, test scores, and enrollment at the village/market level.

Fees.—Columns 1–3 in Table 3 show that there were substantial changes in private school fees due to the provision of report cards (recall public schools are essentially free). Panel A presents the specification without any controls, panel B adds baseline values of the dependent variable as a control, and panel C, our preferred specification, adds additional village-level controls. Panel C, column 1 shows that private schools in treatment villages decreased their annualized fees relative to those in the control by an average of Rs 187 in response to the report card intervention, representing 17 percent of their baseline fees.16 The effect is (i) similar when we weight by the number of children enrolled in private schools (column 2), confirming that the result in column 1 is not driven by small private schools, and (ii) robust to using households' rather than schools' reports of school fees (column 3).17

16 The fee regressions have 104 instead of 112 villages and 274 instead of 303 schools due to school closures in Year 2 (15 schools), missing data (3 schools), and inconsistencies in fee data across grades within years (11 schools).
17 The dependent variable in column 3 is the village mean of school fees as reported by surveyed households who happened to have a child enrolled in one of the private schools (the drop in the number of villages is because not all schools have a household fee report). The magnitude of the fee effect is somewhat smaller than in column 2 but we cannot reject equality of coefficients.

Test Scores.—Columns 4–6 in Table 3 now examine the impact of report card provision on average test scores in the village. The dependent variable is the average test score in the years after the provision of report cards. In column 4, we find test scores in Year 2 improved by 0.11–0.13 standard deviations depending on the specification used.18 Column 5 shows that these effects are present two years after the provision of the report cards.19 In column 6, we replicate the analysis from column 4, but restrict the village test scores to children tested in both years.

18 Results for English, Urdu, and math respectively were 0.10 to 0.15 standard deviations and we cannot reject equality of coefficients across the three subjects (online Appendix III, Table IV).
19 The second-year report card contains information on test scores in Year 2 and test score changes between Year 2 and Year 1. As we did not re-randomize across villages in Year 2, we cannot separate the persistence of impact due to the first report card delivery from additional impact due to the second report card. In Andrabi et al. (2011) we show that the coefficient on lagged test scores is less than 0.5 for subjects such as mathematics. Therefore, for level gains to remain the same over the two-year period, the treatment effect either continued to grow or there was an additional effect from the second report card. Substantial within-school persistence in test scores (the correlation between the two years is 0.64 in the control group) suggests that the Year 2 report cards may have had less information content relative to those given in Year 1.
The results show that the test score gains were not driven by compositional changes, which is unsurprising given that attrition was low and not differential by baseline test score (online Appendix I.E).

Enrollment and Switching.—Columns 1–4 in Table 4 examine whether the report cards led to changes in enrollment and switching at the village level. To the extent that there is a decline in average prices and quality increases, one may expect increased enrollment in treatment villages. Column 1 shows that overall enrollment increased by 3.2 percentage points (a 4.5 percent increase) in treatment villages: roughly 40 additional children.20 This additional enrollment came from new entrants, as the starting grades (preparatory and Grade 1) saw the largest enrollments (see online Appendix III, Table V). We also find some new entry into Grade 4 (the natural grade progression for the tested cohort whose parents directly received the report cards); these are likely children who may have dropped out before but are induced to re-enroll when schools increase quality and/or decrease price.

20 In column 1, panel A, the p-value is 0.14; with controls for baseline enrollment rates in panels B and C, the enrollment result becomes highly significant.

In contrast to the overall enrollment gains, columns 2 and 3 show that there is little change in the overall switching or dropout rates for the tested cohort in treatment villages (i.e., the number of children who switch schools or drop out in the village as a fraction of children enrolled at baseline in Grade 3).21

21 We cannot examine switching and dropout rates for the entire school as child tracking was only conducted for the tested cohort.
No controls Report card 0.0390 0.009 0.009 0.129 (0.0263) (0.007) (0.006) (0.0608) Observations 112 112 112 112 R2 0.473 0.0561 0.377 0.397 Panel B. Baseline control only Report card 0.0351 0.107 (0.0140) (0.0402) Baseline 0.973 0.711 (0.0470) (0.0595) Observations 112 112 R2 0.851 0.742 Panel C. Baseline and village controls Report card 0.0324 0.009 0.007 0.113 (0.0137) (0.0074) (0.0056) (0.0408) Baseline 1.037 0.711 (0.0690) (0.0587) Observations 112 112 112 112 R2 0.853 0.083 0.429 0.745 Baseline dependent variable (mean) 0.71 — —   −0.012 Notes: This table examines the impact of the report card on enrollment at the village level. The outcome variables are: Year 2 village primary enrollment rate from school survey data (column 1); switching rate and drop out rate at the village level for tested cohort only available from child roster data (columns 2 and 3); and Year 2 village aver- age test score for those kids who did not switch schools between years 1 and 2 (column 4). Columns 2 and 3 are available only for the tested cohort where we tracked and verified the status of every child; these data do not exist for the children in other grades in a given school. All regressions include district fixed effects and display robust standard errors in parentheses. Panel A considers no additional controls; panel B includes a baseline control of the outcome variable; and panel C includes baseline of the outcome variable and additional village controls, which are the same as in Table 2. Baseline dependent variable (mean) displays the baseline mean for the sample for all out- come variables. Note that we do not observe baseline rates for switching and dropout. Columns 1–   4 are run on all 112 sample villages. as a fraction of children enrolled at baseline in Grade 3).21 As we examine later, the lack of an overall impact hides some heterogeneous results across schools. The lack of evidence of differential switching or dropouts suggests that the test score gains were driven primarily by students who remained in the same school. In column 4, we restrict the sample to children who were tested in both periods (as in Table 3, column 6) but also exclude any children who switched schools. The results confirm that the test score gains for these children remain the same as 21  We cannot examine switching and dropout rates for the entire school as child tracking was only conducted for the tested cohort.  1552 THE AMERICAN ECONOMIC REVIEW june 2017 in Table 3: columns 4 and 6 show effects of 0.114 and 0.109, respectively, and we now obtain 0.113.22 C. Impact by Provider Quality and Type We now examine some of the more specific predictions highlighted in the frame- work in Section II on school fees, test scores, and enrollment by school type and quality.23 School Fees.—With prima facie evidence that schools were likely in a separating equilibrium, we should expect higher price declines among initially high achieving schools—or more specifically—there should be a flattening of the price-quality gra- dient in treatment villages. The results in Table 5 support this. Column 1 first regresses (log) fees on test scores before and after the provision of report cards in treatment and control villages. Our interest is in the triple-interaction term, RC × Score × Post. As predicted by the framework, there is a large and sig- nificant decline in the price-quality gradient in treatment villages relative to control villages, as a consequence of the report cards. 
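Written out in the same (our own) notation, the column 1 specification in Table 5 is a school-level panel regression:

\[
\log p_{svt} = \alpha_{d(v)} + \beta_1\mathrm{RC}_v + \beta_2\mathrm{Score}_s + \beta_3\mathrm{Post}_t + \beta_4\,\mathrm{RC}_v\mathrm{Score}_s + \beta_5\,\mathrm{Score}_s\mathrm{Post}_t + \beta_6\,\mathrm{RC}_v\mathrm{Post}_t + \beta_7\,\mathrm{RC}_v\mathrm{Score}_s\mathrm{Post}_t + X_v'\delta + \varepsilon_{svt},
\]

where \(p_{svt}\) is the fee charged by private school \(s\) in village \(v\) in period \(t\). The flattening of the price-quality gradient is \(\beta_7\), the coefficient on RC × Score × Post, estimated at −0.368 (standard error 0.179) in Table 5, column 1.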
Columns 2–4 now directly examine how the impact of report cards on school fees varies by baseline school test scores. Column 2 shows that if a school in a treatment village has a one-standard-deviation higher test score at baseline, it experiences a Rs 281.6 greater decline in fees. Column 3 illustrates the same result using a binary quality measure, constructed by dividing schools into initially high and low scoring, where initially high refers to schools above the sixtieth percentile of the baseline school test score distribution. Online Appendix Section I.F and Table VI show that our results are similar if we use alternative binary thresholds. Private schools with high baseline test scores show larger price declines (a Rs 294 decline, or around 25 percent of their baseline fees) as a result of report card provision, as compared to initially low scoring private schools. Column 4 confirms that the same results hold when we use fees reported by households instead of schools.

Table 5—Private School Fees: Impact by Baseline Test Score

                                     log fees               Level fees (Year 2)
                                  (panel version)      School report       Household report
                                       (1)              (2)       (3)            (4)
Report card (RC)                     −0.139           −111.6    −42.70          78.58
                                     (0.0916)         (76.40)   (88.65)        (145.2)
School score (Score)                  0.244            195.9
                                     (0.114)          (162.9)
RC × score                            0.0389          −281.6
                                     (0.150)          (163.0)
Score × post                          0.0544
                                     (0.129)
RC × score × post                    −0.368
                                     (0.179)
Post                                 −0.177
                                     (0.323)
RC × post                             0.121
                                     (0.109)
High scoring school                                              232.2          530.2
                                                                (121.3)        (189.0)
RC × high scoring school                                        −293.8         −511.4
                                                                (129.0)        (207.1)
Baseline                                               0.683     0.681          0.488
                                                      (0.122)   (0.117)        (0.125)
Controls                             Village          Village   Village        Village
Observations                           555              274       274            238
R2                                    0.311            0.584     0.585          0.402

Subgroup point estimate, F-test p-values in brackets
Low scoring private school                                      −42.70          78.58
                                                                [0.631]        [0.590]
High scoring private school                                     −336.5         −432.9
                                                                [0.000]        [0.000]
Baseline fee (mean)                   6.911          1,188.5   1,188.5        1,047.9

Notes: This table looks at the impact on school fees by school type. The outcome variables are: private school log fees in panel format (column 1); Year 2 private school fees from school survey data, in levels (columns 2 and 3); and Year 2 private school fees, in levels, from household survey data (column 4). Column 1 data are from the school survey and are constructed in a panel format to test whether the price-test score gradient falls as a result of the intervention; we thus see roughly double the number of observations in column 1 compared to columns 2 and 3, which use the same data source. The coefficient of interest is the triple-interaction term (RC × Score × Post). Column 2 considers the impact on fees when the baseline test score is continuous, whereas columns 3 and 4 consider a binary test score measure, with schools defined as high scoring if they are above the sixtieth percentile of the test score distribution. The results are robust to alternative classifications (see online Appendix Table VI). The number of observations is less than 303 private schools due to missing fee data and private school closure in Round 2. Column 4 has even fewer observations because we only use data from those households with children in private schools whom we tested and were able to match in our testing roster. All regressions include district fixed effects and cluster standard errors at the village level. Additional village-level controls, the same ones listed in Table 2, are used in all regressions. The lower panel displays the estimated coefficients and p-values [in square brackets] for relevant subgroups obtained from the coefficients estimated in the top panel. Baseline fee (mean) displays the baseline fee mean for the sample across all regressions.

Test Scores.—The asymmetric information model, under plausible assumptions on the structure of demand, also suggests that quality should increase for initially low-quality schools and that these responses will be more muted among initially high-quality schools (online Appendix II). We now empirically assess how the report cards affect test scores at the child level for students enrolled in different types of schools (i.e., public or private, and initially high or low scoring, defined as before) at baseline.

Table 6 shows that the test score improvements in private schools observed in the aggregate data were primarily a result of improvements in scores—by 0.31 standard deviations—for the average child at initially low scoring private schools. The average child in an initially high scoring private school shows no improvement, with a small negative point estimate. In contrast, we find no such heterogeneity for public schools. Column 1 shows that the average child in both initially high and low scoring public schools sees a similar learning impact: while the point estimates (shown in the subgroup estimates at the bottom of the table) are somewhat higher for a high scoring public school (0.21) than a low one (0.07), we cannot reject equal impacts on both types of public schools (the p-value of the test of equality is 0.46). Column 2 therefore combines both types of public schools and obtains similar results, with an overall 0.089 standard deviation increase for the average child enrolled in a public school at baseline.24

22 If switching responds to the treatment, estimates restricted to non-switchers will be biased. A simple bounding exercise in online Appendix I.F shows that gains among switchers would need to be at least 2.25 standard deviations for switchers to drive our results, which seems implausibly large.
23 The first stage of our intervention on parental perceptions was similar across school types, with no difference in baseline uncertainty regarding school quality across households as a function of the schools' initial test scores. Neither do we find (regressions not shown) that the correlation between perceptions and test scores changed differently for (private) schools with high and low baseline test scores.
24 Heterogeneous responses across schools do not reflect heterogeneous responses across initially low/high achieving children: in low scoring private schools, both low and high scoring children increased their scores, while in high scoring private schools, neither type of child improved (online Appendix III, Table VII).

Table 6—Child Average Test Scores: Impact by School Type and Baseline Test Score

                                          Child average test scores (Year 2)
                                       By school type and        Government schools
                                       baseline test score       combined
                                              (1)                     (2)
Report card (RC)                             0.310                   0.305
                                            (0.124)                 (0.125)
RC × government (Gov)                       −0.240                  −0.216
                                            (0.125)                 (0.127)
RC × high scoring school                    −0.357
                                            (0.133)
RC × gov × high scoring school               0.496
                                            (0.231)
RC × high scoring private school                                    −0.355
                                                                    (0.134)
High scoring school                          0.0619
                                            (0.0538)
Government                                  −0.176                  −0.227
                                            (0.0503)                (0.0570)
High scoring private school                                          0.0822
                                                                    (0.0580)
Baseline                                     0.696                   0.667
                                            (0.0263)                (0.0343)
Controls                                    Village                 Village
R2                                           0.533                   0.529
Observations                                 9,888                   9,888

Subgroup point estimate, F-test p-values in brackets
Low scoring private school                   0.310                   0.305
                                            [0.0143]                [0.0161]
High scoring private school                 −0.0472                 −0.0505
                                            [0.355]                 [0.316]
High scoring gov. school                     0.209
                                            [0.244]
Low scoring gov. school                      0.0700
                                            [0.106]
Gov. school                                                          0.0888
                                                                    [0.0538]
Baseline test score (mean)                   0.009                   0.009

Notes: The outcome variable is Year 2 child average (across all three subjects) test score. Column 1 separates the effect by school type and by school performance, i.e., whether a given school, regardless of type, was high scoring at baseline. Column 2 combines government schools and focuses on private school type, i.e., low scoring or high scoring. All regressions include baseline child test score as a control and district fixed effects, and cluster standard errors at the village level. We further include village controls (the same as in previous tables). Regressions include interaction terms with NGO, as well as other interactions and level terms that are necessary given the interaction terms included. The lower panel displays the estimated coefficients and p-values [in square brackets] for relevant subgroups obtained from the coefficients estimated in the top panel. Baseline test score (mean) displays the baseline child test score mean for the sample.

Both price and quality results are consistent with the conceptual framework. Educational markets were inducing separation in price and quality by providing mark-ups to higher quality schools. Once information improved through the report cards, this mark-up fell, and fell more for initially high scoring schools, while test scores increased for initially low scoring schools. Improvements among public schools also suggest that better information provides nonprice incentives for the public sector to improve test scores, something we will return to in subsequent sections.

School Enrollment.—Previously, we documented evidence of an aggregate increase in enrollment but little impact on aggregate switching behavior. The latter hides potential heterogeneity, examined further in Table 7, where the results are (statistically) weaker and therefore more suggestive. Columns 1 and 2 consider the impact on total enrollment in the schools (Grades 1 to 5) and for the tested cohort only, respectively. In column 1, initially low scoring private schools lose 4.5 children on average, public schools gain 5 children, and initially high scoring private schools see little impact; however, only the public school coefficient is significant (at the 5 percent level). For the tested cohort in column 2, initially low scoring private schools lose 1.5 children on average (significant at the 10 percent level), while public schools and initially high scoring private schools show small, positive, and not statistically significant coefficients (0.706 and 0.232, respectively). With an average baseline enrollment of around 18 children in the tested grade, these are nevertheless reasonably sized effects.
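As an aid to reading the lower panels of Tables 5 through 8, note that each subgroup point estimate is a linear combination of the top-panel coefficients, with the bracketed p-value testing whether that combination equals zero. Using Table 6, column 1, for example:

\[
\begin{aligned}
\text{low scoring private:}\quad & 0.310,\\
\text{high scoring private:}\quad & 0.310 - 0.357 = -0.047,\\
\text{low scoring government:}\quad & 0.310 - 0.240 = 0.070,\\
\text{high scoring government:}\quad & 0.310 - 0.240 - 0.357 + 0.496 = 0.209.
\end{aligned}
\]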
Columns 3 and 4 decompose changes in the tested cohort into children moving into schools (switching in and new children) and those moving out of schools (switching out and dropouts).25 The loss in net enrollment in low scoring private schools is primarily driven by children switching or dropping out (in regressions not shown, separating between the two shows equal-sized effects). While the net gain in high scoring private schools was minimal, this masks churning within these schools, with children both switching in and newly enrolling (one additional child) countered by an increase (of one-half of a child) in switching or dropping out. This churning likely reflects heterogeneous responses both across parents and within these schools. Notably, we do not find any differences in the composition of children who switched or dropped out in the treatment villages, as measured by their baseline test scores.26

Column 5 shows that, consistent with some of the enrollment changes, the treatment also increased the incidence of closure among schools, with low scoring private schools 12.5 percentage points more likely to close in treatment villages. Given the smaller number of low scoring private schools, this increased rate of closure reflects an additional six such schools closing in treatment villages.27 If we reestimate columns 1 and 2 and exclude any schools that closed, we confirm that the decline in enrollment in low scoring private schools is indeed accounted for by these closures.28

25 We can only do so for children in the tested cohort since that is the only grade where we had a child-tracking exercise that followed every child (in the tested grade) enrolled in Year 1 through the subsequent years (96 percent were successfully tracked).
26 There is little evidence of price discounts for higher performing students in these schools. Using household reports of school fees, we find that a one-standard-deviation increase in test scores leads at most to a 2 percent, statistically insignificant, decline in school fees, which increases to 5.8 percent (still insignificant) when parental controls are included.
27 School openings did not differ by treatment status, with 11 new schools opening in Year 2: 5 in the treatment and 6 in the control villages. This is likely because the time period under consideration is too short to examine entry.

Table 7—School Enrollment: Impact by School Type and Baseline Test Score

                                      (1)         (2)          (3)            (4)            (5)
                                   Primary     Tested       Tested cohort  Tested cohort  Private
                                   enrollment  cohort       children going children going school
                                   (Year 2)    enrollment   into schools   out of schools closure
                                               (Year 2)     (Year 2)       (Year 2)       (Year 2)
Report card (RC)                    −4.472     −1.474       −0.410          1.296          0.125
                                    (3.815)    (0.846)      (0.483)        (0.537)        (0.0486)
Government                           7.315      1.628        0.155         −0.698
                                    (2.655)    (0.838)      (0.666)        (0.305)
High scoring private school          3.216     −0.792       −1.293         −0.192          0.0336
                                    (3.241)    (0.801)      (0.570)        (0.303)        (0.0237)
RC × gov                             9.424      2.180        0.989         −1.413
                                    (4.769)    (1.063)      (0.752)        (0.580)
RC × high scoring private school     3.906      1.706        1.428         −0.794         −0.111
                                    (4.853)    (1.072)      (0.665)        (0.604)        (0.0599)
Baseline enrollment                  0.961      1.065        0.169          0.0485
                                    (0.0254)   (0.0491)     (0.0407)       (0.0109)
Controls                            Village    Village      Village        Village        Village
Observations                          801        802          798            802            303
R2                                   0.904      0.863        0.203          0.151          0.0378

Subgroup point estimate, F-test p-values in brackets
Low scoring private school          −4.472     −1.474       −0.410          1.296          0.125
                                    [0.244]    [0.084]      [0.397]        [0.018]        [0.011]
High scoring private school         −0.567      0.232        1.017          0.502          0.0141
                                    [0.836]    [0.714]      [0.043]        [0.073]        [0.633]
Government school                    4.952      0.706        0.578         −0.117
                                    [0.013]    [0.273]      [0.335]        [0.557]
Baseline dependent
  variable (mean)                   88.774     17.562         —              —              —

Notes: This table looks at the impact of the report card on school enrollment by school type. The outcome variables are: total primary (Grades 1–5) enrollment in Year 2 (column 1); tested cohort enrollment, i.e., children now in Grade 4 in Year 2 who were originally tested in Grade 3 (column 2); the number of children in the tested cohort who are newly observed in a school in Year 2, i.e., children who were either in a different school at baseline and so switched into a new school in Year 2, or were not enrolled in any school in the village at baseline (column 3); the number of children in the tested cohort who are not observed at their baseline school in Year 2, either because they are confirmed to have switched out or dropped out of their baseline school, or are untracked children from closed schools (column 4); and school closure by private school type (column 5). For columns 1, 2, and 5, we use data from school surveys. For columns 3 and 4, we use child tracking data for the tested cohort. Columns 1–4 are run on all 804 schools in 112 villages; some missing values reduce the number of observations. Column 5 is run on all 303 private schools in the sample. All regressions include district fixed effects, and standard errors are clustered at the village level. The same village controls as in Table 2 are included in all regressions. The lower panel displays the estimated coefficients and p-values [in square brackets] for relevant subgroups obtained from the coefficients estimated in the top panel. Baseline dependent variable (mean) displays the baseline mean for the sample for all outcome variables, where available.
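To fix ideas on the estimation behind Table 7 (and the analogous school-level tables), the following is a minimal sketch in Python; the column names and simulated data are hypothetical, and the paper's actual regressions also include the village controls described in the notes:

```python
# Minimal sketch of a school-level regression in the style of Table 7:
# treatment interacted with school type, a baseline control, district
# fixed effects, and standard errors clustered at the village level.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_villages, n_schools = 112, 800
village = rng.integers(0, n_villages, n_schools)
rc_village = rng.integers(0, 2, n_villages)        # village-level treatment
df = pd.DataFrame({
    "village": village,
    "district": village % 3,                        # toy district assignment
    "rc": rc_village[village],                      # report card indicator
    "gov": rng.integers(0, 2, n_schools),           # government school
    "high": rng.integers(0, 2, n_schools),          # high scoring at baseline
    "enroll_y1": rng.normal(90, 20, n_schools),     # baseline enrollment
})
df["enroll_y2"] = (df["enroll_y1"] + 5 * df["rc"] * df["gov"]
                   + rng.normal(0, 10, n_schools))

m = smf.ols("enroll_y2 ~ rc*gov + rc*high + enroll_y1 + C(district)",
            data=df).fit(cov_type="cluster",
                         cov_kwds={"groups": df["village"]})

# Subgroup point estimate for government schools (rc + rc:gov), with a
# test that the combination is zero, as in the tables' bracketed p-values.
print(m.t_test("rc + rc:gov = 0"))
```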
28 Closures could affect our interpretation of the quality increase in low scoring private schools in treatment villages if the schools with the lowest expected gains shut down. However, even if a school closed, we were able to track the child when they re-enrolled in another school. Since we assign children to their initial school (the timing of report card provision makes it likely they spent more time in their initial school), we can still (partially) consider gains in closed schools. For children in closed schools whom we are unable to retest, a bounding exercise shows that for the observed gains in low scoring private schools to be driven entirely by selective school closure, the schools that shut down would need to experience a test score decline of more than three standard deviations. Alternatively, assigning children in such schools either the worst score gain of any private school in our sample or the gain of the school closest to them in baseline test scores does not alter the point estimate of the treatment effect of report cards on low scoring private schools.

D. Channels

Our final set of results focuses on the potential channels for improvement. Given that test scores depend on school and household investments, we were particularly interested to see whether the information intervention affected these inputs differentially.

School Investments.—Panel A of Table 8 shows that school investments changed in both public and private schools as a consequence of the intervention. In public schools in treatment villages, there was a modest and significant increase in the qualifications of teachers (column 1). We do not find significant effects on workforce qualifications in the private sector, which is perhaps not surprising given the cost of hiring more qualified teachers in low cost private schools. Instead, private schools with low baseline test scores increased teaching time, with a corresponding reduction in breaks during school hours (column 2). Columns 3 and 4 show no changes in basic (desks, blackboards, toilets, and classrooms) or extra (library, computer, sports facility, fans, electricity, and wall/fence) infrastructure for public or initially low scoring private schools.29 Initially high scoring private schools show a small reduction in the extra infrastructure regression, perhaps as a consequence of the decline in fees charged by these schools.30

Household Investments.—Panel B then uses detailed time-use and expenditure data from the household surveys to look at parental investments in children. We examine three different measures of parental investments: money (excluding fees); time directly spent on the child's education; and parental engagement with the school. There is a hint of a decline in time and money investments, consistent with households substituting away from educational investments in their children (see Das et al. 2013). Our positive learning effects are therefore unlikely to be generated by greater parental time or spending on their children. In contrast, column 7, which computes mean effects using average effect sizes, shows that parental engagement with the school and knowledge of their school teachers increased for both government schools and initially low scoring private schools, by 0.14 and 0.38 standard deviations, respectively.31 This suggests that in both private and public schools, parental pressure through increased engagement could have played an important role in inducing the school investment (and eventual test score) improvements.

29 These regressions compute the average effect size (AES), which gives equal weight to all components associated with basic and extra infrastructure (see Kling et al. 2004).
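The AES construction can be sketched as follows. This is a simplified illustration in the spirit of Kling et al. (2004), with hypothetical component names and no controls, not a reproduction of the paper's exact procedure:

```python
# Equal-weight average effect size (AES): standardize each component by
# its control-group SD, estimate the treatment effect on each, and
# average the standardized effects.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 800
df = pd.DataFrame({"treat": rng.integers(0, 2, n)})
components = ["library", "computer", "fans"]      # hypothetical components
for c in components:
    df[c] = rng.normal(0, 1, n) + 0.1 * df["treat"]

effects = []
for c in components:
    sd_control = df.loc[df["treat"] == 0, c].std()    # control-group SD
    beta = smf.ols(f"{c} ~ treat", df).fit().params["treat"]
    effects.append(beta / sd_control)                 # standardized effect

aes = np.mean(effects)  # equal weight on every component
print(round(aes, 3))
```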
(Online Appendix III, Table VIIIA shows results for each component of the infrastructure indices.)
30 We also assessed, but do not find, reductions in class size or student-teacher ratios, or evidence of changes in peer quality (regressions not shown).
31 Column 7 computes the average effect size across three questions: (i) whether a parent has ever met their child's teacher; (ii) whether they are able to recall the teacher's name; and (iii) what their knowledge or view of the class teacher's involvement is. Online Appendix III, Table VIIIB shows the impact of the intervention on these individual components. Although having met or being able to recall the teacher's name may appear to be a weak measure of parental engagement, in our sample one-third of parents have not met their child's teacher and close to one-half do not know the teacher's name at baseline. For a parent to have met or know the name of the teacher is therefore a notable change that likely proxies for greater school engagement.

Table 8—Channels: Impact by School Type and Baseline Test Score

                              Panel A. School inputs (Year 2)              Panel B. Household inputs (Year 2)
                                (1)        (2)       (3)       (4)         (5)        (6)        (7)
                              Percent of  Break    Basic     Extra       Parental   Parental   Parent-teacher
                              teachers    time     infra-    infra-      time on    spending   interaction
                              with at               structure structure  education  on education (avg effect
                              least higher          (avg      (avg                  excl. fees  size)
                              secondary             effect    effect
                              degree                size)     size)
Report card (RC)               0.0129    −12.39    −0.0262   −0.0981     −1.566    −163.1       0.382
                              (0.0499)   (6.875)   (0.153)   (0.105)     (1.051)   (160.4)     (0.123)
Government                     0.0248    −1.932    −0.819    −0.580      −1.142    −105.6       0.0790
                              (0.0344)   (5.760)   (0.111)   (0.0703)    (0.791)   (134.1)     (0.0937)
High scoring private school    0.0314    −6.244    −0.0502    0.231      −0.0202    132.2       0.254
                              (0.0374)   (6.008)   (0.117)   (0.0740)    (0.961)   (193.8)     (0.113)
RC × gov                       0.0118     14.03     0.0787    0.0637      1.367     29.63      −0.242
                              (0.0534)   (7.102)   (0.158)   (0.109)     (1.067)   (174.9)    (0.127)
RC × high scoring private      0.0211     15.05     0.0478   −0.0518      0.740    −74.74      −0.426
  school                      (0.0539)   (7.884)   (0.164)   (0.115)     (1.281)   (239.2)    (0.163)
Baseline                       0.792      0.105                           0.179     0.334
                              (0.0266)   (0.0552)                        (0.0317)  (0.0687)
R2                             0.659      0.0380                          0.0910    0.136
Observations                    783        782       783       783         930       953       1,015

Subgroup point estimate, F-test p-values in brackets
Low scoring private school     0.0129    −12.39    −0.0262   −0.0981     −1.566    −163.1       0.382
                              [0.796]    [0.074]   [0.864]   [0.348]     [0.139]   [0.311]     [0.002]
High scoring private school    0.0340     2.658     0.0216   −0.150      −0.826    −237.8      −0.0442
                              [0.177]    [0.462]   [0.790]   [0.0157]    [0.242]   [0.212]     [0.672]
Government school              0.0247     1.632     0.0525   −0.0344     −0.199    −133.5       0.140
                              [0.071]    [0.450]   [0.347]   [0.475]     [0.511]   [0.104]     [0.0146]
Baseline dependent
  variable (mean)              0.561     32.641      —         —          3.438    971.005       —

Notes: This table looks at changes in school and household inputs as a result of the intervention. Panel A examines school inputs, and the outcome variables are: percent of teachers with at least a higher secondary degree, i.e., at least 12 years of schooling (column 1); break time in minutes per day (column 2); and average effect sizes (AES) for basic infrastructure components (desks, blackboards per child, toilets per child, and classrooms per child) and extra infrastructure components (dummies for the presence of a library, computer, sports facility, fans, electricity, and wall/fence at a school), respectively (columns 3 and 4).
Panel B examines household inputs, and the outcome variables are: parental time (reading and helping) spent on education with kids, in hours per week (column 5); parental non-fee spending on education, in rupees per year (column 6); and an AES regression for parental interaction, which has three components: (i) whether a parent has ever met their child's teacher, (ii) whether they are able to recall the teacher's name, and (iii) what their knowledge/view of the class teacher's involvement is (column 7). Panel A data come from the school survey. Panel B data come from the household survey for children who were matched to the school testing roster; the household data are at the household × school level. The observations from school surveys are fewer than 800 due to school closure and missing data. The observations from the household survey differ slightly across regressions because of missing LHS/RHS values. All regressions control for the baseline value of the dependent variable where available, include district fixed effects and standard village controls, and cluster standard errors at the village level. The lower panel displays the estimated coefficients and p-values [in square brackets] for relevant subgroups obtained from the coefficients estimated in the top panel. Baseline dependent variable (mean) displays the baseline mean of the dependent variable for the sample in these regressions.

E. Discussion: Linking Conceptual Framework and Empirics

The conceptual framework provides clear predictions that the price-quality gradient for private schools should decline when more information becomes available, and we are able to confirm this prediction in the data. We also find that quality as measured by test scores increased more for initially lower quality schools, consistent with the predictions of the theory under plausible assumptions on the structure of demand.

The impact on public schools is less obvious—in the absence of market incentives for improvement, it is hard to see why they should improve at all. We believe, however, that our results (especially on parental engagement) support the idea that verifiable information increases complaints and thus imposes utility costs on public functionaries—teachers and principals, in our case.

Two additional observations help frame our results further. First, report cards also provided feedback to parents about their child's performance and to schools about their own performance. Could it be that this feedback mechanism is what drove our results? Lacking separate experimental variation on these different aspects, we cannot conclusively isolate which component mattered the most. However, our results on the lack of changes in household investments in their child suggest that the child-specific component was unlikely to have been critical. In addition, feedback to schools having an independent impact is harder to reconcile with the price movements we observe. If we think of feedback as performance information when both parents and schools face the same information set, there should always be a tight correspondence between price and quality movements; instead, we observe price declines simultaneously with quality increases, something that is inconsistent with models of symmetric information (see online Appendix II).
Second, underscoring the importance of comparisons across schools, we also find evidence that our effects for private schools are stronger in villages with more competitive settings. Using the Herfindahl index as the measure of competition, online Appendix III, Table IX shows that in high competition markets, test scores increased among the low scoring private schools by 0.40 standard deviations (p = 0.02), relative to 0.15 standard deviations (p = 0.15) in low competition markets. Equally, price declines among initially high scoring private schools were Rs 390 (p = 0.001) in high competition markets, relative to Rs 284 (p = 0.04) in low competition markets. Test scores in public schools were not affected by the degree of competition, again supporting the idea that different mechanisms were at play in public schools, with parental pressure directly affecting the utility of school staff. We should caution that even though the differences in point estimates are large, we cannot reject equality at conventional levels (for example, the p-value for the equality of test score effects is 0.21).

Our results on market-level impact, heterogeneity of impact, and channels of impact present a consistent picture whereby school investments changed among those very schools where test scores increased, and parents did not change their investments of time or money, choosing instead to increase their interactions with the school. Finally, changes were larger in villages where competition was fiercer at baseline. The fact that we find school responses but limited household responses beyond pressuring the school to improve its own performance suggests that it was the combination of parental pressure and school information in a competitive (asymmetric information) setting that really mattered.

IV. Conclusion

There is limited evidence on how education markets adjust when the informational environment improves, particularly in low-income countries with a large number of private schools and few administrative requirements on quality and pricing. This paper informs that question using the first market-level experimental approach to information provision in a low-income country. We show that providing report cards to schools and parents reduces private school fees, increases test scores in public schools and low performing private schools, and brings more children into public schools. Information on test scores seems to improve efficiency and equity simultaneously.

The magnitudes of the impacts we find are large. The report card intervention, including the testing, printing, and distribution, cost $1 per child. The gains in learning alone compare favorably to other interventions, both in absolute terms and relative to normal yearly gains (the treatment effect is 42 percent of the average yearly gain experienced by children in our sample).32 Similarly, the gain in enrollment represents a cost per marginal child enrolled of $22, which is significantly lower than several programs that are currently regarded as quite successful in low-income countries (Akresh, de Walque, and Kazianga 2013).33 With a fee savings of approximately $3 per child in private schools and one-third of all children enrolled in private schools in these villages, the total cost of providing information at $1 per child is comparable to the decline in fees.
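One way to restate these magnitudes as back-of-the-envelope arithmetic (our own, derived from the figures above):

\[
\frac{\$1 \text{ per enrolled child}}{0.045 \text{ enrollment gain}} \approx \$22 \text{ per marginal enrollee},
\qquad
\$3 \times \tfrac{1}{3} \approx \$1 \text{ per child in fee savings} \approx \text{the program cost per child}.
\]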
This partial analysis would suggest that the entire improvement in test scores is free of cost, if only the welfare of households is considered.34

These gains are all the more noteworthy as very few children switched schools in treatment villages; the gains are therefore largely supply driven. Even in these highly competitive markets, schools are still operating within their technological frontier and enjoy positive mark-ups that they can exploit when information-induced competition increases. We present additional evidence that parental pressure on schools—one marker of which is the increase in parent-school interactions—could have led to this increase in competition.

We should caution that although the report cards had a significant impact on test scores and enrollment, we did not investigate a broader set of measures, including noncognitive outcomes like persistence and grit. Our outcome variables reflect a (perhaps older) consensus among educationalists and researchers in low-income countries that, at the very low levels of basic skills observed in the population (less than one-third of children at the end of Grade 3 can write a correct sentence in Urdu in our sample), test scores remain the first marker of a successful learning intervention. Nevertheless, a fuller accounting of the impacts of this and other interventions would also include these broader domains that arguably affect capabilities and later life functioning.

32 A recent meta-study (McEwan 2015) of over 70 educational intervention studies from developing countries finds that the largest mean effects were around 0.15 standard deviations (for interventions with computers or instructional technology). Our impact size is higher than those obtained from reducing class sizes in Kenya and India (Duflo, Dupas, and Kremer 2015; Banerjee et al. 2007) and similar to those obtained by providing school grants (Das et al. 2013) or teacher incentives (Muralidharan and Sundararaman 2011; Glewwe, Ilias, and Kremer 2010).
33 In conditional cash transfer programs, the cost of enrolling additional children can range from $450 in Pakistan (Chaudhury and Parajuli 2010) to more than $9,000 in Mexico (de Janvry and Sadoulet 2006). Our costs compare favorably to one of the lowest cost interventions documented thus far, which provides information to parents on the returns to schooling (Jensen 2010).
34 Cost-benefit calculations in the educational literature typically focus on household/child welfare, excluding for instance the effort costs of teachers. A complete welfare analysis would exclude the decline in fees as a transfer and focus on the enrollment and test score gains alone. The returns to this intervention remain significant within this restricted focus.

Despite this limitation, our paper highlights three key aspects of information provision in such contexts. First, a commonly held view is that providing information should allow the (initially) higher quality providers to benefit more by increasing prices and/or enrollment. Yet standard models of asymmetric information suggest that if quality signals are somewhat informative, the original equilibrium is separating, and higher quality providers will lose (informational) rents when the information environment improves. We are able to validate this prediction through our experiment.
Second, with better information, at least low-quality schools should increase their test scores, as they do in our study; this is consistent with evidence on schooling from Brazil (Camargo et al. 2014) and on restaurants in the United States (Jin and Leslie 2003). Finally, we report a sizable improvement in test scores among public schools and argue that a plausible channel is greater interaction between parents and schools, which could have increased the (utility) costs for public school teachers of poor performance.

Our results also help inform the ongoing debate on public versus private education in low-income countries where public sector failures are common. Increasingly, parents can choose between multiple schools, public and private. In this context, market-level interventions that can improve the performance of the schooling sector as a whole can yield rich dividends. What we have been able to show here is that the dissemination of credible and comparable information on learning quality is an intervention that can improve performance in the private sector and simultaneously strengthen the public sector. Fixing market failures in the private sector should remain a priority—and doing so can yield broad improvements across the public and private sectors, on both efficiency and equity.

References

Akerlof, George A. 1970. "The Market for 'Lemons': Quality Uncertainty and the Market Mechanism." Quarterly Journal of Economics 84 (3): 488–500.
Akresh, Richard, Damien de Walque, and Harounan Kazianga. 2013. "Cash Transfers and Child Schooling: Evidence from a Randomized Evaluation of the Role of Conditionality." World Bank Policy Research Working Paper 6340.
Alderman, Harold, Peter F. Orazem, and Elizabeth M. Paterno. 2001. "School Quality, School Cost, and the Public/Private School Choices of Low-Income Households in Pakistan." Journal of Human Resources 36 (2): 304–26.
Andrabi, Tahir, Jishnu Das, and Asim Ijaz Khwaja. 2013. "Students Today, Teachers Tomorrow: Identifying Constraints on the Provision of Education." Journal of Public Economics 100: 1–14.
Andrabi, Tahir, Jishnu Das, and Asim Ijaz Khwaja. 2017. "Report Cards: The Impact of Providing School and Child Test Scores on Educational Markets: Dataset." American Economic Review. https://doi.org/10.1257/aer.20140774.
Andrabi, Tahir, Jishnu Das, Asim Ijaz Khwaja, Duriya Farooqi, Tara Vishwanath, and Tristan Zajonc. 2007. Learning and Educational Achievements in Punjab Schools (LEAPS) Report. Washington, DC: World Bank.
Andrabi, Tahir, Jishnu Das, Asim Ijaz Khwaja, and Tristan Zajonc. 2011. "Do Value-Added Estimates Add Value? Accounting for Learning Dynamics." American Economic Journal: Applied Economics 3 (3): 29–54.
Annual Status of Education Report (ASER) India. 2012. Annual Status of Education Report: India. http://www.asercentre.org/education/India/status/p/143.html (accessed July 2016).
Annual Status of Education Report (ASER) Pakistan. 2012. Annual Status of Education Report: Pakistan. http://www.asercentre.org/education/India/status/p/143.html (accessed July 2016).
Banerjee, Abhijit V., Rukmini Banerji, Esther Duflo, Rachel Glennerster, and Stuti Khemani. 2010. "Pitfalls of Participatory Programs: Evidence from a Randomized Evaluation in Education in India." American Economic Journal: Economic Policy 2 (1): 1–30.
Banerjee, Abhijit V., Shawn Cole, Esther Duflo, and Leigh Linden. 2007.
"Remedying Education: Evidence from Two Randomized Experiments in India." Quarterly Journal of Economics 122 (3): 1235–64.
Banerjee, Abhijit, Rema Hanna, Jordan C. Kyle, Benjamin A. Olken, and Sudarno Sumarto. Forthcoming. "Tangible Information and Citizen Empowerment: Identification Cards and Food Subsidy Programs in Indonesia." Journal of Political Economy.
Baum, Donald, Laura Lewis, Oni Lusk-Stover, and Harry Patrinos. 2014. "What Matters Most for Engaging the Private Sector in Education: A Framework Paper." Systems Approach for Better Education Results Working Paper 8.
Bjorkman, Martina, and Jakob Svensson. 2009. "Power to the People: Evidence from a Randomized Field Experiment on Community-Based Monitoring in Uganda." Quarterly Journal of Economics 124 (2): 735–69.
Burde, Dana, and Leigh L. Linden. 2013. "Bringing Education to Afghan Girls: A Randomized Controlled Trial of Village-Based Schools." American Economic Journal: Applied Economics 5 (3): 27–40.
Camargo, Braz, Rafael Camelo, Sergio Firpo, and Vladimir Ponczek. 2017. "Information, Market Incentives, and Student Performance: Evidence from a Regression Discontinuity Design in Brazil." Journal of Human Resources. http://jhr.uwpress.org/content/early/2017/04/03/jhr.53.2.0115-6868R1.abstract.
Carneiro, Pedro, Jishnu Das, and Hugo Reis. 2016. "The Value of Private Schools: Evidence from Pakistan." Institute for Fiscal Studies Working Paper CWP22/16.
Chaudhury, Nazmul, and Dilip Parajuli. 2010. "Conditional Cash Transfers and Female Schooling: The Impact of the Female School Stipend Programme on Public School Enrolments in Punjab, Pakistan." Applied Economics 42 (28): 3565–83.
Das, Jishnu, Stefan Dercon, James Habyarimana, Pramila Krishnan, Karthik Muralidharan, and Venkatesh Sundararaman. 2013. "School Inputs, Household Substitution, and Test Scores." American Economic Journal: Applied Economics 5 (2): 29–57.
Das, Jishnu, Priyanka Pande, and Tristan Zajonc. 2012. "Learning and Level Gaps in Pakistan with a Comparison with Uttar Pradesh and Madhya Pradesh." Economic and Political Weekly 47 (26–27): 228–40.
Das, Jishnu, and Tristan Zajonc. 2010. "India Shining and Bharat Drowning: Comparing Two Indian States to the Worldwide Distribution in Mathematics Achievement." Journal of Development Economics 92 (2): 175–87.
De Janvry, Alain, and Elisabeth Sadoulet. 2006. "Making Conditional Cash Transfer Programs More Efficient: Designing for Maximum Effect of the Conditionality." World Bank Economic Review 20 (1): 1–29.
Dranove, David, and Ginger Zhe Jin. 2010. "Quality Disclosure and Certification: Theory and Practice." Journal of Economic Literature 48 (4): 935–63.
Dranove, David, Daniel Kessler, Mark McClellan, and Mark Satterthwaite. 2003. "Is More Information Better? The Effects of 'Report Cards' on Health Care Providers." Journal of Political Economy 111 (3): 555–88.
Duflo, Esther, Pascaline Dupas, and Michael Kremer. 2015. "School Governance, Teacher Incentives, and Pupil-Teacher Ratios: Experimental Evidence from Kenyan Primary Schools." Journal of Public Economics 123: 92–110.
Figlio, David, and Lawrence S. Getzler. 2006. "Accountability, Ability and Disability: Gaming the System." In Advances in Applied Microeconomics, Volume 14, edited by Michael Baye and John Maxwell, 35–49. Bingley: Emerald Group Publishing Limited.
Figlio, David, and Cassandra M. D. Hart. 2014. "Competitive Effects of Means-Tested School Vouchers." American Economic Journal: Applied Economics 6 (1): 133–56.
Glewwe, Paul, Nauman Ilias, and Michael Kremer. 2010. "Teacher Incentives." American Economic Journal: Applied Economics 2 (3): 205–27.
Hastings, Justine S., and Jeffrey M. Weinstein. 2008. "Information, School Choice, and Academic Achievement: Evidence from Two Experiments." Quarterly Journal of Economics 123 (4): 1373–414.
Hoxby, Caroline M. 2000. "Does Competition among Public Schools Benefit Students and Taxpayers?" American Economic Review 90 (5): 1209–38.
Hoxby, Caroline M. 2002. "The Cost of Accountability." National Bureau of Economic Research Working Paper 8855.
Jacob, Brian A., and Steven D. Levitt. 2003. "Rotten Apples: An Investigation of the Prevalence and Predictors of Teacher Cheating." Quarterly Journal of Economics 118 (3): 843–77.
Jensen, Robert. 2010. "The (Perceived) Returns to Education and the Demand for Schooling." Quarterly Journal of Economics 125 (2): 515–48.
Jin, Ginger Zhe, and Phillip Leslie. 2003. "The Effect of Information on Product Quality: Evidence from Restaurant Hygiene Grade Cards." Quarterly Journal of Economics 118 (2): 409–51.
Kling, Jeffrey, Jeffrey Liebman, Lawrence Katz, and Lisa Sanbonmatsu. 2004. "Moving to Opportunity and Tranquility: Neighborhood Effects on Adult Economic Self-Sufficiency and Health from a Randomized Housing Voucher Experiment." Princeton University Industrial Relations Section Working Paper 481.
McEwan, Patrick J. 2015. "Improving Learning in Primary Schools of Developing Countries: A Meta-Analysis of Randomized Experiments." Review of Educational Research 85 (3): 353–94.
Milgrom, Paul, and John Roberts. 1986. "Price and Advertising Signals of Product Quality." Journal of Political Economy 94 (4): 796–821.
Mizala, Alejandra, and Miguel Urquiola. 2013. "School Markets: The Impact of Information Approximating Schools' Effectiveness." Journal of Development Economics 103: 313–35.
Muralidharan, Karthik, and Venkatesh Sundararaman. 2011. "Teacher Performance Pay: Experimental Evidence from India." Journal of Political Economy 119 (1): 39–77.
Private Educational Institutions in Pakistan (PEIP). 2000. Dataset. Islamabad: Pakistan Bureau of Statistics, Government of Pakistan.
Rogosa, David. 2005. "Irrelevance of Reliability Coefficients to Accountability Systems: Statistical Disconnect in Kane-Staiger 'Volatility in School Test Scores.'" Nonpartisan Education Review 1 (1): 1–78.
Shapiro, Carl. 1983. "Premiums for High Quality Products as Returns to Reputations." Quarterly Journal of Economics 98 (4): 659–79.
Wolinsky, Asher. 1983. "Prices as Signals of Product Quality." Review of Economic Studies 50 (4): 647–58.
World Bank Group. 2004. World Development Report 2004: Making Services Work for Poor People. Washington, DC: World Bank.