Policy Research Working Paper 5629
Impact Evaluation Series No. 49

School Inputs, Household Substitution, and Test Scores

Jishnu Das
Stefan Dercon
James Habyarimana
Pramila Krishnan
Karthik Muralidharan
Venkatesh Sundararaman

The World Bank
Development Research Group
Human Development and Public Services Team
April 2011

Abstract

Empirical studies of the relationship between school inputs and test scores typically do not account for the fact that households will respond to changes in school inputs. This paper presents a dynamic household optimization model relating test scores to school and household inputs, and tests its predictions in two very different low-income country settings--Zambia and India. The authors measure household spending changes and student test score gains in response to unanticipated as well as anticipated changes in school funding. Consistent with the optimization model, they find in both settings that households offset anticipated grants more than unanticipated grants. They also find that unanticipated school grants lead to significant improvements in student test scores but anticipated grants have no impact on test scores. The results suggest that naïve estimates of the effect of public education spending on learning outcomes that do not account for optimal household responses are likely to be considerably biased if used to estimate parameters of an education production function.

This paper is a product of the Human Development and Public Services Team, Development Research Group. It is part of a larger effort by the World Bank to provide open access to its research and make a contribution to development policy discussions around the world. Policy Research Working Papers are also posted on the Web at http://econ.worldbank.org. The author may be contacted at jdas1@worldbank.org. The Impact Evaluation Series has been established in recognition of the importance of impact evaluation studies for World Bank operations and for development in general. The series serves as a vehicle for the dissemination of findings of those studies. Papers in this series are part of the Bank's Policy Research Working Paper Series. The papers carry the names of the authors and should be cited accordingly.

The findings, interpretations, and conclusions expressed in this paper are entirely those of the authors. They do not necessarily represent the views of the International Bank for Reconstruction and Development/World Bank and its affiliated organizations, or those of the Executive Directors of the World Bank or the governments they represent.
Produced by the Research Support Team School Inputs, Household Substitution, and Test Scores Jishnu Das* Stefan Dercon James Habyarimana Pramila Krishnan Karthik Muralidharan Venkatesh Sundararaman JEL Classification: H52, I21, O15 Keywords: school grants, school inputs, household substitution, education in developing countries, randomized experiment, India, Zambia, Africa, education production function * Jishnu Das (World Bank and Center for Policy Research, Delhi: jdas1@worldbank.org ) Stefan Dercon (Oxford University, BREAD, and CEPR: stefan.dercon@economics.ox.ac.uk ) James Habyarimana (Georgetown University, IZA, and Center for Global Development: jph35@georgetown.edu ) Pramila Krishnan (Cambridge University and CEPR: pk237@cam.ac.uk ) Karthik Muralidharan (UC San Diego, NBER, BREAD, and J-PAL: kamurali@ucsd.edu) Venkatesh Sundararaman (World Bank: vsundararaman@worldbank.org) We thank Julie Cullen, Gordon Dahl, Roger Gordon, Gordon Hanson, Hanan Jacoby and several seminar participants for comments. The World Bank and the UK Department for International Development (DFID) provided financial support for both the Zambia and India components of this paper. The experiment in India is part of a larger project known as the Andhra Pradesh Randomized Evaluation Study (AP RESt), which is a partnership between the Government of Andhra Pradesh, the Azim Premji Foundation, and the World Bank. We thank Dileep Ranjekar, Amit Dar, Samuel C. Carlson, and officials of the Department of School Education in Andhra Pradesh for their continuous support. We are especially grateful to DD Karopady, M Srinivasa Rao, and staff of the Azim Premji Foundation for their leadership in implementing the project in Andhra Pradesh. Vinayak Alladi provided excellent research assistance. The findings, interpretations, and conclusions expressed in this paper are those of the authors and do not necessarily represent the views of the World Bank, its Executive Directors, or the governments they represent. 1. Introduction The relationship between school inputs and education outcomes is of fundamental importance for education policy and has been the subject of hundreds of empirical studies around the world (see Hanushek 2002, and Hanushek and Luque 2003 for reviews of US and international evidence respectively). However, while the empirical public finance literature has traditionally paid careful attention to the behavioral responses of agents to public programs1, the empirical literature estimating education production functions has rarely accounted for household re-optimization in response to public spending. This is a critical gap because (a) household responses to education policies will mediate the extent to which different types of education spending translate into learning outcomes, and (b) parameters of education production functions are typically not identified if household inputs respond to changes in school-level inputs (see Urquiola and Verhoogen 2009 for one such example in the context of class-size). We develop a dynamic model of household optimization that clarifies how increases in school-provided inputs translate into learning outcomes. We then test the main predictions of the model in two very different countries ­ Zambia and India ­ using unique matched data sets of school and household spending, and panel data on student achievement. 
A key contribution of this paper is our ability to measure household spending changes and student test-score gains in response to both unanticipated as well as anticipated changes in school funding. The former measures the production function effect of increased school funding (a partial derivative holding other inputs constant), while the latter measures the policy effect (a total derivative that accounts for re-optimization by agents). The theoretical framework of a dynamic forward-looking model provides a useful guide to the key issues. In this framework, households' optimal spending decisions will take into account all information available at the time of decision making. The impact of school inputs on test scores depends then on (a) whether such inputs are anticipated or not and (b) the extent of substitutability between household and school inputs in the education production function. The model predicts that if household and school inputs are technical substitutes, an anticipated increase in school inputs in the next period will decrease household contributions that period. Unanticipated increases in school inputs limit the scope for household responses, leaving 1 Illustrative examples include Meyer (1990) on unemployment insurance, Cutler and Gruber (1996) on health insurance, Eissa and Leibman (1996) on the EITC, Autor and Duggan (2003) on disability insurance. See Moffitt (2002) for an overview on labor supply responses to welfare programs. 1 household contributions unchanged in the short run. These differences lead to a testable prediction: If household and school inputs are (technical) substitutes, unanticipated inputs will have a larger impact on test scores than anticipated inputs. We test this using data on educational spending for largely substitutable school inputs, such as books and writing materials in both Zambia and India. Our data from Zambia allow us to distinguish between two different types of school spending: a predictable and fixed rule-based school block grant and an unpredictable district- level source of funds that varied widely across schools. The cross-sectional variation in the per- student rule-based grant comes from variation in school enrollment, which is instrumented for with the size of the catchment area (Case and Deaton 1999, and Urquiola 2006 use a similar instrumental variable strategy). We find that household spending substantially offsets variations in predicted per-student school grants. Evaluated at the mean, for each dollar spent on schools via the predictable grants, household spending on education reduces by a similar amount. In contrast, unpredictable grants have no impact on household spending. We also find that student test scores respond positively to the unanticipated sources of funds (test scores in schools receiving these funds are 0.10 standard deviations (SD) higher for both the English and mathematics tests for a mean transfer of just under $3 per pupil), but that they do not vary with variations in anticipated funds. This evidence is strongly suggestive that the two main predictions of the model are correct and is robust to several checks. However, we cannot fully rule out all identification concerns, and therefore test the model again using experimental variation induced by a randomly-assigned school grant program in the Indian state of Andhra Pradesh. 
The Andhra Pradesh (AP) school block grant experiment was conducted across a representative sample of 200 government-run schools in rural AP with 100 schools selected by lottery to receive a school grant (also around $3 per pupil) over and above their regular allocation of teacher and non-teacher inputs. The conditions of the grant specified that the funds were to be spent on inputs used directly by students and not on any infrastructure or construction projects. The program was implemented for two years. In the first year, the grant was exogenously assigned and a surprise for recipient schools, while in the second year, the grant continued to be exogenous (relative to the comparison schools), but was now anticipated by the parents and teachers of program schools. 2 We find that household education spending in program schools is significantly lower in the second year than in the first year of the program suggesting that households offset the anticipated grant significantly more than they offset the unanticipated grant (just like in Zambia). Evaluated at the mean, the point estimates suggest that for each dollar spent in the form of the anticipated grant in the treatment group, household spending declines by 0.85 dollars (and we cannot reject that the grant is completely offset by the household). Further, students in program schools perform significantly better than those in comparison schools at the end of the first year of the (unanticipated) school grant program, scoring 0.08 and 0.09 SD more in language and mathematics tests respectively for a transfer of about $3 per pupil. In the second year of the program, there is no significant effect of the (anticipated) school grant on test scores. These findings are again consistent with the two main predictions of the model and are virtually identical to those from Zambia. The two sets of results complement each other and provide greater external validity to our findings. The Zambia case offers an analysis of two sources of funding (rule-based and discretionary), but relies on cross-sectional data and instrument quality. The AP case offers experimental variation in one source of funding, which changes from unanticipated to anticipated over time. The policy implications of our results, which are discussed in the concluding section, follow from the insight that the impact of any school input on test-scores will depend on the degree of substitutability between the school input and what households can provide. The impact of anticipated school grants in both settings is low or zero, not because the money did not reach the schools (it did) or because it was not spent well (there is no evidence to support this), but because households realigned their own spending patterns optimally. The replication of the findings in two very different settings2, with two different implementing agencies (the government in Zambia and a leading non-profit organization in AP), and in representative population-based samples suggests that the impact of school grant programs is likely to be highly attenuated by household responses. Further, we find no heterogeneity in household responses 2 The two settings are similar in some ways including having high primary school enrollment but low student test scores and having limited funding for recurrent non-salary expenditures (Pratham 2010, Kanyika et al. 2005). 
However, at the time of the study, Zambia experienced severe declines in per-capita government education expenditure and a stagnant labor market, while Andhra Pradesh has been one of the fastest growing states in India with large increases in government spending in education over the last decade. Our finding very similar results in a dynamic, growing economy and in another that was, at best, stagnant at the time of our study suggests that the results generalize across very different labor market conditions and the priority given to education in the government's budgetary framework. 3 across asset-poor and asset-rich households suggesting that school grants for learning materials may largely be viewed as pure income transfers to households, and that their long-term impact on learning is unlikely to be higher than the income elasticity of test scores. This has direct implications for thinking about the effectiveness of many such programs across several developing countries.3 The distinction between anticipated and unanticipated inputs and the differential ability of households to substitute across various inputs could account for the wide variation in estimated coefficients of school inputs on test scores (Glewwe 2002, Hanushek 2003, or Kreuger 2003), and our results highlight the empirical importance of distinguishing between policy effects and production function parameters (Todd and Wolpin 2003, and Glewwe and Kremer 2005 make this point theoretically). A failure to reject the null hypothesis in studies that use the production function approach could arise either because the effect of school inputs on test scores through the production function is zero or because households (or teachers or schools) substitute their own resources for such inputs. While in our case the substitution takes the form of textbooks or writing materials, in a more general setting it may include parental time4, private tuition and other inputs.5 Our results show that the policy effect of school inputs is different from the production function parameters with consequences both for estimation techniques and for policy. The remainder of the paper is structured as follows. Section 2 describes the theoretical framework and develops the dynamic model which motivates our estimating equations. Section 3 presents results from Zambia using cross-sectional variation in anticipated and unanticipated school funding, while section 4 presents results from the school grant experiment in India. Section 5 discusses robustness to alternative interpretations and section 6 concludes. 2. Model The aim of this section is to offer an analytical framework to organize the empirical investigation and to understand the results. Becker and Tomes (1976) provide a classic model of the role of parents in spending on educational inputs, but do not model the interaction of school 3 Examples include school grants under the Sarva Shiksha Abhiyan (SSA) program in India, the Bantuan Operasional Sekolah (BOS) grants in Indonesia, and several similar school grant programs in African countries (see Reinikka and Svensson 2004 for descriptions of school grant programs in Uganda, Tanzania, and Ghana). 4 Houtenville and Conway (2008) estimate an achievement production function that includes measures of parental effort and find that parental effort is negatively correlated with school resources. 5 Of course, not all school inputs are substitutes. 
As we show in Section 2, these predictions do not hold for school inputs that are complementary to household inputs.

and household inputs. Todd and Wolpin (2003) allow for the possible substitutability of household and school inputs, but do not offer an explicit optimization model to derive empirical predictions. The contribution of our model is to specify the household's dynamic optimization problem, solve it subject to both budget and production function constraints, and to derive the Euler equation that shows the optimal growth path of test scores (based on an appropriate shadow price of the cost of investing in educational inputs in each period).6 We use this solution to discuss the differential impact of anticipated and unanticipated school inputs on test-score improvements and show how this varies based on whether school and household spending are substitutes or complements.

A household derives (instantaneous) utility from the test scores of a child, TS, and the consumption of other goods, X. The household maximizes an inter-temporal utility function U(.), additive over time and states of the world with discount rate $\beta$ (<1), subject to an inter-temporal budget constraint. Finally, test scores are determined by a production function relating current achievement $TS_t$ to past achievement $TS_{t-1}$, household educational inputs $z_t$, school inputs $w_t$, non-time-varying child characteristics $\mu$ and non-time-varying school characteristics $\nu$. We assume that household utility is additively separable, increasing and concave in test scores and other goods [A1]; and that the production function for test scores is given by $TS_t = F(TS_{t-1}, w_t, z_t, \mu, \nu)$, where $F(\cdot)$ is concave in its arguments [A2]. Under [A1] and [A2] the household problem is

$\max_{(X_t, z_t)} \; U = E_t \sum_{t}^{T} \beta^t \left[ u(TS_t) + v(X_t) \right]$   (1)

s.t. $A_{t+1} = (1+r)\,(A_t + y_t - P_t X_t - z_t)$   (2)

$TS_t = F(TS_{t-1}, w_t, z_t, \mu, \nu)$   (3)

$A_{T+1} = 0$   (4)

Here u and v are concave in each of their arguments. The inter-temporal budget constraint, Equation (2), links asset levels $A_{t+1}$ with initial assets $A_t$, private spending on educational inputs $z_t$, income $y_t$ and the consumption of other goods, $X_t$. The price of educational inputs is the numéraire, the price of other consumption goods is $P_t$ and r is the interest rate. The production function constraint, Equation (3), dictates how inputs are converted to educational outcomes, and the boundary condition, Equation (4), requires that at t=T, the household disposes of all remaining assets so that all loans are paid back and there is no bequest motive.

6 This relates closely to the discussion on durable goods and inter-temporal household optimization; see Deaton and Muellbauer (1980), Jacoby and Skoufias (1997), Foster (1995) and Dercon and Krishnan (2000).

We treat test scores as the observable measure of human capital. The latter is what parents care about, while the former is what they observe and optimize with respect to. The formulation can be seen as a short-cut for an alternative set up in which parents derive future utility from the flow of returns to the child's stock of human capital, as in a more standard human capital investment model.
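Purely as a restatement for readers who prefer recursive notation (a sketch in our own notation; the value function $V_t$ is not used elsewhere in the paper), the problem (1)-(4) can equivalently be written as

$V_t(A_t, TS_{t-1}) = \max_{X_t, z_t} \; E_t\left[ u(TS_t) + v(X_t) + \beta\, V_{t+1}(A_{t+1}, TS_t) \right]$

subject to (2) and (3), with $V_{T+1} \equiv 0$ and the expectation taken over the school inputs that are not yet known when the household chooses $X_t$ and $z_t$.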
As we are mainly interested in deriving the optimal dynamic path for reaching the desired stock of human capital considering the costs and benefits of boosting test scores in current and future periods, the insights gained from using a human capital investment model are going to be similar, given the other assumptions made, especially the concavity of the period-by-period production function.7

In this formulation, credit markets are perfect so that there are no bounds on $A_{t+1}$ apart from Equation (4).8 Moreover, households choose only the levels of $X_t$ and $z_t$, so that school inputs, $w_t$, are beyond their control. In the contexts studied here, this is a reasonable assumption since school resources are allocated at state or federal levels and are not tied to a local property tax that residents may choose (unlike in the US). At the time the household makes its decision, it knows the underlying stochastic process governing $w_t$ but not the actual level; we assume that school inputs are a source of uncertainty in the model--for simplicity, the only source.

Maximization of Equation (1) subject to Equations (2) and (3) provides a decision rule related to $TS_t$, characterizing the demand for test scores. Since test scores are a stock, we define a per-period price for test scores as the user-cost of increasing the stock in one period by one unit, i.e., the relevant (shadow) price in each period for the household. As in the durable goods literature (Deaton and Muellbauer 1980), the user cost evaluated at period t, $\pi_t$, is (see Das et al. (2004) for its derivation):

$\pi_t = \frac{1}{F_{z_t}(\cdot)} - \frac{F_{TS_t}(\cdot)}{(1+r)\,F_{z_{t+1}}(\cdot)}$   (5)

Here, the first term measures the cost of taking resources at t and transforming them into one extra unit of test scores. When implemented through a production function, the cost of buying an extra unit is the inverse of the marginal product of spending, $F_{z_t}(\cdot)$. However, since TS is durable, increasing TS in period t reduces the cost of acquiring TS in period t+1 in proportion to $F_{TS_t}(\cdot)$, and the second term thus measures the present value of this reduction in cost in the next period, expressed in monetary terms.9

7 Further, we assume that households care about the level of educational achievement, a stock. Results are unaffected if households care about the (instantaneous) flow from educational outcomes, provided that the flow is linear in the stock.

8 It is straightforward to incorporate imperfect credit markets in this framework (see Das et al. 2004).

Given the user cost, the first-order Euler condition determines the optimal path of educational outcomes between period t-1 and t as:

$E_{t-1}\left[\beta\, \frac{\pi_{t-1}}{\pi_t}\, \frac{U_{TS_t}}{U_{TS_{t-1}}}\right] = 1$   (6)

This is a standard Euler equation stating that along the optimal path, test scores will be smooth, so that the marginal utilities of educational outcomes will be equal in expectations, appropriately discounted and priced. Finally, the concavity of the production function in each time period will limit the willingness of households to boost education fast since the cost is increasing in household inputs.10 Starting from low levels in childhood, the optimal path will be characterized by a gradual increase in educational achievement over time. Under the further assumptions that household utility is additively separable and of the CRRA form, and that marginal utility is defined as $TS_t^{-\gamma}$ ($\gamma$ is the coefficient of relative risk aversion), Equation (6) can be rewritten as:

$\left(\frac{TS_t}{TS_{t-1}}\right)^{-\gamma} \beta\, \frac{\pi_{t-1}}{\pi_t} = 1 + e_t$   (7)

where $e_t$ is an expectation error, uncorrelated with information at t-1.
Taking logs and expressed for child i, we obtain the optimal growth path:

$\ln\left(\frac{TS_{it}}{TS_{it-1}}\right) = \frac{1}{\gamma}\ln\beta + \frac{1}{\gamma}\ln\left(\frac{\pi_{it-1}}{\pi_{it}}\right) + \frac{1}{\gamma}\ln(1 + e_{it})$   (8)

which is determined by the path of user-costs, and a term capturing surprises.

9 In the durable goods literature, the user cost per period is derived by assuming that the good is sold in the second period. Though there is no second-hand market for test scores, the shadow price for consuming a unit of test scores derived above is similar to those derived in the durable goods literature (see Foster 1995 for a similar derivation of the rental-equivalent price of boosting nutritional status in one period).

10 The "per-period" concavity of the education production function can be motivated in several ways, the most intuitive of which is the existence of limits to how much a student can learn in a given period of time. While the unit of time is not specified in the model (as in the consumption smoothing literature in general), it is natural to consider the unit to be one year in the context of education, since decisions regarding education are typically made prior to the start of the school year. If an additional school grant arrives after this initial spending and is spent on learning materials, households are unlikely to be able to sell materials already purchased and we assume that they will only re-optimize at the start of the next school year.

In this paper, we do not aim to use the structural dynamic model to estimate an impulse-response function over time to an unexpected change in inputs (the data requirements for that exercise are beyond any education data set we know of). However, we can use this theoretical model to derive an empirical model that nests some key predictions on how anticipated and unanticipated inputs affect the path of test scores. To derive these, assume that school resources are not known with certainty until households make decisions regarding their own inputs. Let $w_t^a$ ($w_t^u$) be inputs at time t that were anticipated (unanticipated) at t-1. For unanticipated increases in school inputs, households are unable to respond till the next time period and are therefore pushed off the optimal path (see footnote 10). The increase in educational achievement in period t is given by $F_{w_t}\, dw_t^u$, and the change in the growth path is given by $\partial \ln(TS_t)/\partial w_t^u = F_{w_t}/TS_t$, which is strictly positive. In the case of anticipated increases, the effect on the path of outcomes will depend on the impact on the user-cost of educational achievement at t, since there is no direct impact on the budget constraint at t (all information related to the anticipated inputs, including the budget constraint, will have been incorporated into the programming problem at t-1). Using the implicit function theorem with Equation (5) and assuming $TS_t = (1-\delta)TS_{t-1} + F(w_t, z_t, \mu, \nu)$, where the Hessian of $F(\cdot)$
is negative semi-definite,

$\frac{d\pi_t}{dw_t^a} = -\frac{F_{z_t w_t}}{(F_{z_t})^2} \ge 0 \quad \text{if } F_{z_t w_t} \le 0$   (9)

The change in the optimal growth path is given by

$\frac{\partial(\Delta \ln TS_t)}{\partial w_t^a} = -\frac{1}{\gamma}\,\frac{\partial \ln \pi_t}{\partial w_t^a} = \frac{1}{\gamma}\,\frac{1}{\pi_t}\,\frac{F_{z_t w_t}}{(F_{z_t})^2} \le 0 \quad \text{if } F_{z_t w_t} \le 0$   (10)

If household and school inputs are technical substitutes so that $F_{z_t w_t} < 0$, anticipated increases in school inputs at t increase the relative user-cost of boosting TS at t, resulting in lower growth of test scores, ceteris paribus, between t and t-1.11 Households have (price) incentives to shift resources for educational spending to t-1, boosting educational achievement at t-1 in anticipation of the higher resources at t, and also to take advantage of the higher overall resources for educational inputs that allow them to spend relatively less on educational inputs compared to other commodities. Thus, the effect of an unanticipated change is higher than that of an anticipated change: household spending on educational inputs at t is unchanged, as households cannot move some of their spending to t-1, or to other commodities, as they could with anticipated increases of government spending.12,13

11 In other words, if $F_{z_t w_t} < 0$, an increase in $w_t$ will decrease the marginal product of $z_t$ (and therefore increase the price of boosting TS by increasing $z_t$). We do not model the schools' choice of inputs to spend on, but if their objective function is to maximize TS, they should optimally allocate cash grants across different inputs and therefore account for the degree of substitution with households. One way to interpret these results is that schools are constrained in what they can do and are hence unable to spend this funding on inputs that could not be easily substituted for by parental resources. These constraints could arise either due to thin markets, explicit restrictions on the use of the grants (to hire teachers for instance), an inability to exploit scale economies (for instance, to improve infrastructure), or parental preferences expressed through school committees to spend on substitutable items.

12 If school and household inputs are technical complements, increasing school inputs at t will increase the marginal productivity of household inputs at t, and through the decline in user-costs lead to higher growth in test scores along the optimal path between t and t-1. Anticipated lower user costs for educational inputs at t relative to t-1 create incentives to shift resources from t-1 to t, leading to a higher growth of test scores between t and t-1. Whether this reduces spending and therefore test scores at t-1 depends on preferences, as households have incentives to keep the optimal path of test scores smooth, while taking advantage of the additional government spending at t to spend more on other commodities.

13 The model above is written as if there is only one type of school and household input. It is straightforward to allow for multiple inputs, taking w and z as vectors of educational inputs in the model. Different inputs could have different cross-derivatives, implying different degrees of technical substitutability, so that the extent to which the household may substitute for school spending on particular inputs may differ. We return to this issue in section 5.

Assuming identical risk preferences, an empirical specification consistent with (8) is:

$\ln\left(\frac{TS_{it}}{TS_{it-1}}\right) = \theta_0 + \theta_1 \ln w_{it}^a + \theta_2 \ln w_{it}^u + \theta_3 X_t + \varepsilon_{it}$   (11)

Here, $w_{it}^a$ and $w_{it}^u$ are anticipated and unanticipated changes in school inputs, measured in this paper by the flows of funds, while $X_t$ reflects all other sources of changes in the user cost between t and t-1. The core prediction is that the marginal effect of anticipated funds is lower than that of unanticipated funds when household and school inputs are substitutes. Finally, it is easy to see that if a portion of what the econometrician regards as unanticipated was anticipated by the household (or was substitutable even after the 'surprise' arrival of the school grant), then the estimate of $\theta_2$ will be a lower bound of the true production function effect (see section 5.4).
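The mechanism behind this prediction can be illustrated with a small numerical sketch (ours, not part of the paper's estimation): all functional forms and parameter values below are assumptions chosen only to show that, when household and school inputs are substitutes in a concave per-period production function, an anticipated second-period grant is offset by lower household spending while an unanticipated grant of the same size is not.

# Illustrative two-period sketch of the substitution logic; assumed functional forms.
import numpy as np
from scipy.optimize import minimize

Y, beta_d = 10.0, 1.0                     # lifetime resources and discount factor (assumed)

def produce(ts_lag, w, z):                # TS_t = TS_{t-1} + sqrt(w_t + z_t): w and z substitutes
    return ts_lag + np.sqrt(w + z)

def neg_utility(plan, w1, w2):
    z1, z2 = plan
    x = Y - z1 - z2                       # resources left for other consumption
    if z1 < 0 or z2 < 0 or x <= 0:
        return 1e6                        # crude penalty to keep the search feasible
    ts1 = produce(0.0, w1, z1)
    ts2 = produce(ts1, w2, z2)
    return -(np.log(ts1) + np.log(x / 2) + beta_d * (np.log(ts2) + np.log(x / 2)))

def plan(w1, w2):                         # household's optimal (z1, z2) given expected school inputs
    return minimize(neg_utility, x0=[1.0, 1.0], args=(w1, w2), method="Nelder-Mead").x

base  = plan(1.0, 1.0)                    # no grant expected in period 2
antic = plan(1.0, 2.0)                    # period-2 grant announced in advance
score2 = lambda z, w2: produce(produce(0.0, 1.0, z[0]), w2, z[1])

print("household z2: baseline %.2f vs anticipated grant %.2f" % (base[1], antic[1]))
print("period-2 score: baseline %.2f, anticipated %.2f, unanticipated %.2f"
      % (score2(base, 1.0), score2(antic, 2.0), score2(base, 2.0)))

In this parameterization the anticipated grant lowers the household's own period-2 spending, while the unanticipated grant (the baseline plan plus the surprise input) raises the period-2 score by more, mirroring the comparison of (9)-(11).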
3. Zambia

3.1 Background and Context

The educational system in Zambia is based on public schools (less than 2 percent of all schools are privately run) and the country has a history of high primary enrollment rates. Teacher salaries are paid directly by the central government, and account for the majority of spending on school-level resources; schools receive few other resources from government. Districts receive some discretionary funding for non-salary purposes from the central government and aid programs. However, since the 1990s, these sources were highly unreliable and unpredictable, partly due to the operation of a "cash budget" in view of the poor macroeconomic situation, and partly due to the irregularity of much of the aid flows to the education sector (Dinh et al. 2002). In 2002, the year of our survey, less than 24 percent of all schools received such discretionary grants, and conditional on receipt, there was considerable variation, with some schools receiving 30 times as much as others. Few resources were distributed in kind to schools during the year of the survey (see Das et al. 2003). Overall, the share of discretionary resources was only about a tenth of the share of the teacher salary bill.

Parental involvement in schools is high, with parents traditionally expected to contribute considerably to the finances of the school via fees paid through the Parent Teacher Association (PTA). Limited direct government funding for non-salary purposes during economic decline put pressure on parents to provide for inputs more usually provided by government expenditure. This customary arrangement regarding PTA fees changed in 2001; following an agenda of free education, all institutionalized parental contributions to schools, including formal PTA fees, were banned in April 2001. At the same time, this put further pressure to complement school finances by further direct private parental spending on education.

In 2001, the year preceding our survey, a rule-based cash grant through the government's Basic Education Sub-Sector Investment Program (BESSIP) was provided to every school to reverse some of the pressure on school finances arising from a persistent economic decline. These grants were fixed at $600 per school ($650 in the case of schools with Grades 8 and 9) irrespective of school enrollment to exclude any discretion by the administration. The grant was managed via a separate funding stream from any other financial flows, and directly delivered to the school, via the headmaster. Spending decisions were made at the Annual General Meeting, before the start of the school year. The share of this funding in overall school funding was considerable: for 76% of schools it was the only public funding for non-salary inputs, while its average share in total school resources was 86%.
The scheme also attracted much publicity, increasing its transparency; combined with the simplicity of the allocation rule, this ensured that the grants reached their intended recipients. Disbursement was fast and reliable: 95 percent of all schools had received the stipulated amounts by the time of the survey, and the remainder within 1 month of survey completion (Das et al. 2003).14 Therefore, we expect that in the year of the survey the fixed cash grants would be anticipated by households making their educational investment decisions for the year, contrary to discretionary sources, which had become highly unpredictable and therefore unanticipated. Furthermore, because the grants were fixed in size, there was considerable variation across schools in per-student terms due to underlying differences in enrollment. We use the variation in per-student amounts to examine the crowding-out of household expenditures, a strategy discussed further below.

3.2 Sampling and Data

We collected data in 2002 from 172 schools15 in 4 provinces of Zambia (covering 58 percent of the population), where the schools were sampled to ensure that every enrolled child had an equal probability of inclusion. The results are therefore externally valid within the 4 provinces of the study. The school surveys provide basic information on school materials and funding as well as test scores for mathematics and English for a sample of 20 students in grade 5 in every school, who were tested in 2001 as part of an independent study and were then retested in 2002 to form a panel. To supplement these data, we also collected information for 540 households matched to a sub-sample of 34 schools identified as "remote" using GIS mapping tools (defined as schools where the closest neighboring school was at least 5 kilometers away). From these schools, the closest village was chosen and 15 households were randomly chosen from households with at least one child of school-going age. The restriction of the household survey sample to 34 remote schools allows us to match household and school inputs in an environment where complications arising from school choice are eliminated.

14 This contrasts with the early experience in Uganda (Reinikka and Svensson 2004).

15 The initial sample contained 182 schools, although 2 yielded only incomplete information, 5 were private schools not covered in this paper, and 3 could not be matched to the test scores data from the Examination Council of Zambia.

We use the entire sample of 172 schools to estimate the relationship between test scores and cash grants to schools (rule-based and discretionary). We use the sub-sample of 34 schools matched to 540 households to estimate the relationship between rule-based cash grants to schools and household expenditures on education. Table 1 presents summary statistics separately for rural and urban schools, as well as for schools that are in our "remote" sample and matched to households. As might be expected, there are significant differences between rural and urban areas, with the latter having a better-off student body, but not necessarily better school supplies per student. Our sample of "remote" schools is not significantly different from rural schools on most measures, but they attract poorer students than the other rural schools, and have relatively more books and desks per student (though each desk or textbook is still shared by two students).
Per-student funding from the predictable rule-based grant increases as we go from urban to rural to remote schools, which is consistent with a fixed rule-based grant being distributed among fewer students in rural and remote areas. Substantial parts of school spending are suitable for substitution by parents. On average, 54% is spent by the school on books, chalk (for slates), stationery and other school materials, while 23% is spent on sports materials and equipment. About 19% is spent on utilities, maintenance and infrastructure, and only 3% is spent on allowances and other costs linked to teachers.16

16 Looking at average spending shares by households, 27% is on books, stationery, and other materials for school, while 19% is spent on cash contributions of various forms (although PTA fees were formally abolished) and other direct cash payments to the school. The remainder, 54% of household expenditure, is on school and sport uniforms and shoes, and for sports activities at school.

3.3 Empirical Methodology

We first test whether there is crowding out of household educational spending in response to anticipated grants. We estimate a cross-section demand model for the 1,195 children (from 540 households) matched to 34 schools, in which household spending on school-related inputs is regressed on anticipated and unanticipated grants with and without a set of controls for child, household and school-level variables:

$\ln z_{ij} = \alpha_1 A_i + \alpha_2 \ln w_j^a + \alpha_3 \ln w_j^u + \alpha_4 X_i + \varepsilon_{ij}$   (12)

where $z_{ij}$ is the spending by the household on child i enrolled in school j, $w_j^a$ and $w_j^u$ are respectively anticipated (rule-based) and unanticipated (discretionary) grants per student in school j that matches to child i, and $X_i$ are other characteristics of child i, including assets owned by the household. We test whether $\alpha_2 < 0$ and $\alpha_3 = 0$, i.e., households respond negatively to the pre-announced, anticipated rule-based grants at the school level by cutting back their own funding, but are unable to respond to cash grants that are unanticipated.

To address the concern that $w_j^a$ captures unobserved components of household demand operating through an enrollment channel, we use the size of the eligible cohort in the catchment area as an instrument for school enrollment and therefore the level of per-student cash grants. This instrumentation strategy is similar to that of Case and Deaton (1999) and Urquiola (2006) in the case of class size, and more recently of Boonperm et al. (2009) and Kaboski and Townsend (2008) in the context of large fixed grants to villages in Thailand. Using the size of the eligible cohort as an instrument for enrollment is especially credible in this context since we use only a sample of remote schools and can abstract away from issues of school choice. We also confirm that there is no correlation between the instrument and $X_i$.
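As a sketch of how (12) can be taken to data with this instrument: the column names below (spend_child, grant_rule_ps, grant_disc_ps, cohort_size, assets, school_id), the log1p handling of zero discretionary grants, and the use of the linearmodels package are our assumptions for illustration, not a description of the code behind Table 2.

# Hedged 2SLS sketch for (12): instrument the log per-student rule-based grant
# with the size of the eligible cohort in the catchment area.
import numpy as np
import pandas as pd
from linearmodels.iv import IV2SLS

df = pd.read_csv("zambia_households.csv")       # hypothetical matched child-school file

df["ln_spend"] = np.log(df["spend_child"])
df["ln_w_a"] = np.log(df["grant_rule_ps"])      # anticipated (rule-based) grant per student
df["ln_w_u"] = np.log1p(df["grant_disc_ps"])    # unanticipated; log1p because most are zero

exog = pd.DataFrame({"const": 1.0,
                     "ln_w_u": df["ln_w_u"],
                     "assets": df["assets"]})   # stand-in for the child/household controls X_i

res = IV2SLS(dependent=df["ln_spend"],
             exog=exog,
             endog=df["ln_w_a"],                # endogenous through enrollment
             instruments=df["cohort_size"]      # eligible cohort in the catchment area
             ).fit(cov_type="robust")           # clustering by school is another option
print(res.summary)
print(res.first_stage)                          # first-stage (weak-instrument) diagnostics

The coefficient on ln_w_a corresponds to $\alpha_2$ and the coefficient on ln_w_u to $\alpha_3$ in (12).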
We explore the impact of different spending types using Equation (13), based on (11), modeling changes in standardized test-scores TS between t and t-1, regressed on anticipated and unanticipated spending and a set of controls at t-1 capturing sources of heterogeneity and differences in user costs:

$\Delta TS_{it} = \phi_0 + \phi_1 \ln w_{it}^a + \phi_2 \ln w_{it}^u + \phi_3 X_{t-1} + \varepsilon_{it}$   (13)

The prediction is that $\phi_1 < \phi_2$: unanticipated spending will have a larger effect on test scores than anticipated spending.17

3.4 Results

3.4.1 Household Spending

The results of estimating (12) are presented in Table 2, showing results without and with controls, and using the size of the eligible cohort in the catchment area as an instrument.18 The results are consistent with the predictions from our model: across all specifications, the estimated elasticity of substitution for anticipated grants ($\alpha_2$) is always negative and significant and ranges from -0.72 to -1.12, while the coefficient of unanticipated grants ($\alpha_3$) is small and insignificant.19 Evaluated at the mean, we cannot reject the hypothesis that for each dollar spent on the rule-based grant per student, households reduce school expenditure by one dollar, while there is no substitution of discretionary, unanticipated spending.

17 In one specification shown, $X_{t-1}$ will include the lagged dependent variable $TS_{it-1}$ as a further control for heterogeneity in the path of test-scores over time.

18 We can reject the hypothesis that the instrument is weak: the F-statistic of the first-stage regression is above 10. The impact of an extra child in the catchment area on enrollment is 0.68, which is close to the actual enrollment of about 80% in the sample.

One concern may be that households in larger villages (which have smaller per capita anticipated funding) have a different overall demand for education. We address this concern by comparing household expenditure across schools with different levels of rule-based grants. We divide schools into two categories - those receiving less than the median per-child rule-based grant ("low rule-based grant" schools) and those receiving more than the median ("high rule-based grant" schools) - and Table 3 shows school and household expenditure for these two types of schools. As expected, we find that the per-student grant is significantly lower in the "low rule-based grant" schools. However, household spending on education is significantly higher in these schools. Most importantly, there is no significant difference in total expenditure per child across these two school types. This suggests that overall demand for education is similar across the households in the sample, and that they compensate/offset for lower/higher spending at the school level.20

3.4.2 Test Scores

Tables 4A and 4B show the results for English and Mathematics for different specifications, where all estimations are at the school level, based on equation (13). The high variability in discretionary funding, with less than a quarter of the school sample receiving any funds and other schools receiving very high levels, encourages us to explore two specifications for discretionary funding. Table 4A shows the results expressing discretionary funding as a dummy variable, while in Table 4B we introduce both the level and the square of discretionary funding. In each table, we show two specifications for test-score results for English and for Mathematics. In the first specification, we only include some geographical characteristics (rural/urban and province dummies). The second is our key result, and also includes changes in other school-level characteristics that change over time in the data (changes in head teacher, changes in chair of the

19 Only 4 schools (or 12%, with about 150 students in total) in this sample received discretionary funding, possibly weakening this test.
20 In a parent`s succinct summary: "The school had no textbooks this year, so we had to buy our own". 14 Parent-Teaching Association, and changes in fees for this association).21 Other school-level controls did not affect the results. For all specifications, the coefficient on anticipated grants is small and insignificant: there is no improvement in test-scores from these rule-based grants. Adding higher order terms for anticipated grants does not make any difference. For English, there is a consistent impact of discretionary funds received by the school. In a specification with level and square terms (Table 4B), the overall effect is significant at 10% (in column 2).22 When added as a dummy in Table 4A, the effect is significant at 5% (in column 2). For Mathematics, the effect of discretionary funds is significant only when added as a dummy (at 10%, see column 3 in Table 4A). Nonparametric investigation of the relationship between levels of discretionary funds and test score gains suggested a highly non-linear relationship for both English and Mathematics (not shown). Consistent with Table 4A, a positive relationship with discretionary funds exists for both subjects, but Table 4B suggests that a simple (quadratic) parametric formulation is only sufficient to capture this relationship for English. Focusing on the results in Table 4A (columns 2 and 4), we find that on average, receiving discretionary funds adds 0.10 of a SD of test-scores, in both English and Mathematics; in contrast, and consistent with the predictions of the model, there is no impact from rule-based, anticipated school grants. One key threat to identification in the results above is the possibility that the discretionary/unanticipated grants may have been targeted to areas with the most potential improvement in test scores. Alternatively, parents and communities that care enough to obtain these funds for their schools may also be motivated to increase test scores in other ways. We address this concern by comparing the observed characteristics of schools that do and do not receive these discretionary funds and find that there is no significant difference between these types of schools (Table 5). In this table, as column [3] shows, we find no difference in initial levels of test scores, other school performance indicators, location or wealth characteristics between these two types of schools. Column [4] shows OLS results when using these characteristics to try to explain whether discretionary funding was received, and we reject the joint significance of these characteristics. At least on the basis of observables, there is no evidence of differences between these two types of schools that are correlated with the trajectory 21 These controls could be thought of as potentially changing the benefits of spending on schooling by parents (i.e. the user costs). 22 Although the squared term is negative, for all observed values in the sample, the overall effect is still positive. 15 of test score of gains. 3.4.3 Limitations These results are strongly suggestive of the processes outlined in the theory: household and school-level funds are technical substitutes in the production function of test scores and when school-level funding increases, it crowds out private spending within the household. Consequently, such grants have little (if any) impact on the path of test-scores. 
We are also able to show that the lack of a relationship between test-scores and school-level funding is not because such funds have no effect through the production function--when households are surprised and cannot adjust their own expenditures, test-scores increase with school funding. However, there are a few caveats to the estimates presented from Zambia. First, our household substitution results are only valid for the remote rural sample and while we can show that household spending offsets variation in anticipated funding, our test of the hypothesis that it does not respond to unanticipated funding is based on a small sample (only 4 of the 34 linked schools in the remote sample reported any unanticipated funding at all). Second, while standard in the literature, we cannot rule out that the size of the catchment area (used as an instrument for school-level enrollment) could be correlated to returns in the labor market or historical levels of education in the population. These in turn may be directly correlated to educational investments thus biasing downwards our estimates of crowding-out. Third, unanticipated funds could have been targeted in unobservable ways to schools where parental substitution would be less and where test-scores were more likely to increase even in the absence of funding. While the patterns of observed characteristics and the stability of the results to the use of credible instrumental variables suggest that these are not serious concerns, we cannot fully rule out these alternate explanations. We therefore present results from a field experiment in the Indian state of Andhra Pradesh designed to specifically determine the pattern of crowd-out and we show that the results obtained are virtually identical across these different contexts. The randomized school grants address the second and third caveats above, while the first limitation is addressed by collecting spending data from a large sample of households at multiple points in time (both when the grants were a surprise, and later when they were expected), which allows us to test differential household responses to anticipated and unanticipated funds across a large representative sample of schools. 16 4 Andhra Pradesh, India 4.1 Background and Context Andhra Pradesh (AP) is the 5th largest state in India, with a population of over 80 million, 73% of whom live in rural areas. AP is close to the all-India average on various measures of human development such as gross enrollment in primary school, literacy, and infant mortality, as well as on measures of service delivery such as teacher absence. There are a total of over 60,000 government primary schools in AP and over 70% of children in rural AP attend government-run schools (Pratham 2010). The average rural primary school is quite small, with total enrollment of around 80 to 100 students and an average of 3 teachers across grades one through five.23 Teachers are well paid, with the average salary of regular civil-service teachers being over Rs. 8,000/month and total compensation including benefits being over Rs. 10,000/month (per capita income in AP is around Rs. 2,000/month). Regular teachers' salaries and benefits comprise over 90% of non- capital expenditure on primary education in AP, leaving relatively little funds for recurring non- teacher expenses.24 Some of these funds are used to provide schools with an annual grant of Rs. 2,000 for discretionary expenditures on school improvement and to provide each teacher with an annual grant of Rs. 
500 for the purchase of classroom materials of the teachers` choice. The government also provides children with free text books through the school. However, compared to the annual spending on teacher salaries of over Rs. 300,000 per primary school (three teachers per school on average) the amount spent on learning materials is very small. It has been suggested therefore that the marginal returns to spending on learning materials used directly by children may be higher than more spending on teachers (Pritchett and Filmer 1999). The AP School Block Grant experiment was designed to evaluate the impact of providing schools with grants for learning materials, and the continuation of the experiment over two years (with the provision of a grant each year) allows us to test the differences between unanticipated and anticipated sources of school funds. 23 This is a consequence of the priority placed on providing all children with access to a primary school within a distance of 1 kilometer from their homes. 24 Funds for capital expenditure (school construction and maintenance) come from a different part of the budget. Note that all figures correspond to the years 2005 - 07, which is the time of the study, unless stated otherwise. 17 4.2 Sampling, Randomization, and Program Description The school block grant (BG) program was evaluated as part of a larger education research initiative (across 500 schools) known as the Andhra Pradesh Randomized Evaluation Studies (AP RESt), with 100 schools being randomly assigned to each of four treatment and one control groups.25 We sampled 5 districts across each of the 3 socio-cultural regions of AP in proportion to population. In each of the 5 districts, we randomly selected one administrative division and then randomly sampled 10 mandals (the lowest administrative tier) in the selected division. In each of the 50 mandals, we randomly sampled 10 schools using probability proportional to enrollment. Thus, the universe of 500 schools in the study was representative of the schooling conditions of the typical child attending a government-run primary school in rural AP. Experimental results in this sample can therefore be credibly extrapolated to the full state of Andhra Pradesh. The school year in AP starts in mid June, and baseline tests were conducted in the 500 sampled schools during late June and early July, 2005.26 After the baseline tests were evaluated, 2 out of the 10 project schools in each mandal were randomly allocated to one of 5 cells (four treatments and one control). Since 50 mandals were chosen across 5 districts, there were a total of 100 schools (spread out across the state) in each cell. The geographic stratification allows us to estimate the treatment impact with mandal-level fixed effects and thereby net out any common factors at the lowest administrative level of government, and also improve the efficiency of the estimates of program impact. Since no school received more than one treatment, we can analyze the impact of each program independently with respect to the control schools without worrying about any confounding interactions. The analysis in this paper is based on the 200 schools that comprise the 100 schools randomly chosen for the school block grant program and the 100 that were randomly assigned to the comparison group. Table 6 shows summary statistics of baseline 25 AP RESt is a partnership between the government of AP, the Azim Premji Foundation (a leading non-profit organization in India), and the World Bank. 
The World Bank and the UK Department for International Development (DFID) provided financial support. The Azim Premji Foundation (APF) was the main implementing agency for the study. See Muralidharan and Sundararaman (2010, 2011) for details of other interventions. 26 The selected schools were informed by the government that an external assessment of learning would take place in this period, but there was no communication to any school about any of the treatments at this time. 18 school and student characteristics for both treatment and comparison schools and the null of equality across treatment groups cannot be rejected for any of the variables.27 As mentioned earlier, the block grant intervention targeted non-teacher and non- infrastructure inputs directly used by students. The block grant amount was set at Rs. 125 per student per year (around $3) so that the average additional spending per school was the same across all four programs evaluated under the AP RESt.28 After the randomization was conducted, project staff from the Azim Premji Foundation (APF) personally went to selected schools to communicate the details of the school block grant program (in August 2005). The schools had the freedom to decide how to spend the block grant, subject to guidelines that required the money to be spent on inputs directly used by children. Schools receiving the block grant were given a few weeks to make a list of items they would like to procure. The list was approved by the project manager from APF, and the materials were jointly procured by the teachers and the APF field coordinators and provided to the schools by September, 2005. This method of grant disbursal ensured that corruption was limited and that the materials reached the schools and children. APF field coordinators also informed the schools that the program was likely to continue for a second year subject to government approval. Thus, while program continuation was not guaranteed, the expectation was that it was likely to continue for a second year. Schools were told early in the second year (June 2006) that they would continue being eligible for the school grant program and the same procedure was followed for disbursal of materials (no money was handed over to schools or teachers, and procurement was conducted jointly). Table 7 shows that the majority of the grant money was spent on student stationary such as notebooks, and writing materials (over 40%), classroom materials such as charts (around 25%), and practice materials such as workbooks and exercise books (around 20%). A small amount (under 10%) of the grant was spent in the first year on student durable items like school bags, and plates/cups/spoons for the school mid-day meal program. This amount seems to have been transferred to stationary and writing materials in the second year. We also see that the overall 27 Table 6 shows sample balance between the comparison schools and those that received the block grant, which is the focus of the analysis in this paper. The randomization was done jointly across all treatments and the sample was also balanced on observables across the other treatments. 28 The block grant was set on the basis of the number of students who took the baseline tests as opposed to the number of students enrolled (except for the first grade where there was no baseline test). This ensured that schools that inflated enrollment (which is not uncommon in India) were not rewarded with a larger grant. 
spending pattern at the school level is quite stable across the first and second year of the grant. Many of these items could be provided directly by parents for their children, suggesting a high potential for substitution.

4.3 Data

Data on household expenditure on education was collected from a household survey that attempted to cover every household with a child in a treatment or comparison school and administer a short questionnaire on education expenditures on the concerned child during the previous school year.29 Data on household spending was collected at three points in time: alongside the baseline tests, for spending incurred in the pre-baseline year (Y0); during the second year of the program, about spending during the first year (Y1); and after two full years of the program, about spending during the second year (Y2). Data on household education spending was collected retrospectively to ensure that this reflected all spending during the school year.30

29 The data was collected from a short survey that was only based on the main child who was being covered in the school assessments and not for other siblings or other components of household spending.

30 We obtained spending data from a total of 8,612 households for Y0 (no data was collected for retrospective spending on children in grade 1, because it was their first year in school), 13,572 households for Y1, and 10,189 households for Y2.

The outcome data used in this paper comprise independent learning assessments in math and language (Telugu) conducted at the beginning of the study and at the end of each of the two years of the experiment. The baseline test (June-July, 2005) covered competencies up to that of the previous school year. At the end of the school year (March-April, 2006), schools had two rounds of tests with a gap of two weeks between them. The first test covered competencies up to that of the previous school year, while the second test covered materials from the current school year's syllabus. The same procedure was repeated at the end of the second year, with two rounds of testing. Doing two rounds of testing at the end of each year allows for the inclusion of more overlapping materials across years of testing, reduces the impact of measurement errors specific to the day of testing by having multiple tests around two weeks apart, and also reduces sample attrition due to student absence on the day of the test. For the rest of this paper, Year 0 (Y0) refers to the baseline tests in June-July 2005; Year 1 (Y1) refers to the mean score across both rounds of tests conducted at the end of the first year of the program in March-April, 2006; and Year 2 (Y2) refers to the mean score across both rounds of tests conducted at the end of the second year of the program in March-April, 2007. All analysis is carried out with normalized test scores, where individual test scores are converted to z-scores by normalizing them with respect to the distribution of scores in the control schools on the same test.31

4.4 Results

4.4.1 Household Spending

We estimate:

$\ln z_{ijkt} = \beta_0 Y0 + \beta_1 Y1 + \beta_2 Y2 + \beta_3 (BG \cdot Y0) + \beta_4 (BG \cdot Y1) + \beta_5 (BG \cdot Y2) + Z_m + \varepsilon_{ijk}$   (14)

where $z_{ijkt}$ is the expenditure incurred by the household on education of child i at time t (j and k denote the grade and school), $Y_n$ is the project year, BG is an indicator for whether or not the child was in a block grant school, and standard errors are clustered at the school level. The parameters of interest are $\beta_3$, which should equal zero if the randomization was valid (no differential spending by program households in the year prior to the intervention); $\beta_4$, which measures the extent to which household spending adjusted to an unanticipated increase in school resources (since the block grant program was a surprise in the first year of the project); and $\beta_5$, which measures the response of household spending to an anticipated increase in school resources (since the grant was mostly anticipated in the second year).32 All regressions include a set of mandal-level dummies ($Z_m$) to account for stratification and to increase efficiency.
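A specification such as (14) can be estimated as in the following sketch; the column names (spend, bg, year, mandal_id, school_id) are hypothetical stand-ins for the survey variables, and the actual estimation behind Table 8 may differ in details.

# Hedged sketch of (14): log household education spending on year dummies, their
# interactions with the block-grant indicator, and mandal dummies, with standard
# errors clustered at the school level.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

hh = pd.read_csv("ap_household_spending.csv")   # hypothetical pooled Y0/Y1/Y2 file
hh["ln_spend"] = np.log(hh["spend"])            # bg: 1 if block-grant school, 0 otherwise

model = smf.ols("ln_spend ~ 0 + C(year) + C(year):bg + C(mandal_id)", data=hh)
res = model.fit(cov_type="cluster", cov_kwds={"groups": hh["school_id"]})

# The C(year):bg terms correspond to beta_3, beta_4 and beta_5 in (14)
print(res.params.filter(like=":bg"))
print(res.bse.filter(like=":bg"))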
The parameters of interest are β3, which should equal zero if the randomization was valid (no differential spending by program households in the year prior to the intervention); β4, which measures the extent to which household spending adjusted to an unanticipated increase in school resources (since the block grant program was a surprise in the first year of the project); and β5, which measures the response of household spending to an anticipated increase in school resources (since the grant was mostly anticipated in the second year).32 All regressions include a set of mandal-level dummies (Z_m) to account for stratification and to increase efficiency.

Table 8 confirms that β3 and β4 are not significantly different from zero, while β5 is significantly negative. We report the results both with and without a full set of household controls, and the results are unchanged. The findings are fully consistent with the predictions of the model: in Y1, households did not adjust to the unexpected grant, while in Y2, household spending was adjusted in anticipation of the provision of materials by the school (using the grant).33 The estimated elasticity of -0.25 to -0.27 suggests that at the mean household expenditure for the comparison group (Rs. 411 in Y2), the per-child grant of Rs. 125 would be almost entirely offset (a reduction of roughly 0.25-0.27 x Rs. 411, or Rs. 103-111, which is around 85% of the grant), and we cannot reject that the substitution is 100%.34

31 Since all analysis is done with normalized test scores (relative to the control school distribution), a student can be absent on one testing day and still be included in the analysis without bias, because the included score is normalized relative to the control school distribution for the same test that the student took.

32 We say mostly anticipated because it was not guaranteed that the program would be continued into the second year, but field reports suggest that the perceived likelihood of continuation was high enough that households waited to see the materials provided by the schools before doing their own spending.

33 This was further corroborated by field reports after the program was withdrawn, which suggest that most parents did not buy the materials that they thought would be provided by the school.

4.4.2 Student Test Scores

Our default specification for studying the impact of the school block grant, consistent with equation (11), takes the form:

T_{ijkm}(Y_n) = \lambda \, T_{ijkm}(Y_0) + \delta \, BG + Z_m + \varepsilon_k + \varepsilon_{jk} + \varepsilon_{ijk} \quad (15)

The outcome of interest is the gain in the normalized test score on the specific test (normalized with respect to the score distribution of the comparison schools), where i, j, k, and m denote the student, grade, school, and mandal respectively. Y_0 indicates the baseline tests, while Y_n indicates a test at the end of n years of the program. Including the normalized baseline test score improves efficiency due to the autocorrelation of test scores across multiple periods.35 These regressions also include a set of mandal-level dummies (Z_m), and the standard errors are clustered at the school level. We also run the regressions with and without controls for household and school variables; these controls allow us to capture any sources of changes in user costs, as in (11). The BG variable is a school-level dummy indicating whether the school was selected to receive the school block grant (BG) program, and the parameter of interest is δ, which is the effect on normalized test scores of being in a school that received the grant.
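Specification (15) can be estimated analogously to (14). The sketch below is again a hypothetical illustration rather than the authors' code; the column names (z_end, z_base, block_grant, mandal, school_id) are assumptions standing in for the variables described in the text, with the baseline score set to zero for grade 1 students who had no baseline test.

```python
import statsmodels.formula.api as smf

def fit_test_score_model(score_df):
    """OLS for eq. (15) with mandal dummies and school-clustered standard errors.

    score_df is assumed to have one row per student with columns: z_end (normalized
    end-of-year score), z_base (normalized baseline score, set to zero for grade 1),
    block_grant (0/1), mandal, school_id. Names are illustrative, not the authors'.
    """
    return smf.ols(
        "z_end ~ block_grant + z_base + C(mandal)",
        data=score_df,
    ).fit(cov_type="cluster", cov_kwds={"groups": score_df["school_id"]})

# Example usage: fit = fit_test_score_model(score_df)
#                print(fit.params["block_grant"], fit.bse["block_grant"])  # delta and its clustered SE
```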
The random assignment of treatment ensures that the BG variable in the equation above is not correlated with the error term, and the estimates of the one-year and two-year treatment effects are therefore unbiased.36

34 As in the Zambia case, we used a logarithmic specification; estimating a linear model in levels of spending, we found identical results, including that we could not reject total substitution by households of the school grant in Y2.

35 The inclusion of the baseline test score also allows us to control for individual heterogeneity correlated with baseline test scores. In the case of Zambia, we explored adding this to the specifications in Tables 4A and 4B, but this creates endogeneity problems for inference related to the variables of interest (spending). In the AP case, the randomization ensures that the BG variable is uncorrelated with the error term. Since grade 1 children did not have a baseline test, we set the normalized baseline score to zero for these children (similarly for children in grade 2 at the end of two years of the treatment).

36 We also check for differential post-treatment attrition of teachers and students and find that there is no differential attrition or turnover of teachers between "block grant" and "control" schools. However, there is a small amount of differential student participation in the test at the end of the first year of the program (with attrition from the baseline test-taking sample of 5.4% and 8.2% in the treatment and control groups respectively). As weaker students may drop out of the testing sample, this may bias our estimate of the first-year treatment effect downwards, but since the magnitude of differential attrition is small (2.8%), this bias is likely to be quite small, especially since baseline scores are controlled for. In the second year, however, there is no differential attendance on the end-of-year tests.

Note that the specification in (15) can be used to consistently estimate the one-year and two-year effects of the program, but not the second-year effect alone (with second-year gains as the dependent variable, controlling for Y1 scores), because Y1 scores are a post-treatment outcome that is correlated with the treatment. Thus, specifications with second-year gains as the dependent variable controlling for Y1 scores will not provide consistent estimates of δ for the second year of the program. We show these results for illustrative purposes, and to help understand the mechanism behind the differing results on one-year and two-year impacts of the program.

Columns 1 and 4 of Table 9 show that students in schools that received the block grant scored 0.09 standard deviations (SD) higher than those in comparison schools at the end of the first year of the program for mathematics, and 0.08 SD higher for Telugu. Test scores were 0.04 SD and 0.07 SD higher for mathematics and Telugu at the end of the second year (Table 9, columns 3 and 6). The difference at the end of year one is significant for each subject, but not at the end of two years. The addition of school and household controls does not significantly change the estimated value of δ, again confirming the validity of the randomization (tables available on request). It is striking that after two years of block grants, there is no significant effect on test scores, despite the gains after the first year.
The size of the gains after two years (with point estimates below the point estimates after Y1) suggests that the second year of block grants did not add much to learning outcomes, while depreciation of earlier gains may explain why average gains (in terms of point estimates) after Y2 are smaller than those achieved after Y1 and are not significant.37 These findings are entirely consistent with the predictions of the model, and confirm the considerable substitution in household spending on education in response to the program when it was anticipated.

Finally, we tested for heterogeneity of the block grant (BG) program effect across student and school characteristics by adding a set of characteristics and their interactions with the BG variable to (15). The main result is the lack of heterogeneous treatment effects by several household and child-level characteristics.38 For example, if we expect poor households to be more credit constrained and to be spending within their desired "optimal" amount on education, then we would expect them to offset less of the value of the grant, and the grant to have a larger impact on the learning outcomes of poorer households. We find no evidence of such a pattern. This suggests that even poor households were spending enough on education to almost completely substitute the value of the school grant away from their own spending.

37 Columns (2) and (5) of Table 9 show the results of estimating equation (15) with the second-year gains on the left-hand side. Recall that this estimate is biased as discussed above, but it suggests that the effect of the block grant program in the second year alone was close to zero in mathematics and 0.05 SD in Telugu (both of which are not significantly different from zero).

38 We tested the interaction of the program with school size, proximity to urban centers, school infrastructure, household affluence, parental literacy, caste, sex, and baseline test score.

5 Robustness and Interpretation

We find strong suggestive evidence from two different low-income countries for a model in which households respond to anticipated school funding. The crowding out of private spending is sufficiently substantial to lead to no impact on test scores from anticipated school grants, while unanticipated changes positively affect the growth of children's test scores. In this section, we discuss the robustness of our results and their interpretation.

5.1 What Are the Components of Spending?

One possible concern regarding our interpretation of the results is that the lack of responsiveness by households to unanticipated funds arises because schools spend these funds on different inputs with different parameters of (technical) substitution between household and school funding. Simply put, it is possible that all the unanticipated grants were spent on hiring teachers (whom households cannot substitute for) and all the anticipated grants on textbooks (which they can). We compare patterns of spending across various spending categories and show that this is not the case.

In AP, the pattern of spending across various categories is almost identical between the first and second years of the project (Table 7), and it seems clear that the funds were spent on the same type of inputs both when they were unanticipated (first year) and anticipated (second year). In Zambia, we cannot attribute school spending to specific sources of funding (discretionary vs. rule-based).
However, the total shares spent on the items most suitable for substitution (books, chalk, and stationery) add up to 57% and 47% respectively for schools without and with discretionary funding, suggesting that in both cases, substantial and similar spending occurs on items that could be substituted by households.

This also helps rule out explanations based on diminishing returns to the items procured or the durable nature of school materials. In both countries, the majority of the grant is spent on material that is used up during the school year (stationery, notebooks, practice books, etc.). In the AP experiment, it is possible that some of the classroom materials purchased may be durable, and the results reflect diminishing returns to durables in the second year. However, we see that the exact same fraction of the grant was spent on classroom materials in both years, suggesting that even these materials needed to be replenished. We also explicitly record spending on durables (bags, uniforms, plates, etc.) and find that these accounted for less than 10% of spending in the first year, and under 1% in the second year.

5.2 Are the Unanticipated Grants "True" Surprises?

A further possible concern with our approach may be that the distinction between anticipated and unanticipated funding is artificial, and that households could anticipate both sources after all. For the AP program, this is hard to sustain: as mentioned earlier, the schools had no reason whatsoever to expect the program in the first year, while the grant was eagerly anticipated by schools in the second year. Also, as suggested earlier, most household spending on education occurs at the start of the school year, when the school typically provides parents with a list of items to procure for their child for the school year. In the first year of the experiment, the announcement of the grant program was made around one and a half months into the school year, and materials were typically procured a few weeks after that. Thus, it is highly likely that materials bought with the grant supplemented the initial household spending and that the first-year program effect represents the "production function" effect of additional spending on school materials. In the second year of the program, field reports suggest that in many cases, treatment schools reduced the list of what they expected parents to buy, expecting to use the grant to buy some of these items. Thus, the difference in the degree of anticipation of funds in the first and second year is quite clear.

Similarly, in Zambia the uncertainty related to the cash budget meant that actual spending and budgets were far apart. The typical arrival of these funds at varying points during the school year suggests that households were unlikely to be able to respond to them (as suggested by the positive test score gains in these schools in Tables 4A and 4B, and the findings in Table 2). In any case, we see clearly in Table 3 that households do respond substantially to variations in the rule-based grants and that they spend much more/less in schools with lower/higher per-student rule-based funding.

5.3 Budgetary Offsets

A third possibility is that there are correlations between the two different types of funds that may be confounding our results. In Zambia, we find a positive but insignificant relationship between rule-based funding and discretionary funding [p-value = 0.22]. In AP, the concern would be that anticipated funds are offset by a reduction of other transfers to the program schools.
We measure the total grants received by the schools from all other sources and verify that there is no difference in year-to-year receipts of funds in either treatment or control schools. There is also no significant difference between the amounts received in treatment and control schools in any year, or a significant difference between any of these differences across the years (tables available on request).

5.4 Storage and Smoothing

In interpreting our results, a question that arises is whether households or schools could have smoothed the unexpected grant by either saving some of the funds or storing some materials for use in later years (if the materials had already been bought). We argue that this does not seem to have taken place, because households do not appear to reduce their expenditure in response to the unanticipated grant in either AP or Zambia. On the school side, the program design in AP did not provide schools the option of saving funds. They could have saved materials, but they spent on the same sets of materials in both years, suggesting that storage was limited and that the grant led to a near one-for-one increase in learning materials in the first year. In Zambia, the cash budget system in government spending would have given little scope for smoothing spending, though some of the funds did get used for durable infrastructure. But even if some smoothing via savings, storage, or durable goods spending by the school had been possible, the coefficient on the unexpected grant is a lower bound on the production function parameter (because in this case, the full value of the grant will not have been spent in the same time period), and our results show that the production function effect of the school grant is positive, which would not have been apparent if the relationship between school grants and test scores had been estimated using anticipated grants.

5.5 What Did Households Substitute Spending Toward?

One striking implication of our results is that while it was possible, in production function terms, to obtain a significant increase in student test scores in both contexts for a relatively inexpensive intervention (spending $3/student to raise test scores by 0.1 SD compares very favorably with the cost effectiveness of other education interventions in developing countries), parents chose not to make that investment in the next period when they could have continued making it (in AP), and seem to offset rule-based grants completely in Zambia. While this result suggests that the households had a low income elasticity of test scores, it is impossible to draw any further conclusions without more information on what the household spent the extra cash on. Specifically, given the declining marginal benefit of spending on education in each particular period, the household may still have found it better to save this money for spending on future educational materials, rather than just spend it on goods and services this year. As we do not have information on what the households did with the extra cash available, we cannot explore this further.39

We summarize this section by noting that while further disaggregated data on both school and household expenditures would allow for an even more precise understanding of the mechanism behind our test score results, the combination of the household spending results and the test score results is most parsimoniously explained by the theoretical framework laid out in this paper.
We consider and reject several alternative explanations for these results, and finding the same pattern in two contexts as varied as Zambia and India makes us confident that our results present evidence of differential household responses to anticipated and unanticipated school grants.

6 Conclusion

Data on test scores and household expenditures in the context of school grant programs in Zambia and Andhra Pradesh in India suggest that grants anticipated by households crowd out private educational spending. Consequently, school grants that are fully anticipated have no impact on test scores. Unanticipated grants elicit no household responses and do have positive impacts on learning.

These results have implications for common estimation techniques in the education literature. The dominant technique for estimating the effect of school inputs on test scores is based on the production function approach, where achievement (or changes in achievement) is regressed on school inputs. Following Todd and Wolpin (2003), these estimates represent the policy effect of school inputs, which combines both the effect of inputs on test scores through the production function and household responses to such inputs. Our use of unanticipated inputs allows the estimation of both effects separately, thus shedding more light on the process through which school inputs may or may not affect educational attainment.

39 What we do know from our evidence is that households did not spend it on other (non-education) inputs that may raise child test scores directly, such as child nutrition; if they had, we could have had substantial crowding out but still a positive impact on test scores, which is rejected by our evidence.

This distinction between anticipated and unanticipated inputs could account for the wide variation in estimated coefficients of school inputs on test scores (Glewwe 2002, Hanushek 2003, or Krueger 2003). The production function framework does not separate anticipated from unanticipated inputs, so the regressor is a combination of these two different variables. The estimated coefficient is bounded below by the policy effect and above by the production function parameter; the distance from either bound depends on the extent to which the schooling inputs were anticipated or not (see the stylized sketch below). While experimental evaluations of education interventions typically overcome selection and omitted variable concerns, the distinction highlighted in this paper is relevant even for experiments, since the interpretation of experimental coefficients depends on the time horizon of the evaluation and whether this was long enough for other agents (especially households) to re-optimize their own inputs.

Although we find evidence of high crowding out of anticipated inputs, our results do not suggest an educational policy where inputs are provided unexpectedly. Although test scores in the current period increase with unanticipated inputs, the additional consumption will push households off the optimal path. In subsequent periods, therefore, they will readjust expenditures until the first-order conditions hold again; unanticipated inputs in the current period will not have persistent effects in the future (except due to the durable nature of the good). The policy framework suggested under this approach involves a deeper understanding of the relationship between public and private spending, acknowledging that this may vary across different components of public spending.
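As a stylized illustration of the bound on the estimated coefficient described above (our own notation and simplification, not the paper's formal model): let $f$ denote the production-function effect of a unit of school inputs holding household inputs fixed, and $p \le f$ the policy effect once households re-optimize. If a share $\theta$ of the input variation used for estimation is anticipated by households, while the remaining share $1-\theta$ is unanticipated, then the estimated coefficient is approximately

$$\hat{\beta} \;\approx\; \theta\, p + (1-\theta)\, f, \qquad p \;\le\; \hat{\beta} \;\le\; f,$$

so the estimate approaches the production-function parameter when inputs are mostly unanticipated, and the policy effect when they are mostly anticipated.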
Our key policy implication is that schooling inputs that are less likely to be substituted away by households are better candidates for government provision.40 What might such inputs be? One important example may be teaching inputs, where the combination of economies of scale in production (relative to private tuition), the difficulty for poorly educated parents of substituting for teacher time, and the general non-availability of trained personnel in every village could make public provision more efficient (see Andrabi et al., 2009).

40 An alternative could be to give very large grants to schools. The anticipated grant in both countries was relatively small. For example, in AP only 12% of households were spending less than the per-pupil school grant. If a grant larger than household spending had been given, then crowding out of household spending would have been bounded, and the additional school grant may have had a positive impact on test scores, as total spending by schools and households would have increased.

In a parallel experiment on the provision of an extra teacher to randomly selected schools in Andhra Pradesh, Muralidharan and Sundararaman (2010) find that the impact of the extra teacher was identical in both the first and second year of the project, suggesting that teacher inputs were less likely to be substituted away. Similarly, inputs like school infrastructure that retain some aspects of public goods, and would thus be under-provided by non-coordinating households, are good candidates for government provision.

The approach followed here of treating test scores as the outcome of a household maximization problem, with the production function acting as a constraint, explicitly recognizes the centrality of households in the domain of child learning. This has important implications for both estimation and policy, and further research could potentially separate inputs with high/low degrees of substitutability with regard to private expenditures. One hurdle for such studies is the lack of matched school and household data and the identification of "surprises" in the provision of inputs; long-term data on schooling inputs and panel data on student learning would allow for a deeper understanding in varied contexts based on deviations from means, as is standard in the consumption literature (following Hall 1978). Investments in such data collection will provide the necessary infrastructure for the evaluation of short-, medium-, and long-run impacts of education policy innovations, and should be a high priority for education policy makers and funders of education research.

References

ANDRABI, T., J. DAS, and A. KHWAJA (2010): "Students Today, Teachers Tomorrow? Identifying Constraints on the Provision of Education," Harvard University.
AUTOR, D. H., and M. G. DUGGAN (2003): "The Rise in the Disability Rolls and the Decline in Unemployment," The Quarterly Journal of Economics, 118, 157-205.
BECKER, G. S., and N. TOMES (1976): "Child Endowments and the Quantity and Quality of Children," Journal of Political Economy, 84, S143-S162.
BOONPERM, J., J. HAUGHTON, and S. R. KHANDKER (2009): "Does the Village Fund Matter in Thailand?," The World Bank.
CASE, A., and A. DEATON (1999): "School Inputs and Educational Outcomes in South Africa," Quarterly Journal of Economics, 114, 1047-1084.
CUTLER, D. M., and J. GRUBER (1996): "The Effect of Medicaid Expansions on Public Insurance, Private Insurance, and Redistribution," The American Economic Review, 86, 378-383.
DAS, J., S. DERCON, J. HABYARIMANA, and P.
KRISHNAN (2003): "Rules Vs. Discretion: Public and Private Funding in Zambian Education," Development Research Group, The World Bank.
-- (2004): "When Can School Inputs Improve Test Scores?" World Bank Policy Research Working Paper 3217.
DEATON, A., and J. MUELLBAUER (1980): Economics and Consumer Behavior. Cambridge, UK: Cambridge University Press.
DERCON, S., and P. KRISHNAN (2000): "In Sickness and in Health: Risk Sharing within Households in Rural Ethiopia," Journal of Political Economy, 108, 688-727.
DINH, H. T., A. ADUGNA, and B. MYERS (2002): "The Impact of Cash Budgets on Poverty Reduction in Zambia: A Case Study of the Conflict between Well-Intentioned Macroeconomic Policy and Service Delivery to the Poor," The World Bank.
EISSA, N., and J. B. LIEBMAN (1996): "Labor Supply Response to the Earned Income Tax Credit," The Quarterly Journal of Economics, 111, 605-637.
FOSTER, A. D. (1995): "Prices, Credit Markets and Child Growth in Low-Income Rural Areas," The Economic Journal, 105, 551-570.
GLEWWE, P. (2002): "Schools and Skills in Developing Countries: Education Policies and Socioeconomic Outcomes," Journal of Economic Literature, 40, 436-482.
GLEWWE, P., and M. KREMER (2006): "Schools, Teachers, and Education Outcomes in Developing Countries," in Handbook of the Economics of Education, ed. by E. Hanushek, and F. Welch: North-Holland.
HALL, R. E. (1978): "Stochastic Implications of the Life Cycle-Permanent Income Hypothesis: Theory and Evidence," The Journal of Political Economy, 86, 971-987.
HANUSHEK, E., and J. A. LUQUE (2003): "Efficiency and Equity in Schools around the World," Economics of Education Review, 20, 481-502.
HANUSHEK, E. A. (2002): "Publicly Provided Education," in Handbook of Public Economics, ed. by A. J. Auerbach, and M. S. Feldstein. Amsterdam: North-Holland, 2045-2141.
-- (2003): "The Failure of Input-Based Schooling Policies," Economic Journal, 113, F64-98.
HOUTENVILLE, A. J., and K. S. CONWAY (2008): "Parental Effort, School Resources, and Student Achievement," Journal of Human Resources, 43, 437-453.
JACOBY, H. G., and E. SKOUFIAS (1997): "Risk, Financial Markets, and Human Capital in a Developing Country," The Review of Economic Studies, 64, 311-335.
KABOSKI, J. P., and R. M. TOWNSEND (2008): "A Structural Evaluation of a Large-Scale Quasi-Experimental Microfinance Initiative," Ohio State University.
KANYIKA, J., C. T. SAKALA, M. MWALE, R. MUSUKU, G. G. MWEEMBA, and T. NAKASWE-MUSAKANYA (2005): "Learning Achievement at the Middle Basic Level: Zambia's National Assessment Survey Report."
KRUEGER, A. (2003): "Economic Considerations and Class Size," Economic Journal, 113, F34-63.
MEYER, B. D. (1990): "Unemployment Insurance and Unemployment Spells," Econometrica, 58, 757-782.
MOFFITT, R. A. (2002): "Welfare Programs and Labor Supply," in Handbook of Public Economics: Elsevier B.V., 2393-2430.
MURALIDHARAN, K., and V. SUNDARARAMAN (2010): "Contract Teachers: Experimental Evidence from India," UC San Diego.
-- (2011): "Teacher Performance Pay: Experimental Evidence from India," Journal of Political Economy, 119, 39-77.
PRATHAM (2010): Annual Status of Education Report.
PRITCHETT, L., and D. FILMER (1999): "What Education Production Functions Really Show: A Positive Theory of Education Expenditures," Economics of Education Review, 18, 223-239.
REINIKKA, R., and J. SVENSSON (2004): "Local Capture: Evidence from a Central Government Transfer Program in Uganda," The Quarterly Journal of Economics, 119, 679-705.
TODD, P. E., and K. I.
WOLPIN (2003): "On the Specification and Estimation of the Production Function for Cognitive Achievement," Economic Journal, 113, F3-33.
URQUIOLA, M. (2006): "Identifying Class Size Effects in Developing Countries: Evidence from Rural Bolivia," Review of Economics and Statistics, 88, 171-177.
URQUIOLA, M., and E. VERHOOGEN (2009): "Class-Size Caps, Sorting, and the Regression-Discontinuity Design," The American Economic Review, 99, 179-215.

Table 1: Summary Statistics of Sampled Schools
Columns: (1) Full Sample; (2) Urban Sample; (3) Rural Sample; (4) Remote Sample; (5) Difference (Col 3 - Col 4).
Class Size Indicators
  Size of average class in school: 53.9892; 40.7701; 58.2003; 68.0702; -9.86
  Students per good classroom: 98.3198; 102.4692; 97.9581; 91.0939; 6.86
Infrastructure
  Does school have library: 0.1163; 0.2167; 0.0633; 0.0606; -0.002
  Does school have playground: 0.9128; 0.7833; 0.9747; 1; -0.025
  Does school have fence: 0.3198; 0.8; 0.0759; 0.0303; 0.045
School Inputs
  Math textbooks per 100 pupils: 29.4018; 12.4983; 34.0235; 50.7878; -16.76**
  English textbooks per 100 pupils: 31.9922; 18.9598; 37.8634; 42.8223; -4.95
  Desks per 100 pupils: 40.5585; 38.1258; 38.6156; 50.0305; -11.41**
School Performance
  Fraction Repeating: 0.0772; 0.0468; 0.0935; 0.0937; -0.0002
  Fraction Dropouts in Primary: 0.0423; 0.0191; 0.053; 0.0579; -0.005
Student Assets
  Imputed school-level asset indices: -0.1581; 0.6134; -0.4804; -0.7892; 0.308***
School Funding
  Did school receive discretionary funds: 0.2442; 0.25; 0.2911; 0.1212; 0.169*
  Did school receive rule-based funds: 0.9419; 0.9333; 0.9241; 1; -0.07
  Per-Pupil Discretionary Funds (Kwacha): 10369.31; 4280.306; 11702.52; 18248.62; -6546
  Per-Pupil Rule-Based Funds (Kwacha): 4997.677; 2004.352; 5750.567; 8637.709; -2887.14***
Observations: 172; 60; 78; 34
Notes: The table shows summary statistics for (a) all schools in the sample in Column (1); (b) schools in the sample that are in urban areas only in Column (2); (c) schools in the sample in rural regions but not in the remote sample that was also selected for the household survey in Column (3); and (d) schools in the remote sample only in Column (4). Column (5) reports tests of differences between schools in the rural and the remote samples. School-level asset indices are the average wealth of students in the school, based on surveys with students who were also tested. For the construction of the asset index, see Das et al. (2003). *** p<.01, ** p<.05, * p<.1. 1 US dollar = 3570 Kwacha on 1 September 2001.

Table 2: The Relationship between Household Spending and School Funding
Dependent Variable: Log of Household Spending on Child's Education.
Columns: (1) OLS; (2) OLS; (3) IV; (4) IV.
Rule-Based Funds: -0.716** [0.285]; -0.843*** [0.252]; -1.124*** [0.266]; -0.946** [0.460]
Discretionary Funds: 0.0769 [0.109]; 0.0713 [0.0829]; 0.0661 [0.0910]; 0.0627 [0.0797]
Constant: 14.69*** [2.617]; 15.52*** [2.454]; 18.42*** [2.383]; 16.25*** [3.561]
Geographic Controls: N; Y; N; Y
Child-level Controls: N; Y; N; Y
Household-level Controls: N; Y; N; Y
School-level Controls: N; Y; N; Y
F-stat of First Stage: n/a; n/a; 23.54; 10.32
Observations: 1,195; 1,116; 1,164; 1,085
R-squared: 0.053; 0.239; 0.037; 0.238
Notes: This table shows the relationship between household spending and funding received at the school. All regressions exclude 2 private schools. We report OLS and IV coefficients for the response of household spending to rule-based and discretionary funding at the school level.
Column (1) has no controls beyond rule-based and discretionary funds; column (2) controls include province and rural dummies; child age, the square of age, and gender; parental presence, parental literacy, and household wealth measured through an asset index; and class size in the school, textbooks available per child for Mathematics and English, and the number of desks and chairs per 100 children. Columns (3) and (4) are the estimated coefficients from an instrumental variable specification where we use the size of the school catchment as an instrument for per-student rule-based funding, as discussed in the text. The F-statistic of the first stage for each specification is noted; we reduce the sample size by 2 schools for which this information is not available. Robust standard errors in brackets. *** p<.01, ** p<.05.

Table 3: Household Spending and Rule-Based Allocations in the School
Columns: (1) Low Rule-Based Grant Schools (N=17); (2) High Rule-Based Grant Schools (N=17); (3) Difference.
Average Per-Child Household Expenditure (Kwacha), mean: 17882; 12022; 5860***
  Observations (Households): 612; 620; 1232
Rule-Based Funds (Kwacha), mean: 5915; 12158; -6243***
  Observations (Schools): 17; 17; 34
Total Household and Rule-Based Funding (Kwacha), mean: 23734; 24124; -390
  Observations (Households): 612; 620; 1232
Notes: Rule-Based Funds show the per-student funding received under the BESSIP funding. Total Household and Rule-Based Funding shows the sum of the two. The 34 schools in the sample are categorized into two equal groups with low and high rule-based funding. *** p<0.01, * p<0.1. 1 US dollar = 3570 Kwacha on 1 September 2001.

Table 4A: The Relative Impacts of Rule-Based Funds and the Receipt of Discretionary Funds on Test Scores
Dependent variable is the gain in normalized test scores.
Columns: [1] English; [2] English; [3] Mathematics; [4] Mathematics.
Any Discretionary Funds Received: 0.128** [0.0583]; 0.103** [0.0501]; 0.0794* [0.0457]; 0.0957* [0.0481]
Rule-Based Funds: -0.0272 [0.0343]; -0.0184 [0.0303]; -0.00416 [0.0216]; -0.00445 [0.0262]
Constant: 0.664** [0.288]; 0.550** [0.259]; 0.467** [0.187]; 0.459* [0.235]
Geographical controls: Y; Y; Y; Y
School controls: N; Y; N; Y
Expenditure controls: N; N; N; N
Lagged test scores: N; N; N; N
Observations: 172; 171; 172; 171
R-squared: 0.133; 0.187; 0.042; 0.06
Notes: The table reports the estimated effects of rule-based and discretionary funds on yearly changes in English and Mathematics test scores. Discretionary Funds are treated as a binary variable, separating schools that received a positive amount from those that received zero. Column (1) reports the estimated coefficient with only geographical controls, in the form of indicator variables for whether the school is rural and for the province; Column (2) adds school-level changes in the head teacher, the head of the Parent-Teacher Association, and PTA fees; Columns (3) and (4) report the coefficients for Mathematics.
All regressions are clustered at the district level. *** p<0.01, ** p<0.05, * p<0.1

Table 4B: The Relative Impacts of Rule-Based Funds and Discretionary Funds on Test Scores
Dependent variable is the gain in normalized test scores.
Columns: [1] English; [2] English; [3] Mathematics; [4] Mathematics.
Discretionary Funds: 0.0700** [0.0331]; 0.0598** [0.0274]; 0.0193 [0.0236]; 0.0287 [0.0240]
Square of Discretionary Funds: -0.00488* [0.00274]; -0.00422* [0.00231]; -0.000524 [0.00185]; -0.00122 [0.00192]
Rule-Based Funds: -0.025 [0.0348]; -0.0159 [0.0314]; -0.00603 [0.0215]; -0.00617 [0.0261]
Constant: 0.544* [0.313]; 0.439 [0.287]; 0.461** [0.204]; 0.438* [0.248]
Geographical controls: Y; Y; Y; Y
School controls: N; Y; N; Y
Expenditure controls: N; N; N; N
Lagged test scores: N; N; N; N
F-Test of equality of impact of discretionary and rule-based funds: 4.27; 2.87; 0.99; 0.98
P-Value of F-Test: [.047]; [.10]; [0.32]; [0.33]
Observations: 172; 171; 172; 171
R-squared: 0.139; 0.192; 0.047; 0.065
Notes: The table reports the estimated effects of rule-based and discretionary funds on yearly changes in English and Mathematics test scores. Discretionary Funds and Rule-Based Funds are treated as continuous variables. Column (1) reports the estimated coefficient with only geographical controls, in the form of indicator variables for whether the school is rural and for the province; Column (2) adds school-level changes in the head teacher, the head of the Parent-Teacher Association, and PTA fees. Columns (3) and (4) report the coefficients for Mathematics. All regressions are clustered at the district level. The F-test reported tests the null hypothesis that the impact of discretionary funding is equal to the impact of rule-based funding at mean levels of discretionary funding. *** p<0.01, ** p<0.05, * p<0.1

Table 5: Are Receipts of Discretionary Funds Correlated with Observable School Characteristics?
Columns: [1] Schools that did not receive discretionary funding; [2] Schools that received discretionary funding; [3] Difference; [4] OLS Results.
Total enrolment at school: 887.3692 [677.0056]; 989.5476 [628.4058]; -102.18 [118.13]; 0.0000474 [0.000132]
Average wealth of students in school: -0.1942 [.7971]; -0.0465 [.7349]; -0.158 [.138]; 0.0364 [0.103]
Mean Math score at baseline: -0.0194 [.4433]; -0.0672 [.4226]; -0.047 [.077]; -0.139 [0.0929]
Mean English score at baseline: -0.0585 [.438]; -0.0516 [.5288]; -0.007 [.082]; 0.105 [0.0860]
Fraction Repeating: 0.0768 [.0645]; 0.0786 [.0579]; 0.002 [.011]; 0.445 [0.597]
Fraction Dropouts in Primary: 0.0447 [.0556]; 0.0349 [.0556]; 0.009 [.009]; -0.555 [0.661]
DEO office <5KM: 0.7308 [.4453]; 0.619 [.4915]; 0.112 [.084]; -0.0657 [0.0760]
PEO office <25KM: 0.7077 [.4566]; 0.7857 [.4153]; -0.078 [.075]; 0.129* [0.0751]
Size of average class in school: 56.2947 [38.0703]; 46.9079 [19.2398]; 9.38 [6.12]; 0.151 [0.166]
Observations: 130; 42; 172
R-squared (Column 4): 0.04
F-Test (all coefficients are jointly insignificant; Column 4): 1.58
P-Value of F-test: [0.171]
Notes: The table shows the differences between schools that received any discretionary funds and those that did not. Columns (1) and (2) show the mean values and Column (3) reports the results from the mean comparisons. Column (4) reports results from a regression where we predict the receipt of any discretionary funding with school-level variables that would not have responded to the receipt of funds. The F-test cannot reject that all variables we consider are jointly insignificant, suggesting that schools that received discretionary funds were observationally similar to those that did not.
For Columns (1) and (2), standard deviations are reported in brackets; for Column (3), standard errors of the difference are reported in brackets; and in Column (4) we report robust standard errors after accounting for clustering at the district level.

Table 6: Sample Balance Across Treatments
Columns: [1] Control; [2] Block Grant; [3] P-value (H0: Diff = 0).
School-level variables
  Total Enrollment (Baseline: Grades 1-5): 113.2; 104.2; 0.39
  Total Test-takers (Baseline: Grades 2-5): 64.9; 62.3; 0.64
  Number of Teachers: 3.07; 3.03; 0.84
  Pupil-Teacher Ratio: 39.5; 34.6; 0.17
  Infrastructure Index (0-6): 3.19; 3.40; 0.37
  Proximity to Facilities Index (8-24): 14.55; 14.66; 0.84
Baseline test performance
  Math (Raw %): 18.4; 16.6; 0.12
  Telugu (Raw %): 35.0; 33.7; 0.42
Notes: The table shows the sample balance between the treatment and control groups. 1. The school infrastructure index sums 6 binary variables (coded from 0-6) indicating the existence of a brick building, a playground, a compound wall, a functioning source of water, a functional toilet, and functioning electricity. 2. The school proximity index ranges from 8-24 and sums 8 variables (each coded from 1-3) indicating proximity to a paved road, a bus stop, a public health clinic, a private health clinic, a public telephone, a bank, a post office, and the mandal educational resource center. 3. The t-statistics for the baseline test scores and attrition are computed by treating each student/teacher as an observation and clustering the standard errors at the school level (Grade 1 did not have a baseline test). The other t-statistics are computed treating each school as an observation.

Table 7: Spending of School Grant (Average per Block Grant School)
Columns: Year 1 (Rs.; %); Year 2 (Rs.; %).
Textbooks: 110; 1.1; 246; 2.6
Practice books: 1782; 17.7; 1703; 17.8
Classroom materials: 2501; 24.9; 2354; 24.6
Child Stationery: 4076; 40.5; 4617; 48.2
Child Durable Materials: 864; 8.6; 88; 0.9
Sports Goods and Others: 723; 7.2; 577; 6.0
Average Total Expenditure per Block Grant School: 10057; 100; 9586; 100
Notes: The table shows the average spending in Rupees and the spending share in each category for each year of the school grant.

Table 8: Household Expenditure on Education of Children in Block Grant Schools (relative to comparison schools) over time
Dependent variable is log of household expenditure on children's education.
Columns: [1]; [2].
Block Grant School * Year 0: -0.021 [0.033]; -0.017 [0.031]
Block Grant School * Year 1: -0.043 [0.028]; -0.038 [0.026]
Block Grant School * Year 2: -0.25*** [0.04]; -0.273*** [0.042]
Household Controls: No; Yes
Observations: 34645; 31184
R-squared: 0.142; 0.168
P-value (BG * Year 1 = BG * Year 2): 0.000; 0.000
Notes: Household expenditure on children's education is the sum of spending on textbooks, notebooks, workbooks, pencils, slates, pocket money for school, school fees, and other educational expenses. Block Grant is a dummy denoting whether the school was a treatment school receiving the block grant or not. In column [2], the household controls included are student gender, caste, parental literacy, and household affluence.
* significant at 10%; ** significant at 5%; *** significant at 1%

Table 9: Impact of Block Grant on Student Test Scores
Dependent variable is the gain in normalized test scores.
Columns: [1] Mathematics, first-year gain (unanticipated grant); [2] Mathematics, second-year gain (anticipated grant); [3] Mathematics, two-year gain; [4] Language (Telugu), first-year gain (unanticipated grant); [5] Language (Telugu), second-year gain (anticipated grant); [6] Language (Telugu), two-year gain.
Block Grant School: 0.091 [0.042]**; -0.008 [0.049]; 0.039 [0.049]; 0.079 [0.038]**; 0.047 [0.039]; 0.065 [0.046]
Observations: 13778; 12844; 9891; 13926; 12878; 9981
R-squared: 0.293; 0.302; 0.325; 0.254; 0.206; 0.238
Notes: All regressions include mandal (sub-district) fixed effects, and standard errors are clustered at the school level. Estimates of two-year gains do not include the cohort that was in grade 1 in the second year (since that cohort was exposed to only one year of the program). All regressions include lagged test scores. * significant at 10%; ** significant at 5%; *** significant at 1%.