Policy Research Working Paper 7674

Prioritizing Infrastructure Investment: A Framework for Government Decision Making

Darwin Marcelo, Cledan Mandri-Perrott, Schuyler House, Jordan Schwartz

Public-Private Partnerships Cross-Cutting Solutions Area & Singapore Infrastructure and Urban Development Hub
May 2016

Abstract

Governments must decide how to allocate limited resources for infrastructure development, particularly since financing gaps have been projected for the coming decades. Social cost-benefit analysis provides sound project appraisal and, when systematically applied, a basis for prioritization. In some instances, however, capacity and resource limitations make extensive economic analyses across all projects unfeasible in the immediate term. This paper responds to a need for expanding the available set of tools for project selection by proposing an alternative prioritization approach that is systematic and feasible within the current resource means of government. The Infrastructure Prioritization Framework is a multi-criteria decision support tool that considers project outcomes along two dimensions, social-environmental and financial-economic. When large sets of small- to medium-sized projects are proposed, resources are limited, and basic project appraisal data (but not full social cost-benefit analysis) are available, the Infrastructure Prioritization Framework can inform project selection by combining selection criteria into social-environmental and financial-economic indexes. These indexes are used to plot projects on a Cartesian plane, and the sector budget is imposed to create a project map for comparison along each dimension. The Infrastructure Prioritization Framework is structured to accommodate multiple policy objectives, attend to social and environmental factors, provide an intuitive platform for displaying results, and take advantage of available data while promoting capacity building and data collection for more sophisticated appraisal methods and selection frameworks. Decision criteria, weighting, and sensitivity analysis should be decided and made transparent in advance of selection, and analysis should be made publicly available and open to third-party review.

This paper is a product of the Public-Private Partnerships Cross-Cutting Solutions Area and the Singapore Infrastructure and Urban Development Hub. It is part of a larger effort by the World Bank to provide open access to its research and make a contribution to development policy discussions around the world. Policy Research Working Papers are also posted on the Web at http://econ.worldbank.org. The authors may be contacted at dmarcelo@worldbank.org, cmandriperrott@worldbank.org, jschwartz3@worldbank.org, and shouse@worldbank.org.

The Policy Research Working Paper Series disseminates the findings of work in progress to encourage the exchange of ideas about development issues. An objective of the series is to get the findings out quickly, even if the presentations are less than fully polished. The papers carry the names of the authors and should be cited accordingly. The findings, interpretations, and conclusions expressed in this paper are entirely those of the authors. They do not necessarily represent the views of the International Bank for Reconstruction and Development/World Bank and its affiliated organizations, or those of the Executive Directors of the World Bank or the governments they represent.
Prioritizing Infrastructure Investment: A Framework for Government Decision Making

Darwin Marcelo, Cledan Mandri-Perrott, Schuyler House, Jordan Schwartz

Key words: policy planning, prioritization, infrastructure, decision support, multi-criteria analysis, project selection
JEL classification codes: H54, H76, R42, R53

Introduction

Infrastructure services, widely deemed critical to economic development, trade connectivity, social welfare, and public health, are underprovided in many regions and typically feature strongly in national development plans. Leading up to 2020, an estimated US$836 billion to US$1 trillion will be required each year to meet growth targets worldwide (Ruiz-Nuñez & Wei, 2015; World Bank). Global estimates of the infrastructure investment required to support economic growth and human development lie in the range of US$65 trillion to US$70 trillion by 2030 (OECD, 2006), while the estimated pool of available funds is limited to approximately US$45 trillion (B20, 2014). These needs are particularly intense for developing regions, as the changing landscape of investment and international aid has reduced the availability of donor funds and shifted the locus of infrastructure decision-making from donors to governments.

Moreover, the past 20 years have witnessed a shift towards decentralized infrastructure planning and implementation in many countries. Subnational governments, regional entities, and sector agencies have been delegated responsibility for planning and project selection, though accountability for fund allocations may remain with the centralized finance agency (CFA). While these constituencies may propose numerous infrastructure projects, governments often have insufficient financial resources to implement the full suite of proposals. This requires paring down the sets of proposed infrastructure projects, expanding the pool of resources, or both.

Good practice suggests that economic and strategic project appraisals and feasibility studies provide a sound basis for project prioritization via highest societal net present value. The reality for many countries, however, is that they lack the capacity and resources to provide extensive economic analysis across full project sets, or they are challenged with having to make decisions based on incomplete or second-best information. Thus, there is a need for evidence-based infrastructure decision support that is consistent, pragmatic, and responsive to the particular needs and current capacities of a government.

This paper begins with an overview of existing approaches to project selection, highlighting the challenges of prioritization. Next, we propose an alternative approach to prioritization – the Infrastructure Prioritization Framework (IPF) – that utilizes existing and accessible data via multi-criteria decision analysis. The IPF is intended to help governments systematically compare projects, while promoting the building of analytical capacity and data for more extensive economic analysis. In this way, it is an extension of the set of tools available to support project selection and is complementary to ongoing efforts to build project appraisal and selection capability. The IPF, as presented herein, is the latest version of the tool, which continues to evolve through ongoing piloting. We follow its description with lessons drawn from initial pilots in Vietnam and Panama and conclude with a discussion of next steps to improve the tool's applicability and implementation.
Prioritizing Infrastructure

Project selection implies grappling with the relative exigency, efficiency, and effectiveness of investments. A number of steps are needed to reach decisions that match policy guidance with project appraisal and subsequent investment. The World Bank's Unified Framework on Public Investment Management (PIM), aligned generally with the Public Expenditure and Financial Accountability (PEFA) initiative, is a useful overarching framework for guiding governments through the processes of infrastructure investment and delivery, aimed at increasing the effectiveness and efficiency of infrastructure investments. The PIM framework identifies eight key "must-have" features of an effective public investment management system (see Figure 1). These constitute the "bare-bones institutional features that would minimize major risks, be achievable in a lower-capacity context, and yet provide an effective systemic process for managing public investments" (Rajaram, Tuan, Bileska, & Brumby, 2014, p. 20).

In the PIM framework, project selection should follow first-level screening, sound project appraisal, and independent review.[1] Where information and technical capacity are sufficient, governments may appraise projects via social cost-benefit analysis (SCBA) and extensive feasibility analysis. Thereafter, project selection may be based on selecting projects with the highest net present values (NPV), best fit with infrastructure policy guidance, or (optimally) both.[2]

Figure 1. Key features of a Public Investment Management System. Source: The Power of Public Investment Management (Rajaram et al., 2014).

In practice, however, many governments do not have the resources and capacity needed to generate extensive economic appraisals based on social cost-benefit analysis and full-fledged feasibility assessments across all proposed projects. In these cases, the PIM approach (Rajaram et al., 2014, p. 24) proposes that "the emphasis should be on basic elements of formal project appraisal, including whether
- The need for a project is well justified;
- The project's objectives are clearly specified;
- Broad alternative options to meet the project's objectives are identified and comparatively examined;
- The most promising option is subject to detailed analysis;
- Project costs are fully and accurately estimated; and
- Project benefits are assessed qualitatively as likely to justify the costs."

When governments must prioritize and select projects under conditions of restricted information and capacity, there is a risk that they may fall back on unsystematic, ad hoc selection. In these cases, decision frameworks based on multi-criteria analysis can be helpful to (a) systematize prioritization based on key development goals; (b) make best use of available (or reasonably attainable) information across the set of proposed projects; (c) invite governments to state decision criteria ex ante to control the propagation of wasteful "white elephant" projects; and (d) identify important missing information to improve project appraisal and data collection looking forward.

1. First-level screening should be done to ensure that projects align with the development strategy and meet basic requirements for budget inclusion as a project (Rajaram et al., 2014).
2. In Chile, for example, projects are considered by multiple criteria, including NPV as a key factor.
Such decision support frameworks can help alleviate pervasive problems such as poor or reactive planning, regressive investment, over-commitment, information asymmetries, corruption, and high degrees of political interference. We have undertaken a review of the common principles of and rationales for systematic infrastructure prioritization, which are discussed in Annex 1. In response to an observed need for an expanded set of tools to support project prioritization and selection under conditions of imperfect or basic project appraisal and limited resources, the World Bank developed an innovative and adaptable Infrastructure Prioritization Framework (IPF).[3] This paper discusses the framework's technical aspects and relevance to the broader investment decision process, along with lessons from the conceptualization of the tool in Vietnam and its first pilot application in Panama. We conclude with next steps for refining the framework and exploring its applicability to identifying opportunities for private investment.

A Stepping-Stone Approach: Improving Project Prioritization

Recent attention to infrastructure prioritization is grounded in demonstrated government and multilateral organization demand for evidence, comprehensiveness, value, and legitimacy in infrastructure decision-making.[4] It is also a proposed precursor to identifying opportunities for private sector investment.[5] Common principles of and rationales for systematic infrastructure prioritization are discussed in Annex 1. The provision of legitimate, evidence-based prioritization is necessarily constrained, however, by existing capacity and resource limitations. Many governments make infrastructure decisions with only basic elements of project appraisal at hand. The challenge, therefore, is to provide a provisional framing device that makes the best use of reasonably attainable information in the immediate term, until capacity and resources are sufficient to generate more extensive economic appraisals across full project sets to support prioritization decisions.

In country consultations with Vietnam, Panama, Indonesia, and Peru, we observed that local government units and line agencies proposed large sets of projects to the central government (e.g., the Ministry of Finance) for funding. While proposed projects passed pre-screening and were, indeed, subject to basic appraisal, these did not always include full-fledged SCBA or feasibility studies.

3. See Mandri-Perrott, Marcelo, and Haddon (2014) for a discussion of the conceptualization in Vietnam.
4. In late 2014, the Group of Twenty (G20) Development Working Group (DWG) requested MDBs to take steps to ensure that project preparation facilities collaborate to support governments in infrastructure prioritization. The draft 'MDBs' Common Approach to Prioritizing Infrastructure with their Partner Countries' promotes a harmonized approach to project preparation, including use of standardized environmental and social safeguards policies as well as common approaches including cost-benefit analysis and assessment of project executability, development effectiveness, and greenhouse emissions (G20 DWG, MDBs' Common Approach to Prioritizing Infrastructure with their Partner Countries, 2015).
5. The 2014 World Economic Forum (WEF) Investment Blueprint proposes that "a strategic vision for infrastructure should be the first step for a government to maximize investor financing in infrastructure. This vision should describe the government's medium to long-term infrastructure goals, along with the underlying economic and social rationale, and enable the prioritization of a pipeline of projects in the shorter term" (2014, p. 19). An earlier WEF report, 'Strategic Infrastructure: Steps to Prioritise and Deliver Infrastructure Efficiently and Effectively' (2012), proposes that governments must decide which solutions create the greatest impact in terms of economic growth, while considering social and environmental issues.
Faced with excess projects, limited funds, and no single common denominator for comparison, the central governments struggled to make sense of the mass of proposals. Thus, there was expressed demand for immediate decision support within the existing limitations of the infrastructure planning system, as well as guidance on improving data for better project appraisal in the future. For situations like this, we propose the IPF as a stopgap approach to project prioritization that serves as an interim decision structuring tool until more sophisticated pre-selection analyses are available (see Figure 2). This 'stepping-stone' approach does:
- Inform decision-making on project prioritization;
- Compare projects that have passed strategic pre-screening and have been subject to basic appraisal;
- Make space for technical deliberation;
- Structure the decision space when capacity and information are limited but nevertheless sufficient for systematic comparison; and
- Encourage better appraisal by fostering discussion of key decision factors for which project data should be improved or gathered in the future.

Conversely, the approach does not:
- Deliver a definitive list of projects for selection;
- Replace best practices in project appraisal, particularly social cost-benefit analysis; or
- Take current data deficiencies as acceptable for the long term.

Figure 2. Visualizing the stepping-stone approach to infrastructure prioritization. The figure contrasts three stages of decision support: ad hoc or uninformed project selection; selection informed by the Infrastructure Prioritization Framework; and selection informed by advanced project appraisal. The stages are distinguished by institutional and technical capacity, the depth of project-level information available, and the information base on which decisions rest.

Current Approaches to Prioritization

Prior to describing the mechanics of the Infrastructure Prioritization Framework, we begin with an overview of approaches to project prioritization. This naturally requires dealing with appraisal, which, while not the primary focus of this paper, influences the approach to prioritization and selection.[6] Cost-benefit analysis (CBA) allows projects to be compared on a single metric: monetized value. CBA essentially totals all costs and benefits of a project over its lifetime and discounts future flows to calculate present values.
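To make the discounting step concrete, the short Python sketch below computes a net present value from hypothetical annual benefit and cost streams and an assumed 6 percent discount rate; the figures are illustrative and are not drawn from any project discussed in this paper.

```python
# Illustrative only: discounting hypothetical cost and benefit streams to present value.
def npv(benefits, costs, discount_rate):
    """Net present value of annual benefit/cost streams (year 0 = first element)."""
    return sum((b - c) / (1 + discount_rate) ** t
               for t, (b, c) in enumerate(zip(benefits, costs)))

# Hypothetical project: heavy construction costs up front, benefits accruing later.
benefits = [0, 0, 30, 40, 40, 40]   # monetary units per year
costs    = [60, 50, 5, 5, 5, 5]
print(round(npv(benefits, costs, 0.06), 1))  # a positive NPV favors the project
```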
A key strength is that it allows decision makers to intuitively compare and rank diverse alternatives based on a single indicator (Thomopoulos, Grant-Muller, & Tight, 2009). With social cost-benefit analysis (SCBA), prioritization can be based on selecting the projects that maximize net present value for society overall. Because SCBA assessments require quantification and monetization of positive and negative effects, extensive information about the projects and their projected impacts is required (Van Delft & Nijkamp, 1977). Since information in many contexts is limited, many costs and benefits are difficult to monetize, and SCBA itself can be quite costly, the standard can be a tall order to apply across all proposed projects. This is particularly difficult when governments possess limited resources for appraising large sets of small- and medium-sized projects. An extensive discussion of CBA/SCBA can be found in Annex 2, but most germane to this discussion are the resource, capacity, and time limitations that constrain extensive application of SCBA in many contexts, imposing a practical limitation on the appraisal mode as a basis for prioritization.

6. A good discussion of the key elements of sound project appraisal may be found in The Power of Public Investment Management (Rajaram et al., 2014, p. 76).

SCBA is used extensively in the US, New Zealand, England, Australia, Singapore, Chile, Ireland, and many other countries to assess and prioritize alternative infrastructure projects, particularly those that demand significant investments. But in the past five years, the UK, Australia, and many US states have also published notes and guidance on the application of multi-criteria decision analysis (MCDA), expanding the 'Value for Money' discourse to suggest structured ways of employing MCDA to incorporate key policy criteria. Some countries, such as Ireland, have imposed thresholds to guide when government should apply SCBA, multi-criteria analysis, or simpler assessments, depending on the size of the proposed investment. Appraisal and prioritization processes outside the OECD are largely undocumented, but evidence suggests that prioritization is often based on politics, loose qualitative assessments, or professional judgment, without clear principles underpinning selection (Petrie, 2010). More problematic, in some contexts prioritization is not based on formal appraisal at all, with projects approved or disapproved on a rolling, ad hoc basis. The unstructured path to project approval in many countries leaves room for corruption, inefficiency, and particularist infrastructure policy that is unlikely to effectively serve development needs. While countries strive to improve investment appraisal toward the advanced use of SCBA across large sets of projects, there is a need to support decisions based on more basic appraisal in the interim. MCDA can be a useful approach to make best use of available information, particularly data reflecting key criteria defined by sector or national infrastructure development policy.

Multi-Criteria Decision Analysis

Multi-criteria decision analysis has gained traction as a way of systematically structuring investment decisions when multiple aspects associated with proposed investments must be reconciled. Multi-criteria decision approaches formalize the inclusion of non-monetary and qualitative factors into decision analysis and can be useful when information or analytical resources are limited.
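As a minimal illustration of the weighted-additive logic that most MCDA tools share, a project's overall score can be computed as a weighted sum of criterion scores. The criteria, scores, and weights below are hypothetical and are not those used in the IPF pilots.

```python
# Hypothetical example of a weighted-additive multi-criteria score.
criteria_scores = {             # criterion scores already scaled to 0-100
    "net_present_value":  62,
    "beneficiaries":      80,
    "environmental_risk": 35,   # coded so that higher means lower risk
}
weights = {                     # sum to 1; set ex ante via policy deliberation
    "net_present_value":  0.5,
    "beneficiaries":      0.3,
    "environmental_risk": 0.2,
}
overall = sum(weights[c] * s for c, s in criteria_scores.items())
print(round(overall, 1))  # 62*0.5 + 80*0.3 + 35*0.2 = 62.0
```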
Indeed, MCDAs are currently included in government and multilateral project appraisal and selection practice in regions including the Pacific Island Countries and Argentina, as well as in countries with longstanding and established programs of economic project assessment, including Chile, Ireland, and the UK.[7] MCDAs have the added benefit of flexibility, since they can be recalibrated to accommodate improved data as it becomes available.

7. See, for example, the work of the Pacific Island Infrastructure Facility (www.theprif.org) on infrastructure selection via MCDM, UN Food and Agriculture Organization work on irrigation planning in Argentina (http://www.fao.org/americas/eventos/vii-taller-irrigacion-argentina/es/), or guidance on multi-criteria analysis for policy-making (https://www.gov.uk/government/publications/multi-criteria-analysis-manual-for-making-government-policy).

An extensive discussion of MCDA as applied to infrastructure decision-making is included in Annex 3, but we highlight some key points here. Practically, increased use of MCDA to support infrastructure decisions reflects pressure on governments to work within time, information, and capacity limitations (DCLG, 2009). Also, when information is incomplete or multiple policy goals are at stake, Beinat and Nijkamp suggest that the "compromise principle" is more appropriate than the optimizing principle. This presumes a variety of decision criteria and states that solutions must reflect a compromise between multiple priorities, while discrepancies between outcomes and goals are traded off by the use of preference weights (Beinat & Nijkamp, 1998). In terms of intuition and transparency, multi-criteria models, which synthesize criteria with assigned weights, are pragmatic since they are "able to cope with almost any problem" and are easily understood (Tsamboulas, 2007). Important considerations for applying MCDA to issues of sustainable development include the ability to deal with complex situations (criteria, different scales and aspects, types of data, uncertainties); the possibility of involving more than one decision-maker; and the engagement of stakeholders to increase knowledge and propose alternative solutions (De Montis et al., 2004).

MCDA allows for two critical policy choices: (1) the selection of criteria by which alternatives will be assessed, and (2) the weighting of criteria. These issues are discussed in more detail in Annex 3, but in summary, the selection of criteria is essential to capturing the most important costs and expected impacts of a project, as well as performance with respect to prioritized development goals for the sector and country as a whole. Criteria weighting is also a policy choice. Weighting may simply be uniform, wherein all criteria are equally considered, or it may be subjectively set, with weights assigned via consultation or expert guidance to reflect the (expressed) relative importance of the decision criteria. Alternatively, weights may be statistically determined via methods such as Principal Component Analysis (PCA) to determine the linear combination of criteria that captures most of the variation in the underlying data.

As with any approach, there are limitations and weaknesses associated with MCDA. For one, it lacks the utilitarian grounding in welfare economics that comes with SCBA, wherein project selection is based on maximization of social welfare (Layard & Glaister, 1994).
There is also the threat of subjective manipulation of weights and criteria to privilege certain projects over others. While these weaknesses are apparent, we propose three points in response. First, while not grounded in utilitarian welfare economics, MCDA is neatly aligned with extensive and well-developed bodies of theory in policy analysis, democratic accountability, and deliberative governance, wherein policy selection is based on the stated goals of a polity and its citizens, including criteria of effectiveness, efficiency, feasibility, adequacy, equity, responsiveness, and appropriateness (Dunn, 2015; Araral et al., 2002; Weimer & Vining, 2015; Hajer & Waagenar, 2003; Bardach & Patashnik, 2015; Poister, 1978).[8] Second, the issue of manipulation to privilege the selection of particular projects is not unique to MCDA. Indeed, the threat of methodological corruption is present in every approach to appraisal and selection. Third, rules guiding implementation can help deal with the latter issue. In particular, decision criteria, criteria weighting, and sensitivity analysis should be decided and made transparent in advance of selection, and the data used and resulting analysis should be made publicly available and open to third-party review.

8. Responsiveness is of particular distinction, as it focuses on the extent to which a policy satisfies the needs, preferences, or values of the subjects of policy with societal standing (Dunn, 2015, p. 202). Moreover, as Weimer and Vining point out, "the appropriateness of cost-benefit analysis as a decision rule depends on whether efficiency is the only relevant value and the extent to which important impacts can be monetized. When values other than efficiency are relevant, cost-benefit analysis can still be useful as a component of multigoal policy analysis" (2005).

The Infrastructure Prioritization Framework

The Infrastructure Prioritization Framework (IPF) is a quantitative multi-criteria prioritization approach that synthesizes project-level financial, economic, social, and environmental indicators into two indices – social-environmental and financial-economic – and considers these alongside the public budget constraint for a particular sector. The IPF differs from other multi-criteria decision tools in four ways. First, it systematically incorporates policy goals, social and environmental sustainability considerations, and long-term development aims alongside traditional financial factors. Second, it is predicated on parsimony and pragmatism. Third, results are displayed on an intuitive graphical interface by which decision-makers can compare alternative investment scenarios. Fourth, it facilitates active deliberation of key decision criteria and priorities for improving project appraisal looking forward. In this way, the process itself is as important as its outputs.

Several empirical issues motivated the construction and ongoing development of the IPF. For one, many governments face significant challenges in infrastructure planning, wherein large numbers of infrastructure projects are identified in development plans that must be implemented with scarce public resources, limited institutional capacity, and cost and time constraints. Second, these difficult decisions must be made based on currently available or reasonably attainable information for the set of projects.
Third, given the imperfect nature of appraisal, it is important that projects be evaluated for "social (including environmental) and economic value" (Dabla-Norris et al., 2011) in addition to financial impacts, but many of these social and environmental impacts are difficult to monetize. Fourth, there is a desire to balance analytical efficiency, derived from standardization, with policy and political responsiveness, derived from the selection of decision criteria.

To the last point, the IPF recognizes that the selection of infrastructure projects cannot be divorced from the political economy of project selection. Particular projects may be chiefly valued by governments and other stakeholders due to key policy goals that are non-economic in nature or due to considerations that objective indicators cannot measure, such as promoting social cohesion, honoring culture, or redistributing wealth to the poor. But selection can be responsive to policy without becoming altogether political. As such, this support framework explicitly accommodates policy responsiveness in two ways: through criteria selection and by leaving a degree of freedom in decision-making through multiple references for judgment (i.e., two indices). In addition to building space for political deliberation, consultation, and professional judgment, the following design ideals were incorporated, based on a survey of international best practice:
- The strategic relevance of a project must be determined at the sector level as well as within the appropriate tier of government;
- Project comparison should be systematic and based on quantitative measures, to the greatest extent possible, in order to limit subjectivity;
- Standard indicators of social value and financial return should drive project comparisons; and
- The output should be transparent, allowing for a clear audit trail.

A key strength of the IPF is that it may be flexibly applied. The framework can incorporate elements from other common methods, such as expert judgment and cost-benefit analysis. Expert judgment and deliberation come into play via the selection and definition of criteria, as well as in the selection of projects within the budget constraint. The IPF can also take advantage of financial or partial social CBA components that are more easily quantified, measured, and monetized (e.g., net present values of market-based costs and revenues). Nevertheless, the IPF's most important value-add is in relieving some of the burden of determining and justifying the assumptions required to monetize all benefits and costs.

Technical Features

The IPF is designed to account for the inherent multidimensionality of infrastructure planning processes. Stakeholders determine specific project indicators or 'selection criteria' via a consultative process. Criteria may differ from country to country and sector to sector. For example, the financial and economic index may include indicators such as multiplier effects and net present values, while the social and environmental index may include the number of beneficiaries, carbon footprint, and jobs created. The IPF aggregates criteria into two composite indices – social-environmental and financial-economic – via a weighted additive model.[9] In practice, information may be quantitatively or qualitatively recorded, depending on the attribute. Qualitative data is common when assessing social phenomena and has important informative value in infrastructure projects.
To make use of this information, a method to convert qualitative information into numerical data is applied. The two composite indices allow for the comparison of projects against others within a sector. The key output is a graphical display of projects' relative performance within the sector under study along two axes, defined by the financial-economic and social-environmental composite index scores. Project scores are mapped onto a Cartesian plane, whereby alternative investment scenarios for a sector may be considered. The available budget for the sector is treated as a fixed amount and superimposed upon each of the axes. As a result, the budget constraint sets quadrants on the plane (see Figure 5). Future developments of the IPF may test alternative approaches to capture relationships between the two indices and the budget constraint (e.g., a sloped single budget line) or to account for private participation through PPPs and other private financing schemes, which would make the budget constraint variable. These technical features are further described in the following sections, which describe the IPF step by step.

9. A composite value or index value is a single numerical figure that combines information from several underlying variables. The strength of this approach is that a decision-maker can efficiently consider complex phenomena like economic performance, sustainability, or competitiveness in a single variable (Freudenberg, 2003; Nardo et al., 2005). A selection of widely cited indices is listed in Annex 4.

Constructing the IPF Indices

The first step of the IPF is to identify the set of indicators that will be combined to construct the social-environmental and financial-economic indices. The selection of variables – or project comparison indicators – may differ amongst application contexts based on government policy goals (e.g., particular sectoral, economic, social, and environmental aims) and stakeholder consultation. For explanatory purposes, however, we discuss the IPF index construction as performed in the first conceptualization of the IPF in Vietnam and the initial pilot test in Panama.[10]

The selection of variables seeks to preserve the principle of parsimony. Accordingly, this methodology relies on and requires a minimum level of relevant information to compare expected outcomes of alternative infrastructure investments. Multiple variables are selected to reflect different aspects of expected performance in two composite indicators, the Social and Environmental Index (SEI) and the Financial and Economic Index (FEI), each built on quantitative and transformed qualitative variables combined via an additive model.[11]

To condense dissimilar data types and scales of measurement into indices, three data transformations are required. One must (a) transform qualitative data and ordinal quantitative data into usable scalar data, wherein the intervals between values reflect degrees of difference; (b) standardize criteria measurements to a common scale; and (c) establish weights for each criterion in the additive model. The transformation of categorical and ordinal qualitative and quantitative data into usable numerical data may be done using the Alternating Least Squares Optimal Scaling (ALSOS) algorithm, a widely accepted transformation approach.
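A minimal sketch of steps (a) and (b) follows. It uses a hand-set mapping of ordered categories to numbers as a simplified stand-in for ALSOS (which estimates category scores by alternating least squares rather than assigning them by hand) and then standardizes the resulting values; the projects and ratings are hypothetical.

```python
import statistics

# Hypothetical ordinal ratings for "cultural and environmental risk" on four projects.
risk_ratings = {"P1": "low", "P2": "medium", "P3": "high", "P4": "low"}

# Step (a): place ordered categories on a numeric scale. A hand-set mapping is a
# simplified stand-in for ALSOS, which would estimate category scores from the data.
category_scores = {"low": 1.0, "medium": 2.0, "high": 3.0}
quantified = {p: category_scores[r] for p, r in risk_ratings.items()}

# Step (b): standardize to zero mean and unit variance so indicators are comparable.
mean = statistics.mean(quantified.values())
sd = statistics.stdev(quantified.values())
standardized = {p: (v - mean) / sd for p, v in quantified.items()}
print({p: round(z, 2) for p, z in standardized.items()})
```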
Within a quantified categorical variable, the numbers assigned by the ALSOS algorithm to each category reflect the distance between categories, revealing the implicit metric of the variable (Perreault & Young, 1976).[12] Numerical values are thereafter standardized to have a mean of zero and unit variance. The standard score of a raw score x_ij for project i on variable j is

z_{ij} = \frac{x_{ij} - \bar{x}_j}{s_j}   (1)

where \bar{x}_j is the sample mean and s_j is the standard deviation of variable j.

10. The work in Vietnam is described in detail in Mandri-Perrott, Marcelo, and Haddon (2014). At the request of Vietnam's Ministry of Planning and Investment (MPI), the World Bank piloted the IPF to prioritize and select public infrastructure investments. One of the objectives of the IPF in this case was to provide the Government of Vietnam (GoV) with the means to operationalize the guidelines and requirements of the Public Investment Law (PIML) in an open and transparent manner. The pilot test covered projects in three sectors: transport, irrigation, and urban.
11. To generate a valid numerical expression for the composite indices, qualitative information must be transformed into quantitative variables.
12. Statistical software such as SPSS and SAS include routines to easily perform transformations of qualitative variables into numerical variables.

Once qualitative information is transformed and standardized to isolate the various units of measure, it is combined into composite indices. The standardized indicators are multiplied by weights to create index scores. Weights may be selected by decision-makers or determined by statistical methods. Subjective weighting should be based on deliberation with key stakeholders to reflect the relative importance of component indicators. The strength of this weighting approach is in structuring discussions on the relative importance of component indicators and policy goals. The risk, however, is in exposing the process to manipulation in pursuit of prevailing interests. As such, regardless of the weighting system employed, it is important that decision-makers consider and decide the weighting in advance of analysis and record these decisions transparently.

In the cases of Vietnam and Panama, however, a statistical method, Principal Component Analysis (PCA), was used to determine the weights of each variable in the index's additive function. One of the main characteristics of PCA is the ability to calculate coefficients based solely on the statistical relationships between variables. This is useful when there is an explicit preference to assign weights objectively and a likelihood of redundancies in the underlying data. In Vietnam, the Ministry of Planning and Investment expressed a desire to eliminate all subjectivity from the process, whereas the use of PCA in Panama was practical and experimental, based on the desire to test the approach. Annex 6 includes more notes on PCA.

Social and Environmental Index (SEI)

Infrastructure projects are meant to improve quality of life. A number of direct social and environmental benefits are associated with their implementation, including improved access to public services and the job and income opportunities created during the construction and execution of investments. These benefits come at a cost, however. Engineering works frequently require clearing forested areas, polluting and endangering natural environments, and construction works sometimes involve the resettlement of families or communities.
The IPF directly considers relevant social and environmental benefits and costs via the Social and Environmental Index (SEI), whose sub-components depend on the evaluating government's selected criteria. In Vietnam, the SEI consisted of five indicators: Direct Jobs Created (DJ); Number of Direct Beneficiaries (NB); People Affected by Repurposing of Land Use (PA); Cultural and Environmental Risks (CER); and Pollution, in terms of CO2-equivalent emissions (CO2). The data required to compute each variable were primarily sourced from existing project feasibility studies. Additional variables on projected indirect effects were estimated using data routinely gathered by the National Statistics Office of Vietnam (Marcelo, Mandri-Perrott, & Haddon, 2015).[13]

13. The information collected was used to populate a Microsoft Excel form, which in turn populated a database with embedded macros, programmed to transform the data using the transformation functions and weights required to calculate the total SEI score for a project.

PCA was used to synthesize the social and environmental variables into a composite social-environmental index. Since many social and environmental data were recorded as qualitative ordinal variables, they were transformed using the ALSOS methodology.[14] Finally, PCA was used to generate an index function from the first principal component, where the coefficients that maximize the variance of the first principal component are used as the weights for each standardized variable. The resultant SEI function was expressed as follows:

SEI_i = \gamma_1 Z_{DJ,i} + \gamma_2 Z_{NB,i} + \gamma_3 Z_{PA,i} + \gamma_4 Z_{QCER,i} + \gamma_5 Z_{CO2,i}   (2)

The linear combination of the standardized variables Z_DJ, Z_NB, Z_PA, Z_QCER, and Z_CO2 is equivalent to the SEI additive function. The coefficients \gamma_1, …, \gamma_5 can be interpreted as the relative weights of the SEI variables. Had weights been subjectively determined or set to be equal, these would replace the PCA-determined coefficients (the \gamma values) above. The resultant SEI calculations are normalized and rescaled to generate scores between 0 and 100 for each project. The rescaled score is

\tilde{Z}_i = 100 \times \frac{Z_i - Z_{min}}{Z_{max} - Z_{min}}   (3)

where Z_{min} is the minimum value of variable Z across projects and Z_{max} is the maximum.

14. For example, cultural heritage loss and environmental risks were qualitatively classified from "high" to "low". To apply PCA, the variable was transformed to a quantitative value.

In Panama, on the other hand, the SEI consisted of only three indicators: the number of beneficiaries, the number of direct jobs created, and the number of service recipients living below the poverty line, which were combined via the same approach.

Financial and Economic Index (FEI)

The same procedure was used to construct the FEI, only with different component variables. Financial profitability and economic value are probably the most common investment decision considerations. That said, public investment decisions must also consider externalities and may include indirect economic effects, such as multiplier and network effects on other industries and economic sectors. The Financial and Economic Index (FEI) seeks to condense the minimum amount of relevant information required to appropriately represent the financial and economic effects derived from infrastructure investments. In the case of Vietnam, the FEI consisted of five indicators selected by the government in consultation with sector specialists: Financial Internal Rate of Return (IRR); Multiplier Effects (ME), determined by an input-output model; a categorical score indicating the project's location in designated Priority Economic Zones (PEZ); a qualitative measure of Implementation Risk (IR); and a qualitative measure of Complementarity/Competition effects (CC), intended to reflect the degree of alignment of each project with existing infrastructure.
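The sketch below illustrates, with hypothetical data, the construction procedure common to both indices: indicators are standardized as in equation (1), weighted by the loadings of the first principal component, combined additively as in equation (2), and rescaled to 0-100 as in equation (3). It uses a plain eigendecomposition of the correlation matrix rather than a statistical package, and the sign and scale convention applied to the loadings is an illustrative simplification.

```python
import numpy as np

# Hypothetical raw indicators for five projects: beneficiaries, direct jobs, poor residents served.
X = np.array([
    [ 5000., 120.,  900.],
    [ 2000.,  80.,  400.],
    [12000., 200., 3000.],
    [ 1500.,  50.,  300.],
    [ 8000., 150., 2500.],
])

# Equation (1): standardize each indicator to zero mean and unit variance.
Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)

# PCA weights: coefficients of the first principal component of the correlation matrix.
eigvals, eigvecs = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
w = eigvecs[:, np.argmax(eigvals)]
w = np.abs(w) / np.abs(w).sum()   # illustrative sign/scale convention so weights sum to one

# Equation (2): weighted additive combination; equation (3): rescale to 0-100.
index = Z @ w
rescaled = 100 * (index - index.min()) / (index.max() - index.min())
print(np.round(rescaled, 1))
```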
As with the SEI, the data required to compute each variable for the FEI were drawn from feasibility studies (FSs). Additional data on indirect effects could be calculated using input-output table data collected by the National Statistics Office. The FEI was calculated using PCA, as follows:

FEI_i = \delta_{11} Z_{IRR,i} + \delta_{12} Z_{ME,i} + \delta_{13} Z_{QPEZ,i} + \delta_{14} Z_{QIR,i} + \delta_{15} Z_{QCC,i}   (4)

where the suffix Z again denotes standardization, the suffix Q indicates quantification of a qualitative variable, and the coefficients \delta_{11}, …, \delta_{15} are PCA-determined weights. As with the SEI, the resultant FEI calculations were normalized and rescaled. In Panama, on the other hand, the FEI required only standardization, not additive combination: data limitations were such that only one base indicator was available, the benefit-cost ratio emerging from financial cost-benefit analysis (and partial social CBA in transport).

Comparing Projects by SEI and FEI

Construction of the SEI and FEI composite indicators allows the ranking of projects within a sector according to projected relative performance along each dimension. But a good infrastructure investment in terms of financial and economic performance may simultaneously be a poor choice from a social and environmental perspective, and vice versa. Thus, policymakers should not make definitive investment decisions based on only one dimension. In fact, neither should decisions be made on both without the inclusion of a critical additional piece of information: the public budget constraint.

Projects are first plotted on a two-dimensional Cartesian plane, with axes defined by the SEI and FEI scores. In Figures 5 and 6, each point represents a proposed infrastructure project within a single sector. The location of a project in the plane is determined by coordinates (x, y) defined by the (FEI, SEI) pair. Comparison along each dimension does not presuppose that the FEI and SEI dimensions are equally important. Rather, it does quite the opposite: it concedes that the relative importance of the SEI versus the FEI is unknown and makes no attempt to combine the two into a single index. While financial and economic measures may remain the most important to a government, the SEI gives decision-makers a point of reference for understanding development impacts and also facilitates the identification of potential problems that may arise in the social and environmental dimension. This can help governments identify cases that require further mitigation attention to make the project viable.

The Budget Constraint

Once projects are plotted, the budget constraint is considered and superimposed separately for both dimensions, perpendicular to each axis. The budget limit may be based on a known fixed amount or on an estimate based on the historical proportion of requested funding actually allocated. It is important that a government's fiscal and budgetary framework establish envelopes for public investment in order to support a sustainable investment program (Rajaram et al., 2014, p. 21). In Vietnam and Panama, historical budget allocations were used to estimate budget envelopes for the following strategic five-year investment period. To locate the point where the budget constraint crosses each axis, projects are first ranked according to their SEI and FEI scores.
Then, the budget limit is placed at the point where the last prioritized project along each axis can be funded. For example, in Panama, the total cost of proposed water and sanitation infrastructure projects was US$622,091,173. Based on recommended figures from the Ministry of Economy and Finance, in turn based on the Draft Annual Budget, the budget limit for water and sanitation projects was estimated to be around 55% of the total cost of proposed projects. Considering the costs of the top-ranking SEI projects (15, 3, 23, and so on), the budget would be exhausted after Project 29, at which point the cumulative expenditure would be $318,906,839 (see Figure 3). Since Project 12 costs $60,000,000, there would be insufficient funding to include it.

Figure 3. SEI-prioritized water and sanitation projects within the budget limit (projects ordered by SEI score, with the budget limit marking the last project that can be funded).

The same was done for the FEI (Figure 4). For water and sanitation, the last project that could be funded was also Project 29, with a cumulative expenditure of $304,002,232. The cumulative totals differ because the rankings of projects according to each index are different.

Figure 4. FEI-ordered water and sanitation projects within the budget limit.

Visualizing the SEI, FEI, and Budget Limit Simultaneously

SEI and FEI scores are plotted in a Cartesian plane. The budget constraint is imposed onto the plane following the logic described above. Since this is done along each axis, rather than delineating a singular threshold, the budget constraint results in quadrants (see Figures 5 and 6). While a function could be determined to express the relationship of the SEI to the FEI, decision makers are unlikely to know or come to easy agreement on the relative importance of the two dimensions, particularly since their meaning is abstracted.[15] Projects that fall inside the budget constraint along each axis represent the 'investment possibilities' set within each dimension. For example, from an FEI point of view (x-axis), the location of the budget constraint line indicates the threshold where public resources would be fully exhausted. In the example of Vietnam transport, resources would be sufficient to finance only those projects with an FEI above 70. From an SEI perspective, on the other hand, resources would be enough to finance only those projects with a score above 46.

15. Future developments to establish the relationship between the SEI and FEI could be used to trace a function for the budget constraint. At this stage it is assumed to be a fixed number.
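A minimal sketch of this cutoff logic follows, using hypothetical project costs and index scores: projects are ranked by each index in turn, and the budget line on each axis is placed at the score of the last project that can be funded.

```python
# Hypothetical projects: (name, cost, SEI score, FEI score).
projects = [
    ("P1", 40, 80, 55), ("P2", 60, 35, 90), ("P3", 30, 70, 75),
    ("P4", 50, 20, 30), ("P5", 45, 60, 40),
]
budget = 120  # sector budget envelope, in the same units as cost

def axis_cutoff(projects, budget, key):
    """Index score of the last project fundable when projects are taken in descending score order."""
    spent, cutoff = 0, None
    for p in sorted(projects, key=key, reverse=True):
        if spent + p[1] > budget:
            break
        spent += p[1]
        cutoff = key(p)
    return cutoff

sei_cut = axis_cutoff(projects, budget, key=lambda p: p[2])  # budget line on the SEI axis
fei_cut = axis_cutoff(projects, budget, key=lambda p: p[3])  # budget line on the FEI axis
print(sei_cut, fei_cut)  # 60 and 75 for these hypothetical numbers
```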
Figure 5. Prioritization matrix, Vietnam transport projects. Each point is a proposed project plotted by its FEI (x-axis) and SEI (y-axis) scores; the budget lines divide the plane into quadrant A (higher priority), quadrant B (higher social/environmental priority), quadrant C (higher financial/economic priority), and quadrant D (lower priority).

The prioritization matrix for Panama water and sanitation projects shows that projects with an FEI score above 6 and an SEI score above 22 would be considered higher priority due to good performance along both axes.

Figure 6. Prioritization matrix, Panama water and sanitation projects (SEI plotted against FEI, with the budget-determined quadrants A-D).

Interpreting the Quadrants

Quadrant A contains high-priority infrastructure projects that simultaneously score high on the SEI and FEI (green points in Figures 5 and 6). These projects are recommended for implementation. On the other hand, projects falling into quadrant D (red points in Figures 5 and 6) may be classified as lower priority, since they score relatively low on both the SEI and FEI. Projects in quadrants B and C have two common features. First, they score relatively high on either the SEI or the FEI, but not both. Second, all of the projects in either quadrant B or C, or a combined array of select projects within each, could be implemented with public funds. If the SEI is definitively privileged over the FEI, all projects in quadrant B could be selected for funding. Conversely, if the FEI is unequivocally more important to a government, all of the projects in quadrant C could be implemented. Alternatively, some portion of quadrant B and quadrant C projects could be funded, where both the FEI and SEI are deemed important. Quadrant B and C projects may be given a medium priority level.

Identification of these medium-priority projects leaves space for expert review, flexibility, and informed political debate. Since only a subset of projects can be selected from amongst these, the negotiated process of ordering projects within quadrants B and C allows the IPF to capture important information from the professional and political bases of knowledge. In other words, the framework informs decisions regarding projects in the medium-priority set, but leaves room for structured professional and political judgment. Financial and economic considerations will likely remain of key importance to project selection; however, it is recommended that decision makers discuss and document the principles of project selection in advance of results. The ex ante discussion and agreement on guiding principles helps prevent decision-makers from simply cherry-picking from amongst the remaining projects.

One way to guide selection from amongst quadrants B and C would be to establish minimum acceptable threshold scores for individual variables within each composite indicator. Since SEI and FEI scores themselves do not represent a meaningful quantity or performance score (i.e., they hold meaning only as scores of performance relative to other proposed projects), it would be arbitrary to assign basic thresholds for the indices overall. But component indicators may be used to set basic requirements. For example, projects may be required to meet a minimum cost-benefit ratio, extend services to areas with a minimum poverty profile, or not exceed a set carbon emissions limit.
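The sketch below illustrates, with hypothetical cutoffs, project data, and minimum requirements, how quadrant assignment and such component-level screens might be combined; it is not the procedure used in the pilots.

```python
# Hypothetical data: each project has index scores and raw component indicators.
projects = {
    "P6": {"SEI": 72, "FEI": 40, "bcr": 1.4, "poor_served": 12000, "co2_tonnes": 800},
    "P7": {"SEI": 30, "FEI": 85, "bcr": 2.1, "poor_served": 1500,  "co2_tonnes": 300},
    "P8": {"SEI": 65, "FEI": 35, "bcr": 0.9, "poor_served": 9000,  "co2_tonnes": 2500},
}
sei_cut, fei_cut = 50, 60                          # budget-determined cutoffs from the previous step
requirements = {"bcr": 1.0, "poor_served": 5000}   # minimum acceptable component values
co2_cap = 2000                                     # maximum acceptable emissions

for name, p in projects.items():
    quadrant = {(True, True): "A", (True, False): "B",
                (False, True): "C", (False, False): "D"}[(p["SEI"] >= sei_cut, p["FEI"] >= fei_cut)]
    if quadrant in ("B", "C"):   # medium-priority set: apply the ex ante minimum requirements
        meets = (p["bcr"] >= requirements["bcr"]
                 and p["poor_served"] >= requirements["poor_served"]
                 and p["co2_tonnes"] <= co2_cap)
        print(name, quadrant, "meets minimum requirements" if meets else "screened out")
    else:
        print(name, quadrant)
```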
Lessons from Vietnam and Panama Pilots

The early construction of the IPF in Vietnam and the pilot application of the framework in Panama were met with interest from stakeholders in both countries (Mandri-Perrott, Marcelo, & Haddon, 2014; Marcelo et al., 2015). The experiences brought to light a number of issues for further refinement of the framework, however, which are discussed in this section. Both governments were amenable to employing a multi-criteria prioritization approach, given their respective strategic planning cycles, infrastructure needs, fiscal plans, and legislative and governmental support for employing a prioritization methodology. While the two experiences are an important start, more work is needed to explore the implications of regional disparities in the context of application and data availability, and to develop specific links to project financing and PPP identification.

Vietnam Experience

In Vietnam, two factors made prioritization a natural pursuit for the Ministry of Planning and Investment. First, the 2014 Public Investment Law referred specifically to implementing a classification and selection system for proposed infrastructure, to incorporate assessments of financial efficiency and effectiveness alongside social and environmental sustainability. Second, the pilot test aligned with the government's strategic planning cycle. The exercise covered 30 randomly selected projects in three sectors: transport, irrigation, and urban. An important pre-condition was that projects should have already undergone a preliminary feasibility study (FS) (Mandri-Perrott, Marcelo, & Haddon, 2015).

Following an initial exploratory and consultation period, the IPF was employed for ex ante project evaluation in the three sectors in a two-stage approach. Since the Ministry of Planning and Investment (MPI) did not have sufficient funding to support a full set of feasibility studies for the approximately 3,000 proposed projects, an initial qualitative project validation and classification filter was applied to identify a subset of projects for which feasibility studies could be funded.[16] Following the first filter, about 268 projects were selected for feasibility study grants. Of these, thirty projects were randomly selected to run the pilot: ten each from the transport, irrigation, and urban sectors. The calculation of the SEI and FEI and the plotting of the projects against the budget constraint followed the steps described above.

16. The first filter reduced the set of projects under consideration by assessing whether they met tests of (a) legal and procedural validity; (b) strategic validity (alignment with development goals); and (c) financial validity (capital value within available resources). Projects that passed all three tests were thereafter classified geographically to identify those projects located in either (a) priority development areas, due to poverty levels or key economic development initiatives, or (b) environmentally protected regions. Naturally, the former were assigned higher classification scores, whereas the latter were penalized. The MPI or sector agency will allocate funding for FSs to projects in the highest importance groups and then proceed through the remaining groups until the FS funding is exhausted.

The primary lessons drawn from the experience in Vietnam are summarized here. First, there are important pre-analytical steps required to ensure sufficiently comparable data.
One of the challenges of the pilot was that some data, even from within feasibility studies, were either opaquely determined (e.g., IRR) or had limited comparability across projects (Mandri-Perrott, Marcelo, and Haddon, 2015). Feasibility studies should follow clear rules, guidelines, and standards to ensure the quality and comparability of data.

Second, special attention must be given to the metrics used to measure variables if PCA is used to assign weights. Since PCA synthesizes information based on the correlations between variables, it is important to make sure that the weights reflect the expected relationship between the variables and the composite indicator. In some cases, this can be done with alternative specifications of a component variable. For example, the concept of poverty may be expressed by the population of poor residents or by the poverty ratio. Because these decisions are technical and require a certain degree of statistical and methodological knowledge, it is important that decisions on variable specification – particularly if a statistical method such as PCA is proposed – are made by professionals with sufficient knowledge to do so.

Third, pre-filters or additional variables may be required when there are inherent biases in the set of projects proposed, or where the government aims to break regressive patterns. For example, in Vietnam, it was observed that projects in poorer regions tended to score lower on some variable inputs to the FEI or SEI. This observation justified the use of an initial filter to reflect a goal of targeting areas with higher poverty rates (Mandri-Perrott, Marcelo, & Haddon, 2015).

Fourth, in order to improve the robustness of results and foster concurrent application with other supportive analytical tools (including CBA and expert assessment), users must have sufficient capacity to understand the mechanics and implications of key decisions regarding its use. These include the selection of criteria, the definition of indicators, and relationships to other decision support methods. As such, capacity building remains essential. While relieving some analytical demands, the IPF nevertheless requires sufficient technical knowledge to appropriately specify and calibrate the additive models, variables, and weights. Moreover, capacity-building efforts to improve project appraisal and selection should also involve training on CBA, which is essential for larger projects in particular. Further, to extol the benefits of responsiveness inherent to the tool, the proposed methodology should not be a one-off exercise. Rather, it should be utilized as a progressive approach, intended to 'live and grow' with the country's infrastructure needs and policy objectives. As such, the prioritization program should involve continuous refinement of the decision-support tool, based on informed deliberation regarding criteria selection and any pre-decisions of a policy nature (Mandri-Perrott, Marcelo, & Haddon, 2015).

Fifth, governments must be aware of potential sequencing conflicts related to the timing of project selection processes in different ministries and line agencies. One strategy to mitigate this is for the central government to oversee and provide guidance for systematizing project selection and coordinating sectoral prioritization activities.
Last, planning offices and decision makers must be familiarized with the multi‐criteria approach to build the credibility of the decision support tool itself, which requires consultation and clear explanation of the technical inputs and model structure. While some fluency with established quantification methods like CBA is observed, there is a certain degree of risk aversion associated with applying new methods. Building acceptance of the decision support tool itself is critical to legitimizing the analysis (Mandri‐Perrott, Marcelo, & Haddon, 2015).

Panama: Transport and Water and Sanitation

In Panama, the confluence of the economic outlook and three institutional supports spurred the IPF pilot. GDP growth and economic buoyancy in 2014 motivated an ambitious public investment program, accompanied by a high number of infrastructure project proposals to the Ministry of Economy and Finance. Nevertheless, the proposed set of projects exceeded the available funding space and allowable deficit ceiling, demanding selection of some projects and postponement of others. The application of a prioritization methodology was endorsed in the Government Strategic Plan 2015‐2019 and in a draft amendment to the 2008 Social Fiscal Responsibility Law, which stated that a system of prioritization strategies was needed for infrastructure development in the future. Similarly, the public investment law contained implementation notes to employ a prioritization strategy tied to the 2015 five‐year investment plan. The 2015 World Bank Country Partnership Framework (CPF) also called for application of a prioritization tool. These factors confirmed demand for prioritization of investments in infrastructure planning.

The IPF was applied to a selection of 35 proposed projects in water supply and sanitation and 19 in transport. These projects were identified in consultation with the Ministry of Economy and Finance. The pilot offered a key opportunity to replicate and refine the existing framework, in that it entailed decision analysis based on limited financial‐economic data, particularly for a portion of the water projects. In this way, it replicated a common input problem for infrastructure decision‐making, namely restrictions on data.

Planners from several agencies, in consultation with the Ministry of Economy and Finance and a team from the World Bank, agreed on a set of component SEI and FEI variables. The SEI variables initially selected included the number of beneficiaries (BEN), direct jobs created during implementation (EMP), the population of poor serviced by the project (POOR), social and environmental risks (SER), and the carbon footprint (CO2). The final analysis used only the first three, due to data problems and lack of specificity in the risk variable.17

17. See Marcelo, Mandri‐Perrott, & House, 2015 for a description of these challenges.

The FEI was originally expected to include the internal rate of return (IRR) and/or economic rate of return (ERR) of projects, depending on data availability. However, given that many of these investments were proposed for projects with no direct monetary benefits and largely indirect economic effects (i.e., mainly projects with a large public good component18), the calculation of an economic IRR would have produced unrealistic or incalculable results. The alternative would have been to account for all indirect positive effects and estimate benefits. This would have required data on monetized benefits for such effects, which was unavailable.

18. An example of this sort of project would be investments in wastewater treatment facilities, where tariffs are often insufficient to cover the full costs of the assets and their sustained operation and maintenance.
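To make this point concrete, the following minimal sketch (in Python, purely for illustration; the cash flows and 10 percent discount rate are hypothetical, not pilot data) shows why an IRR cannot be computed for a public‐good project whose net cash flows never turn positive, whereas a benefit‐cost ratio remains calculable once an estimate of monetized benefits is available.

```python
# Illustrative only: hypothetical cash flows and an assumed 10% discount rate,
# not data from the Panama pilot.

def npv(rate, flows):
    """Discount a list of yearly flows (year 0 first) to present value."""
    return sum(f / (1 + rate) ** t for t, f in enumerate(flows))

# A stylized public-good project: large up-front cost, little cost recovery,
# so net cash flows never turn positive and no IRR exists.
costs    = [100.0, 10.0, 10.0, 10.0, 10.0]   # capital + O&M outlays per year
revenues = [0.0,    2.0,  2.0,  2.0,  2.0]   # tariffs recover little of the cost
net_flows = [r - c for r, c in zip(revenues, costs)]
print("net flows:", net_flows)  # all negative -> no discount rate sets NPV to zero

# Once indirect benefits are monetized (e.g., estimated health or time savings),
# a benefit-cost ratio can still be computed and compared across projects.
monetized_benefits = [0.0, 35.0, 35.0, 35.0, 35.0]
rate = 0.10
bcr = npv(rate, monetized_benefits) / npv(rate, costs)
print(f"BCR at {rate:.0%} discount rate: {bcr:.2f}")
```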
For these reasons, benefit‐cost ratios (BCRs) were used. The BCRs also allowed the analysts to control for project size and avoid penalizing projects with higher costs but potentially higher benefits.19 The results from the transport project SEI and FEI rankings are presented in Figure 7.

19. More details on the calculation of composite indices and results of the pilot in Panama are available in Marcelo, Mandri‐Perrott, & House, 2015.

Figure 7. SEI and FEI, Transport Projects, Panama Pilot 2015 [two ranked bar charts, one for the Social and Environmental Indicator (SEI) and one for the Financial and Economic Indicator (FEI), showing scores from 0 to 100 by project]

Figure 8 shows the results of plotting each project's SEI and FEI scores.

Figure 8. SEI and FEI plot for selected transport projects, Panama 2015 [scatter plot of projects P1–P19, with FEI on the horizontal axis and SEI on the vertical axis, both scaled 0 to 100]

The pilot offered a number of key lessons and highlighted areas for future development. The first and most significant lesson was that the composite indices were far more sensitive to indicator values than to weights. This suggests that PCA can be a useful way to weight variables if time and objectivity are important factors in selection. A sensitivity analysis was performed to compare the PCA indices against composite indices using subjectively established weights. Two subjective weighting schemes (equal weighting and a hypothetical policy‐determined weighting) were tested to calculate alternative SEI composite indices. Figure 9 shows that the categorization of projects changed only minimally when using policy‐determined or equal weights (Marcelo, Mandri‐Perrott, & House, 2015). In practice, the use of subjective weighting can give rise to a number of problems, ranging from the purely technical (e.g., rank reversal in AHP) to the political (e.g., lack of transparency and discretion in the selection of infrastructure projects). On the other hand, subjective weighting is more intuitive and directly responsive to policy preferences.

Figure 9. Comparison of Transport SEI and FEI scores, PCA and equal weighting [side‐by‐side SEI–FEI scatter plots of the same projects, one using the PCA‐weighted SEI and one using the equally weighted SEI]

The second lesson is that special consideration should be given to the selection and definition of metrics to deal with regressive biases. This applies to any comparative approach. As in Vietnam, Panama showed a potential for an inherent bias towards infrastructure projects in wealthier or urban regions, simply due to their generally better 'performance' on component indicators. If development plans were aimed at improving rural areas, however, this could create adverse results.
This problem can be overcome by careful indicator specification or the inclusion of additional indicators.

The third lesson relates to the appropriate use of financial and economic indicators in conditions of low information. We recognize the value of incorporating CBA elements on the financial and economic side of the IPF. To integrate these elements into the framework when information is extremely limited, additional criteria are required. For example, if only project cost is known, additional variables must be considered to build the FEI. This practical constraint highlights the importance of improving information systems at the project level to effectively implement any kind of prioritization tool.

The fourth lesson is that the IPF can take into account both efficiency and efficacy considerations. This is shaped by the construction of variables. For example, one could use the absolute number of beneficiaries as an input to the SEI to consider policy effectiveness when service expansion is a priority. On the other hand, 'beneficiaries per dollar spent' may be more appropriate if the goal is fiscal efficiency. In the case of Panama, where development of water services in rural areas was an important policy goal, the decision was made not to control indicators by project size, in order to protect against the possibility of privileging urban projects with greater economies of scale (Marcelo, Mandri‐Perrott, & House, 2015).

The final lesson is that the IPF can be used as an opportunity to strengthen data weaknesses. The process of discussing the relevant indicators by which to select projects is a catalyst for improving information levels and moving towards more complex economic analysis. In Panama, the discussions prompted by the IPF have resulted in improved project data collection. The IPF, as a process, can be a valuable starting point for improving appraisal by inciting aspirational discussions about the kinds of data that would be most helpful for deciding among projects within a sector.

Implementing IPF

The sequence of implementing the IPF, as described in the technical description and pilot applications above, is summarized in Figure 10. With respect to this sequence, the pilot in Panama also gave rise to an important discussion of the requirements for implementing the IPF and factors of particular interest in further piloting to improve its applicability. These issues are discussed in the following section. Some are process‐oriented and relate to overall infrastructure planning; some are organizational and deal with issues of resources, authority, and capacity; and some are technical, relating to the informational inputs required.

Figure 10. Sequence of the IPF
I. Criteria identification: deliberation and consultation with decision makers and experts.
II. SEI and FEI calculation: SEI and FEI based on existing appraisal data; PCA when objectivity in criteria weighting is preferred; CBA elements incorporated when available or calculable; sensitivity analysis.
III. Visual interface: (SEI, FEI) coordinates; quadrants defined by the budget constraint.
IV. Selection: based on informed political and technical debate; Quadrant A projects are high priority to invest.

Process

With respect to infrastructure planning, it is important that the IPF follow on from critical pre‐prioritization and selection steps.
These are described well in the World Bank's Public Investment Management framework and include the provision of clear sectoral policy guidance based on policy analysis, pre‐screening, and at least basic project appraisal and review (Rajaram et al., 2014, p. 22‐26). Policy guidance is also important for pre‐screening projects for inclusion in prioritization exercises; important guiding documents include published sector strategies and mid‐term development plans. Moreover, there are important decisions to be made about when the IPF is more or less applicable as a prioritization approach than other approaches, e.g., comparison by NPV following from SCBA. For one, the IPF may be used to structure decisions when full economic analysis for all projects in the proposed set is unfeasible or too costly for government. There is an important question of how much government should spend on appraisal and prioritization, relative to investment. The issue of setting thresholds for appraisal requirements should be further explored to create country‐specific guidance on appraisal requirements.

Once the decision to employ the IPF is made, there are some important factors that should be made explicit and transparently reported in advance of implementing the approach. The settings that must be decided in advance include: the selection of criteria and their specific definitions, the weighting methodology to be used for constructing indices, the types of sensitivity analysis to be employed, and guiding principles for selecting projects from amongst Quadrants B and C. After the IPF is performed and projects are proposed for selection, decision‐makers should also publish the results of the IPF mapping and sensitivity analysis and report deviations from project selection guidelines with supporting justification.

Organization

While projects may be proposed from different line agencies or subnational government units, prioritization should be managed at the same level as fund allocation or by an administrative unit with authority over investment decisions, to ensure that the analysis is effectively utilized. Alternatively, governments can establish legislative or other administrative backing to protect the credibility of project selection and ensure that selection is actually based on appraisal and the agreed method of prioritization.

It is also important that prioritization be aligned with the budget cycle. The IPF can be used to prioritize capital investment plans for annual or multi‐year budgets. It is important, however, that the budget period and envelope be explicitly determined and aligned, and that proposed projects be checked to ensure that investments would feasibly occur during the specified budget period.

Lastly, the IPF is designed for application within only one sector. More specifically, it may be advisable to further subdivide the analysis at the subsector level, particularly when certain kinds of projects (in the case of water and sanitation, for example) differ significantly in terms of performance on certain project indicators. While the IPF is intended for comparison of projects within a sector, it can be a useful device for discussing sectoral reallocation. The project plots can be used to examine the implications of foregoing investments that perform at the margin in favor of another sector's budget envelope. For example, foregoing a large transport project with funding in Quadrant B or C could free up sufficient budget space to fund multiple additional sanitation projects; the sketch below illustrates this kind of comparison.
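A minimal sketch of this quadrant logic follows (hypothetical projects, scores, costs, and cut‐offs; in the IPF itself the quadrants follow from imposing the sector budget on the SEI–FEI plot, and the labeling of Quadrants B and C here is assumed for illustration only).

```python
# Illustrative sketch only: hypothetical projects and cut-offs, not pilot data.
# In the IPF, the cut-offs follow from imposing the sector budget envelope on
# the SEI-FEI plot; fixed score thresholds are assumed here for demonstration.

projects = [
    # (name, SEI score 0-100, FEI score 0-100, estimated cost in millions)
    ("P1", 72, 35, 120.0),
    ("P2", 15, 80, 40.0),
    ("P3", 60, 75, 25.0),
    ("P4", 20, 18, 60.0),
]

SEI_CUTOFF = 50.0   # assumed threshold implied by the budget constraint
FEI_CUTOFF = 50.0

def quadrant(sei, fei):
    """A: high on both indices; B: high SEI only; C: high FEI only; D: low on both
    (B/C labeling is an assumption for this sketch)."""
    if sei >= SEI_CUTOFF and fei >= FEI_CUTOFF:
        return "A"
    if sei >= SEI_CUTOFF:
        return "B"
    if fei >= FEI_CUTOFF:
        return "C"
    return "D"

for name, sei, fei, cost in projects:
    print(name, quadrant(sei, fei), f"cost={cost}")

# Budget space tied up in marginal (Quadrant B or C) projects, which could in
# principle be reallocated to another sector's envelope.
released = sum(cost for name, sei, fei, cost in projects
               if quadrant(sei, fei) in ("B", "C"))
print("budget space in quadrants B and C:", released)
```

In an actual application, the cut‐offs would come from the budget envelope rather than fixed scores, and any selection from Quadrants B and C would follow the pre‐agreed guiding principles described above.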
Minimum Data Requirements and Improving Data Quality

While the IPF is designed to make best use of available information and other elements of project appraisal, there is a minimum amount of data required to make the approach usable. This includes estimates of project costs as well as measurements of expected benefits and key social and environmental project impacts. Project costs and some benefits can be reported as a single monetized measure, preferably through the calculation of net present values via financial cost‐benefit analysis or partial SCBA. Variables that measure expected benefits and key social and environmental impacts, on the other hand, may be reported in different units or even qualitatively. Since the IPF is intended to promote the use of improved data and support efforts to improve infrastructure decision‐making overall, it is important that the IPF not detract from ongoing efforts to build overall capacity to appraise, select, and prepare good projects. Because the approach is complementary to, not competitive with, SCBA, for example, capacity building for good public investment management should include training on the expanded set of tools for project appraisal and selection.

Conclusions and Next Steps for Developing the IPF

This paper presents the IPF as a flexible multi‐criteria tool for infrastructure prioritization. The framework's key strengths are its incorporation of key policy goals, the systematic framing of infrastructure decisions, and the use of available elements of project appraisal. The graphical display of results in a single plane is one of the main features of the tool. This way of presenting project information is helpful for informing discussions about the relative expected performance of projects according to financial‐economic and social‐environmental factors, and for considering the potential effects of reallocating funds between sector budgets.

The experiences in Vietnam and Panama show the potential of the IPF to support infrastructure decisions, particularly in environments characterized by limited institutional and technical capacity and data restrictions. It is important, however, to reaffirm that the approach is not intended to replace best practices in project selection, such as SCBA, or to obviate efforts to improve appraisal in the pre‐selection stages of public investment management. Rather, the IPF can be used as a catalyst to identify needed information in order to progress toward more sophisticated appraisal methods and selection frameworks.

Looking Forward

With respect to the IPF's general applicability, we have identified a set of critical issues for future deliberation to further improve the IPF. These include the following:

‐ Sectorial Rebalancing: How can decision‐makers best use the IPF to explore possibilities of rebalancing sectorial budgets? Where good PPP candidates are identified within the high‐ or medium‐priority project sets, the budget constraint may be extended in that particular sector. Alternatively, funds for projects in Quadrants B and C may be reallocated to other sectors with higher shortfalls, or to high‐SEI projects with insufficient funding.

‐ Participation and consultation: Which stakeholders should be included in the application of the IPF? What are the appropriate institutional arrangements and checks to ensure accountability and transparency?

‐ Implementation: What institutional arrangements are required to support implementation of the IPF and facilitate improved data collection?
Given the flexibility in criteria selection, what checks are required to ensure the tool is applied professionally, with sufficient technical oversight?

‐ Sequencing / application alongside other project appraisal and selection approaches: How can the IPF be used sequentially or concurrently with other tools? What measures are needed to ensure that the IPF is used to promote more rigorous economic project appraisal?

Additionally, the IPF may be extended in the future to establish linkages to other facets of infrastructure development. Proposed extensions include the following:

‐ Private Participation: How can the IPF be used to assist with the identification of public‐private partnerships or other private financing opportunities? How will IPF results change with variable budget constraints resulting from private participation?

‐ Integrated Planning: How can complementary projects be treated in simulation?

‐ Multi‐year Budgeting: How can the IPF best deal with multi‐year budget processes and differences in sectorial budgeting schedules and processes?

Lastly, there is a need to deal with prioritization not only as a question of what to do, but also as a question of when to invest. The IPF may be used to rule some projects out altogether, but may also be extended to assess the relative immediacy of proposed projects and the timing of investments over long impact horizons. In their assessment of infrastructure needs in South Asia, Andres et al. suggest that prioritization criteria "must be able to answer questions about short‐term needs versus longer‐term development needs, especially in developing countries… Given substantial lock‐ins associated with infrastructure investments, should a country continue attempting to fill current gaps or direct investments to infrastructures that are likely large bottlenecks in the medium term?" (Andres, Biller, & Dappe, 2014). While the IPF, as a flexible framework, is sufficiently malleable to address these extended needs, the answers to these remaining questions will only be realized through continued piloting. Refining this framework is a worthwhile pursuit to ensure its appropriate application and its usefulness in supporting infrastructure decision‐making, with an eye to improving both the efficiency of public investment and the pursuit of sustainable development goals.

References

Andres, L., Biller, D., & Dappe, M. H. (2014). Infrastructure Gap in South Asia: Infrastructure Needs, Prioritization, and Financing. World Bank Policy Research Working Paper 7032.

Araral, E., Fritzen, S., Howlett, M., Ramesh, M., & Wu, X. (2012). Routledge handbook of public policy. Routledge.

Bardach, E., & Patashnik, E. M. (2015). A practical guide for policy analysis: The eightfold path to more effective problem solving. CQ Press.

Barfod, M. B., Salling, K. B., & Leleur, S. (2011). Composite decision support by combining cost‐benefit and multi‐criteria decision analysis. Decision Support Systems, 51(1), 167‐175.

Beinat, E., & Nijkamp, P. (1998). Multicriteria analysis for land‐use management (Vol. 9). Springer Science & Business Media.

Brown, H. (2014). Next generation infrastructure. Springer.

Cantarelli, C., Flyvbjerg, B., Molin, E., & Van Wee, B. (2010). Cost overruns in large‐scale transportation infrastructure projects: explanations and their theoretical embeddedness. European Journal of Transport Infrastructure Research, 10(1), 5‐18.

CDIA. (2010).
City Infrastructure Investment Programming Prioritisation Toolkit: User Manual: From Wish List to Short List: Prioritising Urban Infrastructure Projects for Local Development. Manila, Philippines: Cities Development Initiative for Asia.

Dabla‐Norris, E., Brumby, J., Kyobe, A., Mills, Z., & Papageorgiou, C. (2011). Investing in Public Investment: An index of public investment efficiency. IMF Working Paper 11/37. Washington: International Monetary Fund.

DCLG. (2009). Multi‐criteria Analysis: A Manual. London: Department for Communities and Local Government.

De Montis, A., De Toro, P., Droste‐Franke, B., Omann, I., & Stagl, S. (2004). Assessing the quality of different MCDA methods. Alternatives for environmental valuation, 99‐184.

Dunn, W. N. (2015). Public policy analysis. Routledge.

Freudenberg, M. (2003). Composite indicators of country performance.

Hajer, M., & Wagenaar, H. (2003). Deliberative policy analysis: understanding governance in the network society. Cambridge University Press.

Head, B. W. (2010). Reconsidering evidence‐based policy: Key issues and challenges. Policy and Society, 29(2), 77‐94.

HM Treasury. (2014). UK National Infrastructure Plan 2014.

Jacobs, D. (2008). A Review of Capital Budgeting Practices. IMF Working Paper 08/160. Washington: International Monetary Fund.

Jolliffe, I. (2002). Principal component analysis. John Wiley & Sons, Ltd.

Kabir, G., Sadiq, R., & Tesfamariam, S. (2014). A review of multi‐criteria decision‐making methods for infrastructure management. Structure and Infrastructure Engineering, 10(9), 1176‐1210.

Khisty, C. J. (1996). Operationalizing concepts of equity for public project investments. Transportation Research Record: Journal of the Transportation Research Board, 1559(1), 94‐99.

Layard, R., & Glaister, S. (1994). Cost‐benefit analysis. Cambridge University Press.

Manaugh, K. (2013). Incorporating issues of social justice and equity into transportation planning and policy. McGill University.

Mandri‐Perrott, C., Marcelo, D., & Haddon, J. (2014). A Methodology to Prioritize and Select Infrastructure Investments. Report to the Vietnam Ministry of Planning and Investment. The World Bank.

Marcelo, D., Mandri‐Perrott, C., & House, S. (2015). Prioritizing Infrastructure Investments in Panama: Pilot Application of the World Bank Infrastructure Prioritization Framework. Report to the Panama Ministry of Economy and Finance. The World Bank.

Martens, K. (2009). Equity Concerns and Cost‐Benefit Analysis: Opening the Black Box. Paper presented at the Transportation Research Board 88th Annual Meeting.

Nardo, M., Saisana, M., Saltelli, A., Tarantola, S., Hoffman, A., & Giovannini, E. (2005). Handbook on constructing composite indicators.

Nutley, S. M., Davies, H. T., & Smith, P. C. (2000). What works?: Evidence‐based policy and practice in public services. MIT Press.

OECD (2014). Recommendation of the Council on Effective Public Investment Across Levels of Government. Adopted 12 March 2014.

Perreault Jr, W. D., & Young, F. W. (1980). Alternating least squares optimal scaling: Analysis of nonmetric data in marketing research. Journal of Marketing Research, 1‐13.

Petrie, M. (2010). Promoting public investment efficiency: A synthesis of country experiences. Paper presented at the World Bank Preparatory Workshop, Promoting Public Investment Efficiency: Global Lessons and Resources for Strengthening World Bank Support for Client Countries.

Poister, T. (1978). Public program analysis: Applied research methods (p. 42).
Baltimore, MD: University Park Press.

Rajaram, A., Tuan, M., Biletska, N., & Brumby, J. (2010). A Diagnostic Framework for Assessing Public Investment Management. World Bank Policy Research Working Paper 5397. The World Bank, August 2010.

Rogers, M., & Duffy, A. (2012). Engineering project appraisal. John Wiley & Sons.

Ruiz‐Nuñez, F., & Wei, Z. (2015). Infrastructure Investment Demands in Emerging Markets and Developing Economies. World Bank Policy Research Working Paper 7414.

Saaty, T. L. (2004). Decision making—the analytic hierarchy and network processes (AHP/ANP). Journal of Systems Science and Systems Engineering, 13(1), 1‐35.

Salling, K. B., Leleur, S., & Jensen, A. (2007). Modelling decision support and uncertainty for large transport infrastructure projects: The CLG‐DSS model of the Øresund Fixed Link. Decision Support Systems, 43(4), 1539‐1547.

Thomopoulos, N., Grant‐Muller, S., & Tight, M. (2009). Incorporating equity considerations in transport infrastructure evaluation: Current practice and a proposed methodology. Evaluation and Program Planning, 32(4), 351‐359.

Tsamboulas, D. (2007). A tool for prioritizing multinational transport infrastructure investments. Transport Policy, 14(1), 11‐26.

Van Delft, A., & Nijkamp, P. (1977). Multi‐criteria analysis and regional decision‐making (Vol. 8). Springer Science & Business Media.

Wang, Y. M., & Elhag, T. M. (2006). An approach to avoiding rank reversal in AHP. Decision Support Systems, 42(3), 1474‐1480.

Weimer, D., & Vining, A. (2015). Policy analysis: Concepts and practice. Routledge.

Zerbe, R. O., & Bellas, A. S. (2006). A primer for benefit‐cost analysis. Edward Elgar Publishing.

Annex 1. Demands of Infrastructure Project Prioritization

Attention to alternative approaches to systematic prioritization responds to demands for evidence, comprehensiveness, value, efficiency, and legitimacy.

An important underpinning for prioritization is the consideration of evidence. The rise of evidence‐based policy analysis (EBPA) can be traced to the UK government of the 1990s, which rigorously pursued approaches to policy analysis in cross‐departmental teams working on complex issues. EBPA is believed to be important for answering questions like: What options will 'deliver the goods'? How can programs provide greater 'value for money'? And how can program managers achieve 'outcomes'? (Nutley, Davies, and Smith, 2000). Expanding beyond purely technical research and analysis (i.e., 'science'), a line of thought recently gaining traction suggests that two other types of evidence are relevant to modern policy making: practice (professional program management experience) and political judgment (Head, 2010). Scientific knowledge is garnered from systematic analysis of causal relationships that explain conditions and trends. Practical implementation knowledge, on the other hand, is derived from the "practical wisdom" of professionals in their communities of practice, whereas political knowledge relates to making contextual judgments about "the possible and the desirable" (Head, 2010, p. 80). An increased appreciation of the multiple bases of knowledge is the next iteration of EBPA.

An appreciation of the 'multiple lenses' of evidence suggests that prioritization must also be sufficiently comprehensive. Comprehensiveness implies making decisions based on (a) consideration of a sufficiently extensive set of projects, and (b) a sufficiently wide set of criteria.
While criteria need not be exhaustive, they should nevertheless account for the key goals of infrastructure policy. Direct attendance to the multiple goals embedded in an infrastructure strategy (e.g., fiscal prudence, equity, sector‐specific gains) supports the use of multiple criteria. Furthermore, comprehensiveness is a response to common criticisms that infrastructure investment decision‐making is ad hoc, politically driven, or characterized by 'cherry‐picking', 'easy wins', or 'creaming' rather than analysis.

Decisions about infrastructure investment are also inherently integrated with considerations of effectiveness and value. The selection and structuring of a prioritization methodology are intertwined with decisions about how to define policy effectiveness, which may include goals of economic growth, sectorial goals, environmental sustainability, or human development. The logic of value, on the other hand, speaks to creating public value at the least cost.

Lastly, prioritization is undertaken to afford public legitimacy to decisions, particularly in actively democratic contexts where public assent matters most. Legitimacy is typically founded on both inputs and outputs. Input legitimacy refers to the processes whereby decisions are made and is a matter of design. To be legitimate with respect to input, the process of infrastructure selection should be transparent, fair, and systematic. Output legitimacy, on the other hand, is determined by outcomes, and is a matter of an institution earning its relevance based on performance.

While not the primary focus of this paper, a final parameter of interest is opportunity. In the case of infrastructure prioritization, this refers to the opening of new opportunities for funding. As options for infrastructure finance expand beyond public resources and traditional bank lending, more attention is focused on conditions that increase the likelihood of institutional investment and private sector participation. The assemblage of well‐planned project pipelines, prioritized in a legitimate and transparent fashion, can help reduce political risks and improve project bankability.

The prioritization approach a government adopts must inevitably balance three oft‐competing needs: accuracy, feasibility, and suitability. Accuracy demands that methods be sufficiently precise to afford meaningful comparison, but does not require extreme exactitude; rather, it suggests that thresholds of correctness are required to ensure that the logic of evidence is attained. The second condition encompasses administrative practicality and political feasibility. Practicality deals with institutional capacity, cost and time limitations, and information availability. An important facet of this is the principle of parsimony: the use of a minimum amount of relevant information. Political feasibility, on the other hand, accepts that prioritization cannot be so devoid of latitude that it is rendered unresponsive to political factors. There is a balance to be struck between technical objectivity and democratic accountability. Lastly, the principle of suitability demands that the criteria selected be appropriate for judging relative desirability, as determined by stakeholders and their representatives. The suitability of decision criteria is dependent on policy goals and norms of governance.
Annex 2. Cost‐Benefit Analysis and Value for Money

Cost‐benefit analysis (CBA) is a useful method of project comparison where capacity and data are sufficient for its implementation across proposed projects. The practical development of CBA started in the 1930s in the United States, largely for public investment planning at the federal level, and has remained a staple of policy analysis since (Zerbe & Bellas, 2006). This is largely because CBA, whilst complex with respect to inputs, is a straightforward concept, allowing comparison of projects based on a single metric – that of monetized value. CBA essentially totals all costs and benefits of a project over its lifetime and discounts future flows to calculate present values. The (discounted) present values of costs and benefits are compared, either by use of the net present value (ranking projects by highest NPV) or the benefit‐to‐cost ratio (BCR) (used to reflect efficient use of inputs for outputs). CBA can be applied in traditional financial terms to assess alternative projects for an organization or firm, but may also be extended for public expenditure analysis by considering the full suite of (monetary and non‐monetary) costs and benefits to society. In Social Cost‐Benefit Analysis (SCBA), prioritization is based on selecting projects that maximize net present value for society overall. Reduction to a common unit of measurement allows easy comparison of projects (Andres et al., 2014).

These assessments require the quantification and monetization of positive and negative effects (costs and benefits); therefore, extensive information about projects and their projected impacts is required (Van Delft & Nijkamp, 1977). Since information in many contexts is limited, however, and since many costs and benefits are difficult to monetize20, this can make SCBA a tall order, particularly when governments with limited resources are presented with large sets of small and medium‐size projects for comparison. Practically speaking, relevant data for SCBA may be unavailable, too expensive, or too difficult to collect or calculate on a regular basis, given a government's financial and technical resources for project preparation.

20. For example, valuing lives saved, added convenience, or averted pollution requires making assumptions. A common criticism of SCBA is that social and environmental costs are often underestimated, relegating key social and environmental issues to positions of lesser importance, particularly if their monetized impacts are relatively low compared to other economic considerations.

While a number of technical issues related to CBA mechanics are common focal points of academic debate,21 these can be largely dealt with via established economic methods to yield relatively robust analyses. More important to the systematic use of CBA for infrastructure prioritization are the resource, capacity, and time limitations that constrain its extensive application in many contexts.

21. First, addressing intangible factors and strategic concerns is difficult with CBA. Common procedures to establish monetized values for some non‐marketed factors (e.g., stated preference or hedonic pricing approaches) are not necessarily applied to all non‐priced impacts (e.g., social cohesion). These problems expose a potential for optimism bias or cost underestimation (Cantarelli, Flyvbjerg, Molin, & Van Wee, 2010; Thomopoulos et al., 2009), and represent information gaps that can be detrimental to analysis. A second issue relates to the selection of a discount rate.
Many analyses assume a standard rate to be applied to a country and sector, which undoubtedly alleviates the burden of determination. However, it is also known that slight alterations in rates of return can have a significant effect on calculated benefit–cost ratios and net present values (Thomopoulos et al., 2009; Van Delft & Nijkamp, 1977).

An additional set of considerations has prompted adjustments to SCBA or the complementing of SCBA with other approaches. For one, CBA‐based assessments of societal value do not typically consider distributional effects or issues of equity and social justice, which is a key concern when investments are intended to close development gaps. In other words, the goal is maximizing societal value, but without regard to the particular 'winners' and 'losers' of alternative projects. Complementary social analysis or the use of multi‐criteria methods to extend CBA have been used to deal with these aspects. Other attempts are made to extract from the overall CBA the particular costs and benefits that are linked to key priority goals, in order to focus more specifically on the partial impacts of key policy interest. For example, Berechman and Paaswell (2005) utilize a modified SCBA to assess transport projects, separating out the estimations of transportation benefits and costs from overall economic development benefits and costs. The approach was employed to deal with the observation that overall economic costs and benefits were far higher than transportation‐specific costs and benefits (e.g., time saved). This rendered the latter completely insignificant to the overall assessment, despite the importance of transport goals to policy makers.

'Value for Money' (VfM) refers to a broad range of approaches used for project appraisal, many of which wholly or partially employ CBA. VfM analyses may be qualitative or quantitative, but generally compare projected project outcomes with the resources employed to attain them. In this way, VfM is useful as an assessment of the relative efficiency of alternative means of reaching the same ends. That said, VfM analyses are widely disparate with respect to the inputs used and level of quantification. Some are inclusive of full CBA or cost‐effectiveness analysis22 as key inputs, whereas others may qualitatively discuss the differences between alternatives of the same cost or differences in base costs for alternative solutions to the same policy problem.

22. Cost‐effectiveness analysis relates to evaluating alternative means of attaining the same goals, but is less applicable to differentiating between the relative attractiveness of different types of projects.

Annex 3. Multi‐criteria analysis for infrastructure selection

Multi‐criteria approaches to infrastructure development and planning respond to concerns about over‐specialization, the need to reconcile multiple infrastructure‐related policy goals, and practical limitations on information. The past ten years have witnessed an opening to multi‐criteria analyses to rectify these concerns, make direct use of political knowledge and judgment, and bring non‐financial concerns to the forefront of decision‐making. On the practical side, increased use of MCDA reflects attempts to work around time, information, and capacity restrictions (DCLG, 2009).
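Before turning to specific approaches, the following minimal sketch (in Python, with hypothetical criteria, scores, and weights; not drawn from any of the cases discussed) may help fix ideas about the additive model that underlies many MCDA applications: criteria measured in different units are normalized to a common scale and then combined as a weighted sum.

```python
# Minimal additive MCDA sketch: hypothetical criteria, scores, and weights.
# Scores are normalized to a common 0-1 scale so that the weighted sum is
# meaningful across criteria measured in different units.

options = {
    # raw criterion scores in mixed units: cost in $m, beneficiaries in
    # thousands, qualitative environmental rating on a 1-5 scale
    "Project A": {"cost": 80.0, "beneficiaries": 120.0, "env_rating": 4.0},
    "Project B": {"cost": 40.0, "beneficiaries": 60.0,  "env_rating": 2.0},
    "Project C": {"cost": 55.0, "beneficiaries": 95.0,  "env_rating": 5.0},
}
weights = {"cost": 0.3, "beneficiaries": 0.5, "env_rating": 0.2}
higher_is_better = {"cost": False, "beneficiaries": True, "env_rating": True}

def normalize(criterion, value):
    """Min-max normalize to 0-1, inverting criteria where lower is better."""
    values = [o[criterion] for o in options.values()]
    lo, hi = min(values), max(values)
    score = (value - lo) / (hi - lo) if hi > lo else 1.0
    return score if higher_is_better[criterion] else 1.0 - score

def additive_score(option):
    return sum(weights[c] * normalize(c, v) for c, v in option.items())

for name, option in sorted(options.items(), key=lambda kv: -additive_score(kv[1])):
    print(f"{name}: {additive_score(option):.2f}")
```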
If information is fully available and the policy goal at hand is the maximization of societal benefit, then optimizing models are most useful.23 But if information is incomplete or multiple policy goals are at stake, Beinat and Nijkamp suggest that use of a variety of decision criteria reflects compromise between multiple priorities, while discrepancies between outcomes and goals are traded off by use of preference weights (Beinat & Nijkamp, 1998). MCDA has been particularly salient for the transport sector (Tsamboulas, 2007), since transport projects often pursue a host of different objectives and raise complex issues. In terms of intuition and transparency, additive MCDA models that sum criteria with assigned weights are favorable since they are "able to cope with almost any problem" (Tsamboulas, 2007) and are easy to understand. While some additive MCDAs are contingent on wholly quantitative and statistical inputs, others, such as the Analytic Hierarchy Process (AHP), focus explicitly on expert value judgment to assign values to variables and criteria weights.

23. For the principle of optimization to be the basis of a decision, however, it must be assumed that all measures of performance related to the objectives of a proposal can be expressed in a common scale of measurement, as in CBA (Rogers & Duffy, 2012).

De Montis et al. (2004) extensively compare alternative decision methods for application to sustainable development, contrasting the approaches across their operational components, applicability in the user context, and applicability for the problem structure. They find that important considerations include the ability of the approach to deal with complex situations (criteria, different scales and aspects, type of data, uncertainties); the possibility to involve more than one decision‐maker (stakeholder participation, communication, and transparency); and the engagement of stakeholders in order to increase knowledge and change opinions on problem structuring and alternative solutions (De Montis et al., 2004).

Two critical issues in the application of MCDA, specifically for additive models, are the selection of the criteria by which alternatives will be assessed and the weighting of criteria. The simplest mode of weighting is equality, wherein all criteria are equally considered. For example, the Cities Development Initiative for Asia equally weights (0.20) project purpose, public response, environmental impact, socio‐economic impact, and feasibility of implementation in its toolkit for assessing public projects (CDIA, 2010). An alternative method is negotiated expert guidance, wherein a panel of decision‐makers decides weights based on experience and their basket of interests. A formalized, facilitated method of expert‐based criteria weighting is the Analytic Hierarchy Process (AHP), developed by Saaty (2004). The key input for AHP is decision‐maker responses to a series of pairwise comparisons of alternative options. Responses are used to derive criteria weights and performance scores. AHP is used extensively in the Republic of Korea for infrastructure planning to supplement SCBA analysis.24

24. AHP also has some weaknesses. For example, there is the methodological problem of "rank reversal", which occurs when an additional new option becomes available that results in the reversal of the initial alternatives' ranking. There have been attempts to overcome this, however, with various improvements that retain the underlying strengths of the approach (e.g., Wang & Elhag, 2006). The approach is supported by well‐known adaptations with corresponding software applications, including REMBRANDT, MACBETH, Expert Choice, and HIPRE.

Through an extensive review of journal articles focused on MCDA and infrastructure decision‐making between 1980 and 2012, Kabir, Sadiq, and Tesfamariam find that the application of multi‐criteria approaches to infrastructure selection and evaluation is on the rise.
The number of MCDA infrastructure management studies has risen from single digits in the 1980s to hundreds towards the end of the 2000s (Kabir, Sadiq, & Tesfamariam, 2014). Whilst these do not necessarily measure MCDA in use for selection (and are far fewer than studies based on CBA), the pattern indicates a sharp upswing in interest in MCDA for infrastructure. Of these 300 studies, however, only a subset of approximately 40 expressly applied MCDA to choose amongst alternative strategies for infrastructure maintenance and development (e.g., site selection, project architecture alternatives, technology selection), but with more limited scopes than sector‐wide project prioritization. Only fourteen attended broadly to ranking projects (as opposed to selecting amongst mutually exclusive alternatives), and only ten ranked projects across a sector. This suggests that the space for developing sectorial infrastructure prioritization strategies remains wide open.

Figure 11. Number of studies published related to MCDM in infrastructure management, 1980‐2012 (Kabir et al., 2014) [counts by three‐year period and infrastructure type: water resource management, water supply and wastewater, transportation, bridges, buildings, underground infrastructure, others, and total]. Source: Authors' figure, drawn from Kabir et al. (2014) data.

Annex 4. Examples of globally recognized indices

Economy: Composite of Leading Indicators (OECD); OECD International Regulation Database (OECD); Economic Sentiment Indicator (EC); Internal Market Index (EC); Business Climate Indicator (EC).
Environment: Environmental Sustainability Index (World Economic Forum); Wellbeing Index (Prescott–Allen); Sustainable Development Index (UN); Synthetic Environmental Indices (Isla M); Eco‐Indicator 99 (Pre Consultants); Concern about Environmental Problems (Parker); Index of Environmental Friendliness (Puolamaa); Environmental Policy Performance Index (Adriaanse).
Globalization: Global Competitiveness Report (World Economic Forum); Transnationality Index (UNCTAD); Globalization Index (AT Kearny); Globalization Index (World Markets Research Centre).
Society: Human Development Index (UN); Corruption Perceptions Index (Transparency International); Overall Health Attainment (WHO); National Healthcare Systems Performance (King's Fund); Relative Intensity of Regional Problems.
Innovation and Technology: Summary Innovation Index (EC); Networked Readiness Index (CiD); National Innovation Capacity Index (Porter and Stern); Investment in Knowledge‐Based Economy (EC); Performance in Knowledge‐Based Economy (EC); Technology Achievement Index (UN); General Indicator of Science and Technology (NISTEP); Success of Software Process Improvement (Emam).
Annex 5. Select Approaches to Prioritization

United Kingdom: The UK National Infrastructure Plan, managed by the Treasury's infrastructure unit, specifies an Infrastructure Top 40 list of projects marked for priority government support and investment. These projects are grouped by sector, but not listed in order of importance. Projects are chosen by the following criteria:
‐ Strategic importance (SI): significant contribution towards an objective;
‐ Capital value (CV): significant capital value;
‐ Regional priority (RP): high strategic importance or capital value in a region;
‐ Demonstrator (D): innovative or novel and could improve future delivery;
‐ Unlocking investment (UI): enables significant private sector investment (HM Treasury, 2014).

Australia: Infrastructure Australia, a federal statutory board established under the Department of Infrastructure and Transport, is tasked with planning and coordinating cross‐state road and public transport projects. In order to prioritize proposed projects, the agency applies a two‐stage process of project "profiling" and "appraisal." Profiling, as a first filter, qualitatively assesses the compatibility of proposed initiatives with strategic infrastructure priorities (i.e., key issues and problems) along a scale from "highly beneficial" to "highly detrimental" with respect to stated policy goals. Thereafter, CBA is employed as the primary tool for project appraisal, including estimates of Wider Economic Benefits (WEBs), such as those related to agglomeration. Advice on calculating WEBs is based on the UK government's Transport Analysis Guidance (2014). Following CBA, the process requires that assessors qualitatively discuss benefits and costs that generally cannot be monetized (e.g., visual / landscape, social cohesion, heritage or cultural impacts) and thereafter classify each non‐monetized item along a spectrum from "highly beneficial" to "highly detrimental." These two inputs are used to inform selection, which is based on expert review and the consensus of a panel of eleven members.

Australia, New South Wales: New South Wales has developed a Major Projects Assurance Framework inclusive of an additive multi‐criteria model. The framework assesses proposed projects at several stages of project planning and prioritizes projects according to assessed performance along two dimensions. Performance with respect to strategic objectives is measured by alignment with NSW's investment themes, value for money, the project's ability to afford citizens "a better life" (by reducing the cost of living and improving livability), and economic efficiency. Performance with respect to the 'Infrastructure NSW Project Assurance' objective is based on the sufficiency of the analysis, cost‐benefit analysis, professional assessments of the suitability of project management, and risk assessment. CBA is augmented by professional review and qualitative inputs. Qualitative assessments are numerically scored on a scale from 3 (strongly positive) to ‐3 (strongly negative) and added using a system of weights decided by a panel of professionals within Infrastructure NSW. Similar to the proposed IPF, projects are plotted onto a two‐dimensional plane, with axes defined by the Strategic Objective and Project Assurance Objective scores. Projects are classified as short‐, medium‐, and long‐term, depending on their collective scores. [Figure: Infrastructure NSW conceptual project mapping. Source: Government of New South Wales (2014)]
Korea, Rep.: The Republic of Korea employs cost‐benefit analysis, supplemented by multi‐criteria decision methods, to prioritize a large number of projects across sectors. Using the AHP structured expert pairwise technique, experts decide the weights of decision criteria, including SCBA. AHP has also been used to rank projects sub‐sectorally (primarily in transport) in the US, Indonesia, China, Turkey, India, and Palestine, but is not (to our knowledge) used as a national prioritization framework outside Korea.

Indonesia: During 2014‐2015, Indonesia's Committee for Acceleration of Priority Infrastructure Delivery (KPPIP) employed a three‐level infrastructure prioritization approach, including multi‐criteria analysis. Following a screening for basic project requirements, an additive multi‐criteria model was used to identify 22 priority infrastructure projects from amongst thousands of proposed projects. The indicators for project scoring and ranking (with associated additive weights) included project purpose (25%); feasibility of implementation (30%); socio‐economic impact (30%); and environmental impact (15%). The scoring and ranking outcomes were used as a basis of "committee discussion" that resulted in the shortlisting of 22 projects.

Cities Development Initiative (international partnership): The Cities Development Initiative published a 2010 City Infrastructure Investment and Prioritisation Toolkit that utilizes a multi‐criteria decision model to consider project purpose; public response; environmental impact; socio‐economic impact; and feasibility of implementation. The prioritization approach was designed to prioritize funding agency support for projects proposed by local municipalities.

Annex 6. Principal Component Analysis

Principal Component Analysis (PCA) is an information reduction procedure that seeks redundancies within a set of variables (Jolliffe, 2002). These redundancies can be expressed as linear combinations, or 'principal components', of the variables comprising the set. Each principal component is a weighted average of the original indicators or component variables. The coefficients, or weights, associated with the variables in each principal component are those that maximize the variance of each component. The first principal component corresponds to the linear combination of variables that retains the maximum information of the original data set. The notation for the first principal component is:

$PC_{1i} = w_{1}x_{1i} + w_{2}x_{2i} + \cdots + w_{k}x_{ki}$   (1)

where $i$ denotes each observation and $w_{j}$ denotes the weight for the variable $x_{j}$. In the context of the IPF, the first principal components of the social‐environmental and financial‐economic variable sets become the composite indices. The coefficients of each 'first principal component' are taken as the weights associated with each variable $x_{j}$. Statistical software such as SPSS and SAS include routines to perform PCA.
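As a companion to this annex, the following minimal sketch (in Python with NumPy, using hypothetical appraisal data for six projects) shows how first‐principal‐component loadings can be taken as variable weights and the resulting scores rescaled to a 0–100 range, alongside a simple equal‐weighting comparison of the kind used in the Panama sensitivity analysis. The rescaling convention and the sign convention are assumptions for illustration; the exact normalization used in the pilots may differ.

```python
import numpy as np

# Illustrative only: hypothetical appraisal data for six projects and three
# SEI-type variables (beneficiaries, poor population served, jobs created).
X = np.array([
    [120.0,  40.0, 300.0],
    [ 60.0,  25.0, 150.0],
    [ 95.0,  70.0, 220.0],
    [ 30.0,  10.0,  80.0],
    [150.0,  90.0, 340.0],
    [ 75.0,  55.0, 190.0],
])

# Standardize so variables measured in different units are comparable.
Z = (X - X.mean(axis=0)) / X.std(axis=0)

# First principal component: eigenvector of the correlation matrix with the
# largest eigenvalue; its loadings serve as the variable weights.
corr = np.corrcoef(Z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
w = eigvecs[:, np.argmax(eigvals)]
if w.sum() < 0:          # fix the arbitrary eigenvector sign so higher values score higher
    w = -w
print("PCA weights:", np.round(w, 3))

def rescale_0_100(scores):
    """Rescale scores to 0-100 (an assumed convention for display)."""
    return 100 * (scores - scores.min()) / (scores.max() - scores.min())

sei_pca   = rescale_0_100(Z @ w)           # PCA-weighted composite index
sei_equal = rescale_0_100(Z.mean(axis=1))  # equal-weighted comparison
print("PCA-weighted SEI:  ", np.round(sei_pca, 1))
print("Equal-weighted SEI:", np.round(sei_equal, 1))
```

Comparing the two sets of scores mirrors the sensitivity check reported for Panama: when the component variables are highly correlated, PCA‐derived and equal weights tend to produce similar project orderings.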