Report: Benjamin Dierker, “Building New Critical Infrastructure: No Time to Waste,” Alliance for Innovation and Infrastructure, July 2024. https://www.aii.org/wp-content/uploads/2024/07/Building-New-Critical-Infrastructure.-No-Time-to-Waste.pdf

Introduction

The Alliance for Innovation and Infrastructure (Aii) issued a report on the cost and timeline effects of competitive bidding for electric transmission projects, surveying the literature on competitive transmission bidding published over the past decade. In 2011, the Federal Energy Regulatory Commission (FERC) eliminated a federal right of first refusal (ROFR) to enable competitive bidding for many forms of regional transmission development. Before FERC removed the federal ROFR, regional transmission organizations (RTOs) assigned the right to build and own a project to the incumbent utility. FERC allows states to enact their own ROFRs, and over a dozen states have considered or enacted one. ROFR remains a prominently debated issue in state and federal policy today.

The report’s novel finding is that most parties do not adequately account for time, leading to unseen costs. Aii “[r]aises questions about many of the other reports purporting to show significant cost savings from competition and notes the results of reports showing the opposite finding.”

Literature Review

Aii’s literature review suggests that the merits of ROFR and competition are “centered on a back and forth between two stakeholder groups.” A 2019 report and response by the Brattle Group, cited by consumers and competitive suppliers, found 20 to 30 percent cost savings and innovation benefits. On the other side, a series of analyses backed by a group of incumbent utilities asserts that competition causes delays and increases costs.

Encouragingly, Aii finds that existing cost assessments of bidding outcomes are insufficient and that final project costs and in-service dates must be examined. This matters because both competitive and incumbent-built projects have incurred delays and cost escalations. The Brattle study largely preceded the availability of data on final project costs, and more recent studies do not comprehensively analyze and compare cost increases between competitively bid and incumbent projects. For example, the recent study Aii emphasizes cherry-picks projects to draw an improper inference favorable to incumbent projects. While the Aii analysis covers critiques of pro-competition studies, it omits the critiques of anti-competition studies, which have largely discredited those studies.

The Brattle study assumed that project costs increase (relative to initial estimates or bids) at the same rate for incumbent and competitive projects. If anything, this may be charitable toward incumbent projects, which generally flow project cost overruns into rates. Competitive projects tend to have stronger cost overrun controls (e.g., cost and return on equity caps), construction performance incentives, and superior visibility and monitoring of cost increases. However, more study is needed to reach definitive conclusions. 

Aii concludes that pro-competition entities cite the Brattle report and “little else for quantitative analysis of costs.” This is an overstatement. Aii ignores international experience documenting similar savings, as well as a body of quantitative work that includes updated cost estimates and recent comprehensive affidavits and evidence demonstrating 20 to 42 percent average cost savings from competition. Much of this material is buried in regulatory records, so overlooking it may be an honest oversight stemming from the literature survey’s design.

By contrast, cherry-picking findings in the identified literature is inexcusable. For example, the report excerpts a prominent Massachusetts Institute of Technology (MIT) paper for its descriptions of competitive solicitation costs and of cases where incumbents hold advantages (endnotes 13, 57, and 59). Yet Aii conveniently ignores MIT’s main conclusion: “the experience to date is sufficiently promising to consider expanding the use of open competitive procurement solicitations for transmission projects.” Overall, Aii’s literature review is a biased survey yielding invalid conclusions.

Project Timeline Effects

The Aii report concludes that competitive bidding pushes “in-service dates back by over a year on average.” This is based on 26 competitively bid projects, which averaged 433 days between RTOs’ need identification and selection of a winning bid. This approach has five profound flaws: 

  1. Aii’s dataset is skewed by high-value outliers, and the proper measure of central tendency for a skewed distribution is the median. The 433-day average the report references is 24 percent higher than the median of 349 days. The largest outliers (both 1,208 days) were the first solicitations to occur in New York, with subsequent competitive solicitations performing well (see the arithmetic sketch after this list).
  2. The 26 projects selected are a biased, unrepresentative sample. The sample over-represents the RTOs that use sponsorship models, where there is a longer gap between need identification and bid selection, and under-represents RTOs with voluminous competitive bidding records, such as the dozens of projects in California, where solicitations generally take under 200 days and the consumer advocate expects expanded solicitations to shorten development timelines.
  3. The time between need identification and bid selection is an exaggerated measure of the competitive solicitation timeline. Without competition, RTOs would still take months after need identification to diagnose solutions and select projects. The RTOs for the Midwest, Great Plains, and California bid out defined projects rather than needs, which provides a better measure of the time for solicitations.
  4. RTOs generally plan transmission to meet a distant need date, so accommodating extra months in the planning process does not alter the target in-service date. For example, the Midwest RTO planned its first tranche of projects to meet a need in 2030, so all winning bidders’ announcements committed to an in-service date of 2030. Cases of urgent in-service needs, often called “immediate need” projects, are exempt from competitive processes to prevent delays. Thus, Aii’s framing is largely a straw man.
  5. The report lists incumbents’ timeliness advantages but ignores competition’s advantages. Solicitations often produce creative, less expensive solutions with earlier in-service dates than incumbents offer. Competitive bidders commonly include schedule-delay penalties in their bids. Competition also improves benefit-to-cost ratios, which tends to reduce state permitting delays.
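
A minimal arithmetic sketch, using only the summary figures Aii reports (26 projects, a 433-day mean, a 349-day median, and two 1,208-day New York outliers), shows how heavily those outliers pull the average; the underlying project-level data are not reproduced here.

```python
# Back-of-the-envelope check using the summary figures from Aii's dataset:
# 26 competitively bid projects, a 433-day mean, a 349-day median, and two
# 1,208-day outliers (the first New York solicitations). Project-level data
# are not reproduced in the report excerpt, so this works from the summaries.

n_projects = 26
mean_days = 433
median_days = 349
outliers = [1208, 1208]

total_days = n_projects * mean_days  # 11,258 project-days in the sample
mean_without_outliers = (total_days - sum(outliers)) / (n_projects - len(outliers))

print(f"Mean exceeds median by: {mean_days / median_days - 1:.0%}")          # ~24%
print(f"Mean excluding the two outliers: {mean_without_outliers:.0f} days")  # ~368 days
```

Dropping just the two early New York solicitations pulls the average from 433 days to roughly 368 days, within about 5 percent of the median, which is why the median is the more informative summary statistic here.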

Altogether, the direction of the net effect of competition on transmission development timelines is unknown. Aii’s own dataset suggests that competitive solicitations take months, not over a year, to complete.

For perspective, transmission takes over 10 years to develop (on average) and lasts 50 to 80 years in service. Taking Aii’s methods and dataset at face value, the median time for a competitive solicitation amounts to roughly 9 percent of the total development timeline and under 2 percent of the asset’s service lifespan. Ultimately, those who bear the consequences of transmission costs and delays—the customers—remain steadfast proponents of competition.
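
The proportions above follow directly from the figures already cited: a 349-day median solicitation, a roughly 10-year development timeline, and a 50- to 80-year service life. The sketch below simply reproduces that arithmetic.

```python
# Reproduce the share-of-timeline arithmetic from the figures cited above:
# a 349-day median solicitation, roughly 10 years of development, and a
# 50- to 80-year service life.

median_solicitation_days = 349
development_years = 10
service_life_years = (50, 80)

dev_share = median_solicitation_days / (development_years * 365)
life_shares = [median_solicitation_days / (years * 365) for years in service_life_years]

print(f"Share of a 10-year development timeline: {dev_share:.1%}")   # ~9.6%
print(f"Share of a 50- to 80-year service life: "
      f"{min(life_shares):.1%} to {max(life_shares):.1%}")           # ~1.2% to ~1.9%
```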

Conclusion

While Aii’s report has the tone of an objective study, it is structurally biased and unfit to guide policymaking. It overstates the length of competitive solicitations, ignores countervailing effects, and provides no credible evidence that competition causes a significant net delay in transmission development. It puts more weight on discredited studies than on ones that have withstood scrutiny, and its claim calling the cost savings of competition into question is unsubstantiated.

The Aii report’s methodology is fatally flawed. The empirical approach suffers from severe selection bias and relies on the wrong descriptive statistic. The literature survey excludes salient evidence on transmission competition, evidence that largely discredits the studies to which Aii’s report gives the most weight.

Perhaps the only actionable conclusion in Aii’s report is that “[m]ore research would be useful.” Aii presents worthwhile research questions, but credible methods should be used to answer them.