Policy pragmatism prevailed in California yesterday when Gov. Gavin Newsom vetoed SB 1047, the “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act.” The measure, which had passed the California Legislature a month earlier, proposed a radical new approach to digital technology policy in America. Newsom wisely rejected it because it would have come at “the potential expense of curtailing the very innovation that fuels advancement in favor of the public good.”

Other lawmakers should heed this lesson and realize that a sweeping war on computation is the wrong way to craft artificial intelligence (AI) policy for the nation. Policymakers can use more targeted policy levers and iterative solutions to govern AI and ensure its safety while also preserving the enormous life-enriching potential of algorithmic systems.  

Violating the Most Important Principle of Technology Regulation

SB 1047 proposed a comprehensive new regulatory regime for advanced computational systems based on fears about hypothetical harms. The bill established arbitrary thresholds for what constituted a powerful “frontier” AI model and defined the “critical harms” that might flow from such models. The measure also proposed a new general-purpose regulatory bureaucracy and many new reporting and auditing rules for covered models. These onerous mandates and preemptive regulatory processes would have had, as former House Speaker Nancy Pelosi argued when opposing the bill, “significant unintended consequences that would stifle innovation and will harm the U.S. AI ecosystem.”

SB 1047 was also extraterritorial in reach: its mandates were not limited to California companies, which would have left the state free to regulate virtually any AI developer in America. If other states followed this approach, the result would be a confusing compliance nightmare that could undermine the development of more sophisticated algorithmic systems nationwide.

At root, SB 1047 violated a core tenet of smart technology policy: Regulation should not bottle up underlying system capabilities; instead, it should address real-world outputs and system performance. Rep. Jay Obernolte (R-Calif.), who chairs the House AI Task Force, has correctly identified how policymakers must avoid AI policies “that stifle innovation by focusing on mechanisms instead of on outcomes.” Previous R Street research has noted that this is the most important principle for AI regulation.

This is where SB 1047 went wrong: It essentially treated the very act of creating powerful computational systems as inherently risky. Regulating computers, data systems, and large AI models to address hypothetical harms would have crippled America’s broader AI capabilities at a time when other nations, such as China, are looking to greatly accelerate their own.

Policy should instead use science and cost-benefit analysis to evaluate actual AI use cases. If specific AI applications create provable risks, policymakers can identify and address those risks. American law generally works this way for most technologies, ensuring innovation continues apace while safety concerns are addressed. As Newsom stressed in his veto statement, AI policy must be “led by experts, to inform policymakers on AI risk management practices that are rooted in science and fact.”

America’s massive administrative state already regulates—and sometimes over-regulates—algorithmic systems in this fashion. At the federal level alone, 439 departments and dozens of independent regulatory agencies possess long-standing, targeted mechanisms to address algorithmic developments in their areas. This summer, the Center for American Progress published a major report highlighting the extensive powers already available to government to regulate AI. This is not to say that existing regulation is always used appropriately, but it would be wrong to pretend that government is powerless to address new technological concerns. 

Consider how the Federal Aviation Administration, the Food and Drug Administration, and the National Highway Traffic Safety Administration already regulate autonomous and algorithmic systems that involve air, drug, and auto safety. These agencies possess plenary regulatory authority, but it is issue-specific by nature. These agencies might actually be regulating their sectors too aggressively in some instances, but it is better to address AI risks in this more targeted fashion instead of bottling up the underlying power of large computing systems in an attempt to address safety concerns. 

Meanwhile, lurking in the background is America’s tort system, where trial lawyers are always ready to pounce. Despite its many faults, the combination of targeted regulation and tort liability represents the superior way to address AI concerns.

What Happens After SB 1047

The debate over AI regulation will continue next year, and perhaps even more AI safety bills will be introduced in California and other states. Newsom signed several other AI bills this month, for example, and almost 800 AI-related bills are being considered across the United States today. This represents an unprecedented degree of political interest in a still-emerging technology. 

In the wake of Newsom’s SB 1047 veto, the debate over AI model regulation will likely shift to the federal government. Several major federal AI bills currently being considered by Congress would empower the National Institute of Standards and Technology (NIST) within the U.S. Department of Commerce to play a larger role in overseeing algorithmic systems, including frontier model safety. Following President Joe Biden’s massive AI executive order last October, NIST recently created a new AI Safety Institute to address many of these issues and has pushed leading model creators to formally collaborate with the agency on AI safety research, testing, and evaluation. While this process has some potential problems—beginning with the fact that Congress has not yet formally authorized this new bureaucracy or its functions—the federal government will likely continue to take the lead on AI frontier model governance.

Hopefully, states will avoid replicating the California approach to model-level AI safety regulation following Newsom’s veto of SB 1047. Instead, they will probably look to advance bills that resemble a major Colorado AI bill Gov. Jared Polis (D) signed into law in May as well as a similar measure that almost passed in Connecticut. These bills allege that “algorithmic discrimination” will arise if AI systems are not preemptively regulated, and they mandate impact assessments and audits to address it. While these measures are very different from California’s SB 1047, they raise many of the same concerns about the impact of regulation on innovation and competition. When signing the Colorado law, Gov. Polis noted he was “concerned about the impact this law may have on an industry that is fueling critical technological advancements across our state for consumers and enterprises alike.”  

Many other AI bills being introduced in the states today follow the Colorado and Connecticut approach. A major policy battle will ensue about this approach to AI regulation because mandatory AI impact assessments and audits will entail significant costs and trade-offs in their own right.

Conclusion

As this debate continues in 2025, policymakers should not forget that humility and forbearance are wise policy virtues in light of the complexities associated with regulating something as new and rapidly evolving as AI. As Gov. Newsom noted when vetoing SB 1047, “any framework for effectively regulating AI needs to keep pace with the technology itself.”

That is another reason why more targeted, iterative policy responses make more sense than sweeping measures like SB 1047, which would have set a disastrous precedent for AI regulation in America. As Newsom rightly concluded when he rejected the bill, there are many better ways of “protecting against actual threats without unnecessarily thwarting the promise of this technology to advance the public good.”
