State lawmakers are on their way to creating the equivalent of 50 different computational control commissions across the nation—a move that could severely undermine artificial intelligence (AI) innovation, investment, and competition in the United States.

California is leading the push for aggressive state AI controls, including a bill that would subject developers to criminal liability, but many other states are advancing their own regulatory agendas. According to MultiState.ai, which tracks state and local AI legislation, 585 state bills are now pending, and the number continues to grow. While these measures vary widely in scope and intent, many propose far-reaching bureaucratic constraints that would allow unprecedented government control of algorithmic systems.

Unfortunately, Congress is mostly ignoring what is happening in the states and failing to craft the pro-innovation national policy vision needed to maintain America’s current lead in advanced computation and algorithmic technology. While over 100 AI-related bills have been introduced at the federal level, these measures fail to address the patchwork problem presented by hordes of state and local AI micromanagers. If not constrained, this growing regulatory thicket could undermine U.S. global competitiveness as our nation faces rising threats from China and other countries in the race for AI supremacy.

A Sea Change in Approach

Many lawmakers’ aggressive efforts to preemptively regulate AI systems embody the antithesis of the wise policy approach adopted for the internet and digital technologies. In the mid-1990s, U.S. policymakers in Congress and the Clinton administration forged a bipartisan consensus to create a national policy framework for online speech and commerce rooted in flexible, pro-development policies.

This market-oriented policy vision produced an outpouring of innovation and investment. According to the U.S. Bureau of Economic Analysis, in 2022, the U.S. digital economy accounted for over $4 trillion of gross output, $2.6 trillion of value added (translating to 10 percent of U.S. gross domestic product), $1.3 trillion of compensation, and 8.9 million jobs. Today, 18 of the 25 largest digital companies in the world are U.S.-based.

State lawmakers seem hell-bent on reversing this remarkable success story. Many state AI bills recommend a precautionary principle-based approach to AI policy that would treat many algorithmic technologies as guilty until proven innocent and require developers to obtain bureaucratic permission slips before they could innovate—if they are allowed to at all.

A new Hawai’i measure would literally codify the precautionary principle for AI, which “shifts the burden of proof to those who want to undertake an innovation to show that it does not cause harm; and holds that regulation is required whenever an activity creates a substantial possible risk to health, safety, or the environment, even if the supporting evidence is speculative.” This is an impossible requirement for AI developers to satisfy because no technology can ever be proven perfectly safe before deployment. Unfortunately, this is becoming the standard for many state AI proposals.

Connecticut is considering legislation demanding that AI developers preemptively produce documentation disclosing the “foreseeable risk of algorithmic discrimination” in their systems. While well-intentioned, such an open-ended requirement would put state bureaucrats in a position to delay or deny innovations based on speculative fears. Meanwhile, Oklahoma, Colorado, and other states have floated bills that likewise imply that all AI systems are inherently discriminatory and demand that developers complete many layers of paperwork to prove otherwise. In pushing back against the Colorado measure, a coalition of smaller AI developers said the bill “would severely stifle innovation and impose untenable burdens on Colorado’s businesses, particularly startups.”

New York State is currently considering almost 80 AI-related bills, including a “Robot Tax Act” that would levy a new tax on any firm “using technology to displace workers,” even though automation is a normal part of almost every business sector. New York City has already implemented a major new rule for automated hiring tools, requiring annual “algorithmic bias audits” of such systems and exposing product developers to potential liability based on theoretical discrimination. This foreshadows the rise of city-by-city AI regulation, adding even more layers of suffocating and contradictory rules to the maze of red tape entrepreneurs could face.

California Takes the Cake

California is blazing an even more aggressive regulatory trail, with more than 50 AI bills pending—including the most problematic bill yet introduced in any state, SB-1047: the “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act.” This legislation would create a new Frontier Model Division within the California Department of Technology and grant it sweeping powers to regulate advanced AI systems. This new bureaucracy would be empowered to demand annual certification reports and impose other requirements on developers, under threat of criminal liability.

As with other state measures, the California bill makes hypothetical worst-case thinking the basis of precautionary principle-based prohibitions. As one technology advocacy group notes, the measure “forces model developers to engage in speculative fiction about imagined threats of machines run amok, computer models spun out of control, and other nightmare scenarios for which there is no basis in reality.” Adding insult to injury, this new California AI super-regulator would be funded by fees imposed on the AI companies it regulates, with its bureaucrats determining how to structure those levies.

This bill is one of the most far-reaching and potentially destructive technology measures under consideration today. Currently, only the largest tech companies would likely be able to absorb the bill’s compliance costs and liability threats. Smaller developers would be hit especially hard because the law’s expansive liability regime would extend to downstream derivative AI models, threatening open-source coders and the developers of original models with steep penalties for modifications made by others—thereby decimating innovation and competition from start-ups.

Meanwhile, California is also attacking AI and automation with other bills that would restrict self-driving trucks and limit the use of autonomous vehicles in commercial activities like ride-sharing. More incredible still are the outright Luddite bills that would ban self-checkout at grocery and retail stores and prohibit the use of AI in call centers that provide government services, making those services even less efficient.

Again, because Congress has done nothing to preempt these types of AI regulation, states like California can run wild in their ambition to bottle up algorithmic innovation. Even if not all of these bills pass, the growing patchwork of parochial red tape that does get on the books will create a death-by-a-thousand-cuts scenario as smaller innovators struggle with mounting compliance headaches and liability threats from a labyrinth of inconsistent mandates.

More Reasonable Governance Approaches

Not all state and local AI bills are as problematic as those mentioned here. Some bills or executive orders simply propose studying AI uses and exploring solutions to thorny problems like AI-generated deepfakes, deceptive election ads, or government and law enforcement uses of algorithmic capabilities. Although these topics raise their own complexities, this “study-and-review” approach makes much more sense than the sweeping regulatory schemes some states call for today. It is better to break AI policy into smaller components and legislate in a more targeted fashion than to presume we have all the answers up front. Patience and humility remain essential prerequisites for wise tech policymaking.

Of course, some additional AI regulation will be needed; but as R Street reports have documented, many agencies and policies covering specific algorithmic applications already exist. AI policy should focus on real-world outputs and outcomes—not inputs or theoretical dangers. “What really matters is that AI and robotic technologies perform as they are supposed to and do so in a generally safe manner,” a previous R Street analysis noted. “A governance regime focused on outcomes and performance treats algorithmic innovations as innocent until proven guilty and relies on actual evidence of harm and tailored, context-specific solutions to it.”

Policymakers should tap existing laws and agency remedies before imposing costly new precautionary controls based on hypothetical fears. Those existing policies include civil rights laws, unfair and deceptive practices regulations, recall authority for defective products, and targeted lawsuits for other algorithmic harms that can be proven in court. To the extent that state and local governments continue to legislate on AI matters, they should follow this same approach, giving AI innovation some breathing room and filling policy gaps as needed.

But as parochial policies proliferate, Congress will need to consider how to bring some harmony to these rules, allowing the development of a national framework so that America remains the leader in the global race for advanced computational capabilities. “We don’t want to do damage,” Sen. Mike Rounds (R-S.D.) argued recently. “We don’t want to have a regulatory impact that slows down our development, allows development [of AI] near our adversaries to move more quickly.” It will ultimately be up to Congress to ensure that state and local governments do not undermine the positive innovation culture that made the United States a powerhouse in information and computational technology.