The temptation to “do something” on artificial intelligence will grow as the midterm elections draw near and the media continues to focus on fears about anecdotal AI harms. In particular, state lawmakers will continue to pursue many AI-related initiatives in 2026.

More than 1,200 AI-related bills were floated in 2025, and more than 180 passed.[1] While some of these measures may be well intentioned, and a few even necessary, many are likely to run afoul of constitutional principles.

These constitutional flaws cannot be dismissed by analogizing AI to earlier technologies in which states played a significant legislative role. Much more so than the technologies that characterized the agrarian and industrial ages, AI development and deployment implicates interstate commerce and speech considerations, and accordingly, may in some cases violate the dormant commerce clause or the First Amendment.

Legislators rushing to regulate AI must keep these constitutional guardrails in mind and ensure each state stays in its appropriate regulatory lane while still allowing for useful experimentation.

AI Is Not Pencils or Pork Bellies

State AI regulations have a different nexus with interstate commerce than other state laws. These distinctions have legal consequences. In its 2023 ruling in National Pork Producers Council v. Ross, the U.S. Supreme Court upheld a California law that prevented the sale of pork that fell short of the state’s confinement standards.[2]

A key factor in that decision was that out-of-state producers have myriad ways to alter their operations to minimize the disruptive effect of the law. They can, for example, raise pigs intended for the California market one way and all other pigs in another fashion. In short, the court seemed to question whether the nature of pig husbandry “imperatively demand[ed] a single uniform nationwide rule.”

In contrast, algorithmic and computational commerce often demands just that. The underlying technology is highly interconnected, intangible and instantaneous. From how AI is trained to how it is deployed by end users, many AI laws implicate a nationwide digital infrastructure. The digital bits of information used to train models and to generate outputs travel effortlessly across state lines.

In many instances, those outputs are then made part of the national marketplace of ideas. While it is possible to craft AI-related laws that do not disrupt the flow of digital traffic, doing so requires the sort of technical scrutiny that is often lacking in state legislatures.

The modern digital economy and online speech thrive in the U.S. because, generally speaking, the nation has not treated digital data flows as uniquely geographic, place-dependent events.

Since the dawn of the internet, it has been well understood that parochial regulation of online speech and commerce would give rise to unique problems and undermine the flourishing of robust national markets and speech.[3] Bipartisan policies established in the mid-1990s helped ensure that did not happen.[4] Data flows relatively freely in the U.S. because public policies allow it.[5]

America could have taken the path of other governments — most notably, the European Union — and moved to lock down, tightly regulate and tax data systems and transmissions. The country could have allowed every algorithmic transaction to be treated as a unique geographical event.

California bits could be treated differently from New York bits, and they could both be treated still differently from the bits, algorithms, applications and other digital services across America. States and localities might be permitted to erect technological geofences and algorithmic blockades everywhere to regulate AI speech and commerce. In theory, this could be done right down to the municipal and county level, too.

The aggregate effect of this legal friction, though, could impose tremendous costs on consumers and place a large drag on innovation. This is precisely why in its 1981 Kassel v. Consolidated Freightways Corp. decision, the Supreme Court struck down an Iowa law prohibiting the use of 65-foot trucks on its roads due to alleged safety concerns.[6]

Iowa viewed each truck as a discrete, regulable entity rather than a small part of a national transportation network. While Iowa — and all states — have the authority to regulate intrastate aspects of that network that do not unduly impede its function (think speed limits), the Supreme Court made clear that there are some impassable lines.

The Algorithmic Articles of Confederation

A bits-by-bits approach to AI governance would have massively deleterious effects in terms of the free flow of commerce and speech. We could think of it as a sort of “Algorithmic Articles of Confederation,” with state governments being granted de facto constitutional supremacy in regulating the daily inner workings of the modern economy.

Tellingly, the Founding Fathers faced a similarly fragmented regulatory approach related to the key vehicles of commerce in the 1700s — ships and ports.[7] Rather than continue to allow each state to impose conflicting and even contradictory specifications for which ships and products could be sold and where, they placed myriad protections against such impositions in the U.S. Constitution.

If Congress chooses to remain silent on AI governance matters, this sort of balkanized regulatory scenario could be where the nation is heading. An avalanche of parochial AI regulation looms this year as state and local governments appear ready to push the horizons of AI regulation out even further.

One report published in November 2025 by Pluribus News noted that, in the wake of Congress failing to legislate any national guidelines in 2025, state lawmakers “say they feel even more emboldened,” and are now “retooling as they prepare for another round of fights over artificial intelligence regulation” in 2026.[8]

Analogies to the Past Have Their Limits

The public, policymakers and even the courts often reason by analogy, reaching for agrarian-era or industrial-era comparisons when confronting modern technology policy issues. As a result, they often get things wrong, assuming that regulating AI and algorithmic systems on a piecemeal, patchwork basis is just like regulating pencils and pork bellies.

But it is nothing of the sort. There is a massive difference in kind and magnitude that is underappreciated by many supporters of the AI states’ rights regulatory model.

To understand why even seemingly light-touch state AI regulations may put a heavy weight on constitutional freedoms related to interstate commerce and speech, it’s necessary to understand how AI is actually trained and used. AI labs train models on vast troves of data collected from across the internet.[9]

The training process itself is made possible by data centers located around the U.S. and beyond.[10] The final model exists in files that do not have a fixed geographic home.[11] Training is incredibly expensive and complex. Even the best-resourced labs lack the capacity to train a model subject to two standards, let alone 50 different specifications.[12]

What’s more, these models are quite sensitive. In the same way that altering one or two ingredients in a recipe can lead to a wildly different dish, such as swapping sugar for salt when baking cookies, a subtle shift in AI training can lead to long-lasting and perhaps irreversible changes to the model’s behavior.[13]

A model trained to one state’s standards may respond to user prompts slightly differently than if it had been trained solely to the lab’s own specifications. Given that users may soon rely on AI for more and more tasks, these subtle shifts in the tone and substance of responses may have real effects on how users think, what sources they seek out, and what information they learn.[14]

These attributes of AI make it distinct from many of the technologies governed in prior eras. State regulation of a crop would not alter how that crop grows or what characteristics it takes on elsewhere. Likewise, state regulation of how to assemble a car need not change how those same models are assembled in other states.

While some AI regulations may similarly not alter the AI made available around the nation, drawing that line is much harder given the aforementioned aspects of how AI is developed and used.

Interstate Innovation and Speech Demand Constitutional Protection

An “AI Articles of Confederation” approach would reverse the past 30 years of digital technology policy. It would also undermine efforts to create a coherent national AI policy framework, one that can boost the many potential life-enriching benefits of advanced algorithmic systems while ensuring the nation stays ahead of China and other rivals in the race for supremacy in advanced computational technology.

Such a regulatory model will be particularly problematic for smaller innovators that lack the legal teams needed to navigate mountains of confusing and costly compliance requirements.[15]

Moreover, the fact that AI is also an information technology means that there are often important speech-related considerations at stake that necessitate greater scrutiny and protection. For this reason, state efforts to legislate on AI policy may be problematic not only because they create a confusing patchwork that runs contrary to constitutional federalism principles, but also because some laws impose speech restrictions and obligations that violate the First Amendment.[16]

That does not mean that all algorithmic activities automatically default to federal oversight. However, it does mean that greater care must be exercised by state and local governments when legislating around these technologies and sectors. While states will have plenty of room to enforce existing consumer protections and generally applicable laws, Congress still needs to assert itself and create a national policy framework that limits the potential emergence of this scenario.

Conclusion

When we both testified before the House Judiciary Committee last September at a hearing focused on these issues,[17] we outlined specific steps federal lawmakers should adopt in formal legislation to ensure that national AI markets and priorities are protected from a fragmented, chaotic patchwork of state and local regulatory policies.[18]

For example, New York and California have both recently passed similar major bills addressing frontier AI systems, mandating various disclosures, risk assessments and transparency requirements among other things. After signing the New York version of the law, i.e., the Responsible AI Safety and Education Act, Gov. Kathy Hochul boasted on social media that the bill sets the “national standard” for AI governance.[19]

While these bills were more moderate than their previous iterations, Albany and Sacramento should not be dictating national standards in an extraterritorial fashion for AI labs located outside their borders. Luckily, Congress can easily federalize those standards and provide a simpler framework for frontier model safety standards. Lawmakers can do this by giving the new Center for AI Standards and Innovation, a Biden administration-created body within the National Institute of Standards and Technology, the ability to establish similar guidelines.

Congressional lawmakers should take other steps to limit state efforts to regulate the development of AI systems and applications through confusing AI audits or algorithmic impact assessments. While states will remain free to police discrimination and harms under existing civil rights laws or consumer protection regulations, those policies are generally enforced after a showing of harm, not through ex ante prohibitions.

If states remain free to impose new ex ante precautionary AI audits or risk assessments, Congress should have a hand in establishing more straightforward and consistent evidentiary standards to avoid a crazy quilt of different compliance policies and liability standards.

In a similar way, a federal AI bill could also instruct the Center for AI Standards and Innovation to establish a new standing working group for federal and state officials to hammer out clear, consistent AI governance guidelines going forward.

Again, states will likely continue to have some leeway to legislate around novel algorithmic issues as they develop, but it is entirely reasonable for the federal government to have a say in how that process works to minimize definitional confusion and needless red tape hassles that would deter interstate innovation and competition.

Finally, Congress can also establish clearer guidelines for other targeted AI issues and sectors, several of which are already primarily under federal jurisdiction. Those include algorithmic and autonomous systems already subject to policies set by the Federal Aviation Administration, the U.S. Food and Drug Administration, the National Highway Traffic Safety Administration, and various fintech and securities regulatory bodies. Too many states and localities are already holding back lifesaving driverless car innovation with a patchwork of burdensome rules.

Both states and the federal government have roles to play in governing AI,[20] but getting this balance of powers and responsibilities right is not easy.[21]

As President Donald Trump outlined in his Dec. 11 executive order about ensuring a national AI policy framework, Congress should obviously leave some issues, such as government use of AI and local decisions related to data center operations, to the states.[22]

And even on the questions appropriately addressed by Congress, there needs to be some limits on how far federal agencies go in regulating AI systems. But Congress must not abdicate its responsibility and cede control over this interstate market to state and local officials. Federal lawmakers must ensure that America’s constitutional framework and national markets continue to thrive in the age of AI.


Kevin Frazier is a senior fellow at the Abundance Institute and director of the AI Innovation and Law Program at the University of Texas School of Law.

Adam Thierer is a senior fellow for technology and innovation at the R Street Institute.

The opinions expressed are those of the author(s) and do not necessarily reflect the views of their employer, its clients, or Portfolio Media Inc., or any of its or their respective affiliates. This article is for general information purposes and is not intended to be and should not be taken as legal advice.

[1] National Conference of State Legislatures, Artificial Intelligence 2025 Legislation (July 10, 2025), https://www.ncsl.org/technology-and-communication/artificial-intelligence-2025-legislation.

[2] National Pork Producers Council v. Ross, Oyez (May 11, 2023), https://www.oyez.org/cases/2022/21-468.

[3] Dan Burk, How State Regulation of the Internet Violates the Commerce Clause, Cato Journal (1997), https://www.cato.org/sites/cato.org/files/serials/files/catojournal/1997/11/cj17n2-2.pdf.

[4] Adam Thierer, The Policy Origins of the Digital Revolution: The Continuing Case for the Freedom to Innovate, R Street Institute (August 15, 2024), https://www.rstreet.org/commentary/the-policy-origins-of-the-digital-revolution-the-continuing-case-for-the-freedom-to-innovate.

[5] Adam Thierer, Testimony: AI at a Crossroads—A Nationwide Strategy or Californication?, R Street Institute (September 18, 2025), https://www.rstreet.org/outreach/adam-thierer-testimony-hearing-on-ai-at-a-crossroads-a-nationwide-strategy-or-californication.

[6] Kassel v. Consolidated Freightways Corp., Justia (March 24, 1981), https://supreme.justia.com/cases/federal/us/450/662/.

[7] Sam Heavenrich, Regulating Artificial Intelligence Through the FTC, Yale Law Journal (2022), https://yalelawjournal.org/pdf/132.2_Heavenrich_kh391k2m.pdf.

[8] Austin Jenkins, State Lawmakers Gear Up for AI Regulation Battles in ’26, Pluribus News (December 18, 2024), https://pluribusnews.com/news-and-events/state-lawmakers-gear-up-for-ai-regulation-battles-in-26/.

[9] AI Models Database: Data Insights, Epoch AI (January 1, 2025), https://epoch.ai/data/ai-models#data-insights.

[10] Inside the Data Centers That Train AI and Drain the Electrical Grid, The New Yorker (November 3, 2025), https://www.newyorker.com/magazine/2025/11/03/inside-the-data-centers-that-train-ai-and-drain-the-electrical-grid.

[11] Neil Chilson, Clearing the Path for AI, Abundance Institute (September 2025), https://abundance.institute/our-work/clearing-the-path-for-ai.

[12] Adam Thierer, Testimony: AI at a Crossroads—A Nationwide Strategy or Californication?, R Street Institute (September 18, 2025), https://judiciary.house.gov/committee-activity/hearings/ai-crossroads-nationwide-strategy-or-californication.

[13] Emilio Ferrara, Eliminating Bias in AI May Be Impossible—Here’s How to Tame It Instead, The Conversation (June 27, 2023), https://theconversation.com/eliminating-bias-in-ai-may-be-impossible-a-computer-scientist-explains-how-to-tame-it-instead-208611.

[14] Humans Absorb Bias from AI, and Keep It After They Stop Using the Algorithm, Scientific American, https://www.scientificamerican.com/article/humans-absorb-bias-from-ai-and-keep-it-after-they-stop-using-the-algorithm/.

[15] Collin McCune, The Precautionary Empire: Why Policymakers Fail Builders, a16z (September 4, 2025), https://a16z.com/the-precautionary-empire-why-policymakers-fail-builders/.

[16] AI and the First Amendment, a16z AI Policy Brief (November 26, 2025), https://a16zpolicy.substack.com/p/ai-and-the-first-amendment.

[17] AI at a Crossroads: A Nationwide Strategy or Californication?, U.S. House Committee on the Judiciary, https://judiciary.house.gov/committee-activity/hearings/ai-crossroads-nationwide-strategy-or-californication.

[18] AI and the First Amendment, a16z AI Policy Brief (November 26, 2025), https://a16zpolicy.substack.com/p/ai-and-the-first-amendment.

[19] Governor Kathy Hochul, X (Dec. 19, 2025), https://x.com/GovKathyHochul/status/2002169948743897310.

[20] Matt Perault & Jai Ramaswamy, The Commerce Clause in the Age of AI: Guardrails and Opportunities for State Legislatures, a16z (September 2, 2025).

[21] Kevin Frazier, Matt Perault & Jai Ramaswamy, Who Regulates AI, a16z AI Policy Brief (November 20, 2025), https://substack.com/home/post/p-179330362.

[22] Ensuring a National Policy Framework for Artificial Intelligence, White House (December 11, 2025), https://www.whitehouse.gov/presidential-actions/2025/12/eliminating-state-law-obstruction-of-national-artificial-intelligence-policy/.