Political interest in regulating artificial intelligence (AI) systems and applications exploded in 2025. According to one AI-bill tracking service, policymakers have already floated over 1,100 AI-related legislative proposals this year. State activity has thus far outpaced federal efforts, and this looming patchwork of differing state regulatory proposals has raised concern among federal lawmakers. This growing problem demonstrates the immediate need for a comprehensive federal AI framework. In a November 18 social media post, President Donald Trump said of the matter that “overregulation by the States is threatening to undermine this Growth Engine,” arguing that the nation must “have one Federal Standard instead of a patchwork of 50 State Regulatory Regimes.”

President Trump and members of Congress are rightly concerned. The growth of so many parochial AI policies has problematic ramifications for interstate algorithmic commerce and for national AI development and national security priorities. Federal lawmakers are particularly concerned about the “Californication” of AI policy, whereby that state pushes what one analyst calls a “stealth campaign” to dictate AI policy standards for the entire nation, much as California already has done for environmental rules, labor policy, and data privacy standards. This phenomenon is sometimes called “the Sacramento effect.” President Trump identified this problem when launching the administration’s new AI Action Plan in July, noting that “if you are operating under 50 different sets of state laws, the most restrictive state of all will be the one that rules.” Gov. Gavin Newsom (D-Calif.) has already signed dozens of AI laws over the past two years, which will have extraterritorial effects on many AI firms and activities well beyond the state’s borders. Left unchecked, “California writes the playbook for AI regulation,” another analyst notes.

To head off this outcome, some members of Congress attempted to include a 10-year pause on state and local AI-specific regulatory activity in the One Big Beautiful Bill Act this summer. The amendment passed in the House but failed on the Senate floor, even after a compromise was briefly considered that would have shortened the pause and added further exemptions. Since then, state AI-specific proposals have continued to proliferate, and several more major state regulatory bills have passed.

If Congress fails to create any sort of federal AI framework, it will open the floodgates to even more confusing and costly regulations in 2026. “State lawmakers are retooling as they prepare for another round of fights over artificial intelligence regulation in the new year,” one news report noted recently. “Legislators say they feel even more emboldened” in the absence of federal guidelines and are looking to launch a wide variety of new regulatory initiatives. Taken together, AI is on the way to becoming, as one leading analyst summarizes, “the most heavily regulated nascent, general-purpose consumer technology in modern history.”

Congress now has another chance to address this AI regulatory patchwork as part of the National Defense Authorization Act (NDAA) budget process. The NDAA is a regular congressional spending authorization process that addresses a wide variety of defense and national security-related activities, and many amendments are included before its final passage. The current NDAA could include a revised multi-year regulatory pause on state and local AI-specific regulation while also incorporating new federal solutions to some of the AI concerns states are looking to address.

The White House could also take additional executive actions to address state overreach on AI using oversight tools within the Department of Justice and Federal Trade Commission, or by using budget strings to restrict grants to states that are looking to over-regulate AI matters that are interstate in character. A rumored forthcoming White House executive order might adopt that approach.

The best solution would be for Congress to carry out its constitutional responsibility to protect interstate commerce and national AI priorities instead of leaving it to the executive branch to handle now or in future administrations. As part of this effort, Congress can also consider addressing some of the concerns animating state regulatory proposals, including AI model safety and transparency, and issues involving AI chatbots and child safety. Congress already voted overwhelmingly to create a national approach to digital revenge porn with the passage of the “Take It Down Act” earlier this year.

The time has come for federal lawmakers to get serious about national AI priorities and the governance of interstate algorithmic activities before it is too late to stop an onslaught of confusing state-based rules.

The American Founders and Modern Technology Markets

Before itemizing the dangers of the growing AI patchwork, it is important to address some of the myths and misconceptions clouding the debate over AI preemption, including some basic misunderstandings of the nature of America’s federalist system of governance.

The brilliance of America’s long-lasting constitutional framework is that it depends upon a division of responsibilities among many different governments. Federalism is not synonymous with “states’ rights,” however. Both the states and the federal government have constitutionally delineated responsibilities. While states have broad discretion to address matters of local concern in their jurisdictions, the federal government has responsibilities pertaining to interstate commerce and national priorities. For example, America’s founders included important provisions in the Constitution addressing coinage, bankruptcies, weights and measures, contracts, shipping policies, and trade across state borders and ports. They adopted these provisions after ruinous protectionism developed during the decade when the Articles of Confederation governed the Union from the period following the American Revolution until ratification of the Constitution.

The Articles were something akin to a near-absolute “states’ rights” approach to governance, with the federal government largely powerless to stop the resulting conflicting policies and disputes among the states that undermined national commerce and threatened domestic tranquility. Once the Founders abandoned the Articles and adopted our current Constitution, robust interstate trade became possible. Over 235 years later, this brilliant document continues to serve as the foundation of the world’s most successful economic union.

In a 1999 book, The Delicate Balance: Federalism, Interstate Commerce, and Economic Freedom in the Technological Age, I explained how America’s constitutional framework has had continuing relevance for the development of many important modern technologies and sectors. America came to dominate both the industrial revolution and the information revolution because our Constitution was able to adapt and accommodate the evolution of new technologies, markets, sectors, and professions. As new technologies dawned—railroads, pharmaceuticals, finance, aviation, space, the internet—Congress took steps to protect interstate commerce and national development priorities. In each case, although to varying degrees, Congress preempted a patchwork of conflicting state laws to ensure the nation benefited from the development of robust national markets.  

The time has come for Congress to do the same for AI markets and modern computational technologies.  

The Problems with a Patchwork

Recent R Street congressional testimonies, filings, and other essays have highlighted three general problems associated with the proliferating patchwork of state and local AI-related regulations.

1. Diminished investment, competition, and development

The governance approach that America adopted for computing, the internet, digital systems, broadband networks, and now AI infrastructure has produced amazing results for the economy and consumers in terms of expanded knowledge, economic growth, jobs, and new products. In 2022 alone, the U.S. digital economy generated over $4 trillion in output (10 percent of GDP).

These policies had compounding returns as a new “AI spring” blossomed in recent years. It is already the case that AI capital expenditures are fueling “a massive private sector stimulus program,” with firms on course to spend an estimated $400 billion on AI infrastructure by the end of 2025. If current trends continue, by year’s end, this investment will, according to one analyst, “exceed peak annual spending during some of the most famous investment booms in the modern era, including the Manhattan Project, NASA’s spending on the Apollo Project, and the internet broadband buildout that accompanied the dot-com boom.” In total, Morgan Stanley predicts nearly $3 trillion of private-sector AI investment through 2028.

These are astonishing numbers, but a patchwork of burdensome new regulations could slow or even reverse these positive trends. As regulation becomes more complicated, it imposes a variety of trade-offs and burdens on markets and the economy. In May 2024, Gov. Jared Polis (D-Colo.) signed a major new AI regulatory measure into law, but he also noted that state AI regulations like his could create “a complex compliance regime for all developers and deployers of AI” that will “tamper innovation and deter competition.” Congress must develop “a needed cohesive federal approach,” Polis said, “to limit and preempt varied compliance burdens on innovators and ensure a level playing field across state lines along with ensuring access to life-saving and money-saving AI technologies for consumers.” President Obama’s chairman of the White House Council of Economic Advisers has similarly argued that “federal pre-emption with its own framework would help ensure the U.S. remains a digital single market” and encourage more innovation. Subsequent analysis of the Colorado bill by the Common Sense Institute found that the law could cause an estimated 40,000 job losses and eliminate $7 billion in economic output by 2030.

As parochial AI mandates multiply, they will introduce still more confusion into the national marketplace. While some state lawmakers initially promised to devise consistent approaches to their rules, jurisdictions still cannot agree on common definitions of terms like “high-risk” or “consequential decisions,” or on who counts as a “developer,” a “deployer,” or a “distributor” of AI services. This will create serious costs for all innovators, but especially smaller ones. Many small businesses are currently tapping AI capabilities to create exciting new tools and services, but this regulatory thicket will create legal confusion and formidable compliance costs that will hit so-called “Little Tech” innovators hardest. As one tech scholar observes, “paperwork favors the powerful. The more paperwork that’s required, people with resources will get through it and people without them will not.”

Congress needs to address these matters in federal legislation to encourage more AI entry, investment, and innovation. Federal lawmakers are also better positioned than state regulators to set national AI safety standards.

2. Foregone life-enriching innovations 

Diminished AI investment and development would in turn undermine the many potential life-enriching benefits of advanced algorithmic systems. An ongoing R Street series documents the remarkable benefits of AI-enabled health innovations and the potential for algorithmic systems to usher in a new era of highly personalized medicine that could profoundly improve human health and welfare along multiple dimensions.

Health-related innovations are already quite expensive to develop, partly because innovators face many existing regulatory barriers to new drug and medical device approval. A patchwork of new AI-specific health mandates could make it even harder for some new firms and innovations to launch. A recent survey by Manatt Health found that, through October, 47 states had introduced over 250 AI healthcare-related bills and 21 states had enacted 33 of those bills into law. These include both health sector-specific bills and broad-based “algorithmic discrimination” bills that could impose new regulatory burdens on the health sector insofar as they regulate “high-risk” activities (a category that would likely include health-related AI systems).

If Congress fails to address this confusing set of rules, definitions, and liability schemes, it will mean entrepreneurs face major new barriers to entering markets and offering the public life-enriching new services.  

3. Diminished geopolitical strength

A final important reason for Congress to establish a federal framework for AI policy involves national security considerations. R Street Institute testimony from April explained how technological development, global competitiveness, and national strength are complementary. A national AI policy framework that encourages a positive innovation culture can help America win the developing “AI Cold War,” which pits China and the United States against each other in a struggle for geopolitical technological supremacy in advanced computation.

China is working actively to bolster domestic AI development and promote the global diffusion of its AI systems through efforts such as the Digital Silk Road initiative. If China tops the United States in the AI diffusion race, its authoritarian values of control and censorship will spread more easily through its technological systems.

Because national security considerations are in play, it makes sense for Congress to address AI policy as part of the NDAA process.

State and Local Governments Will Continue to Play Important Roles

Not all state and local AI proposals are equally problematic, and even with a federal AI policy framework in place to limit some AI-specific regulations, governments at all levels will continue to shape AI policy. States will be able to enforce various generally applicable laws such as unfair and deceptive practices regulations, civil rights law, product recall authority, court-based common law remedies, and a variety of other consumer protections. Biden administration officials correctly observed that “there is no AI exemption for the laws on the books,” and Democratic attorneys general have also rightly noted that a wide variety of state laws “apply to emerging technology, including AI systems, just as they would in any other context.”

States will be able to pass other technology-neutral laws that do not directly limit the inner workings of AI models and applications. Moreover, state and local efforts to speed the development and diffusion of AI infrastructure or systems present no conflict with a federal AI policy framework that imposes limitations on regulatory activity.

State and local permitting rules for data centers are becoming more politically salient and could become a formidable barrier to additional investment in the physical infrastructure needed to power advanced AI systems. A federal AI framework would not reach these issues, however. Again, it is only when state and local governments are attempting to impose technology-specific mandates on AI systems that a federal AI framework would limit regulatory action.

Congress Needs to Act

As noted, when formulating a national AI policy framework, Congress can also address frontier model safety and kids’ safety concerns that motivate many state AI proposals today. Congress need not address every issue simultaneously, however, and attempting to do so could risk overloading any short-term compromise, potentially resulting in no federal action at all. Congress should begin with a baseline proposal that creates the foundation of federal-state AI policymaking and then build upon it in coming months and years.

Toward that end, as R Street recommended in recent testimony, lawmakers should create a new standing AI working group within the Department of Commerce to coordinate other federal-state AI policy matters as new issues and jurisdictional conflicts develop. Even where Congress cedes authority to the states on some AI issues, federal officials can help craft consistent standards and common definitions to ensure enforcement clarity and minimal compliance burdens. Lawmakers should also work together to tap less-restrictive alternatives to regulation, including AI literacy and educational efforts.

Congress should act promptly on AI policy. Ignoring the developing patchwork of confusing, contradictory, and costly state and local AI policies will have deleterious ramifications for the future of American strength and the development of world-leading computational systems and applications.
