Understanding the Proposed AI Moratorium: Answers to Key Questions
by Kevin Frazier, AI Innovation and Law Fellow at UT Austin School of Law and Senior Editor at Lawfare;
and Adam Thierer, Resident Senior Fellow for Technology and Innovation at the R Street Institute
Last updated June 3, 2025. Want to speak with the authors? Please contact pr@rstreet.org.
Congress is considering a moratorium on state artificial intelligence (AI) regulation. This FAQ answers some basic questions about the proposal and will be updated occasionally to reflect breaking developments.
1. What is the proposed federal AI moratorium?
Contained within H.R. 1 (the “One Big Beautiful Bill Act”) is a provision seeking to prevent any state or its political subdivisions (e.g., cities or counties) from enforcing any law or regulation that specifically targets AI models, AI systems, or automated decision systems.[1] The prohibition would last for a 10-year period starting from the enactment date.
2. What is federal preemption, and how would it apply if the moratorium were enacted?
Rooted in the Constitution’s Supremacy Clause, federal preemption allows federal law to supersede state law when they conflict or when Congress intends to occupy a regulatory field exclusively. Because its language explicitly prohibits states from enforcing certain laws, the AI moratorium—if enacted—would be an act of express preemption. This means that validly enacted federal law would prevent states from making or enforcing their own AI-specific regulations for a 10-year period (unless an exception applies).
3. What is the stated or implied purpose of this 10-year moratorium?
While the draft language itself does not explicitly state its overarching purpose, proponents argue that such measures are intended to foster national uniformity in regulation. Preventing a “patchwork” of differing state laws would reduce compliance burdens for businesses operating across state lines and could potentially encourage innovation. More than 1,000 AI-related bills were introduced in the United States within the first five months of 2025—the vast majority of them state bills.
4. What is the budget reconciliation process, and why is it relevant to the AI moratorium proposal?
Budget reconciliation is a special legislative process that allows for expedited consideration of certain fiscal legislation, primarily to make changes to mandatory spending, revenues, or the debt limit. In the Senate, reconciliation bills have limited debate time and only require a simple majority (51 votes) to pass rather than the 60 votes often needed to overcome a filibuster. Including the AI moratorium in such a bill could be a strategy to pass it with a lower vote threshold, bypassing potential filibusters that might occur if it were a standalone bill.
The AI moratorium currently up for consideration is part of the large budget reconciliation bill that passed the House of Representatives on May 22. The specific portion of the bill containing the moratorium passed out of the House Energy and Commerce Committee on May 14.
5. What types of AI technologies does the moratorium aim to cover?
The moratorium language specifies three categories:
- Artificial intelligence model. “[A] software component of an information system that implements [AI] technology and uses computational, statistical, or machine-learning techniques to produce outputs from a defined set of inputs.”
- Artificial intelligence system. “[A]ny data system, hardware, tool, or utility that operates, in whole or in part, using [AI].”
- Automated decision system. “[A]ny computational process derived from machine learning, statistical modeling, data analytics, or [AI] that issues a simplified output, including a score, classification, or recommendation, to materially influence or replace human decision making.”
These definitions are broad and appear designed to encompass a wide range of current and future AI and automated technologies.[2]
6. What does “regulating” AI mean in the context of this moratorium?
The moratorium prohibits states from regulating AI, subject to the exceptions discussed in the response to the following question. The term “regulating” is not explicitly defined. As Barak Orbach observed, the term “escape[s] a clear definition” and amounts to “one of the most misunderstood concepts in modern legal thinking.” Regulation generally refers to “government intervention in liberty and choices—through legal rules that define the legally available options and through legal rules that manipulate incentives.” That said, other members of the legal community, including Supreme Court justices, have offered different definitions. Dictionary definitions also vary. For example, Black’s Law Dictionary defines regulation as “the act or process of controlling by rule or restriction” while the Oxford English Dictionary lists it as “to control, govern, or direct.” Importantly, Orbach notes that “[r]egulation often imposes no restrictions, but enables, facilitates, or adjusts activities, with no restrictions.”
These differences notwithstanding, it seems likely that the moratorium would, at a minimum, cover state laws that set specific standards for AI design, performance, data handling, and/or documentation or that impose civil liability, taxes, or fees specifically on AI systems. A key question will be whether this applies only to laws directly and primarily targeting AI or also to general laws that might incidentally affect AI systems. It is also unclear whether the moratorium would cover state laws that create voluntary mechanisms to enable, facilitate, or adjust AI-related activities. Overall, if the language is deemed ambiguous, the presumption against preemption might lead courts to favor a narrower interpretation, focusing on laws with AI as their direct subject matter.
7. Are there any exceptions to this moratorium on state laws?
Yes. Paragraph (2) of the provision outlines several exceptions as part of the proposal’s “rule of construction.” A state law or regulation can still be enforced if:
- Its primary purpose and effect is to remove legal impediments to or facilitate the deployment or operation of AI (e.g., a law making it easier to test AI in a certain sector).
- Its primary purpose and effect is to streamline administrative procedures like licensing, permitting, and zoning in a way that aids AI adoption.
- It does not impose substantive requirements on design, performance, data handling, documentation, civil liability, taxes, or fees unless that requirement is imposed under federal law or applies in the same way to non-AI systems that perform comparable functions. This crucial exception suggests that general laws (e.g., a general consumer protection law against deceptive practices) could still apply to AI systems if they apply equally to comparable non-AI systems.
- It does not impose a fee or bond unless it is reasonable, cost-based, and treats AI systems the same as comparable non-AI systems.
8. What is the Byrd rule, and how does it apply to the proposed AI moratorium?
The Senate’s Byrd rule (Section 313 of the Congressional Budget Act) is designed to prevent “extraneous” matter from being included in reconciliation bills. Originally intended as a temporary measure, it became a permanent part of the Budget Act in 1990. Its purpose is to ensure that reconciliation is used for its intended fiscal purposes rather than to enact major policy changes that are not primarily budget-related. Because the AI moratorium is part of a reconciliation measure, it will be scrutinized in the Senate to ensure it meets specific budgetary criteria.
9. What are the arguments for and against the moratorium violating the Byrd rule?
The Byrd rule includes six tests for extraneousness. The AI moratorium is most vulnerable under:
- Test 1 (No effect on budget). Opponents like the National Conference of State Legislatures argue that the moratorium’s primary effect is to limit state authority, not to directly change federal spending or revenues. Proponents note its role within a larger AI modernization framework that includes $500 million in spending.
- Test 4 (Incidental effect on budget). Even if some indirect federal budgetary impact could be claimed (e.g., effects on federal tax revenue from a more uniform national AI market), opponents would argue that it is “merely incidental” to the provision’s main non-budgetary goal of preempting state AI regulation. Proponents counter that creating a stable national AI market through preemption could lead to economic growth, thereby increasing federal tax revenues, or that it might reduce some minor federal administrative costs.
10. What happens if the Senate parliamentarian rules that the moratorium violates the Byrd rule?
The Senate parliamentarian counsels the presiding officer of the Senate on Byrd rule compliance. If a point of order is raised against the moratorium and the parliamentarian advises the presiding officer to rule it extraneous, the provision would be stricken from the reconciliation bill. To keep an extraneous provision, 60 senators would need to vote to waive the Byrd rule, thereby negating the simple-majority advantage of reconciliation.
If the current AI provision is stricken from the larger bill, it could resurface later as part of broad-based AI legislation. At a Senate Commerce Committee hearing on May 8, Sen. Ted Cruz (R-Texas), Chair of the committee, asked witnesses about their potential support for an AI moratorium. Cruz later said he would include a 10-year moratorium in his own pending “AI sandbox” legislation, although details have not yet surfaced.
11. How would an AI moratorium affect states’ ability to make their own laws?
Opponents contend that the moratorium would significantly curtail states’ traditional authority (often called “police powers”) to legislate for the health, safety, and welfare of their citizens in the specific area of AI regulation. States have been active in proposing and enacting AI laws, and this moratorium would halt or reverse many of those efforts for a decade, shifting regulatory authority (or the decision not to regulate specifically) to the federal level for this period. Proponents assert that states retain that authority so long as they pass generally applicable statutes. They also note that the moratorium would not prohibit states from enforcing the litany of existing generally applicable statutes that address many alleged harms from AI.
12. What is the “presumption against preemption,” and could it limit the moratorium’s scope if enacted?
The presumption against preemption is a canon of statutory construction suggesting that federal law should not be interpreted as superseding states’ historic police powers “unless that was the clear and manifest purpose of Congress.” While the moratorium contains explicit preemptive language, if ambiguities arise in its application (e.g., whether a general state law or voluntary regime “regulates” AI), courts might apply this presumption to interpret the moratorium’s scope more narrowly, thereby preserving state authority where congressional intent to preempt is unclear. However, the applicability of this presumption in express preemption cases has been debated.
13. If the moratorium becomes law, how will courts interpret its terms if there are disagreements?
Courts will use established canons of statutory construction. Key canons include:
- Plain meaning rule. Giving words their ordinary meaning, unless a technical sense is indicated.
- Legislative intent. Trying to ascertain and follow what Congress intended, primarily from the text.
- Expressio unius est exclusio alterius. A list of specific things may imply the exclusion of unlisted things. The exceptions in paragraph (2) exemplify what is allowable, implying that other forms of regulation are not.
- Contextual reading. Interpreting terms within the overall structure of the statute.
- Presumption against preemption. As discussed, this might lead to narrower interpretations of preemptive scope in cases of ambiguity.
- Definitions. The text of the Act itself will influence judicial interpretation by defining key terms (i.e., AI, AI model, AI system, automated decision system). Ambiguities in the text—especially around terms like “regulating” or the scope of “comparable functions”—could lead to litigation.
14. What are the potential benefits of such a moratorium for AI innovation and businesses?
Proponents of an AI moratorium, including Gov. Jared Polis (D-Colo.) and Rep. Jay Obernolte (R-Calif.), argue that it could:
- Create national uniformity. Preventing a complex and potentially conflicting “patchwork” of state laws would make it easier and less costly for businesses—especially startups and small businesses—to operate nationwide.
- Foster innovation. A more predictable and less fragmented regulatory environment could encourage investment and development in AI technologies.
- Reduce compliance burdens. Companies would not have to track and comply with (potentially) 50 different sets of AI regulations. Venture capital firms support federal preemption for these reasons.
15. Would the moratorium eliminate all AI regulation in the United States for 10 years?
No. An AI moratorium would not stop governments from applying the many existing laws and regulations that already cover potential harms. Some of those remedies include unfair and deceptive practices law, civil rights law, product recall authority, product defects law, court-based common law remedies, and a variety of other consumer protections.
During the Biden administration, the heads of four major enforcement agencies released a joint statement noting their existing authority to “enforce their respective laws and regulations to promote responsible innovation in automated systems.” Lina M. Khan, Chair of the Federal Trade Commission under former President Joe Biden, stated more simply, “[T]here is no AI exemption from the laws on the books.” Similarly, the Massachusetts attorney general stated in a 2024 advisory letter that “existing state consumer protection, anti-discrimination, and data security laws apply to emerging technology, including AI systems, just as they would in any other context.”
There are other important caveats:
- Federal laws still apply. Existing and future federal laws and regulations (e.g., federal anti-discrimination laws, privacy laws, sector-specific regulations from agencies like the Food and Drug Administration (FDA), the Federal Aviation Administration, the National Highway Traffic Safety Administration, or the Consumer Financial Protection Bureau) would still apply to AI. The moratorium only targets state laws. Many of the policies these and other federal agencies enforce already comprehensively preempt state and local law.
- Exceptions for certain state laws. As noted previously, some state laws could still be enforced if they meet specified exceptions (e.g., generally applicable laws applied equally to AI and non-AI systems).
- Federal action is possible. The moratorium does not prevent Congress from passing new federal AI laws or federal agencies from issuing AI-related rules during the 10-year period. However, it would significantly limit new state-level AI-specific regulations.
16. Would an AI moratorium result in “preemption without protection” by removing existing or future state-level safeguards without establishing a comprehensive federal regulatory framework in their place?
An AI moratorium would not limit the applicability of existing legal or regulatory safeguards, nor would it prevent courts from adjudicating claims brought under existing laws, such as consumer protection statutes. However, Article I, Section 8, Clause 3 (“the Commerce Clause”) and other constitutional provisions give Congress the responsibility to protect the free flow of interstate commerce. While America’s federalist system leaves many powers and responsibilities to the states, Congress has the right to address an inconsistent patchwork of policies that interferes with interstate commerce and national marketplace development.
In the 1990s, during the internet’s founding era, Congress took steps to address state and local barriers to innovation, investment, and competition. The Telecommunications Act of 1996 specified that “[n]o State or local statute or regulation, or other State or local legal requirement, may prohibit or have the effect of prohibiting the ability of any entity to provide any interstate or intrastate telecommunications service.” The law included other specific preemptions of state and local regulation, as well as a provision requiring the Federal Communications Commission and state regulators to forbear from regulating in certain instances to enhance competition. Congress also passed the Internet Tax Freedom Act of 1998 (made permanent in 2016) to stop the spread of “multiple and discriminatory taxes on electronic commerce” and internet access.
In these cases, Congress did not substitute a new tax or regulatory regime when preempting state and local policies. Rather, it left the field mostly free to develop while relying on other existing legal or regulatory safeguards to address concerns.
17. Would a federal AI moratorium stifle the “laboratories of democracy” by preventing state-by-state experimentation with novel policy solutions?
Even with an AI moratorium in place, states would retain their authority to enforce existing rules or craft new, generally applicable policies—in other words, policies that are technology-neutral and do not single out AI systems. For an efficient national AI marketplace to develop, however, some limits on state-specific AI regulation are needed.
A look at a very real alternative universe in which even a few states pass unique AI laws clarifies this point. Imagine, for example, if New York were to enact its proposed “Responsible AI Safety and Education Act” and Illinois were to approve its pending “Artificial Intelligence Safety and Security Protocol Act.” These two bills alone would require AI developers to adhere to bespoke reporting requirements and undergo regular external audits. While this may not seem too onerous, such costs quickly add up—especially for smaller labs and startups. Avoiding such a fragmented landscape is essential, as the nation’s successful approach to internet regulation demonstrated.
America’s digital technology policy framework at the dawn of the internet succeeded precisely because Congress chose not to preemptively solve every hypothetical concern before online innovation could flourish. Neither a federal internet bureau nor 50 different state computer control commissions held innovators back. Instead, America’s internet policy vision was rooted in forbearance, flexibility, and freedom.
Congress has a history of limiting state interference with interstate markets and commerce in many other contexts in order to foster national uniformity and economic growth. For example:
- National transportation network. Federal laws have long governed aspects of trucking, railroads, and aviation. Congress and the Carter administration comprehensively deregulated the national aviation marketplace in the 1970s to boost national competition. The preemption provisions of the Airline Deregulation Act of 1978 made it clear that states could not regulate the rates, routes, or services of any air carrier engaged in interstate commerce. Later, the Federal Aviation Administration Authorization Act of 1994 preempted state economic regulation of motor carriers, ensuring a more seamless national logistics network.
- Banking and credit. The National Bank Act established a system of federally chartered banks, and federal laws continue to play a significant role in creating uniform standards for financial services, helping to ensure stability and predictability in national credit markets.
- Product safety and labeling. For many products, such as pharmaceuticals and medical devices regulated by the FDA, federal standards preempt differing state requirements to ensure consistent safety protocols and to facilitate nationwide distribution without the burden of meeting dozens of unique state labeling or approval regimes.
These examples show that Congress often steps in to create coherent national frameworks when a patchwork of state laws could impede economic development, technological advancement, or the free flow of commerce—principles directly applicable to the rapidly evolving field of AI.
18. States have enacted several privacy and cybersecurity-related laws regarded by many as essential safeguards, given the absence of a federal law. How would the AI moratorium impact the enforcement of those laws?
The proposed moratorium specifically targets laws and regulations that single out AI. It does not aim to dismantle generally applicable state laws, including most cybersecurity and data privacy statutes, areas where federal law has been slower to evolve.
Here is a breakdown of the likely impact:
- Generally applicable laws would remain in force. Broad state privacy laws like the California Consumer Privacy Act as amended by the California Privacy Rights Act or similar comprehensive data privacy laws in other states would generally still be enforceable. These laws typically apply to businesses based on revenue, the amount of data processed, or data brokerage activities, irrespective of the specific technology used. Likewise, general cybersecurity statutes requiring reasonable security measures or data breach notifications would still apply to companies developing or deploying AI. The key is that these laws are written in a technology-neutral fashion, applying to any entity that handles personal information or maintains critical systems—not just those using AI.
- AI-specific provisions could be restricted. The critical distinction lies in whether a state law or a specific provision within it imposes unique obligations or restrictions primarily on AI systems that do not apply to non-AI systems performing comparable functions. While a general data privacy law might still apply to AI companies handling personal data, if a state amends that law or issues a new regulation declaring that only companies using AI models must conduct impact assessments (or face unique consent requirements not demanded of other data analytics tools), that AI-specific provision could be preempted by the moratorium. The underlying privacy principles of the general law would stand; the AI-targeted overlay might not.
- There would be no regulatory vacuum for core protections. The moratorium is not intended to leave individuals without recourse for privacy violations or cybersecurity failures. Existing state consumer protection laws, data breach laws, and laws requiring reasonable data security would continue to provide baseline protection. Federal agencies would also retain their authority to enforce laws against unfair or deceptive practices, discrimination, and other harms—regardless of the technology involved.
- Careful tailoring by states will be necessary. States would need to be meticulous when drafting or amending any laws that might affect AI. If a state aims to address a particular harm, it should frame the law in a technology-neutral way whenever possible. For example, instead of crafting a specific law regarding “AI-driven discrimination in loan applications,” a state might strengthen its general anti-discrimination laws for all automated decision-making systems used in lending. This would ensure that AI systems are covered without uniquely burdening them.
In essence, the moratorium seeks to prevent a fragmented array of state-level, AI-specific operational requirements that could stifle innovation and create significant compliance challenges for AI developers and businesses operating nationwide. It does not strip states of their power to enact and enforce broad, generally applicable laws that protect their citizens’ privacy and security, provided those laws do not disproportionately or specifically target AI in a manner inconsistent with the moratorium’s terms. Congress might still need to clarify federal-state responsibilities as novel issues related to AI and cybersecurity emerge, but the default under this proposal is for technology-neutral state safeguards to endure.
Additional reading from the authors
- Kevin Frazier, “Why the Feds — Not the States — Should Take the Lead on Regulating AI,” Governing, May 28, 2025.
- Adam Thierer, “Adam Thierer Testimony, Hearing on ‘AI Regulation and the Future of U.S. Leadership’,” May 21, 2025.
- Kevin Frazier, “A 10-Year Pause on State AI Laws Is the Smart Move,” Reason, May 21, 2025.
- Kevin Frazier and Adam Thierer, “1,000 AI Bills: Time for Congress to Get Serious About Preemption,” Lawfare, May 9, 2025.
- Adam Thierer, “Comments of R Street Institute on a Learning Period Moratorium for AI Regulation in Response to Request for Information (RFI) Exploring a Data Privacy and Security Framework,” April 3, 2025.
- Adam Thierer, “Real Solutions: Getting AI Policy Right Through a Learning Period Moratorium,” May 29, 2024.
[1] Unless stated otherwise, the term “states” will be used to represent states, cities, counties, etc.
[2] Unless stated otherwise, the term “AI” will be used to represent all covered technologies.