On Sept. 18, 2025, R Street Institute Senior Fellow Adam Thierer testified before the House Judiciary Committee’s Subcommittee on Courts, Intellectual Property, Artificial Intelligence, and the Internet in a hearing titled “AI at a Crossroads: A Nationwide Strategy or Californication?” The ensuing discussion examined how federal preemption of the evolving state-level artificial intelligence (AI) regulatory patchwork could provide greater clarity to innovators and help drive technological innovation.

Thierer summarized his remarks as follows: “Congress needs to act promptly to formulate a clear national policy framework for [AI] to ensure our nation is prepared to win the computational revolution.” He touched upon the issues posed by state governments imposing proscriptive, European-style regulations on AI; the need for a federal-level pro-growth AI framework to preempt state regulations; the history and scope of federal preemption in technology policy; and the continued role both federal and state officials will play in the evolution of America’s governance framework.

Regarding state AI regulations, Thierer noted that many AI companies are facing the “worst of both worlds” as states implement disparate, duplicative, and even contradictory regulatory frameworks. These complex and proscriptive regulations are applied in ways that increase compliance costs and threaten to stifle innovation under mountains of red tape. As we saw in the wake of the General Data Protection Regulation (GDPR), these compliance costs will hit “Little Tech” firms the hardest. Big incumbent players with the resources necessary to comply with or circumvent this web of regulations will likely come out on top.

To prevent this emerging patchwork from stifling the development of AI systems, Thierer argued, Congress must assert its authority in matters of interstate commerce. It is paramount that the nation’s AI governance framework reflect the values of freedom and technological opportunity, not the provincial concerns of policymakers in Albany and Sacramento. Federal preemption of state regulations is not without precedent. As Thierer pointed out, Congress played an active role in formulating policy for previous novel technologies, as it did with the Copyright Act of 1976, the Telecommunications Act of 1996, and the Internet Tax Freedom Act of 1998. Through bipartisan leadership, such legislation laid the groundwork for the United States to become the global leader in digital technology starting in the late 1990s and early 2000s. To maintain its lead in technological development and the digital economy, the United States must establish a similar AI framework.

Should Congress decide to construct a national framework for AI, Thierer offered the following recommendations:

  1. Expressly preempt state regulations related to frontier AI labs and models.
  2. Preempt state-led initiatives to address “algorithmic bias” through the regulation of AI development and applications via AI audits or algorithmic impact assessments.
  3. Establish a standing working group, headed by the National Institute of Standards and Technology (NIST) and the Center for AI Standards and Innovation (CAISI), to resolve other issues that may arise between federal and state AI policy.

Thierer’s testimony also emphasized the continued role of states under a national framework:

To reiterate, every state government already possesses a diverse policy toolkit of generally applicable laws to address any real-world harms that might come from AI applications. As the Massachusetts Office of the Attorney General stated in 2024, “existing state consumer protection, anti-discrimination, and data security laws apply to emerging technology, including AI systems, just as they would in any other context.”

States can also continue to focus their efforts on other areas of clear parochial concern, where local knowledge and experience is more relevant. This includes the use of AI in law enforcement, educational systems, and election processes. States can also focus on AI development opportunities and how to use experimental “sandboxes” and “learning labs” to encourage creative governance approaches in sectors that are already regulated. Finally, states might also consider “right to compute” legislation like a measure that already passed in Montana, which would protect the public’s ability to access and use computational resources.

Nevertheless, the first step is for Congress to support the development of a vibrant, national market for AI by creating federal rules of the road that enable innovation rather than stifle it.

Thierer had a number of exchanges with members of Congress throughout his appearance before the subcommittee. Rep. Laurel Lee (R-Fla.) asked about his recommendation that NIST and CAISI be given more authority to help develop standards for AI frontier models. Thierer noted that such a move is necessary because, while many states are attempting to impose highly technical rules and regulations, they lack the capacity and information to do so effectively. Allowing NIST to guide the development of such standards ensures that responsibility and expertise are aligned, allowing for good standards to be developed alongside existing state and federal policies.

Rep. Scott Fitzgerald (R-Wis.) asked how embracing European-style regulations would affect the development of the AI ecosystem in the United States. Thierer pointed out that Europe’s approach to regulation has had a stifling effect on AI innovation: only two of the 25 largest AI companies are headquartered there, whereas 18 are located in the United States. Additionally, Europe’s regulatory model, which assumes that innovators are “guilty until proven innocent,” has decimated its technological economy. Meanwhile, America’s “light-touch” approach has generated myriad benefits for the country as a whole.

Later, Rep. Ted Lieu (D-Calif.) asked for Thierer’s thoughts on actions Congress could take to preempt state AI regulation. Thierer responded that Congress can take both broad and tailored approaches to AI, citing the “TAKE IT DOWN Act” (S. 146) as a specific example of targeted legislation passed into law. Regardless, he emphasized the need to avoid a regulatory patchwork through the creation of a federal framework.

Lastly, Rep. Darrell Issa (R-Calif.) asked about the regulation of AI inputs versus outputs with regard to the scope of federal preemption. In his response, Thierer noted the regulatory morass that would emerge from trying to regulate AI inputs on a state-by-state basis. He argued that regulation should instead focus on the uses of AI, noting that civil rights and consumer protection laws would be exempt from federal preemption. In sum, a moratorium or federal preemption would serve to balance innovation with consumer protection.

Click here to watch the full hearing.

Follow our artificial intelligence policy work.