SUBMITTED STATEMENT OF
ADAM THIERER
RESIDENT SENIOR FELLOW, TECHNOLOGY AND INNOVATION
R STREET INSTITUTE

BEFORE THE
SUBCOMMITTEE ON COURTS, INTELLECTUAL PROPERTY,
ARTIFICIAL INTELLIGENCE, AND THE INTERNET
COMMITTEE ON THE JUDICIARY
U.S. HOUSE OF REPRESENTATIVES

HEARING ON
“AI AT A CROSSROADS: A NATIONWIDE STRATEGY OR CALIFORNICATION?”

SEPTEMBER 18, 2025

Chairman Issa, Ranking Member Johnson, and members of the subcommittee:

Thank you for the invitation to participate in this hearing. My name is Adam Thierer, and I am a senior fellow at the R Street Institute, where I cover emerging technology policy.

My message today boils down to one simple point: Congress needs to act promptly to formulate a clear national policy framework for artificial intelligence (AI) to ensure our nation is prepared to win the computational revolution.

If we get this wrong, the consequences could be profound in terms of geopolitical competitiveness, national security, economic growth, small business innovation, and human flourishing.[1]

Congress Must Stop the Europeanization of American Technology Policy

Unfortunately, America’s AI innovators currently face the prospect of many state governments importing European-style technocratic regulatory policies and, even worse, applying them in ways that could prove even more costly and confusing than what the European Union has done.[2]

Euro-style tech regulation is heavy-handed, with highly detailed rules that are both preemptive and precautionary in character.[3] In other words, Europe’s tech policy model is “regulate-first,” while America’s philosophy is “try-first.”[4]

At the heart of the European regulatory approach lies the implicit assumption that emerging tech entrepreneurs are essentially “guilty until proven innocent” of some theoretical future crime.[5] When this mentality inspires technology policy, it translates into mountains of red tape that suffocate innovation and investment. The evidence shows this approach devastated the European digital economy.[6]

This regulatory vision is especially problematic for so-called “Little Tech” innovators because they struggle with the confusing and costly compliance requirements.[7] As one scholar has put it, “paperwork favors the powerful. The more paperwork that’s required, people with resources will get through it and people without them will not.”[8]

The National Pro-Growth AI Framework America Needs

Congress must not allow European-style regulation to come to our shores.[9] We instead need to double-down on freedom, growth, and technological opportunity to ensure America reaps the benefits of the next great technological revolution.[10]

The Constitution assigns Congress the lead role in protecting interstate commerce. The Founders wisely provided Congress with this power so that it could facilitate commerce between the states by eliminating barriers that states might otherwise be inclined to erect if left to their own devices.[11] It is essential that Congress exercise that responsibility promptly to ensure the robust development of the national AI marketplace.

Specifically, Congress needs to ensure that parochial AI mandates do not have extraterritorial reach that undermines interstate algorithmic commerce.[12] Courts have been clear that state laws may be unconstitutional if they impose costs on interstate commerce that substantially outweigh their in-state benefits.[13]

But Congress should not wait for courts to clarify which new state AI laws are unconstitutional on these grounds. Instead, it should assert its lead role in protecting interstate commerce to avoid a “regulatory cacophony” of conflicting policies that chill nationwide AI competition, choice, and investment.[14] AI systems, like other digital technologies and markets, exhibit strong economies of scale and network effects such that they become more effective and valuable as more people use them. “Fragmented state regulation can impede these effects by creating artificial barriers to data sharing, user acquisition, and system interoperability.”[15]

A national framework is also crucial to ensuring that America has the computational capabilities needed to square off against China and other global adversaries in what some call an “AI Cold War.”[16] Allowing a patchwork of confusing and costly parochial AI policies to develop in the U.S. would be tantamount to shooting ourselves in the foot as this race is just getting underway.[17]

With more than 1,000 AI-related bills already pending across the nation, this danger is real.[18] Some states are far more aggressive and influential on national markets than others, however. Almost 50 AI-related bills are currently pending in California, and New York is considering more than 130 AI measures. Sacramento and Albany should not be dictating AI policy for the entire nation.[19] The “laboratories of the states” ideal does not work when just one or two large states effectively impose their heavy-handed regulatory standards on the entire nation, forcing firms nationwide to comply with the most aggressive regulatory baseline.[20]

This is why, during a July speech announcing the administration’s new “AI Action Plan,” President Trump warned of “lowest common denominator” AI regulation by one state and called for “one commonsense federal standard that supersedes all states.”[21] America would not have become the global leader in digital technology had the nation let 50 State Computer Bureaus, or even just one hypothetical California Computer Commission, license every aspect of interstate computing and treat the entire internet as a regulated public utility.[22] Thankfully, America avoided that fate because of wise bipartisan decisions Congress made in the 1990s, which let digital technology be “born free” instead of born into a regulatory cage.[23] The U.S. is now the undisputed global leader in almost every segment of computing and digital commerce thanks to this policy approach.[24]

Now is the time for Congress to work the same magic for AI markets.

Congress Has Taken Steps to Oversee Interstate Tech Markets Before & Should Do So Again for AI

Congress has played an active role in shaping national policy for previous information and communications technologies through important laws like the Copyright Act of 1976, the Telecommunications Act of 1996,[25] and the Internet Tax Freedom Act of 1998.[26] The Clinton administration also promoted a national framework for the internet, electronic commerce, and online speech in the late 1990s.[27] These policies brought greater uniformity and certainty to markets and encouraged the robust development and diffusion of many important new information-age innovations.[28] Congress has also preempted state and local policies that interfere with the flow of commerce or create policy conflicts within important national sectors such as aviation,[29] railroads,[30] and food and drug safety.[31]

Today, a confusing patchwork of state and local legislative proposals threatens to undermine similar objectives on the AI front. Colorado Governor Jared Polis (D) has called upon Congress to preempt state AI laws such as the one his own state passed last year, which he correctly argued would “create a complex compliance regime for all developers and deployers of AI.”[32] Colorado lawmakers recently voted to delay that law because of its costs and confusing provisions.[33]

But that Colorado AI law is still set to take effect next year, and it will raise the very problems Polis warned of when he noted that “[g]overnment regulation that is applied at the state level in a patchwork across the country can have the effect to tamper innovation and deter competition in an open market.”[34] Gov. Polis even endorsed the idea of a state AI regulatory moratorium, like the one Congress considered this summer.[35]

Other Democratic governors have raised similar concerns. In May, Connecticut Governor Ned Lamont (D) said, “I just worry about every state going out and doing their own thing, a patchwork quilt of regulations,” and the burdens on AI development that might create.[36] Meanwhile, just last week, New York Governor Kathy Hochul (D) noted that “it’s hard when one state has a set of rules, another state does, another state. I don’t think that’s a model for inspiring innovation.”[37]

With the recent failure of the AI regulatory moratorium, however, it is open season for still more parochial AI regulations that would give rise to the sort of patchwork problem that Governors Polis, Lamont, and Hochul worry about.[38] Congress could try again to implement such a moratorium to address this problem, or it could move to formally preempt specific state and local regulatory enactments that would impose an undue burden on the free flow of interstate algorithmic commerce or undermine other important national interests in AI developments.[39]

Scoping AI Preemption

If Congress chooses the latter option, federal lawmakers should take the following actions in legislation to formulate a national AI policy framework and make clear its intent to affirmatively preempt a patchwork of parochial AI regulations:

Even where the scoping of formal preemption proves difficult, everyone should agree that AI development will be discouraged if America has dozens of different definitions of key terms, such as what constitutes “high risk” AI or a “substantial factor” in making a “consequential decision.”[42] The proliferation of new regulatory categories in these bills, such as deployers, developers, distributors, and integrators, will also create confusion. Even the term “artificial intelligence” itself is defined differently across many proposals.[43] America cannot let AI policy unfold like this. Once again, lawmakers should consider how such a confusing governance regime would have undermined the development of the internet and electronic commerce had it been imposed through a patchwork of differing state standards a generation ago.

This is where the “S” (for standards) in NIST and CAISI matters and can be helpful. At a minimum, innovators and markets need clear and consistent standards to minimize confusion and costly compliance burdens wherever the field is left open for some future state and local regulation. Inconsistent standards will undermine market certainty and hurt investment, innovation, and competition. NIST’s AI Risk Management Framework has provided a baseline for standards and best practices in this arena, and it offers a flexible, multistakeholder-driven process that can help resolve jurisdictional problems and conflicts in an agile fashion.[44] The Federal Trade Commission and Department of Justice might also be able to play a role in investigating future state AI regulatory enactments that could have anti-competitive effects or give rise to a potential dormant commerce clause case.[45]

Ongoing congressional oversight of this process will be essential, and Congress can determine when it needs to revisit the formal scoping of AI preemption should unforeseen issues and laws give rise to new interstate burdens or conflicts with national priorities.

Both Congress and the States Will Have Continuing Roles & Responsibilities

Regardless of whether Congress chooses to use a moratorium or formal preemption to create a more coherent national AI policy framework, federal lawmakers can simultaneously consider what sort of new “light-touch” rules might be necessary at the federal level to address various AI safety concerns. These could include new transparency requirements or tailored liability rules.[46]

Congress should simultaneously exercise greater oversight of how various federal agencies are already regulating algorithmic systems, both to ensure that development continues and that safety objectives are addressed. In some cases, existing policies may need to be reformed to achieve the first objective, while in other contexts there may be a need to supplement existing regulatory processes. In both cases, existing or new policies should be subjected to strict cost-benefit analysis to ensure they minimize burdens on innovation.[47] Greater technical training or resources may be needed at some federal agencies to carry out that mission.

Meanwhile, state governments still have a role to play and will have plenty of room to act.[48] To reiterate, every state government already possesses a diverse policy toolkit of generally applicable laws to address any real-world harms that might come from AI applications.[49] As the Massachusetts Office of the Attorney General stated in 2024, “existing state consumer protection, anti-discrimination, and data security laws apply to emerging technology, including AI systems, just as they would in any other context.”[50]

States can also continue to focus their efforts on other areas of clear parochial concern, where local knowledge and experience are more relevant. This includes the use of AI in law enforcement, educational systems, and election processes.[51] States can also focus on AI development opportunities and how to use experimental “sandboxes” and “learning labs” to encourage creative governance approaches in sectors that are already regulated.[52] Finally, states might also consider “right to compute” legislation like the measure that already passed in Montana, which protects the public’s ability to access and use computational resources.[53]

Conclusion

In closing, the time has come for Congress to exercise its constitutional responsibility to protect the interstate marketplace and the national interest in the development of robust AI capabilities. This is a once-in-a-generation moment when we need to make sure we get policy right to spur the computational revolution and ensure that the United States remains at the forefront of it.

Thank you for holding this hearing, and I look forward to any questions you may have.

______

Appendix: Additional reading on AI & preemption


[1] Adam Thierer, “Winning the AI Future: Why America Should Double Down on the Freedom to Innovate,” R Street Institute In the News, Aug. 28, 2025. https://www.rstreet.org/commentary/winning-the-ai-future-why-america-should-double-down-on-the-freedom-to-innovate.

[2] Will Rinehart, “The Hidden Price Tag of California’s AI Oversight Bill,” Exformation, Sept. 9, 2025. https://exformation.williamrinehart.com/p/the-hidden-price-tag-of-californias. Will Rinehart, “How much might AI legislation cost in the U.S.?” Exformation, Mar. 19, 2025. https://exformation.williamrinehart.com/p/how-much-might-ai-legislation-cost.

[3] Mohamed Moutii, “Europe’s Precautionary Principle Is Killing the Next Big Thing,” The Daily Economy, July 30, 2025. https://thedailyeconomy.org/article/europes-precautionary-principle-is-killing-the-next-big-thing.

[4] Adam Thierer, “Trump AI Action Plan Charts Pro-Innovation Path Forward to Beat China,” R Street Analysis, July 23, 2025. https://www.rstreet.org/commentary/trump-ai-action-plan-charts-pro-innovation-path-forward-to-beat-china.

[5] Adam Thierer, “A Global Clash of Visions: The Future of AI Policy,” The Hill, May 4, 2021. https://thehill.com/opinion/technology/551562-a-global-clash-of-visions-the-future-of-ai-policy.

[6] Greg Ip, “Europe Regulates Its Way to Last Place,” Wall Street Journal, Jan. 31, 2024. https://www.wsj.com/economy/europe-regulates-its-way-to-last-place-2a03c21d. Tom Fairless & David Luhnow, “The Tech Industry Is Huge — and Europe’s Share of It Is Very Small,” Wall Street Journal, May 19, 2025. https://www.wsj.com/tech/europe-big-tech-ai-1f3f862c.

[7] Colin McCune, “The Precautionary Empire: Why Policymakers Fail Builders,” a16z, Sept. 4, 2025. https://a16z.com/the-precautionary-empire-why-policymakers-fail-builders.

[8] The Ezra Klein Show, “Transcript: Ezra Klein Interviews Jennifer Pahlka,” The New York Times, June 6, 2023. https://www.nytimes.com/2023/06/06/podcasts/transcript-ezra-klein-interviews-jennifer-pahlka.html.

[9] Adam Thierer, “Eurocrats plot to hobble US AI leadership on our own shores,” The Hill, Sept. 28, 2024. https://thehill.com/opinion/4904165-europe-eu-regulation-tech.

[10] Adam Thierer, “Defending Technological Dynamism & the Freedom to Innovate in the Age of AI,” University of Texas at Austin Civitas Institute, Dynamism Outlook, June 4, 2025. https://www.civitasinstitute.org/research/defending-technological-dynamism-the-freedom-to-innovate-in-the-age-of-ai.

[11] Adam Thierer, “The AI Regulatory Moratorium and the Proper Understanding of American Federalism,” Medium, June 28, 2025. https://medium.com/@AdamThierer/the-ai-regulatory-moratorium-and-the-proper-understanding-of-american-federalism-b1b57b9c8b3e.

[12] Kevin Frazier, “Extraterritorial Limits on States as Laboratories of AI Policy,” The Regulatory Review, Aug. 25, 2025. https://www.theregreview.org/2025/08/25/frazier-extraterritorial-limits-on-states-as-laboratories-of-ai-policy.

[13] Matt Perault and Jai Ramaswamy, “The Commerce Clause in the Age of AI: Guardrails and Opportunities for State Legislatures,” a16z, Sept. 2, 2025. https://a16z.com/the-commerce-clause-in-the-age-of-ai-guardrails-and-opportunities-for-state-legislatures.

[14] Kristian Stout, “Federal Preemption and AI Regulation: A Law and Economics Case for Strategic Forbearance,” Washington Legal Foundation, WLF Legal Pulse, May 30, 2025. https://www.wlf.org/2025/05/30/wlf-legal-pulse/federal-preemption-and-ai-regulation-a-law-and-economics-case-for-strategic-forbearance.

[15] Ibid.

[16] Arthur Herman, “China and Artificial Intelligence: The Cold War We’re Not Fighting,” Commentary, July/Aug. 2024. https://www.commentary.org/articles/arthur-herman/china-artificial-intelligence-cold-war.

[17] Seung Yeon Lee, “The Growing Risks of Fragmented State AI Laws,” Center for Data Innovation, Aug. 28, 2025. https://datainnovation.org/2025/08/the-growing-risks-of-fragmented-state-ai-laws.

[18] Kevin Frazier and Adam Thierer, “1,000 AI Bills: Time for Congress to Get Serious About Preemption,” Lawfare, May 9, 2025. https://www.lawfaremedia.org/article/1-000-ai-bills–time-for-congress-to-get-serious-about-preemption.

[19] Evangelos Razis and James C. Cooper, “The Federalist’s Dilemma: State AI Regulation & Pathways Forward,” George Mason University Law & Economics Research Paper Series 25-07 (June 2025): 42. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5283472. [Noting that “the ‘California effect’ appears alive and well with respect to AI regulation.”]

[20] Dan L. Burk, “How State Regulation of the Internet Violates the Commerce Clause,” Cato Journal 17:2 (1997): 158-9. “However, from the perspective of competitive federalism, the situation is far more grave than the traditional balancing test might suggest. If the ‘lowest common denominator’ prevails among on-line services, then the ‘laboratory of the states’ is disabled. No state wishing to experiment with a lesser level of regulation will be able to do so. Similarly it goes almost without saying that the ‘laboratory’ is disabled in the situation where on-line services are driven out of business by conflicting requirements. [. . .] This constitutes an enormous problem for horizontal federalism. A particular state cannot be permitted to dictate to the entire country the regulatory standards for any activity.”

[21] C-Span, “User Clip: President Trump Calls for End of AI State Patchwork,” July 23, 2025. https://www.c-span.org/clip/white-house-event/user-clip-president-trump-calls-for-end-of-ai-state-patchwork/5168519.

[22] Kevin Frazier and Adam Thierer, “No Single State Should Dictate National AI Policy,” Governing, Aug. 28, 2025. https://www.governing.com/artificial-intelligence/no-single-state-should-dictate-national-ai-policy.

[23] Adam Thierer, “Getting AI Innovation Culture Right,” R Street Institute Policy Study No. 281 (March 2023). https://www.rstreet.org/research/getting-ai-innovation-culture-right.

[24] Adam Thierer, “Statement for the Record on ‘Artificial Intelligence: Risks and Opportunities,’” U.S. Senate Homeland Security and Governmental Affairs Committee, March 8, 2023. https://www.rstreet.org/outreach/testimony-on-artificial-intelligence-risks-and-opportunities.

[25] In the Telecommunications Act of 1996, Congress specified that, “[n]o State or local statute or regulation, or other State or local legal requirement, may prohibit or have the effect of prohibiting the ability of any entity to provide any interstate or intrastate telecommunications service.” 47 U.S.C. § 253. The law also included other specific preemptions as well as a provision instructing federal and state regulators to forbear from regulating in certain instances to enhance competition.

[26] Bryan L. Adkins, Alexander H. Pepper, and Jay B. Sykes, Congressional Research Service, “Federal Preemption: A Legal Primer,” May 18, 2023. https://www.congress.gov/crs-product/R45825.

[27] White House, “The Framework for Global Electronic Commerce,” 1997. https://clintonwhitehouse4.archives.gov/WH/New/Commerce.

[28] Adam Thierer, “The Policy Origins of the Digital Revolution & the Continuing Case for the Freedom to Innovate,” R Street Real Solutions, Aug. 15, 2024. https://www.rstreet.org/commentary/the-policy-origins-of-the-digital-revolution-the-continuing-case-for-the-freedom-to-innovate/.

[29] In the Airline Deregulation Act of 1978, Congress specified that “A State… may not enact or enforce a law… related to a price, route, or service of an air carrier.” 49 U.S.C. § 41713(b)(1).

[30] In the Federal Railroad Safety Act, Congress specified that, “Laws, regulations, and orders related to railroad safety and laws, regulations, and orders related to railroad security shall be nationally uniform to the extent practicable.” 49 U.S.C. § 20106(a).

[31] In the Federal Food, Drug, and Cosmetic Act, Congress specified that, “no State or political subdivision of a State may establish or continue in effect with respect to a device intended for human use any requirement which is different from, or in addition to, any requirement applicable under this chapter.” 21 U.S.C. § 360k(a).

[32] Governor Jared Polis, Signing Statement for Senate Bill 24-205, May 17, 2024. https://drive.google.com/file/d/1i2cA3IG93VViNbzXu9LPgbTrZGqhyRgM/view.

[33] Mariam Baksh, “Colorado legislature delays enforcement of AI law as deployer coalition pursues developer liability,” Inside AI Policy, Aug. 26, 2025. https://insideaipolicy.com/ai-daily-news/colorado-legislature-delays-enforcement-ai-law-deployer-coalition-pursues-developer. Kevin Frazier and Adam Thierer, “Colorado’s AI Law Is a Cautionary Tale for the Nation,” Reason, Aug. 15, 2025. https://reason.com/2025/08/15/colorados-ai-law-is-a-cautionary-tale-for-the-nation.

[34] Governor Jared Polis, Signing Statement for Senate Bill 24-205, May 17, 2024. https://drive.google.com/file/d/1i2cA3IG93VViNbzXu9LPgbTrZGqhyRgM/view.

[35] Zach Williams, “Colorado Gov. Polis Supports Federal Moratorium on State AI Laws,” Bloomberg Government, May 13, 2025. https://news.bgov.com/bloomberg-government-news/colorado-gov-polis-supports-federal-moratorium-on-state-ai-laws.

[36] Mark Pazniokas, “Last minute deal wins bipartisan passage of AI bill in CT Senate,” CT Mirror, May 15, 2025. https://ctmirror.org/2025/05/15/ct-ai-artificial-intelligence-bill-passes-senate.

[37] Governor Kathy Hochul, “Audio & Rush Transcript: Governor Hochul is a Guest on Bloomberg TV,” Sept. 10, 2025. https://www.governor.ny.gov/news/audio-rush-transcript-governor-hochul-guest-bloomberg-tv-0.

[38] Adam Thierer, “The AI Regulatory Moratorium Fails: What Comes Next?” Medium, July 1, 2025. https://medium.com/@AdamThierer/the-ai-regulatory-moratorium-fails-what-comes-next-9bd80e14f36b.

[39] Matt Perault and Jai Ramaswamy, “The Commerce Clause in the Age of AI: Guardrails and Opportunities for State Legislatures,” a16z, Sept. 2, 2025. https://a16z.com/the-commerce-clause-in-the-age-of-ai-guardrails-and-opportunities-for-state-legislatures.

[40] Dean W. Ball and Alan Z. Rozenshtein, “Congress Should Preempt State AI Safety Legislation,” Lawfare, June 17, 2024. https://www.lawfaremedia.org/article/congress-should-preempt-state-ai-safety-legislation.

[41] Adam Thierer, “Comments of the R Street Institute to the National Telecommunications and Information Administration (NTIA) on ‘AI Accountability Policy,’” June 9, 2023. https://www.rstreet.org/outreach/comments-of-the-r-street-institute-to-the-national-telecommunications-and-information-administration-ntia-on-ai-accountability-policy.

[42] Dean Ball, Greg Lukianoff, and Adam Thierer, “How state AI regulations threaten innovation, free speech, and knowledge creation,” The Eternally Radical Idea, Apr. 3, 2025. https://eternallyradicalidea.com/p/how-state-ai-regulations-threaten.

[43] Sam Crombie and Jack Nicastro, “Defining “Artificial Intelligence” in State Legislation: An Analysis of the Current Landscape,” Now + Next, July 17, 2024. https://nowandnext.substack.com/p/defining-artificial-intelligence.

[44] National Institute of Standards and Technology, Artificial Intelligence Risk Management Framework (AI RMF 1.0), NIST AI 100-1 (Jan. 2023). https://www.nist.gov/news-events/news/2023/01/nist-risk-management-framework-aims-improve-trustworthiness-artificial.

[45] Neil Chilson and Josh T. Smith, “Comment on Request for Information on the Development of an Artificial Intelligence (AI) Action Plan,” March 14, 2025. https://files.nitrd.gov/90-fr-9088/Abundance-Institute-AI-RFI-2025.pdf.

[46] Adam Thierer, “AI Policy in Congress Mid-2025: Where Are We Headed Next?” R Street Real Solutions, June 25, 2025. https://www.rstreet.org/commentary/ai-policy-in-congress-mid-2025-where-are-we-headed-next.

[47] Adam Thierer, “Comments of the R Street Institute in Request for Information on the Development of an Artificial Intelligence (AI) Action Plan,” R Street Institute Regulatory Comments, Mar. 15, 2025. https://www.rstreet.org/outreach/comments-of-the-r-street-institute-in-request-for-information-on-the-development-of-an-artificial-intelligence-ai-action-plan.

[48] Will Rinehart, “The Best AI Law May Be One That Already Exists,” AEIdeas, Feb. 03, 2025. https://www.aei.org/articles/the-best-ai-law-may-be-one-that-already-exists.

[49] J. Scott Babwah Brennen, Kevin Frazier, and Anna Vinals Musquera, “Are Existing Consumer Protections Enough for AI?” Lawfare, Sept. 3, 2025. https://www.lawfaremedia.org/article/are-existing-consumer-protections-enough-for-ai.

[50] Massachusetts Office of the Attorney General, “AG Campbell Issues Advisory Providing Guidance On How State Consumer Protection And Other Laws Apply To Artificial Intelligence,” Apr. 16, 2024. https://www.mass.gov/news/ag-campbell-issues-advisory-providing-guidance-on-how-state-consumer-protection-and-other-laws-apply-to-artificial-intelligence.

[51] Matt Perault, “Setting the Agenda for Global AI Leadership: Assessing the Roles of Congress and the States,” a16z, Feb. 4, 2025. https://a16z.com/setting-the-agenda-for-global-ai-leadership-assessing-the-roles-of-congress-and-the-states.

[52] Beth Do and Stacey Gray, “Balancing Innovation and Oversight: Regulatory Sandboxes as a Tool for AI Governance,” Future of Privacy Forum Blog, Aug. 4, 2025. https://fpf.org/blog/balancing-innovation-and-oversight-regulatory-sandboxes-as-a-tool-for-ai-governance. Neil Chilson and Adam Thierer, “A Sensible Approach to State AI Policy,” Federalist Society Regulatory Transparency Project Blog, Oct. 9, 2024. https://rtp.fedsoc.org/blog/a-sensible-approach-to-state-ai-policy.

[53] Bill Kramer, “Montana is the First State to Guarantee Computational Freedom,” Multistate.AI, Apr. 25, 2025. https://www.multistate.ai/updates/vol-59. Taylor Barkley, “Protecting our right to compute — A new frontier for freedom,” The Mercury, Mar. 18, 2025. https://themercury.com/commentary-protecting-our-right-to-compute-a-new-frontier-for-freedom/article_caa5093e-040c-11f0-b3d0-63929ef4e745.html.