AI Policy Contagion: Misguided Mandates Are Spreading Across America
More than 1,500 bills that have an artificial intelligence (AI) nexus sit in America’s laboratories of democracy, where many heavy-handed proposals are becoming law. A new report from the American Consumer Institute (ACI) and R Street ranks those threats—by category—into what we call “The AI Terrible Ten,” or the most problematic AI-focused ideas being considered in America today. More worrying than the volume of bills is that the worst ideas are proving contagious and spreading rapidly.
As AI panic spreads, many politicians are already experiencing the early stages of “buyer’s remorse,” symptoms that will only intensify as unintended consequences accumulate. As fear-based regulation metastasizes into similar ideas in other states, it will undermine the nation’s once coherent, pro-innovation technology framework. If this panic is not contained, it will worsen the patchwork of confusing and costly mandates already mounting, diminishing America’s influence in the globally important AI sector.
Last December, President Trump signed an executive order requiring a review of onerous and legally dubious state AI regulations. This is an important step to shine light on state overreach, and findings are due soon. But the administration cannot handle this problem unilaterally. Congress needs to help address this situation, or the patchwork of costly, confusing, and contradictory AI mandates will proliferate.
Policy Contagion Meets Buyer’s Remorse
Consider a few examples of how rushed regulations are spreading as AI policy contagion and are already causing buyer’s remorse:
- The “Algorithmic Discrimination” Fiasco: Colorado became the first state to pass comprehensive AI “fairness” legislation in May 2024, imposing a variety of open-ended mandates on innovators related to concerns about so-called “algorithmic discrimination” in “high-risk” use cases where AI systems represent a “substantial factor” in making “consequential decisions.” These ambiguous terms prompted Governor Jared Polis (D-Colo.) to warn in his signing statement that the measure would “create a complex compliance regime for all developers and deployers of AI” through “significant, affirmative reporting requirements,” and to note that he was “concerned about the impact this law may have on an industry that is fueling critical technological advancements across our state.” Colorado lawmakers have spent the ensuing two years trying to scale back the measure and delay its implementation. Several other states have nonetheless pursued Colorado’s model, including conservative states like Texas and South Carolina. Algorithmic discrimination bills continue to spread and threaten to entangle innovators in a web of European-style regulation, which is why the idea tops our “AI Terrible Ten” list of problematic AI regulatory ideas.
- “Bill of Rights” or Regulations? Policy contagion is also at work with Florida’s “AI Bill of Rights” proposal, which Governor Ron DeSantis (R-Fla.) pitched last year. It includes a smorgasbord of new AI regulations that would preemptively regulate data centers, chatbots, publicity rights, defamation, political advertising, and much more. That list spawned a legislative proposal to enshrine the governor’s wishlist into law. Despite the governor’s support, the bill has met with reservations from lawmakers who question whether Florida should regulate so aggressively and run counter to the Trump administration’s vision for AI acceleration. President Joe Biden floated a federal “AI Bill of Rights” proposal, which the Trump administration later rejected. Nonetheless, DeSantis’ idea has now inspired a new Louisiana “AI Bill of Rights” proposal, and many other states are considering catch-all AI measures that incorporate a diverse array of distinct topics.
- Model Safety and “Transparency” Mandates: New York followed California in imposing new regulations on frontier AI labs, which develop some of the largest foundational models, to address “catastrophic” risks, including chemical, biological, radiological, or nuclear weapons, and other geopolitical risks. These issues, however, are best left to federal policymakers, who have more information and the security clearances needed to weigh the tradeoffs properly. Other states are also pairing child online safety policies with such frontier model mandates to advance comprehensive rules and get around Trump’s executive order. Meanwhile, in 2025, the California legislature amended its AI Transparency Act to require large online platforms to create labels and badges that distinguish AI-generated content from human creations. The approach backfired so spectacularly that Governor Gavin Newsom (D-Calif.) asked the legislature to fix the law before it took effect in 2026, warning of “unintended consequences” and serious threats to “user privacy.” Despite those failures, the idea had already spread to New York, Florida, and Virginia.
Other Patchworks Proliferate
Our “AI Terrible Ten” report explains how the AI “patchwork” is actually several distinct patchworks, most notably algorithmic pricing controls and chatbot regulations.
- Chatbot patchwork: Many states have spun up an overcomplicated patchwork of AI-specific chatbot regulations, each with its own definitions, intent, and compliance regime. The ideas range from broad-based chatbot restrictions and notification requirements to sector-specific regulations covering children’s uses, AI companions, and AI mental health therapy. All of these laws err when they assume that AI is exclusive to chatbots and large language models, like ChatGPT, Grok, and Claude. Many future technologies are unlikely to fit the text-based archetype of the legislation policymakers are currently pushing with chatbots in mind. For example, Washington, Oregon, and Pennsylvania are quickly moving bills with AI notification mandates that are presented as child protections but could instead interfere with real-time AI uses for hearing and cognitive assistance. Unwanted notifications may also undermine the AI user experience by inundating consumers with annoying pop-ups, much as users complain of “banner fatigue” from “infuriating” cookie pop-ups. As new AI laws come online in 2026, regret will grow as new technologies clash with legacy regulations designed with text-based AI applications in mind. State policymakers without buyer’s remorse now may discover it later as the unintended consequences become clearer. Nevada and Illinois, for example, have effectively created first-of-their-kind bans on AI mental healthcare assistance amid an ongoing mental healthcare worker shortage, endangering lives in the process.
- Pricing patchwork: Price regulations undermine markets, yet more than 50 proposals to regulate algorithmic pricing were floated in 2025, with many more at the heart of legislative debates in 2026. In fact, New York Attorney General Letitia James used investigative authority under the state’s newly enacted and legally controversial Algorithmic Pricing Disclosure Act to send threatening letters to companies and to call for even more pricing regulation. California meant to crack down on companies using AI to collude and violate state antitrust rules, yet defined its legislation so broadly that one expert concluded it gives the state government the power to “accidentally regulate effectively all market transactions.” That should be lesson enough for other states to heed California’s cautionary tale and avoid regulating basic price signals. Instead, Maine considered a ban on price fluctuations caused by changing market demand.
Patience and Existing Policies Are the Best Approach
Congress has not yet created a legislative framework delineating the balance of federal versus state authority, but some degree of federal oversight will eventually be essential because the growing patchwork of state and local regulatory activity threatens to undermine interstate technology markets and development.
Some states—especially California and New York—will have a much louder voice in setting national AI policy. New York already has more than 170 AI-related bills pending and several major laws on the books. In addition to the frontier AI lab rules and algorithmic pricing regulations already mentioned, New York lawmakers are proposing “robot taxes,” new rules for the use of AI in hiring and journalism, and a significant expansion of occupational licensing regulations to limit AI use. California is moving almost as fast, and several laws passed in Sacramento appear on our “Terrible Ten” list.
Until Congress addresses problematic, parochial AI regulations, states will continue to lead. Our report identifies smarter ways for state lawmakers to address AI-related concerns, beginning with a survey of the many existing laws, regulations, and court-based remedies that already cover algorithmic or robotic systems. Democratic attorneys general in Massachusetts, Connecticut, and New Jersey have published surveys identifying the many applicable enforcement tools already on the books in their states and others: consumer protection regulations, deceptive practices rules, civil rights laws, and various other generally applicable statutes and regulations. Importantly, complementary federal laws also exist for each of these categories, as Biden administration officials noted while in office. The United States also has an extraordinary range of court-based remedies and, for better or worse, a very active trial bar always looking to file claims to address perceived harms.
State lawmakers should also consider the many American Legislative Exchange Council (ALEC) model bills collected in its new “State AI Policy Toolkit.” That toolkit contains both hammers and scalpels that lawmakers can leverage to position their states as AI leaders. “Right to Compute” laws, passed in Montana and under consideration in New Hampshire, Ohio, South Dakota, and South Carolina, do not ban regulation; they simply create an operating presumption that laws affecting AI and computation must be “narrowly tailored” and fulfill a “compelling government interest.” That should be the benchmark for all regulation, but especially so for life-changing technologies.
State policymakers must also recognize that existing regulatory structures, not just new ones, often clash with emerging technologies. The ALEC model bills offer legislative solutions, suggesting “learning laboratories” to review outmoded laws and recommend better ones while also seeking to stop burdensome new AI taxes. All of these ideas can be implemented in combination with existing and clarified laws that crack down on online child predators who traffic in illicit material and violate the law.
Chaos, confusing patchworks, speech violations, and technological repression create new problems and solve nothing. America needs prudent, measured responses that ensure safety and innovation are balanced more reasonably than the fear-based fever driving AI policy today.
Co-author Logan Kolas is the Director of Technology Policy at the American Consumer Institute.