More than 1,500 bills that have an artificial intelligence (AI) nexus sit in America’s laboratories of democracy, where many heavy-handed proposals are becoming law. A new report from the American Consumer Institute (ACI) and R Street ranks those threats—by category—into what we call “The AI Terrible Ten,” or the most problematic AI-focused ideas being considered in America today. More worrying than the volume of bills is that the worst ideas are proving contagious and spreading rapidly.

As AI panic spreads, many politicians are already experiencing the early stages of "buyer's remorse," symptoms that will intensify as unintended consequences mount. As fear-based regulation metastasizes into copycat proposals in other states, it will undermine the nation's once coherent, pro-innovation technology framework. If this panic is not contained, it will worsen the already-growing patchwork of confusing and costly mandates, diminishing America's influence in the globally important AI sector.

Last December, President Trump signed an executive order requiring a review of onerous and legally dubious state AI regulations. This is an important step to shine light on state overreach, and findings are due soon. But the administration cannot handle this problem unilaterally. Congress needs to help address this situation or the patchwork of costly, confusing, contradictory AI mandates will proliferate.

Policy Contagion Meets Buyer’s Remorse

Consider a few examples of how rushed regulations are spreading like a policy contagion and are already causing buyer's remorse:

Other Patchworks Proliferate

Our "AI Terrible Ten" report explains how the AI "patchwork" is actually several distinct patchworks, most notably algorithmic pricing controls and chatbot regulations.

Patience and Existing Policies Are the Best Approach

Congress has not yet created a legislative framework delineating the balance of federal versus state authority. But some degree of federal oversight will eventually be essential, because the growing patchwork of state and local regulatory activity threatens to undermine interstate technology markets and development.

Some states—especially California and New York—will have a much louder voice in setting national AI policy. New York already has more than 170 AI-related bills pending and several major laws on the books. In addition to the new AI lab development rules and algorithmic pricing regulations already mentioned, New York lawmakers are proposing "robot taxes," new rules for the use of AI in hiring and journalism, and a significant expansion of occupational licensing regulations to limit AI use. California is moving almost as fast, and several laws passed in Sacramento appear on our "Terrible Ten" list.

Until Congress addresses problematic, parochial AI regulations, states will continue to lead. Our report identifies smarter ways for state lawmakers to address AI-related concerns, beginning with a survey of the many existing laws, regulations, and court-based remedies that already cover algorithmic or robotic systems. Democratic attorneys general in Massachusetts, Connecticut, and New Jersey have published surveys identifying the many applicable enforcement tools already on the books in their states and others: consumer protection regulations, deceptive practices rules, civil rights laws, and various other generally applicable statutes and regulations. Importantly, complementary federal laws also exist for each of these categories, as Biden administration officials noted while in office. The United States also has an extraordinary range of court-based remedies and, for better or worse, a very active trial bar always looking to file claims to address perceived harms.

State lawmakers should also consider the many American Legislative Exchange Council (ALEC) model bills collected in the organization's new "State AI Policy Toolkit." That toolkit contains both hammers and scalpels that lawmakers can use to position their states as AI leaders. "Right to Compute" laws–passed in Montana and under consideration in New Hampshire, Ohio, South Dakota, and South Carolina–do not ban regulation; they simply create an operating presumption that laws affecting AI and computation must be "narrowly tailored" and serve a "compelling government interest." That should be the benchmark for all regulation, but especially so for life-changing technologies.

State policymakers must also recognize that existing regulatory structures–not just new ones–often clash with emerging technologies. The ALEC model bills offer legislative solutions, proposing "learning laboratories" to review outmoded laws and recommend better ones while also seeking to stop burdensome new AI taxes. All of these ideas can be implemented alongside existing and clarified laws that crack down on online child predators who traffic in illicit material and break existing laws.

Chaos, confusing patchworks, speech violations, and technological repression create new problems and solve nothing. America needs prudent, measured responses that ensure safety and innovation are balanced more reasonably than the fear-based fever driving AI policy today.

Co-author Logan Kolas is the Director of Technology Policy at the American Consumer Institute.