The R Street Institute, Information Technology and Innovation Foundation and Chamber of Progress are among the groups signing a recent letter to Senate Commerce leaders urging them to hit pause on the “CHAT Act” by Sen. Jon Husted (R-OH), saying the measure meant to protect children from risks associated with chatbots is well-intentioned but fundamentally flawed.

“Despite its noble intentions to protect children in a world of digital services, the CHAT Act would in practice do the opposite: it would endanger the privacy and data security of children and families nationwide,” the groups said in a Sept. 16 letter to Senate Commerce Chairman Ted Cruz (R-TX) and ranking member Maria Cantwell (D-WA).

“As artificial intelligence (AI) becomes an ever more prominent feature of modern life, the harms that would likely be imposed by the [CHAT Act] are especially grievous,” says the tech groups’ letter…

The Federal Trade Commission would enforce the measure and issue compliance guidance within 180 days of enactment. The bill was referred to the Commerce Committee, which has yet to schedule consideration.

But the tech and other groups tell Cruz and Cantwell, “While the scope of the CHAT Act alone is extreme, so too are its effects on the cybersecurity and privacy of American users, which is likely to be deeply dangerous.”

They argue that:

The CHAT Act’s fundamental fault is its requirement that users of AI tools submit to age verification. Age verification requires users to submit a tremendous amount of sensitive, personal information, which then becomes stored in large databases, liable to be hacked or to fall victim to data breaches. This information usually takes the form of scans of government-issued identification documents or biometrics, such as facial scans. It would directly contradict the goal of ensuring children’s safety in the digital world for the federal government to mandate that children serve up their data to technology platforms and expose that data to bad actors.

Further, the groups say, “the CHAT Act’s definition of ‘companion AI chatbot’ is hopelessly broad, encompassing ‘any software-based artificial intelligence system or program that exists for the primary purpose of simulating interpersonal or emotional interaction, friendship, companionship, or therapeutic communication with a user.’”

They say it would “cover essentially all major chatbots, including ChatGPT, Google’s Gemini, Anthropic’s Claude, among others.”

“But the Act’s definition reaches still further,” they say, noting “AI-integrated features that ‘simulate…interpersonal…interaction’ are hardwired into many common products and devices. Among many others, the Act could regulate access to Siri (the assistant native to Apple devices), online customer-support chats, and AI-voice-enabled devices such as Amazon’s Echo.”

“It should also be noted,” the groups say, “that regulatory burdens that fall disproportionately on upstart developers are likely to have regrettable consequences for competition. Large companies can absorb compliance costs; small companies often cannot. To ensure that America’s tech sector continues to thrive, and that free competition remains robust, lawmakers should eschew policies that prevent new competitors from challenging large incumbents.”

They say, “There is a better way forward. Instead of rushing to impose ill-fitted and likely dangerous regulations on AI tools and their users, lawmakers [should] investigate how existing laws and existing legal frameworks can best be applied to the digital age. … Of course, as new problems arise, it may be necessary to respond, but the CHAT Act seeks to set the country on a dangerous and unsustainable path.”

Letter signers include the Abundance Institute, Competitive Enterprise Institute, NetChoice and seven other organizations.