With less than six months until the 2024 election, a key U.S. Senate committee advanced three bills on May 15 that would establish new federal restrictions and guidelines on the use of artificial intelligence (AI) in elections. Two of the three bills sponsored by Sen. Amy Klobuchar (D-Minn.), S.2770 and S.3875, seek to minimize the impact of AI-driven election disinformation, such as deepfakes and deceptive robocalls, through bans and disclaimer requirements. The third, S.3897, tasks a federal commission with helping local election officials prepare for the risks and opportunities that rapidly evolving AI technology presents for election administration. While there are serious flaws with the two proposals seeking to regulate political speech, directing the Election Assistance Commission (EAC)—an independent federal commission that provides financial and technical support to local election offices—to issue voluntary guidelines for election offices is a step in the right direction.

Interest in cracking down on deceptive AI-generated political speech is not limited to Washington, D.C. Fourteen states now have laws on the books that impose bans or disclosure requirements on certain types of AI-generated election communications, and 12 of those states enacted their laws in the last two years. Despite the growing bipartisan popularity of these laws at the state and federal levels, questions remain regarding both their constitutionality and their effectiveness at protecting voters from AI-generated election disinformation.

At their core, laws that regulate the use of AI in election communications impose speech restrictions that may violate the First Amendment. While it’s true that the U.S. Supreme Court has previously accepted the use of disclosures and disclaimers to inform the public about things like sources of funding for campaign advertisements, imposing such requirements based on the truthfulness of a communication and the technology used to generate it is a fundamentally different issue. And outright speech prohibitions that take the same approach of assessing truthfulness and technology should face even greater constitutional scrutiny.

Even if these laws survive legal challenges, there is no guarantee they will be effective. Bad actors determined to disrupt the election process will likely find ways to evade the restrictions—as we’ve seen for decades with Federal Communications Commission efforts to regulate illegal robocalls—while foreign governments are beyond the reach of even the most stringent federal or state laws. At the same time, there’s an incentive to over-label election communications as AI-generated because there’s no single definition of AI, and the technology is already deeply ingrained in all aspects of modern society. If such labels appeared on most election communications, they would quickly become meaningless.

With those challenges in mind, Congress should focus on how the federal government can play a productive role in supporting local election administrators. S.3897 directs the EAC to lead this effort by developing voluntary guidelines that address the uses and risks of AI in various aspects of election administration, including cybersecurity and responding to disinformation. It also requires the EAC to produce an after-action report on how AI actually affected the 2024 elections.

Voluntary guidelines are useful because they provide support to the offices that need it while leaving space for individual offices to innovate and explore different strategies for adapting to this new technology. In the long term, the best strategies for dealing with AI are likely to emerge from local officials who deal with its day-to-day impacts—both positive and negative. S.3897 creates space for this innovation while providing meaningful support through the EAC.

Overall, AI is likely to remain a hot topic through the 2024 election and beyond. While there is public pressure on lawmakers to act, Congress should limit its action to supporting local election officials as they adapt to the new reality of administering free and fair elections in the age of AI.