AI and Elections: What to Watch for in 2026
The 2026 midterm elections are right around the corner, which means Americans are bracing for the onslaught of campaign advertisements, fundraising solicitations, and media coverage of the contests that will determine control of the U.S. Congress and state capitols across the nation. If 2024 was any indication, artificial intelligence (AI) and its potential to disrupt American elections will feature prominently in the national dialogue leading up to November. While AI’s actual impact in 2024 was far smaller than originally feared, the rapid improvement of AI tools raises concerns that 2026 could be the year the technology’s harmful effects fully materialize.
Despite its characterization as a tool of electoral deception, AI presents a mix of opportunities and risks. This piece provides an overview of AI’s impact on the election ecosystem and the potential issues policymakers should consider when determining how to adapt and respond during this contentious election year.
Deepfakes and the Information Environment
One of the most discussed topics among lawmakers and the public is the potential for highly realistic AI-generated “deepfakes” to poison the information environment around elections. These images, videos, and audio clips depicting fictitious events have been used to spread false information about political candidates, public officials, and the election process itself, though there is little evidence they have meaningfully affected actual election results.
In response to these concerns, a growing number of states have begun regulating the use of AI in certain election-related communications. Today, 26 states have laws on the books regulating this type of political speech (compared to just five in 2023). The list is poised to expand, as at least five other states—New Jersey, Virginia, Maryland, Tennessee, and Vermont—are currently considering bills to regulate the use of election-related deepfakes. This reflects a concerning impulse by lawmakers to control speech through government regulation, largely by requiring labels on false election-related content created with AI rather than by cultivating an environment in which other voices can rebut the fictitious information with truthful counterspeech.
While state lawmakers have been eager to pass laws expanding the government’s role in protecting the public from AI-generated election misinformation, important questions remain regarding the effectiveness and constitutionality of the resulting restrictions. The jury is still out on effectiveness, as most of these laws are new—making it nearly impossible to measure the deterrent effect of the penalties they impose. As for constitutionality, California’s 2024 attempt to prohibit the use of deceptive deepfakes was struck down in federal court as a First Amendment violation. Most states have yet to see legal challenges to the more common approach, which requires a disclosure alerting viewers when AI is used in deceptive election-related media.
As the 2026 campaigns ramp up, lawmakers should pay close attention to the implementation of these regulations and any legal challenges that could limit their authority to regulate this type of political speech in the future. The public proved its resilience against the deepfake threat in 2024 by remaining skeptical of online information—a mindset it should maintain throughout the 2026 election cycle rather than relying on the government to protect against efforts to deceive.
Election Administration
AI holds great promise as a tool that can help election officials do their jobs more effectively and efficiently over time, and the 2026 election stands as an important milestone for election offices to both expand AI utilization in the near term and create organizational policies to guide AI adoption moving forward. Unlike in 2024, when AI tools were mostly optional and dependent on users opting in, many now integrate directly into commonly used apps and software—meaning a wait-and-see approach regarding AI use is no longer an option. With many employees likely using AI tools already, election offices will need to create guiding principles regarding AI utilization and—depending on priorities, resources, and sophistication—explore opportunities to expand usage in ways that improve operations while maintaining public trust.
A core principle of AI use in election offices is to ensure a human being remains in the loop—meaning the AI itself is not in a position to make independent decisions. For example, a basic function of AI in this context is to generate draft press releases or social media posts. While this use of AI tools is powerful—particularly for the many offices without dedicated communications staff—it is clear that an election official must review the content before distribution. Similarly, AI can generate training materials that simplify complex election law into concepts and directions easily understood by the average citizen serving as a poll worker. Here too, a professional election worker should review and refine the AI-generated content before distributing it to trainees.
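To make the human-in-the-loop principle concrete, here is a minimal sketch in Python (the generate_draft function is hypothetical, standing in for whatever AI tool an office happens to use) of a workflow in which nothing can be distributed until an election official has explicitly signed off:

```python
from dataclasses import dataclass


@dataclass
class Draft:
    """An AI-generated draft awaiting human review."""
    text: str
    approved: bool = False


def generate_draft(prompt: str) -> Draft:
    # Hypothetical placeholder for a call to the office's AI tool;
    # whatever comes back is treated strictly as a draft.
    return Draft(text=f"[AI draft responding to: {prompt}]")


def review(draft: Draft) -> Draft:
    # The human-in-the-loop gate: an election official reads the
    # text and must explicitly approve it before it can be released.
    print("--- DRAFT FOR REVIEW ---")
    print(draft.text)
    if input("Approve for release? (yes/no) ").strip().lower() == "yes":
        draft.approved = True
    return draft


def publish(draft: Draft) -> None:
    # Refuse to distribute anything a human has not approved.
    if not draft.approved:
        raise PermissionError("Draft has not been approved by a human reviewer.")
    print("Published:", draft.text)


if __name__ == "__main__":
    publish(review(generate_draft("Polling places open at 7 a.m. on Election Day.")))
```

The point of the design is that the approval gate sits between the AI and the public: the model can propose, but only a person can release.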
While this requires a certain investment of time and energy up front, the potential benefits are significant—and 2026 stands to be the year that more election offices begin using AI tools. Fortunately, organizations like the federal Election Assistance Commission and the AI & Elections Clinic at Arizona State University offer resources to help election officials navigate the AI landscape. Taking the first steps toward responsible AI adoption today will position election offices to benefit from the next round of advances on the horizon.
Cybersecurity
Cyberattacks routinely target American elections, and perpetrators are using AI to their advantage. For example, the run-up to the 2024 election saw a significant uptick in AI-assisted distributed denial-of-service (DDoS) attacks on election websites. At the same time, targeted phishing attempts successfully penetrated both presidential campaigns—another strategy that benefits from AI tools able to craft highly realistic and convincing messages. As AI capabilities improve, so does the potential for malicious use of the technology.
There is sure to be another round of cyberattacks on election offices and political campaigns in 2026; however, the same technological advances available to attackers are also available to the cybersecurity industry. In this case, AI is a double-edged sword that raises risk while improving defense capabilities.
Another issue warranting attention is the shifting division of responsibility for election cybersecurity. Although election cybersecurity has been a critical function of the federal Cybersecurity and Infrastructure Security Agency since the agency’s 2018 inception, the agency has rolled back some of these duties under the current administration. This, in turn, has shifted responsibility for hardening election technology infrastructure to the state and local levels and could require increased state funding.
Regardless of these technological advances, basic cyber hygiene practices at the individual level still apply and remain highly effective. Multifactor authentication, strong passwords, and scrutiny of sender addresses and links before clicking all provide strong protection against targeted phishing attempts, whatever technology was used to generate them.
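As a small illustration of the link-review habit, the sketch below (Python, using made-up example URLs) applies one classic heuristic: distrust any link whose visible text names one domain while the underlying address points somewhere else, a common pattern in phishing messages whether or not AI wrote them.

```python
from urllib.parse import urlparse

# Hypothetical example data: (visible link text, actual href) pairs as
# they might be extracted from an incoming email.
LINKS = [
    ("https://vote.county.gov/ballot", "https://vote.county.gov/ballot"),
    ("https://vote.county.gov/ballot", "https://county-gov.example.net/login"),
]


def looks_suspicious(display_text: str, href: str) -> bool:
    """Flag links whose visible text claims one domain but whose
    underlying address points to another."""
    shown = urlparse(display_text).hostname
    actual = urlparse(href).hostname
    return shown is not None and shown != actual


for text, href in LINKS:
    status = "SUSPICIOUS" if looks_suspicious(text, href) else "ok"
    print(f"{status:10} shown={text} actual={href}")
```

No heuristic catches everything, but this kind of check mirrors what a careful reader does by hovering over a link before clicking.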
Conclusion
AI impacts all aspects of society, including the way elections are administered and campaigns are run. This evolution will continue in 2026, shaping the election information environment, election administration, and cybersecurity alike. While the specific effects are unclear, a posture of flexibility and adaptation will give lawmakers and election officials the best chance of capturing AI’s benefits while mitigating potential harms, all without infringing on Americans’ constitutional rights to cast their ballots and speak their minds.