Introduction

President Joe Biden recently unveiled a $7.3 trillion budget proposal which laid out his FY 2025 priorities on issues ranging from taxes to energy to public safety. While there were 33 separate references to artificial intelligence (AI) and the president’s AI Executive Order in the budget, none directly addressed the impact of AI on elections. However, the budget did propose a significant increase in grant funding that state and local election offices can use to enhance election security. And, as a result of a February decision by the Election Assistance Commission (EAC)—the agency responsible for administering the grants—those funds can now be used to counter election disinformation generated by AI. The EAC’s approach to engaging with election disinformation provides a useful framework as policymakers consider how to address public concern over this issue moving forward.

Overview of HAVA Election Security Grants

The EAC administers general assistance grant programs that provide funding to the states for the purpose of improving the administration of federal elections. The grants are authorized by the Help America Vote Act (HAVA)—federal legislation passed in 2002 that set minimum standards for the administration of federal elections, provided $650 million of grant funding to help states comply with the new law, and created the EAC to deliver ongoing support to state and local election officials.

Following an initial round of funding in 2003, Congress did not appropriate additional money to the general assistance grant program again until 2018 when it authorized $380 million. Congress also explicitly identified various cyber and physical security initiatives as allowable uses for the funds. Thus was born the Election Security Grant Program, which has since received a total of nearly $1 billion in funding from Congress through appropriations in fiscal years 2020, 2022, and 2023.

Under this program, states can use election security grants to replace voting equipment, implement audit systems, improve cybersecurity, conduct cybersecurity training, and more generally enhance the security of federal elections. What qualifies as an improvement is ambiguous, so the EAC periodically responds to inquiries about specific uses. For example, in 2022, the EAC determined that expenditures for physical security upgrades and social media threat monitoring qualified as an “improvement” eligible for HAVA grant funding.

Authority to Counter AI-Generated Disinformation

One such inquiry was submitted to the EAC in January of this year by U.S. Sens. Amy Klobuchar (D-Minn.) and Susan Collins (R-Maine), asking whether election security grants could be used to “counter election disinformation generated by Artificial Intelligence (AI) technologies.” In response, the EAC confirmed that security grants could be used to “counter foreign influence in elections, elections disinformation, and potential manipulation of information on voting systems and/or voting procedures disseminated and amplified by AI technologies.” According to the Commission, these uses qualified both as administrative improvements and as voter education, each an acceptable use of HAVA funds. Specifically, the ruling determined that funds may be used for “voter education and trusted information communications on correct voting procedures, voting rights and voting technology to counter AI-generated disinformation.”

Discussion and Policy Considerations

The EAC’s decision to allow election security grant funds to counter AI-generated election disinformation is the latest example of a federal agency using existing authority to address this hot topic. Recently, the Federal Communications Commission confirmed that the use of AI-generated voices in robocalls is illegal under existing law, seeking to address concerns about deceptive AI-generated messages spreading in advance of the 2024 election. In comparison to the heavy-handed legislative proposals in Congress and state legislatures, which seek to regulate the use of AI in elections with First-Amendment-implicating bans or clunky disclaimer processes, the EAC’s lighter touch has promise, particularly as it relates to voter education communications.

The main strength of the EAC approach is the emphasis on countering—rather than preventing—AI-generated election disinformation. Its effectiveness will depend on the specific approaches states take under this authority, but the greatest opportunity likely lies in efforts to educate the public about accurate voting information and procedures.

Another strength is that states have flexibility to innovate. The EAC’s decision outlines allowable categories of spending, but state and local election officials are in the best position to experiment with different approaches and, over time, develop best practices for countering disinformation through public education.

Conclusion

Overall, HAVA security grants provide a reasonable path to helping states counter election disinformation through an existing program using existing resources. Unlike some proposed responses to AI and election disinformation, the security grants emphasize countering rather than prohibiting disinformation and outline one path that focuses on public education and distribution of accurate information about voting processes. In addition, for many states, these practices can be deployed quickly and in a cost-effective manner.