Excessive Regulations Are More Concerning than AI-Generated Misinformation
Read the first in this series here.
Mr. Farid and I agree that AI-generated misinformation will impact the 2024 election and that a well-informed public and well-prepared government can help defend against the harmful effects. We disagree on the public’s ability to effectively navigate the deteriorating information ecosystem, with Mr. Farid laying out a case for why Americans are ill-equipped to deal with the accelerating onslaught of misinformation. This leads him to conclude that if current trends continue, we are on a path to a “fragile and uncertain” future.
Potential for Harm Extends Beyond Misinformation
I, too, have concerns about the information environment and its impact on society moving forward. However, my concern is less about the misinformation itself and more about the potential harms of government responses that seek to regulate it through restrictions on free speech.
For example, lawmakers in Washington, D.C., and state capitals across the nation have jumped at the chance to propose new laws and regulations attempting to “fix” the problem of AI-generated misinformation in elections through various restrictions on political speech. Eighteen states now have laws that either ban or require labeling of deceptive AI-generated political speech, most of them approved in the last two years.
Similarly, federal lawmakers and agencies are getting in on the act, with the Federal Communications Commission, the Federal Election Commission, and members of Congress proposing new laws and regulations governing the use of AI in elections. Proponents tout these measures as protecting the public from deception, but in practice they are speech restrictions that could violate the First Amendment.
Even setting aside the constitutionality of these proposals, if allowed to go into effect they will be harmful and counterproductive. Trust in major American institutions, including government and the media, has been falling for decades, and this decline has contributed to an environment in which misinformation can flourish. Restricting speech will only exacerbate that distrust, empower those who spread lies, and truly set the stage for a fragile and uncertain future.
Working Toward a Different Future
With that said, there are opportunities for agreement on specific common-sense steps that can be taken to improve the information environment while protecting free speech:
- Voters should remain skeptical of information they consume online, consult multiple sources to verify claims, resist emotional manipulation, and take personal responsibility for not spreading false information.
- State and local election officials should build trust with the public and practice responding to AI-driven disruptions before Election Day.
- Media, civil society organizations, and private-sector technology companies should support voter education and drive public awareness campaigns about AI disinformation risks.
Overall, these actions can help mitigate the impacts of AI-generated misinformation in the 2024 election and beyond. While AI and misinformation are here to stay, cooperation among election stakeholders and persistent efforts by institutions to rebuild trust with the public can set the stage for a stronger future with more civil and truthful public dialogue.