The recent U.S. presidential election brought the debate over misinformation and disinformation into focus once again, with concerns over the potential impacts on election outcomes. In fact, studies by the Brookings Institution and others assert that there were “systematic and organized efforts” to sway public opinion through the spread of false or misleading information. And we are not alone—2024 has been called a “global election super-cycle,” with 64 nations holding major elections.

Other forms of misinformation, such as ill-informed health claims, continue to circulate widely online, leading people to make erroneous decisions and eroding trust in public institutions. Rapid advances in the deployment of generative artificial intelligence (GenAI) have compounded these concerns, making misinformation and disinformation important topics of study.

The volume of misinformation and disinformation in the digital world has garnered the attention of academic researchers, regulators, and legislators. To address these challenges effectively, it is important to distinguish between misinformation—false information shared without malicious intent—and disinformation, which involves willful intent to deceive or cause harm. Although a consensus on appropriate solutions might not yet exist, an emerging field of inquiry centers on fundamental questions of internet governance, democracy, and the online distribution of ideas, both good and bad.

This growing body of research draws on disparate fields of academic study including political science, economics, computer science, data science, and law, offering valuable insights into the scope of the problem while suggesting best practices to mitigate damage. A recent report from the Knight Foundation’s Knight Research Network found that, in 2023, researchers generated 820 articles examining the “intersection of technology, media, and democracy.”

Addressing the rapid and increasingly widespread flow of digital misinformation and disinformation is a daunting challenge. While many proposed solutions rely on centralized control and top-down verification systems, others emphasize more distributed approaches that employ incentive structures, information diffusion, and more effective governance mechanisms for digital activities. This piece examines key issues surrounding misinformation and disinformation research, ultimately concluding that successful policies in this area require working with market forces rather than against them.

Understanding the Challenge

Researchers must navigate unique challenges posed by the ubiquity and expanse of information on the global internet. A recent article by Kai Kupferschmidt, journalist and contributing correspondent for Science, catalogs five key challenges to misinformation research. First, defining misinformation is a foundational matter in and of itself. Some researchers concentrate specifically on “fake news,” while others pursue a broader inquiry that includes misleading but technically accurate content. These differences in definition can affect estimates of both the prevalence and the impact of misinformation. They also can yield varied—and sometimes contradictory—policy recommendations. This complication becomes even more acute when considering cultural and contextual factors.

The second key challenge is the political nature of most misinformation research. Because studies consistently show that misinformation circulates more widely among conservative audiences in the United States, researchers have faced accusations of bias and political pressure, though the studies themselves could reflect bias. Philip M. Napoli of the Sanford School of Public Policy at Duke University describes this situation in terms of “agnotology,” in which political and institutional pressures create a “strategic state of ignorance” by limiting the capacity for research into misinformation and disinformation while attempting to foster an online world in which false information flows unimpeded alongside truthful information. The issue is not confined to one particular political ideology, however. Actors across the political spectrum attempt to shape the flow of information, making it important to examine the problem objectively, without political prejudice.

While concerns regarding misinformation are genuine, Kupferschmidt raises a third complication: Empirically, it is difficult to demonstrate the real-world impact of misinformation. Moving from anecdotal harms to concrete measures of the causal links between misinformation, disinformation, and undesirable consequences is challenging. Studies typically concentrate more on changes in belief than on real-world outcomes. When studies do consider actual outcomes, the effects are either minimal or too entangled with other factors to measure reliably. For example, when considering the impact of misinformation on elections, academic researchers Andrew M. Guess and Benjamin A. Lyons note, “[T]he effects of misinformation on candidate preferences themselves and, moreover, the effects on electoral outcomes or other behavior have yet to be reliably detected.”

More generally, significant research gaps remain when attempting to quantify the impact of misinformation as well as institutional responses to the problem. As European scholars Zoe Adams, Magda Osman, et al. note in their survey of misinformation research, “[T]he Internet and social media may seem rife with misinformation, but reliable estimates on its prevalence and impact on recipients are hard to come by.”

The fourth challenge raised by Kupferschmidt and echoed by many others is that acquiring existing data in order to conduct empirical studies of misinformation and disinformation is quite difficult. Much of the information necessary to carry out such research is proprietary to online platforms and social media companies, which often restrict access. While some researchers attempt to assemble their own datasets from publicly available information, others form partnerships with tech companies to collect and catalog the needed data. Still, these workarounds are often insufficient.

Finally, Kupferschmidt notes the pronounced geographical imbalance in misinformation and disinformation research. While the internet is global and misinformation is produced worldwide, most studies focus on the United States and Europe. Researchers have much to learn about how misinformation operates in other parts of the world, particularly East Asia, Africa, and the Middle East, where different political systems and varied social media use could create unique dynamics that diverge from practices seen in the Western world.

The Market Dynamics of Digital Information

The evolution of digital markets and online platforms has enabled dramatic increases in the production and distribution of misinformation and disinformation. Understanding these market dynamics is necessary to develop effective policy responses that work with—rather than against—economic incentives and technological realities. Technological advances in AI and automated content generation have substantially reduced content-production costs, enabling the rapid creation of both legitimate and false content. At the same time, low barriers to entry and the possibility of substantial rewards create strong incentives to produce and distribute misinformation. These factors make it increasingly difficult to distinguish the authentic from the inauthentic.

Supply and Demand

While understanding the role of institutions and incentives in the misinformation and disinformation supply is essential, assessing the demand for such information might be even more important. Economist Bryan Caplan of George Mason University, who has devoted much of his work to voter irrationality, provides interesting insights into why some people actively seek and consume misinformation. Caplan suggests that many individuals derive great utility from content that confirms their worldview and/or strengthens group identity or cohesion, regardless of its accuracy. By this reasoning, people do not simply receive the information placed before them; rather, they consume information based on their beliefs and desires. Caplan’s term for this perspective is “rational irrationality”: consuming and sharing misinformation generates individual utility regardless of whether it is true. Dan Ariely, professor of psychology and behavioral economics at Duke University, has explored similar dynamics of motivated belief.

The social benefits of shared beliefs create powerful incentives to consume and share information that could be false or misleading. These incentives include stronger in-group bonds, enhanced status within ideological communities, and shared narrative frameworks that simplify complex issues. Users can also gain utility from confirming their existing beliefs, building social capital within specific communities, and developing a resistance to threatening information. If this is the case, focusing exclusively on increased access to accurate information will fail because it ignores fundamental demand-side dynamics. As Bartlomiej Chomanski at Adam Mickiewicz University in Poland notes, if people consume information because they want it to be true, then they will consume content debunking alleged misinformation through the same unreceptive filter.
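One way to make this demand-side logic concrete is a stylized utility comparison; the notation is purely illustrative and is not drawn from Caplan’s or Chomanski’s formal work. Suppose a reader’s payoff from consuming or sharing a piece of content $c$ is

$$U(c) = \alpha\,S(c) + \beta\,A(c),$$

where $S(c)$ captures social and identity benefits (belief confirmation, in-group status), $A(c)$ captures the value placed on accuracy, and $\alpha$ and $\beta$ are the weights the reader attaches to each. A false but identity-affirming item $c_f$ is then preferred to an accurate but dissonant item $c_t$ whenever

$$\alpha\,[S(c_f) - S(c_t)] > \beta\,[A(c_t) - A(c_f)].$$

When the social weight $\alpha$ is large relative to $\beta$, supplying more accurate information does little to change what gets consumed and shared, which is why purely supply-side remedies tend to disappoint.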

The AI Factor: Implications for Digital Information Markets

The advent of GenAI technologies further complicates the digital information landscape, fundamentally altering both the production and the verification of online content. AI’s ability to create and amplify content—including misinformation and disinformation—has sparked calls for enhanced oversight, bringing a new intensity to the demand for content regulation. Yet, as the Hoover Institution’s John Cochrane notes in his essay for The Digitalist Papers, “AI, Society, and Democracy: Just Relax,” the rush to regulate content—particularly AI-produced content—in order to weed out supposedly inaccurate information might be premature. Innovation is often met with resistance, either due to the self-interest of threatened parties or a fear of the unknown. Indeed, techno-panics have a long history, and forecasters have a poor record of understanding technological advances. As Cochrane explains, “The history of millenarian apocalyptic speculation is littered with worries that each new development would destroy society and lead to tyranny, and with calls for massive coercive reaction. Most of it was spectacularly wrong.”

While the rise of AI might generate such anxieties, thoughtful policy requires going beyond the precautionary principle’s predisposition to regulate the unknown. This is not to say that AI poses no threats to the accuracy of online content and public discourse—using AI to generate convincing text, images, and even videos at scale fundamentally changes the economics of misinformation and disinformation. Malevolent actors will utilize its capabilities in ways that will only aggravate these problems, and in the global election super-cycle, the threat is undeniably real. However, uncertainty about the future should give pause to would-be regulators with limited knowledge of new technologies. To be effective, regulations must identify and address a clear market failure or potential harm in the most cost-effective manner. Contrary to their presumably beneficial intent, sweeping regulatory mandates could do more harm than good.

At the same time, markets are evolving to address concerns over AI and misinformation. While bad actors will exploit AI’s capabilities, AI also satisfies and facilitates the growing demand for more trustworthy information, with new models providing detection and verification systems to tackle the problem.
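As a rough sketch of what such detection tooling can look like, the example below trains a simple text classifier and routes high-risk posts to human reviewers. The tiny dataset, labels, and threshold are hypothetical placeholders, and real systems rely on far richer signals (provenance, network behavior, media forensics); this illustrates the general approach, not any vendor’s actual pipeline.

```python
# Illustrative sketch: flag posts for human fact-check review with a simple classifier.
# The training examples, labels, and threshold below are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "miracle cure that doctors don't want you to know about",
    "city council approves annual budget after public hearing",
]
train_labels = [1, 0]  # 1 = likely misleading, 0 = likely benign

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

def needs_review(post: str, threshold: float = 0.7) -> bool:
    """Route a post to human reviewers when the model's estimated risk is high."""
    risk = model.predict_proba([post])[0][1]  # probability of the "misleading" class
    return risk >= threshold

print(needs_review("secret cure they don't want you to know about"))
```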

To this end, a robust and competitive market can encourage the innovations necessary to improve the quality of online information while giving users a degree of flexibility and choice that a more centralized regulatory regime would not allow. As American legal scholar Eugene Volokh notes in his contribution to The Digitalist Papers, “Generative AI and Political Power,” AI regulation has the potential to transform the internet from its current “user sovereignty model” to a “public safety and social justice model” as regulatory mandates emerge to control the flow of online content. The internet currently operates within a framework that Volokh describes as driven by end users who make individual choices that shape the collective internet. However, user sovereignty could diminish as platforms adopt supposed safety measures and guardrails—whether voluntarily or through government mandates—limiting what end users are permitted to see. In this sense, AI regulation can install powerful gatekeepers to information, and as the scope of the guardrails increases, so does the gatekeepers’ iron grip over users’ access to information.

This conundrum raises the question of whether regulatory edicts would yield superior outcomes with respect to content moderation and online speech. Volokh surmises that robust competition among AI models could be the most appropriate policy endpoint, noting that excessive regulation and compliance costs could eliminate the very competition necessary to maintain a user sovereignty model of information while simultaneously reducing innovation in content verification that would otherwise improve the quality of online content.

Policy Implications and Future Evolution

The above analysis inexorably leads to this conclusion: The challenge of online misinformation requires solutions that respect the complexity of information markets and the distributed nature of knowledge in society even as they seek to reduce the volume of misinformation and disinformation. While the impulse to address misinformation through centralized control and regulation is understandable, in part because such approaches appear simple to administer, they often prove counterproductive and can threaten the very innovation needed to develop more effective solutions. By contrast, market-based approaches can harness the power of competition, innovation, and distributed knowledge to combat misinformation while preserving free speech and digital innovation.

While ease of entry and potential economic and political gains encourage bad actors to produce misinformation and disinformation, these same market dynamics can facilitate potential solutions. Competition among platforms and content creators can foster innovation in sophisticated verification systems and content quality, and while network effects can accelerate the spread of misinformation, they also make it possible for powerful community-based verification systems to emerge. For instance, specialized communities on social platforms like X, Reddit, and Stack Exchange have developed particularly effective systems for evaluating information quality through community moderation and collaborative verification processes. Unlike many centralized information-control schemes, such distributed information-evaluation systems can scale efficiently through existing social-trust networks and quickly respond to new forms of misinformation and disinformation. By crowdsourcing the information-validation process in this way, organic content-verification systems could prove more agile than top-down approaches because they can harness the distributed knowledge of community members and adapt quickly to new concerns.
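A stylized sketch of how such reputation-weighted, community-driven evaluation can work appears below; the data structures, scores, and decision rule are hypothetical and are not modeled on the actual algorithms of X, Reddit, or Stack Exchange.

```python
# Stylized sketch of crowdsourced claim evaluation: community members rate a claim
# as accurate (+1) or misleading (-1), and each vote is weighted by the rater's
# track record. All names, scores, and thresholds here are hypothetical.
from dataclasses import dataclass

@dataclass
class Rating:
    rater: str
    verdict: int         # +1 = rated accurate, -1 = rated misleading
    track_record: float  # 0..1, share of the rater's past verdicts later confirmed

def community_score(ratings: list[Rating]) -> float:
    """Reputation-weighted average verdict, ranging from -1 to +1."""
    total_weight = sum(r.track_record for r in ratings)
    if total_weight == 0:
        return 0.0
    return sum(r.verdict * r.track_record for r in ratings) / total_weight

ratings = [
    Rating("alice", +1, 0.9),
    Rating("bob", -1, 0.4),
    Rating("carol", -1, 0.8),
]
score = community_score(ratings)
label = "disputed" if abs(score) < 0.3 else ("accurate" if score > 0 else "misleading")
print(f"score={score:.2f}, label={label}")
```

Weighting votes by track record is one way such systems can reward raters whose past judgments held up, reinforcing the accuracy incentives discussed below.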

More broadly, success in the information-validation space requires maintaining competitive markets, enabling experimentation with different verification systems, and creating incentive structures that reward accuracy while identifying ways to minimize the social and emotional demands currently served by misinformation. The role of AI deserves particular attention, as it offers both challenges and opportunities in the fight against misinformation.

When attempting to address the problems and dangers posed by misinformation and disinformation, the role of the market and robust competition cannot be ignored. Harnessing the power of competition and innovation is critical, as heavy-handed, top-down regulatory models could hamper the development of new tools and technologies to counter misinformation and disinformation.

Conclusion

As famed economist Friedrich von Hayek asserted in his 1974 Nobel Prize lecture, the limitations of human knowledge should inspire humility. Heeding this advice means resisting grandiose plans to engineer away the social challenges of misinformation and instead leveraging distributed knowledge and the wisdom of the crowd to improve the quality of online information while diminishing the incentives to propagate false content.
