For the last five years, X has been building a decentralized solution called “Community Notes” to deal with misinformation on its platform. Users can flag suspicious posts for Community Notes, which triggers a review process by other X users (“note writers”), who then draft a note to correct misinformation, provide context, or identify content generated by artificial intelligence (AI). If the note passes X’s algorithmic quality and diversity controls, it is displayed below the post.

X recently announced a new “Collaborative Notes” feature that allows AI to draft Community Notes for human contributors to review, refine, and rate. This AI integration specifically addresses one of the system’s most significant weaknesses: speed. Community Notes has been criticized for the lag between a misleading post going viral and a note appearing beneath it. By the time human contributors draft, review, and approve a note, the damage may already be done. AI-generated first drafts can dramatically compress that timeline, getting contextual information in front of users while a post is still circulating at peak velocity. The human review layer remains intact; however, AI eliminates the bottleneck caused by drafting from scratch.  

Traditionally, each Community Note went through a lengthy, human-only process before it was displayed, often resulting in delays of up to 15 hours. Now, when a top-rated contributor flags a post, the system immediately kicks off an AI-generated draft that the community can improve in real time through ratings and suggestions. The final Note cannot be published until the draft has gone through this vetting process.
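
To picture that gate concretely, the flow can be sketched as three steps: a flag handler that triggers an AI draft, a rating step, and a publication check. The names, thresholds, and data structures below are purely illustrative assumptions for this article, not X’s actual implementation (the real Collaborative Notes code is open source and considerably more involved).

```python
"""Hypothetical sketch of the Collaborative Notes lifecycle described above:
flag -> AI draft -> community ratings and suggestions -> publish gate.
All names and thresholds are illustrative assumptions, not X's code."""

from dataclasses import dataclass, field

@dataclass
class NoteDraft:
    post_id: str
    text: str                                        # AI-generated first draft
    suggestions: list[str] = field(default_factory=list)
    helpful_ratings: int = 0
    total_ratings: int = 0

def on_flag(post_id: str, generate_draft) -> NoteDraft:
    """A flag from a top-rated contributor immediately triggers an AI draft."""
    return NoteDraft(post_id=post_id, text=generate_draft(post_id))

def rate(draft: NoteDraft, helpful: bool, suggestion: str | None = None) -> None:
    """Contributors refine the draft in real time through ratings and edits."""
    draft.total_ratings += 1
    draft.helpful_ratings += int(helpful)
    if suggestion:
        draft.suggestions.append(suggestion)

def can_publish(draft: NoteDraft, min_ratings: int = 5, min_ratio: float = 0.8) -> bool:
    """The note stays unpublished until community vetting clears it."""
    if draft.total_ratings < min_ratings:
        return False
    return draft.helpful_ratings / draft.total_ratings >= min_ratio
```

The point of the sketch is the ordering: the AI draft exists from the moment of the flag, so human time is spent vetting rather than writing, and nothing reaches the public feed until the gate clears.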

The company had been experimenting with AI note writers since last year, when a pilot program allowed users to build their own “note-writing” bots on the platform. Although their notes are AI-generated, these bots participate in the system like human note writers. According to X’s Keith Coleman, who oversees Community Notes, AI note writers in the platform’s pilot program were “prolific,” with one contributing more than 1,000 notes rated as helpful by contributors across the political spectrum.

One overall criticism of social media is that companies have a market incentive to deliver popular content to users without regard for information quality. The advent of Community Notes shows that the market for information on social media is far more complicated and that the desired user experience is an elusive and ever-changing target. While algorithms certainly try to deliver what they think users want, there is also a strong incentive to deliver accurate and moderated content. As it turns out, users seem to prefer a more liberal approach where misinformation—though not necessarily removed—is flagged in a decentralized system driven by trust and reputation.  

This is not merely theoretical. Research has demonstrated that users share significantly more accurate information when social media incentive structures are designed so that user feedback reflects the trustworthiness of content rather than just its popularity. Perhaps more striking, researchers found that trust-based feedback mechanisms actually increased user engagement rather than dampening it. The implication is powerful: Accuracy and engagement are not inherently at odds. Platforms that build systems rewarding truthful content can retain and grow their user base while improving information quality. Community Notes operationalizes this principle with its “bridging” algorithm, which requires agreement from contributors who typically disagree.
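
To make the bridging requirement concrete, here is a minimal sketch of the core idea. X’s open-source scorer is far more sophisticated (it learns rater viewpoints from the full rating matrix via matrix factorization), so the viewpoint values and thresholds below are hypothetical stand-ins used only to show why popularity alone is not enough.

```python
"""Minimal, illustrative sketch of a 'bridging' aggregation rule.
NOT X's production scorer; it only shows the core idea: a note is
surfaced when raters who usually disagree both find it helpful."""

from dataclasses import dataclass

@dataclass
class Rating:
    rater_id: str
    helpful: bool        # True = rated "helpful"
    viewpoint: float     # assumed rater position on a -1.0 .. +1.0 axis

def passes_bridging(ratings: list[Rating], min_side_support: int = 2) -> bool:
    """Require independent support from both ends of the viewpoint axis,
    so a single like-minded cluster cannot push a note through alone."""
    left_support = sum(1 for r in ratings if r.helpful and r.viewpoint < 0)
    right_support = sum(1 for r in ratings if r.helpful and r.viewpoint > 0)
    return left_support >= min_side_support and right_support >= min_side_support

# A note rated helpful across both clusters is surfaced; one backed
# only by a single cluster, however large, is not.
ratings = [
    Rating("a", True, -0.8), Rating("b", True, -0.6),
    Rating("c", True, +0.7), Rating("d", True, +0.9),
]
print(passes_bridging(ratings))  # True: cross-viewpoint agreement
```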

The market incentive here is straightforward, with trust as a competitive asset. Users who feel consistently misled will eventually migrate to platforms that serve them better. And advertisers, whose spending is the dominant revenue source for most social media companies, prefer brand-safe environments where their products are not displayed alongside demonstrably false content. As platforms compete for users and ad dollars, the ones that can credibly signal a commitment to information quality gain an edge. This is evidenced by the fact that companies are pouring effort into a system whose entire purpose is to append accurate, cross-ideologically vetted context to misleading posts. The Collaborative Notes code (including AI prompts and logic) is open source. Investment in AI technologies now allows X to address the speed of the process directly while keeping content trustworthy.

Still, there are legitimate concerns worth monitoring. AI is not immune to errors, and any AI system generating fact-checks at scale will inevitably produce some that miss the mark. The safeguard is the community itself, as Notes only become publicly visible when rated as helpful by contributors from diverse perspectives. That filter has proven remarkably effective. A study found that up to 97 percent of published Community Notes were rated as “entirely” accurate by ideologically diverse evaluators. Layering AI speed onto that existing quality-control infrastructure is a sensible approach, augmenting human judgment rather than replacing it.

For policymakers, the lesson should be clear. The instinct to regulate platforms into compliance on content moderation often rests on the assumption that companies will never self-correct and that only government mandates can force them to prioritize truth. X’s investment in AI to improve the speed and scale of Community Notes demonstrates that market pressures can and do drive platforms toward better information practices without regulation compelling them to do so. That does not mean every platform will get it right or that no guardrails are ever necessary; however, it does mean that the reflexive narrative of platforms as indifferent engines of misinformation deserves serious scrutiny.

Our Technology and Innovation program focuses on fostering technological innovation while curbing regulatory impediments that stifle free speech, individual liberty, and economic progress.