Donald Trump and the Facebook Oversight Board

Authors

Paul Rosenzweig
Former Resident Senior Fellow, Cybersecurity and Emerging Threats
Chris Riley
Former Resident Senior Fellow, Internet Governance
Mary Brooks
Former Resident Fellow, Cybersecurity and Emerging Threats
Tatyana Bolton
Former Policy Director, Cybersecurity and Emerging Threats

Key Points

Facebook was justified in its decision to remove President Trump’s content and ban him from creating future posts.

While international human rights law is the right place to start, it is not specific or granular enough to provide a useful framework for the decisions the Facebook Oversight Board will face.

A multifactor analysis that considers the harm, intent and imminence of the content in question would be more appropriate for the content decisions before the Oversight Board.


Introduction

The rise of social media has generated controversies over how these platforms moderate—and fail to moderate—the content posted on their sites. Some observers argue that the social media giants (like Facebook, Twitter and Google) do too little to take down potentially harmful content, while others claim they do too much, engaging in excessive censorship. Still others argue that whatever these platforms do lacks transparency and is—in any event—too self-serving to be valid or trustworthy.

In response to these concerns, Facebook has been working toward the creation of a new model of social media governance: an Oversight Board (Board) with the authority to review some (though not all) of Facebook’s content moderation decisions. The Board, sometimes colloquially referred to as Facebook’s “Supreme Court,” has not been without controversy. Some immediately denounced it as ineffective; others thought it a shill for Facebook’s self-interest. Still others were willing to suspend judgment pending implementation.

Now, only months after the Board was fully established in late 2020, it is facing its first serious test of legitimacy. In early January 2021, during and immediately following the insurrection at the U.S. Capitol, Facebook removed certain content posted by then-President Donald J. Trump and indefinitely suspended his account. Facebook later referred its decision to deplatform Trump—essentially revoking his access to the social media outlet—to the Board for official review. Thus, rather than having time to work with lower-profile matters to develop its doctrine and procedures, the Board is now faced with a momentous and potentially controversial decision as one of its very first cases. Some have called this the Board’s “Marbury moment”—a reference to the seminal American case, Marbury v. Madison, in which the U.S. Supreme Court established the principle of judicial review in the American system of government.

Whether the Trump deplatforming decision proves to be quite so consequential remains to be seen. But it does appear that the case will provide the Board with the opportunity both to establish its own authority and to develop a doctrine of review that would contribute to a transparent and trustworthy oversight process.

To assist in that development, the R Street Institute recently submitted comments to the Board in response to its request for public comments in the case reviewing Facebook’s decision to deplatform Trump.

In those comments, R Street argued that three principles must be central to the Board’s decision-making and should be formalized into its ongoing review:

  1. Context—and therefore case-by-case review—is critical to a valid and appropriate adjudication of the issues presented.
  2. A well-structured framework is necessary to particularize the considerations applicable to content moderation decisions.
  3. The same framework should be applied to private citizens and political actors, with due regard for the different context in which their expression arises.

While the Board’s first round of decisions indicates that it will rely heavily on context in its case evaluations, it has not yet explored the second and third principles above. This paper will explain how the Board should close that gap through the consistent use of an explicit, multifactor framework that is guided by international human rights law but offers more granularity than the high-level principles contained in that body of law. Additionally, the Board should not create separate standards of content moderation for politicians, but should apply the same context-based framework to all users.

Facebook, the Board, and social media platforms more broadly are at an inflection point. The decisions they make now may define the future of digital platform-based communication, with real consequences for the security and integrity of the internet and those who use it. This white paper suggests how these decisions can be made with recourse to external law and principles, rather than in an ad hoc manner.
