Pragmatic Principles for Content Policy and Governance
This paper offers a comprehensive, affirmative framework for improving the state of online content and private-sector management practices, without modifying Section 230.
While government has a role to play, the key to ensuring sufficient, sustained, and evolving responsibility from online platforms is engaging the surrounding critical community of researchers, advocates, and other businesses. Such a collaborative approach will be far more effective than the traditional governmental function of prescribing, and attempting to enforce, specific behavioral practices for companies.
The proposed framework includes some original ideas that would also be complementary to other approaches, such as a proposal that the DOJ evaluate whether federal criminal law is sufficient to address harmful online behaviors like cyberstalking.
Over the past two years the United States has seen a flurry of legislative proposals to modify Section 230 of the Communications Act of 1934, the much-debated immunity provision designed to allow platforms to moderate user-generated content shared online without incurring liability as a consequence of the implicit knowledge and control that moderation entails. Yet none of these proposals has advanced substantially in either the House or the Senate. Meanwhile, the European Union (EU) has once again assumed the driver’s seat in global internet policy by introducing its own proposal, the Digital Services Act (DSA), in December 2020.
However, the need for intervention persists. For example, in Herrick v. Grindr, the plaintiff sought to hold an online platform accountable for real-life harm caused by users of its platform. Another, larger-scale example centers on public perceptions of bias by major social media and technology companies, which persist despite a lack of supporting data and evidence. Trust has broken down online, and change is needed. Whether that change requires legislation remains an open question. If it does, the more challenging question is how to design an intervention that delivers meaningful benefits while minimizing harmful externalities. In any case, the status quo seems unsustainable, and Congress is actively engaged in holding hearings and introducing legislation.
While the underlying rationales for reform vary widely and lead to equally varied regulatory approaches, a few ideas have emerged that offer the potential for broad (though not universal) appeal. A future American law aiming to establish greater responsibilities for online intermediaries of user-generated content seems likely to include timely compliance with duly issued court orders; incentives or requirements to publish content policies; and mechanisms to hold companies liable for some types of procedural insufficiency. In principle, these ideas align with the spirit of “consumer protection” and balanced intervention reflected in the EU’s DSA. This alignment is a promising sign for future transatlantic cooperation in internet policy.
However, these concepts tell only a portion of the story. No legislative proposal is likely to gain traction without coming to grips with the harder questions that motivate many active draft bills: in particular, how to calibrate incentives for the proper management of lawful but contextually (or universally) harmful content, and whether criminal law is sufficient to govern harmful online behavior today.
The harms of online content today are multifaceted and complex, and no single law or policy change can address them in full. Yet while Americans debate and introduce countless scattered proposals derived from a smorgasbord of conflicting rationales, other countries race ahead, leaving the United States further behind in policy leadership. This paper offers principles and analysis to inform future legislative proposals that seek pragmatic implementations of the widely held ideas noted above, along with answers to the more difficult questions. The specific proposals in this paper are offered with the intention of catalyzing further and deeper study of their consequences over the coming months.
Critically, none of the proposals in this paper involves any modification to Section 230 itself. While the law has become a lightning rod for political engagement, and the problems associated with online harm are significant, the governmental interventions best suited to making positive progress on those problems need not center on that law.
As an important note of context, this paper focuses solely and specifically on content policy decisions; in particular, it does not propose changes to copyright law’s existing notice-and-takedown systems. The two areas are distinguishable on several grounds, and while there are certainly meaningful analogies to be drawn between copyright infringement and the space of online harms, this paper does not seek to make them.