The Supreme Court recently heard two cases that could upend the way users interact with internet platforms. Gonzalez v. Google considers whether platforms like Google, Twitter and Facebook are liable for suggested content delivered to users by algorithms. Twitter v. Taamneh looks at whether a platform that allegedly did not take enough action to remove terrorist materials that led to an attack can be held liable for "aiding and abetting" under Section 2333 of the Antiterrorism Act.

If the Supreme Court holds that platforms are liable for third-party content, platforms would be inundated with lawsuits, fundamentally restructuring the internet as we know it today and reducing content and competition in the marketplace. Gonzalez v. Google, for instance, concerns content that is neither created nor endorsed by the platform but is recommended based on content the user has previously consumed. In this case, Gonzalez's counsel alleges that Google's algorithm delivered terrorist content that radicalized those responsible for the death of Nohemi Gonzalez. The overarching question is whether the algorithms that deliver third-party content are protected by Section 230.

Following a similar fact pattern to Gonzalez, the Taamneh plaintiffs sued Twitter under the Antiterrorism Act, arguing that Twitter "aided and abetted" a terrorist attack because its algorithm delivered content that may have inspired the attackers. While the case does not directly turn on Section 230, a ruling against Twitter would expand platforms' exposure to liability for third-party content and have a chilling effect online. It could open the door to future cases that would be death by a thousand cuts for Section 230 and the online marketplace of ideas.

Section 230 and the First Amendment

The First Amendment allows individuals to share their viewpoints without fear of interference from the federal government. As the internet ecosystem evolved, one would have expected the same protections to carry over online. Indeed, 26 words in Section 230 ensured that free speech did not end when our fingers started typing on a keyboard: "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." Section 230 protects platforms from liability for third-party content, a principle that has enabled the internet to become a marketplace of ideas. Should platforms become liable for third-party content, they would have to reconsider the content they host, undoing decades of innovation and the protections provided by the First Amendment.

Section 230 applies to almost every facet of the internet economy; it is not limited to social media platforms. Blogs, online resellers like Amazon and Etsy, search engines, online encyclopedias like Wikipedia, message boards like Reddit, smartphone apps and smaller startups all depend on Section 230 to build, develop and open their platforms to a broader audience. This protection gives platforms broad discretion, but it also leaves them in the unenviable position of deciding what content to allow or disallow. Platforms make such judgment calls every day, sometimes with frustrating results. But weakening or removing Section 230 would force the moderator's dilemma: platforms would either become far more zealous in policing content, removing third-party content wholesale, or stop moderating altogether, limiting speech online.

Notably, during oral arguments, Justice Elena Kagan said "we are not the nine greatest experts on the internet." Kagan and her colleagues all recognized that content moderation is hard. More importantly, it is a policy question, not a legal one. The Supreme Court should defer to Congress on policy questions instead of legislating from the bench.

What’s next?

Two upcoming cases could give the court another opportunity to weigh in. NetChoice v. Paxton and Moody v. NetChoice address challenges to social media laws passed in Texas and Florida, respectively.

The Texas law in question states that a platform "may not censor a user, a user's expression, or a user's ability to receive the expression of another person based on … the viewpoint of the user or another person." Florida passed similar legislation restricting social media companies from conducting their own content moderation. Because the lower courts reached conflicting rulings, the Supreme Court will now likely consider whether states can enact sweeping legislation penalizing companies for moderating third-party content. Regardless of the outcome of these cases, the opinions alone may give ammunition to legislators at the state and local level to erode First Amendment protections online and substantially weaken Section 230. The Supreme Court should throw out both of these laws, which threaten not only Section 230 but the First Amendment itself by denying private entities free speech rights over the content hosted on their platforms.

The internet has thrived as a marketplace of ideas by allowing users to submit content in real time. Platforms have the daunting task of filtering and moderating "appropriate" content. They don't always make the right call, but discretion and judgment calls are better than the alternative. The moderator's role may be tough, but platforms need the discretion to make these judgment calls to ensure their platforms remain safe and usable and contribute to a strong marketplace of ideas. Absent Section 230 protection, we will lose a lot more than some think we would gain. Search engines, third-party sellers, online reviews, encyclopedias and more all depend on Section 230. If the Supreme Court gets this wrong, the internet will cease to exist in the form we know and love.