Every single word you type on someone else’s platform—whether reviews, comments, or posts—depends on it.

Over the last few years, much of the public was introduced to Section 230 of the Communications Decency Act by politicians and pundits who misunderstand the law. The resulting game of telephone has elevated Section 230 from its obscure, but vital, place protecting free speech online to its current position as a favorite punching bag of both political parties. But the reality is that much of the internet as we know it would be unrecognizable without this law, or even with a modified version of it. Every single word you type on someone else’s platform—whether reviews, comments, retweets or posts—depends on it.

Thanks to Section 230, if you post something online, the place where you posted it is not liable for the content of that post. If Twitter posts something, it is liable for that post. If Twitter posts a comment on The New York Post’s website, Twitter is still liable. As with behavior offline, each party remains liable for its own conduct. This law protects every interactive computer service that hosts user content—from Reddit to Peloton (seriously). It even protects you as an individual when you retweet someone or forward an email. “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider,” reads Section 230(c)(1). It’s as simple as that.

This is important. Each person should be liable only for his or her own content. Not only would holding websites liable for their users’ content violate the First Amendment, but website moderators aren’t equipped to distinguish criticism from libel without error. That’s why matters of libel and other unlawful content go before courts and undergo countless hours of discovery and debate. Further, if every website were liable for every piece of user content—including content it never saw—it would be crushed under mountains of litigation. Small startups in particular can’t handle that kind of liability. Mark Weinstein, founder of a scrappy social media upstart called MeWe that competes with Facebook, recently made this exact point about his fast-growing platform. “The big boys have deep pockets. They can easily hire the massive moderation and legal teams that would be necessary to defend themselves. I can’t,” he explained. “Revoking Section 230 would put hundreds of startups and other smaller companies out of business.” Bigger platforms would still have to deal with crushing litigation, but smaller companies would have no chance. Protecting companies from frivolous litigation is precisely why Section 230 was created.

Speaking of frivolous lawsuits—as the authors of Section 230 recounted in a recent article: “On a popular online platform called Prodigy, one person described Stratton Oakmont as a ‘cult of brokers who either lie for a living or get fired.’” That’s right—Stratton Oakmont, the real-life brokerage firm whose fraud inspired the movie “The Wolf of Wall Street,” and whose founder Jordan Belfort eventually went to prison for fraud. But well before the commenter was vindicated, Stratton Oakmont “sought hundreds of millions in damages from Prodigy, simply for hosting the comment on its platform,” wrote Sen. Ron Wyden and former Rep. Chris Cox. Prodigy argued that it shouldn’t be legally responsible for user content, especially since it didn’t know whether the comment was true (it was). Unfortunately, a New York court thought otherwise, reasoning that because Prodigy moderated its platform, it was liable for the content. If it hadn’t moderated at all, the court reasoned, it wouldn’t have been liable.

“The alarming message of this case was clear: in the future, online platforms shouldn’t attempt to moderate even the most awful content. Doing so would make them legally responsible for everything their users post,” write Cox and Wyden.

Section 230 fixed exactly this problem. Without the law—or even with a modification that reopens the litigation floodgates—we return to the “moderator’s dilemma”: websites would be forced to moderate everything or nothing. If they moderate nothing, they may face no liability for user posts, but they do face an unending stream of spam, harassment, illegal content and other material that harms users. Twitter sent over 100 million anti-spam challenges in the first half of 2020 to verify whether potential spam accounts were real. That effort could stop without Section 230. And as Reuters reported: “Twitter suspended 166,153 accounts between July and December [of 2018] for promoting terrorism.” That’s just a small glimpse of the scale of content moderation.

The alternative is moderating so much that relatively little user content gets through. Criticize your boss on Twitter? Post a bad ZocDoc review of a doctor you thought provided awful care? Say something inflammatory on MeWe? Post a #MeToo story? Platforms will not want to be liable for that kind of content, because they could be sued for it whether or not it’s true.

It’s hard to know which platforms would opt for which moderation strategy without Section 230. Even a heavily modified 230 that allows only certain kinds of moderation would reopen the legal floodgates that prompted the creation of Section 230 in the first place. Platforms, like Prodigy, would be liable for all of the content their users post. Users would be able to sue for a litany of reasons, arguing, “they removed my post, but it actually wasn’t the kind of content they’re legally allowed to remove.” Platforms that decided to moderate at all would likely become walled gardens of pre-approved speech. Platforms that chose not to moderate would become a digital Wild West where anything goes and nobody’s safe. But one thing remains constant in all these scenarios: users lose out on a reasonable online experience. They would have to fight through awful content, spam, harassment and more in order to have their voices heard. Or they wouldn’t be heard at all, because platforms, trying to remain safe and clean, would heavily restrict user content. And the only winners? Big tech companies, which can absorb enormous liability far better than a MeWe can, and trial lawyers. We have already seen this dynamic in Facebook’s increased market dominance in the wake of the General Data Protection Regulation (GDPR).

Politicians on the left attack Section 230 and demand more moderation. Politicians on the right demand less moderation, yet also want to weaken Section 230 to accomplish it. Meanwhile, the law that protects free speech on the internet continues to hang like a punching bag between them. Instead of attacking the bedrock law of our thriving internet ecosystem, they should be protecting it.