As a divided 118th Congress kicks off, a priority for both political parties is reforming a crucial, once-obscure section of law that underpins how all of us interact on the internet. 47 U.S. Code § 230, generally referred to as “Section 230,” has become one of the most controversial – and frequently misunderstood – laws on the books. Both President Biden and incoming Speaker of the House Kevin McCarthy have pledged to “fundamentally reform” it or even “scrap [it] and start over.”

So what, exactly, is Section 230? And why is it attracting so much ire?

Section 230 in a Nutshell

Simply put, Section 230 is a rule that allows online platforms to host and manage user-generated content without fear of legal repercussions. It not only shields platforms from liability for content posted by their users, but also protects them from liability for taking that content down, and it is written to give platforms broad leeway to decide what content they deem unacceptable.

Since its enactment as part of the Communications Decency Act of 1996, Section 230 has paved the way for a far more democratized, interactive internet than could have existed if every controversial post could get platforms sued for harassment or defamation.

Content Moderation’s Conflict of Visions

The downside of hosting user-generated content is that some users will always post outrageous, offensive, or obscene things. In addition, Section 230’s protections do not extend to federal criminal law or intellectual property claims, so even the most permissive platform has to take down some content, such as child exploitation material, copyright violations, or the sale of illegal drugs. As the number of users on a platform grows, moderating content consistently and well becomes proportionally harder, both because of the sheer volume of posts and because of users’ endless creativity in generating new kinds of questionable content.

One reason Section 230’s protections are critical is that every decision to take down certain content will upset some users. Platforms like Facebook and YouTube have done themselves no favors in this respect by policing what they deem misinformation.

While progressive and conservative politicians might broadly agree that the largest platforms are moderating content poorly, their motivations for changing that by amending or repealing Section 230 are different. Progressives tend to favor pressuring platforms to take down more content they deem offensive, hateful, or misleading, believing that unfiltered speech promotes extremism and undermines democracy. Conservatives tend to criticize platforms for impinging upon users’ freedom of speech, perceive them as biased towards the Left, and wish they would take down less content.

Section 230 and the First Amendment

The chief motivation behind most Republican attempts to amend or bypass Section 230 has been a rational concern that overly aggressive content moderation, especially on social media, poses a threat to free speech. However, the First Amendment is a restriction on government, not private entities, and attempts to force platforms to carry speech they don’t want to host would violate the platforms’ own First Amendment rights to freedom of association.

More pragmatically, it’s not accidental that Section 230 explicitly protects companies that take down content “whether or not such material is constitutionally protected,” because policymakers wanted to empower sites to remove pornography, racial slurs, depictions of violence, spam, scam messages, and harassment. All of that lawful-but-awful speech is legal under the First Amendment, but a platform that aims to attract a wide audience, to say nothing of advertisers, is wise not to host it.

If Section 230 were weakened or repealed, platforms would still have First Amendment protections, but their functional ability to exercise those rights would be limited by how much money they have to fend off legal challenges. Ironically, this would advantage the Big Tech firms over smaller rivals, and it would lead companies to leave up less content, because it’s cheaper to remove anything potentially controversial.

This actually occurred in 2018, when Congress passed FOSTA-SESTA, carving out an exception to Section 230 targeted at online sex trafficking. Faced with the “moderator’s dilemma” that hosting even one bad actor could expose them to lawsuits, sites like Craigslist took down their entire personal ads sections.

Section 230 as a Handout to “Big Tech”

Some lawmakers, understanding that gutting Section 230 would be especially harmful to smaller companies, propose simply removing its protections for the largest platforms. Aside from the aforementioned constitutional issues this would raise, taking Section 230 away from Big Tech doesn’t only affect Big Tech.

Losing the liability shield for hosting user-generated content would particularly affect individual users and small sellers who profit by using the platforms large companies provide. For example, user-generated reviews could become too risky to host, and these especially benefit smaller sellers who need that feedback to convince customers to trust their products over those of established rivals.

Similarly, small sellers are able to use the targeted advertising offered by large platforms like Google and Facebook to affordably reach niche audiences that conventional media could never serve. Without Section 230, platforms would be liable for the content of every ad run on their sites and would be incentivized to be more restrictive about whom they trust to run ads.

The Path Forward

Fortunately, some lawmakers increasingly appear to realize that the danger to free speech they can address is actual censorship – that is, government suppression of speech. The recently released Twitter and Facebook communications with government agencies regarding alleged COVID and election disinformation blur the line between mere collaboration and state action. This “jawboning” by federal agencies deserves investigation by Congress and perhaps legislative action.

Meanwhile, existing platforms and upstarts are responding to the public’s dissatisfaction by exploring ways to moderate content more evenly and transparently. Some are experimenting with decentralizing content filtering, or even the platforms themselves, to give consumers more control over their own experience online. The market does not always adapt at the pace lawmakers might prefer, but the public demand for change is clear. If left alone, Section 230’s protections will help platforms compete to deliver better solutions.