Section 230: Be Careful What You Wish For
SACRAMENTO, Calif. — Conservative frustration with social media moderation policies has dissipated since Elon Musk purchased Twitter and loosened up the previous regime’s rules — or at least replaced them with other controversial edicts. But for years, conservatives have championed additional governmental oversight of companies such as Twitter, Google, and Facebook. To paraphrase the curmudgeonly journalist H.L. Mencken, they might soon get what they want “good and hard.”
The Supreme Court this week is hearing a case brought by a California family against Google that centers on the tragic death of their 23-year-old daughter, Nohemi Gonzalez, during a series of 2015 terrorist attacks in Paris by Islamic extremists. “Videos that users viewed on YouTube were the central manner in which ISIS enlisted support and recruits from areas outside the portions of Syria and Iraq which it controlled,” argued the family’s attorneys.
As the New York Times reported, the case “could have potentially seismic ramifications for the social media platforms that have become conduits of communication, commerce and culture for billions of people.” Be careful what you wish for.
The plaintiffs in Gonzalez v. Google want the high court to gut Section 230 of the Communications Decency Act of 1996. That statute protects online platforms from legal liability for the comments, posts, and videos that users share on social media. Currently, one may sue the person who posts inflammatory or defamatory content but not the companies that own the platforms. Without Section 230, Google, Facebook, and YouTube would face an endless sea of litigation.
Frustrated by heavy-handed content moderation or censorship of posts that reflect their views, conservatives often have called for a variety of governmental interventions. Some of their proposals amount to little more than unprincipled venting, such as calls to turn these private companies into public utilities. But right-of-center calls for eliminating or “reforming” Section 230 seem to be entirely serious.
“Section 230 was created to balance a platform’s interest in moderating offensive content while offering a platform for relatively free expression that people would not find in traditional publications,” explains a Republican Study Committee Backgrounder. “Nonetheless, social media giants seem committed to acting more and more like traditional publishers biased toward a particular ideology, calling into question the wisdom of retaining Section 230 in its current form.”
Likewise, the Heritage Foundation has argued that Section 230 should be refined with a variety of proposed changes that include sunsetting it every seven years, making the protections contingent on the platforms’ cooperation with law enforcement, and otherwise limiting at least some of its protections. The net result would be to open up the companies to more lawsuits and more intervention by the federal authorities.
Ironically, this idea puts many on the right in sync with progressives and the Biden administration, albeit for different reasons. Conservatives believe that the social media companies are treating conservative views unfairly and therefore want less moderation, whereas the Left believes that the platforms don’t intervene enough to, say, limit COVID “misinformation.”
As Politico reported, the president held a “listening session” last year in which he called for changes, “including the removal of Section 230 liability protections along with stronger competition provisions and privacy protections together with calls to make algorithms more transparent and prevent them from discriminating against protected groups.” Take a guess at how that would work out.
Gutting Section 230 would result in the worst of both worlds. If the legal code treats social media platforms like traditional publishers, they would face a choice: either strictly police content or stop policing it altogether. Social media users would be left with two types of platforms: a) those that are highly moderated and would, of course, anger virtually everyone (conservatives especially), and b) those that would quickly come to resemble one’s spam folder or an open sewer.
Regarding the first choice, Facebook’s parent company Meta made the obvious point in a January statement: “Exposing companies to liability for decisions to organize and filter content from among the vast array of content posted online would incentivize them to simply remove more content in ways Congress never intended.”
Regarding the second, the statement also noted, “Without this protection, millions of online companies would not be able to help keep people safe by reducing and blocking dangerous content.” That’s because many companies couldn’t afford to moderate every post and would therefore choose a totally hands-off approach. How would this make anything better?
Even Musk has learned how hard it is to create a more open Twitter without getting ensnared in endless debates about when something goes too far or whether a particular account engages in extremist pontificating. Government cannot resolve this conflict, which is only exacerbated by the fact that the two sides want different outcomes. The only place to resolve it is the public square, where private companies can set their own rules.
We might not like those rules, but I can guarantee we’ll like them better than an alternative that vests this power in the courts and government agencies.