Jeffrey Toobin should be liable for his big reveal, not Zoom
On Monday, Vice reported that the New Yorker had suspended reporter Jeffrey Toobin after he exposed himself on a company Zoom call. While the infamous “incident” has sparked conversations about appropriate conduct in virtual meetings, the indiscretion is also directly relevant to the debate over social media regulation and Section 230 of the Communications Act of 1934.
Exposing oneself in a work setting can easily lead to lawsuits. And if lawsuits are filed, Toobin, rather than Zoom, should be liable. Zoom likely had no knowledge of the incident until it was reported, and it had nothing to do with Toobin’s actions; he is not even a Zoom employee. This is the same principle Section 230 protects: Platforms should not be liable for content published by their users. Platforms are, however, liable for content they publish themselves, such as tweets from @Twitter or static pages on Facebook.com created by Facebook.
Unfortunately, a plethora of proposals would strip Section 230 protections unless certain (often arbitrary) conditions are met. These proposals are ill-advised, and the Toobin incident highlights why.
First, consider the sheer volume of user-generated content on websites. In June, Zoom reached over 300 million daily meeting participants. Facebook has billions of users, and Twitter has hundreds of millions. Last year, Facebook released a report detailing the billions of pieces of content it had moderated, from child exploitation to terrorist propaganda. No number of human moderators could review every piece of content posted. The notion that a website should be liable for content it did not create or even see is ridiculous.
Furthermore, if Section 230’s protections were removed, websites would face liability for all user content the moment they moderated any of it. Websites would then either moderate nothing, so as to avoid liability, or moderate so heavily that users would face massive barriers to posting. This is known as the “moderator’s dilemma.” If conservatives think too much of their content is being censored now, just wait until Facebook becomes liable for everything its users post. In terms relevant to the Toobin news, Zoom would either have to let everybody show their genitals on the platform or police meetings so strictly that nothing slips through. That regulatory burden would likely crush new market entrants who lack the resources for a robust moderation program. As Europe’s GDPR has shown, these types of regulations harm competition by squashing startups and entrenching existing players.
Some people think Section 230 applies only to platforms that moderate in a viewpoint-neutral or nonpartisan way. That is factually incorrect. Others think it ought to apply only in those circumstances. That is unwise, particularly for conservatives: Do we really want the government deciding what counts as viewpoint-neutral? Compelling or restricting speech based on its content in this way would likely be unconstitutional under the First Amendment. Some have argued that Zoom is not covered by Section 230, but under the plain text of the law, it is indeed a covered “interactive computer service.” Others have argued that because Zoom does not moderate content, it does not need Section 230’s protection. But Zoom does moderate, and moderation as each service sees fit is precisely what Section 230 protects.
Some argue that all legal content ought to be allowed on platforms, whether by legal mandate or as a matter of moderation principle. However, lots of undesirable content is legal, including spam, annoying and repetitive posts, fake profiles, and, yes, pornography. If all legal content had to be allowed, Zoom could be flooded with people masturbating on camera, as Toobin did, and Zoom would be powerless to stop them. Remember “Zoom bombing”? Now imagine that Zoom isn’t allowed to remove those people. That’s why Section 230 was created: to empower services such as Zoom to take down the genitals they can without becoming liable for the ones they can’t.
This isn’t to say that platforms have perfect, great, or even good moderation practices. But content moderation is hard and imperfect, and Section 230 gives platforms the protection they need to do their best, along with the freedom to moderate without fear of ruinous liability.
If a lawsuit comes of the Toobin incident, he, not Zoom, will be liable for his behavior. Let’s keep everyone responsible for their own junk.