INTRODUCTION AND
SUMMARY OF ARGUMENT

This Court has long recognized that the First Amendment forbids not just direct regulation of protected speech, but also state action likely to result in self-censorship and the curtailment of lawful speech. The Ninth Circuit held that under the Anti-Terrorism Act (“ATA”), an online platform with only generalized awareness that a terrorist organization, among literally hundreds of millions of other speakers, may have used its service has “knowingly provided substantial assistance” to an act of terrorism. That interpretation threatens to substantially narrow the speech that platforms host, raising serious First Amendment concerns. But those concerns are easily avoided by rejecting the Ninth Circuit’s erroneous reading of the statute, and by requiring that a defendant possess actual knowledge that a specific piece of user-generated content on its platform provides substantial assistance to a terrorist act before imposing aiding-and-abetting liability on the basis of its function as a speech intermediary.

The multi-factor Halberstam v. Welch test that Congress specified for determining aiding-and-abetting liability under the ATA can and should be interpreted to account for the First Amendment implications of imposing liability on speech “intermediaries” (third parties who publish or distribute the speech of others) like Petitioner Twitter, Inc. and Respondents Facebook, Inc. and Google LLC. These platforms and other intermediaries provide essential fora for speech and have become a primary source of news, information, and discussion across the nation and around the world. Indeed, “[t]he numbers suggest that companies like Google and Twitter have at least as great an impact on free expression as do traditional newspapers.” Marvin Ammori, The “New” York Times: Free Speech Lawyering in the Age of Google and Twitter, 127 Harv. L. Rev. 2259, 2266 (2014). The Halberstam inquiry requires that “(1) the party the defendant aids must perform a wrongful act that causes an injury; (2) the defendant must be generally aware of his role as part of an overall illegal or tortious activity at the time that he provides the assistance; and (3) the defendant must knowingly and substantially assist the principal violation.” Halberstam v. Welch, 705 F.2d 472, 487-88 (D.C. Cir. 1983). The Ninth Circuit interpreted the third of these prongs to allow liability for a platform based only on “general awareness” that a handful of users might be using the platform in ways that support offline terrorist conduct. If that is a sufficient basis for liability, intermediaries will no longer be able to function as fora for others’ speech, and free expression will be the loser.

This Court has long taken seriously the impact of imposing overly broad indirect liability on speech intermediaries. In a line of cases dating back to Smith v. California, 361 U.S. 147 (1959), the Court has made clear that laws that incentivize intermediaries’ self-censorship through an insufficiently stringent scienter requirement undermine the public’s ability to engage in constitutionally protected speech and accordingly violate the First Amendment. Smith and its progeny concerned state laws that, by threatening liability for facilitating others’ speech, prompted intermediaries such as bookstores and newspapers to curtail their distribution of protected speech. As this Court emphasized, intermediaries’ rational caution in the face of potential liability deprives the public of robust access to all kinds of speech: exactly what the First Amendment is designed to protect. Applying similar principles here, the Court should require an intermediary’s actual knowledge that a specific piece of user-generated content on its platform provides substantial assistance to a terrorist act before imposing aiding-and-abetting liability.

If, instead, the Ninth Circuit’s startlingly broad construction of the ATA stands, online intermediaries will be forced to suppress protected speech, just like the publishers and content distributors at issue in Smith and its progeny. By allowing for indirect liability based on mere “generalized knowledge” that alleged terrorists or their affiliates use such intermediaries’ services, this construction would effectively require platforms to sharply limit the content they allow users to post, lest courts find they failed to take sufficiently “meaningful steps” against speech later deemed beneficial to an organization labeled “terrorist.”

Indeed, given the vast amounts of speech that online intermediaries handle every day, were the Ninth Circuit’s approach the law, intermediaries would likely use necessarily blunt content moderation tools to over-restrict speech or to impose blanket bans on certain topics, speakers, or specific types of content. Even today, online intermediaries frequently take down content mistakenly identified as offensive or forbidden, for instance by confusing a post about a landmark mosque with one about a terrorist group.

Faced with potential ATA liability, all manner of speech intermediaries, not only online platforms, will grow more risk-averse and more prone to overly cautious moderation, thus suppressing large amounts of protected speech. And such a reading would open the door to future federal or state legislation imposing liability on online intermediaries for inadvertently hosting other kinds of content, thereby chilling platforms and inhibiting the public’s access to speech on other topics. This is exactly the danger the Court cautioned against in Smith and its progeny.

This Court should accord proper respect to the key First Amendment principles at stake and reverse the Ninth Circuit’s erroneous interpretation of the ATA.
