Back in the early days of the Electronic Frontier Foundation (I was the organization’s first employee), we at EFF were united by the belief that the online world, whether we were talking about privately owned platforms or the open internet, was going to be a central locus for freedom of expression. EFF was founded in 1990, a few years before the internet was opened up for commercial development, which unsurprisingly led to a rapid climb in public participation on the Net. So we had a head start when it came to anticipating how free-speech issues would play out online.

One early and obvious source of social tension arising from the growth of this new medium was government anxiety and intervention, first focused on the threat of hackers and computer intrusion. Quickly thereafter, the calls for government intervention centered on issues such as encryption (which makes wiretapping harder), copyright infringement, the easy spread of pornographic content, the prospect of cyberterrorism, “bomb-making information on the internet” and so-called “hate speech.” In its first decade, EFF tackled those issues, partly to stop government overreach but also to create breathing space for individuals and companies to develop online services and internet “spaces” where free speech could flourish.

EFF’s primary focus in those years was challenging government impulses to control, constrain and regulate internet speech. But at the same time we saw cases arise from the actions and policies of then-dominant platform providers (in those days they included the highly-moderated Prodigy, plus the somewhat less moderated CompuServe and AOL). Because these companies hosting digital forums were not bound by the First Amendment—yet might, for all we knew, become the market-dominant platforms of the internet era—we at EFF believed it was best that they join a social consensus that allowed digital freedom, freedom of expression and freedom of association. The First Amendment is a limitation on government action but, we argued, private companies ought to value freedom and privacy too.

As I wrote in an (unsigned) editorial in EFF News (volume 1, number 0): “We at EFF do not dispute that Prodigy is acting within its rights as a private concern when it dictates restrictions on how its system is used. We do think, however, that the Prodigy experience has a bearing on EFF interests in a couple of ways. First, it demonstrates that there is a market – a perceived public need – for services that provide electronic mail and public conferencing. Second, it illustrates the fallacy that ‘pure’ market forces always can be relied upon to move manufacturers and service providers in the direction of open communications. A better solution, we believe, is a national network-access policy that, at the very least, encourages private providers to generate the kind of open and unrestricted network and mail services that the growing computer-literate public clearly wants.”

On the other hand, we knew early on that in order to liberate the platform and service providers—to give breathing space to free expression and privacy—it was critical that neither statute, nor regulation, nor caselaw compelled providers to move in the opposite, more restrictive, direction. Drawing upon earlier Supreme Court precedent, a federal district court case called Cubby, Inc. v. CompuServe, Inc. (1991) suggested that the best way to classify online platforms was as something not quite like a common carrier (e.g., a postal service or telephone company) and not quite like a publisher (Penguin or the New York Times) either.

In Cubby, a federal judge suggested (in a closely reasoned opinion) that the proper First Amendment model was the bookstore – bookstores, under American law, are a constitutionally protected space for hosting other people’s expression. But that case was misinterpreted by a later decision (Stratton Oakmont, Inc. v. Prodigy Services Co., 1995), so lawyers and policy advocates pushed to include platform protections in the Telecommunications Act of 1996 that amounted to a statutory equivalent of the Cubby precedent. Those protections, in Section 230, allowed platform providers to engage in certain kinds of editorial intervention and selection without becoming transformed by their actions into “publishers” of users’ content (and thus legally liable for what users say).

In short, we at EFF wanted platform providers to be free to create humane digital spaces without necessarily acquiring legal liability for everything their users said and did, and with no legal compulsion to invade users’ privacy. We argued from the very beginning about the need for service providers to be just, to support human rights even when they didn’t have to, and to provide space and platforms for open creativity. The rules we worked to put into place later gave full bloom to the World Wide Web, to new communities on platforms like Facebook and Twitter, and to collaborative collective enterprises like Wikipedia and open-source software.

In pure economic terms, Section 230 (together, it must be said, with the Digital Millennium Copyright Act’s notice-and-takedown provisions regarding copyrighted works) has been a success—the leading internet companies (among Western democracies at least) have been American. Section 230, with its bright-line rules barring internet services’ legal liability for content originated by a service’s users (rather than by the services themselves), brought the Cubby model into the 21st century. Services could “curate” user content if they wanted to (just as a bookstore has a First Amendment-grounded right to choose which books it carries and sells), but wouldn’t be liable either for content they overlooked or for content they had (mis)judged to be lawful. In the digital world, Section 230 gave the platforms something like common-carriage legal protections but also the autonomy to shape the character of their online “spaces.”

Because some platforms have been hugely successful, and because market shakeouts have left some players like Facebook and Google dominant (at least for now), other players have sought to roll back Section 230. Most recently the ostensible focus has been on sex trafficking (and commercial sexual services generally), which some critics believe has been made worse by online platforms like Backpage.com—even though Backpage almost certainly isn’t protected by Section 230 given the service’s role in originating sex-service content. But, really, the concern about internet sex-trafficking is being used as a stalking horse for players who are looking for opportunities either to sue the platforms and win big bucks or to impose stronger censorship obligations on the platforms for a variety of reasons—not the least of which is today’s moral panics about social media and big tech, which I’ve written about here and here.

This isn’t to say we should never revisit Section 230 and consider whether its protections need to be refined. Maybe they do. But given that there is a larger moral panic going on about social media (e.g., Russian-sponsored trolls using Facebook to push for Brexit or Donald Trump’s election), we shouldn’t rush to judgment about amending or repealing Section 230. Most ordinary internet users love Google and Facebook (even when they’re sometimes irritated by what they find on these and other platforms). We ought not to hurriedly undermine the legal protections that allowed these American success stories to flourish. Even if today’s internet giants can survive the loss of Section 230 and absorb the costs of censorship compliance, new market entrants likely can’t—which means that hobbling Section 230 will stifle the competition that got us to today’s rich internet in the first place.

Image credit: kryzhov