Twenty years ago, Larry Lessig published the original version of his book Code and Other Laws of Cyberspace. A few years later, he published a substantially updated version called Code 2.0. Both versions are classics and important pieces of the history of the internet — and are especially interesting to look at now that questions of how much “code” is substituting for “law” have become central to so many debates. When the original book was published, in 1999, Mike Godwin wrote a review for a long-defunct journal called E-Commerce Law Weekly. Given the importance of these issues today, we’re republishing a moderately updated version of Godwin’s original 1999 review. It’s interesting to view this review through the lens of the past 20 years of history that we have now lived through.

Imagine that you could somehow assemble the pioneers of the Internet and the first political theorists of cyberspace in a room and poll them as to what beliefs they have in common. Although there would be lots of heated discussion and no unanimity on any single belief, you might find a majority could get behind something like the following four premises:

  1. The Internet does not lend itself to regulation by governments.
  2. The proper way to guarantee liberty is to limit the role of government and to prevent government from acting foolishly with regard to the Internet.
  3. The structure of the Internet—the “architecture” of cyberspace, if you will—is politically neutral and cannot easily be manipulated by government or special interests.
  4. The expansion of e-commerce and the movement of much of our public discourse to the online world will increase our freedom both as citizens and as consumers.

But what if each of these premises is at best incomplete and at worst false or misleading? (Leave aside the likelihood that they’re not entirely consistent with one another.) What if the architecture of the Net can be changed both by government and by the dynamism of e-commerce? What if the very developments that enhance electronic commerce also undermine political freedom and privacy? The result might be that the engineers and activists who are concerned about preserving democratic values in cyberspace have been focusing their efforts in the wrong direction. By viewing governmental power as the primary threat to liberty, autonomy, and dignity, they’d blind themselves to the real threats—threats that it may require government to block or remedy.

It is precisely this situation in which Harvard law professor Lawrence Lessig believes we find ourselves. In his new book Code and Other Laws of Cyberspace (Basic Books, 1999), Lessig explores at length his thesis that the existing accounts of the political and legal framework of cyberspace are incomplete and that their very incompleteness may prevent us from preserving the aspects of the Internet we value most. Code is a direct assault on the libertarian perspective that informs much Internet policy debate these days. What’s more, Lessig knows that he’s swimming against the tide here, yet he takes on in Code a project that, although focused on cyberspace, amounts to nothing less than the relegitimization of the liberal (in the American sense) philosophy of government.

It is a measure of Lessig’s thoroughness and commitment to this project that he mostly succeeds in raising new questions about the proper role of government with regard to the Net in an era in which, with the exception of a few carveouts like Internet gambling and cybersquatting, Congress and the White House have largely thrown up their hands when it comes to Internet policy. While this do-nothingism is arguably an improvement over the kind of panicky, ill-informed interventionism of 1996’s Communications Decency Act (which Lessig terms “[a] law of extraordinary stupidity” that “practically impaled itself on the First Amendment”), it also falls far short, he says, of preserving fundamental civil values in a landscape reshaped by technological change.

Architecture Is Not Static

To follow Lessig’s reasoning in Code, you need to follow his terminology. This is not always easy to do, since the language by which he describes the Internet as it is today and as it might someday become is deeply metaphorical. Perhaps the least problematic of his terms is “architecture,” which Lessig borrows from Mitchell Kapor’s Internet aphorism that “architecture is politics.” Although his use of the term is a little slippery, Lessig mostly means “architecture” to refer to both (a) the underlying software and protocols on which the Internet is based and (b) the kinds of applications that may run “on top of that Internet software infrastructure.” And while the first kind of architecture is not by itself easily regulable, Lessig says, the second kind might make it so—for example, by incorporating the various monitoring and identification functions that already exist on proprietary systems and corporate intranets.

More difficult to get a handle on is his use of the word “code,” which seems to expand and contract from chapter to chapter. At some bedrock level, Lessig means “code” to signify the software and hardware that make up the Internet environment—akin to the sense of “code” that programmers use. But he is also fond of metaphoric uses of “code” that muddy the waters. “Code is law,” Lessig writes at several points, by which we may take him to mean that the Internet’s software constrains and shapes our behavior with as much force as law does. And of course the book’s title equates code and law.

Elsewhere, however, he writes that code is something qualitatively different from law in that it does not derive from legislative or juridical action or community norms, yet may affect us more than laws or norms do, while providing us less opportunity for amendment or democratic feedback. It does not help matters when he refers to things like bicycle locks as “real-world code.” But if you can suspend your lexical disbelief for a while, the thrust of Lessig’s argument survives any superficial confusions wrought by his terminology.

That argument depends heavily on the first point Lessig makes about Internet architecture, which is simply that it’s malleable—shapeable by human beings who may wish to implement an agenda. The initial architecture of the Internet, he says correctly, emphasized openness and flexibility but provided little support for identifying or authenticating actual individuals or monitoring them or gathering data about them. “On the Internet it is both easy to hide that you are a dog and hard to prove that you are not,” Lessig writes. But this is a version of the Internet, he says, that is already being reshaped by e-commerce, which has reasons for wanting to identify buyers, share financial data about them, and authenticate the participants in transactions. At the center of e-commerce-wrought changes is the technology of encryption, which, while it has the ability to render communications and transactions unreadable in transit, also enables an architecture of identification (through, e.g., encryption-based certification of identity and digital signatures).
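To make the identification half of that dual capability concrete, here is a minimal sketch in Python of how a digital signature lets a merchant check that an order really came from the holder of a particular key. It is a generic illustration of public-key signatures, using the third-party cryptography library and a modern algorithm (Ed25519), not a reconstruction of any specific system Lessig describes; the certificate step that would bind the key to a named person is noted in a comment but omitted.

```python
# Minimal sketch: how a digital signature supports an architecture of
# identification. Requires the third-party "cryptography" package
# (pip install cryptography). Illustrative only.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The buyer holds a key pair. In a full ID architecture, a certificate
# authority would attest that this public key belongs to a named person;
# that certification step is omitted here.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

order = b"purchase: one copy of Code and Other Laws of Cyberspace"
signature = private_key.sign(order)

# The merchant verifies the order against the buyer's public key.
try:
    public_key.verify(signature, order)
    print("signature valid: the key holder authorized this order")
except InvalidSignature:
    print("signature invalid: reject the transaction")
```

Encryption keeps the message unreadable in transit; the signature, built from the same mathematics, binds the message verifiably to a key holder. That is the dual capability at issue.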

The key to the creation of such an architecture, Lessig writes, is not that a government will require people to hold and use certified IDs. Instead, he writes, “The key is incentives: systems that build the incentives for individuals voluntarily to hold IDs.” Lessig adds, “When architectures accommodate users who come with an ID installed and make life difficult for users who refuse to bear an ID, certification will spread quickly.”

But even if you don’t believe that e-commerce alone will establish an architecture of identification, he writes, there are reasons to believe that government will want to help such an architecture along. After all, a technology that enables e-commerce merchants to identify you and authorize your transactions may also have an important secondary usefulness to a government that wants to know where you’ve been and what you’ve been up to on the Internet.

And if the government wants to change the technological architecture of the Internet, there is no reason to believe it would not succeed, at least to some extent. After all, Lessig says, the government is already involved in mandating changes in existing architectures in order to effectuate policy. Among the examples of this kind of architectural intervention, he says, are (a) the Communications Assistance for Law Enforcement Act of 1994, in which Congress compelled telephone companies to make their infrastructure more conducive to successful wiretaps, (b) Congress’s requiring the manufacturers of digital recording devices to incorporate technologies that limit the extent to which perfect copies can be made, and (c) the requirement in the Telecommunications Act of 1996 that the television industry design and manufacture a V-chip to let individuals automatically block certain kinds of televised content.

With an identification architecture in place, Lessig argues, what previously might seem to be an intractable Internet-regulation problem, like the prohibition of Internet gambling, might become quite manageable.
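A hypothetical sketch, with every name and field invented for illustration rather than drawn from the book, shows why: once requests arrive bearing certified attributes, prohibiting Internet gambling reduces to an ordinary conditional check at the server.

```python
# Hypothetical sketch: regulation becomes a conditional check once an
# ID architecture supplies certified attributes. All names here
# (CertifiedID, its fields, the banned set) are invented for illustration.
from dataclasses import dataclass

@dataclass
class CertifiedID:
    jurisdiction: str  # attested by some certificate authority
    adult: bool        # likewise attested

def may_gamble(visitor: CertifiedID, banned_jurisdictions: set[str]) -> bool:
    # The same check that excludes minors excludes whole jurisdictions.
    return visitor.adult and visitor.jurisdiction not in banned_jurisdictions

print(may_gamble(CertifiedID("US-UT", True), {"US-UT"}))  # False: banned jurisdiction
print(may_gamble(CertifiedID("US-NV", True), {"US-UT"}))  # True
```

Note, for later, that nothing in such a check is specific to gambling or to minors; an attribute that identifies a child can just as easily identify a nationality.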

The Government and Code

An account of social activity on the Internet that deals solely with the legal framework is inadequate, Lessig argues. In his view, the actual “regulators” of social behavior come from four sources, each of which has its own dynamic. Those sources of social constraints are the market, the law, social norms, and architecture—here “architecture” means the constructed environment in which human beings conduct their activities. “But these separate constraints obviously do not simply exist as givens in a social life,” Lessig writes. “They are neither found in nature nor fixed by God,” he adds, noting that each constraint “can be changed, although the mechanism of changing each is complex.” The legal system, he says, “can have a significant role in this mechanics.”

So can the open-source movement, which Lessig refers to as “open code.” The problem with “architectural” constraints, and the thing that distinguishes them from any other kind, is that they do not depend on human awareness or judgment to function. You may choose whether or not to obey a law or a social norm, for example, and you may choose whether or not to buy or sell something in the market, but (to use the metaphor) you cannot enter a building through a door if there is no door there, and you cannot open a window if there is no window. Open code—software that is part of a code “commons,” that is not owned by any individual or business, and that can be inspected and modified—can provide “a check on state power,” Lessig writes, insofar as it makes any government-mandated component of the architecture of the Net both visible to, and (potentially) alterable by, citizens. Open code, which still makes up a large part of the Internet infrastructure, is thus a way of making architecture accountable and subject to democratic feedback, he argues. “I certainly believe that government must be constrained, and I endorse the constraints that open code imposes, but it is not my objective to disable government generally,” Lessig writes. But, he adds, “some values can be achieved only if government intervenes.”

A Jurisprudence of Cyberspace?

One way that government intervenes, of course, is through the court system. And as Lessig notes, it may be the courts that are first called upon to interpret and preserve our social values when technology shifts the effective balance of rights for individuals. A court faced with such a shift often must engage in “translation” of longstanding individual rights into a new context, he says.

Take wiretapping, for example. Once upon a time, it was not so easy for law-enforcement agents to get access to private conversations. But once telephones had become commonplace and, as Lessig puts it, “life had just begun to move onto the wires,” the government began to tap phones in order to gather evidence in criminal investigations. Does wiretapping raise Fourth Amendment concerns? The Supreme Court first answered this question in Olmstead v. United States (1928)—the answer for the majority was that wiretapping, at least when the tap was placed somewhere other than on a tappee’s property, did not raise Fourth Amendment issues, since the precise language of the Fourth Amendment does not address the non-trespassory overhearing of conversations. That is one mode of translation, Lessig writes—the court preserved the precise language of the Fourth Amendment in a way that contracted the zone of privacy it protects.

Another, and arguably preferable, approach, Lessig says, would be the one Justice Louis Brandeis took in his Olmstead dissent—an approach that preserves the scope of the privacy zone while departing from strict adherence to the literal language of the Amendment. Brandeis’s dissent, arguing that the capture of private conversations does implicate the Fourth Amendment, was adopted by the Supreme Court nearly forty years later, in Katz v. United States (1967).

But what if technology raises a question for a court, and it is not clear which interpretative choice comes closer to preserving or “translating” the values inherent in the Bill of Rights? Borrowing from contract law, Lessig calls such a circumstance a “latent ambiguity.” He further suggests—this is perhaps the most unfashionable of his arguments—that, instead of simply refusing to act and referring the policy question to the legislature, a court might attempt to make the choice that best preserves constitutional values, in the hope that its choice will at minimum “spur a conversation about these fundamental values…to focus a debate that may ultimately be resolved elsewhere.”

Internet Alters Copyright and Privacy

All this begins to seem far afield from the law of cyberspace, but Lessig’s larger point is that the changes wrought by the Internet and related technologies are likely to raise significant “latent ambiguity” problems. He focuses on three areas in which technologies raise important questions about values but for which a passive or overliteral “translation” approach would not be sufficient. Those areas are intellectual property, privacy, and freedom of speech. In each case, the problem Lessig sees is one that is based on “private substitutes for public law”—private, non-governmental decision making that undercuts the values the Constitution and Bill of Rights were meant to preserve.

With intellectual property, and with copyright in particular, technological changes raise new problems that the nuanced balances built into established law do not address. Lessig challenges the long-standing assertion, in Internet circles at least, that the very edifice of copyright law is likely to crumble in the era of the Internet, which enables millions of perfect copies of a creative work to be made and disseminated for free, regardless of whether the copyright holder has granted anyone a license. In response to that perceived threat, Lessig observes, copyright holders have moved to force changes in technology and changes in the law.

As a result, technologically implemented copyright-protection and copyright-management schemes are coming online, and the government has already taken steps to prohibit the circumvention of such schemes. This has created a landscape in which the traditional exercise of one’s rights to “fair use” of another’s work under the Copyright Act may become meaningless. The fact that one technically has a right to engage in fair use is of no help when one cannot engage in any unauthorized copying. Complicating this development, Lessig believes, is the oncoming implementation of an ID infrastructure on the Internet, which may make it impossible for individuals to engage in anonymous reading.

This bears some explaining. Consider that if you buy a book in a bookstore with cash, or if you read it in the library, nobody knows what you’re buying and reading. By contrast, a code-based licensing scheme in which you identify yourself online in order to obtain or view a copy of a copyrighted work may undercut your anonymity, especially if there’s an Internet ID infrastructure already in place. The technology changes are “private” ones—they do not involve anything we’d call “state action” and thus do not raise what we normally would call a constitutional problem—but they affect public values just as deeply as traditional constitutional problems do.

A similar argument can be made about how the Internet alters our privacy rights and expectations. Because the Internet makes both our backgrounds more “searchable” and our current behavior more monitorable, Lessig reasons, the privacy protections in our Bill of Rights may become meaningless. Once again, when the searching and monitoring are done by someone other than the government, the “state action” trigger for invoking the Bill of Rights is wholly absent.

What’s more, such searching and monitoring, whether done by the government or otherwise, may be invisible to the person being investigated. You will have lost your right to any meaningful privacy, and you will not even know it is gone until it is too late. Lessig’s analysis of the problem here is convincing, even though his proposed solution, a “property regime” for personal data that would replace today’s “liability regime,” is deeply problematic. This is partly because it would transmute invasions of privacy into property crimes—aren’t the jails full enough without adding gossips to the inmates?—and partly because the distinction he draws between property regimes and liability regimes as to which benefits the individual more is (in my view) illusory in practical terms.

Perhaps Lessig’s most controversial position with regard to the threat of private action to public values is the one he has explored previously in a number of articles for law reviews and popular publications—the argument that some version of the Communications Decency Act—perhaps one that required minors to identify themselves as such so as to be blocked from certain kinds of content—is less dangerous to freedom of speech than is the private use of technologies that filter content. It is important to understand that Lessig is not actually calling for a new CDA here, although that nuance might escape some legislators.

Lessig interprets such a version of the CDA, and the architecture that might be created by it, as a kind of “zoning,” which he sees as preferable to private, non-legislated filtering because, he says, zoning “builds into itself a system for its limitation. A site cannot block someone from the site without that individual knowing it.” By contrast, he says, a filtering regime such as the (now widely regarded as moribund) Platform for Internet Content Selection (PICS) enables all sorts of censorship schemes, not just nominally child-protecting ones. PICS, because it can scale to function at the server or even network level, can be used by a government to block, say, troubling political content. And because PICS support can be integrated into the architecture of the Internet, it could be used to create compelling private incentives for people to label their Internet content. Worse, he says, such blocking would be invisible to individuals.

Lessig’s Arguments Hard to Harmonize

There are many problems with Lessig’s analysis here, and while it would take more space than I have to discuss them in depth, I can at least indicate what some of them are. First of all, it’s not at all clear that one could not create a “zoning” solution that kept the zoning-excluded users from knowing—directly, at least—that they have been excluded. Second, if a zoning scheme works to exclude users identified as kids, is there any reason to think it would not work equally well in excluding users identified as Iranians or Japanese or Americans? Don’t forget that incipient ID architecture, after all.

Third, a PICS-like scheme, implemented at the server level or higher, is actually less threatening to freedom of speech than key-word or other content filtering at the server level or higher. PICS, in order to function, requires that some high percentage of the content producers in the world buy into the self-labeling scheme before a repressive government could use it to block its citizens from disapproved content. Brute-force key-word filtering, by contrast, does not require anyone else’s cooperation—a repressive government could choose its own PICS-independent criteria and implement them at the server level or elsewhere.
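A toy sketch, invented for illustration (this is not actual PICS syntax), captures the asymmetry: a label-based filter has nothing to act on unless publishers cooperate by rating their own content, while a keyword filter can be imposed unilaterally.

```python
# Toy sketch of the asymmetry between label-based and keyword-based
# filtering. Simplified and hypothetical; not actual PICS syntax.

def blocked_by_labels(page_labels: set[str], banned_labels: set[str]) -> bool:
    # Acts only on publisher-supplied self-ratings; an unlabeled page
    # carries nothing for the filter to match.
    return bool(page_labels & banned_labels)

def blocked_by_keywords(page_text: str, banned_words: set[str]) -> bool:
    # Needs no one's cooperation: the censor picks its own word list
    # and applies it at the server (or national gateway) level.
    words = {w.strip(".,;!?").lower() for w in page_text.split()}
    return bool(words & banned_words)

page_text = "An essay about a troubling political controversy."
page_labels: set[str] = set()  # the publisher declined to self-rate

print(blocked_by_labels(page_labels, {"politics"}))   # False: no labels to match
print(blocked_by_keywords(page_text, {"political"}))  # True: blocked anyway
```

The brute-force approach is cruder, but it requires no one’s buy-in, which is precisely what makes it the more immediate threat.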

Fourth, there’s nothing inherent in the architecture of a PICS-style scheme—in the unlikely event that such a scheme were implemented—or any other server-level filtering scheme that requires that users not be notified that blocking took place. In short, you could design that architecture so that its operation is visible.

Lessig is right to oppose the implementation of anything that might be called an architecture of filtering. But one wonders why he is so intent on saying that zoning is better than filtering when both models can operate as tools of repression. Lessig answers that question by letting us know what his real worry is: that individuals with filtering tools will block out those who need to be heard. Says Lessig: “[F]rom the standpoint of society, it would be terrible if citizens could simply tune out problems that were not theirs…. We must confront the problems of others and think about problems that affect our society. This exposure makes us better citizens.” His concern is that we will use filtering tools to bar ourselves from that salutary exposure.

Leaving aside the question of whether his value here is one we should embrace—it is hard to harmonize it with what Brandeis in his Olmstead dissent termed “the right to be let alone”—it seems worth noting that the Internet does not really stand as evidence for Lessig’s assumption that people will use their new tools to avoid confrontation with those holding different opinions. Indeed, much of the evidence seems to point the other way, as anyone who has ever viewed a long-running Internet flame war or inspected dueling Web sites can attest. Nothing forces combatants on the Internet to stay engaged, but they do anyway. The fact is, we like to argue with each other—as Deborah Tannen has pointed out, we have embraced an “argument culture.” Whether that culture is healthy is another question, of course.

But even if one disagrees with Lessig’s analysis of certain particular issues, this does not detract from his main argument, which is that private decision making, enhanced by new technologies and implemented as part of the “architecture” of the Internet, may undercut the democratic values—freedom of speech, privacy, autonomy, access to information—at the core of our society. Implicit in his argument is that the traditional focus of civil libertarians, which is to challenge government interventions in speech and privacy arenas, may be counterproductive in this new context. If I read him right, Lessig is calling for a new constitutional philosophy, one rooted perhaps in Mill’s essay On Liberty, in which government can function as a positive public tool to preserve the liberty values articulated in the Constitution from private encroachment. Such a philosophy would require, however, a very imaginative “translation” of constitutional values indeed to get past the objection that the Bill of Rights is only about limiting “state action.”

What Code is really about is (the author’s perception of) the need for political liberals to put a positive face on the role of government without embracing statism or seeming to. Although this is clearly Lessig’s project, he’s pessimistic about its success—in the public debate about Internet policy, he complains, the libertarians have essentially won the field. What he would like to see, perhaps, is a constitutional structure in which something like the Bill of Rights could be invoked against challenges to personal liberty or autonomy, regardless of whether the challenges come from public or private sources. The ideology of libertarianism, he believes, will interpret the changes wrought by e-commerce and other private action as a given, like the weather. “We will watch as important aspects of privacy and free speech are erased by the emerging architecture of the panopticon, and we will speak, like modern Jeffersons, about nature making it so—forgetting that here, we are nature,” he writes in a somewhat forlorn final chapter.

Lessig may be right in his gloomy predictions, but let us suppose that his worst fears are not realized and a new debate does begin about the proper role of government in cyberspace and about appropriate limitations on private crafting of the online architecture. If that happens, it may be that at least some of the thanks for that development will have to go to Lessig’s Code.