Nearly everyone, it seems, has ideas about how to regulate and rein in Big Tech. President Donald Trump wants a special panel to monitor companies like Facebook and Twitter and has issued an executive order attempting to clamp down on supposed censorship on their part. Senators from the left (Richard Blumenthal and Dianne Feinstein) and right (Josh Hawley and Ted Cruz) have called for stripping longstanding regulatory safe harbors from social media companies. Senator Elizabeth Warren speaks for a large portion of the left and at least some on the right when she contends that major technology companies “have too much power—too much power over our economy, our society and our democracy.”

All this said, there’s little consensus about what public policy problems ought to be confronted under the heading of Big Tech—or even what “Big Tech” is. In fact, reaching a sensible definition leads to the conclusion that some parts of Big Tech aren’t big at all, and that some widely cited concerns—size, per se, and supposed censorship—would require sweeping changes that could have significant effects beyond what’s intended. But some real, novel concerns—namely, disinformation and information custody—can be confronted as “tech problems.” And, accordingly, the best way to regulate Big Tech is to impose a code of ethics—one created through a multi-stakeholder process and enforced by a mixture of public and private effort. Such a code could draw on the experience of the professions and—oddly enough—the comic book industry. 

What Big Tech Is—and What It Is Not

By some definitions, Big Tech could be a synonym for “big business.” Nearly every very large company makes heavy use of technology, but few would classify market-leading companies like McDonald’s and Wal-Mart as Big Tech—even though both established their positions, in part, because of innovative, often-proprietary technology. By contrast, not all Big Tech companies make technological breakthroughs: The basic technology for social networking existed for about a decade before Facebook started in a Harvard dorm room. In addition, some companies that meet most common definitions of Big Tech are not actually big. Twitter, which provides a forum for some of today’s most important political announcements, ranks 817th on the Fortune 1000 and has fewer than 5,000 employees. Likewise, other well-known companies like the Wikimedia Foundation (which runs Wikipedia), discussion forum hub Reddit, and classified ad giant Craigslist are non-public entities with remarkably small workforces.

If Big Tech doesn’t comprise every large company that makes use of technology and isn’t even limited to large companies, then what is it? Generally speaking, three characteristics tie together these firms.

First, Big Tech firms are pervasive: they have direct “customer” relationships with a sizable fraction of American adults. For example, a large majority of Americans have Facebook accounts and, extrapolating from Internet and search engine use, about 85 percent of Americans over age nine use Google. Amazon is the nation’s second-largest retailer and second-largest employer overall. These companies—both big and medium-sized—are as much a part of American daily life as Coca-Cola and Hershey bars.

As part of this direct engagement, Big Tech firms take custody of personal data to provide things that customers want and to encourage them to spend money. This data can take the form of search history (Google), social life (Facebook), buying habits and demand for web services (Amazon), or passing thoughts (Twitter). Some of these companies make their major products free to end users (Google and Facebook), while others, like Amazon and Match Group (owner of Match.com and Tinder), establish billing relationships.

Finally, Big Tech companies also serve as disintermediaries. In other words, they provide platforms that allow ordinary people to provide goods, information, or services to vastly more people than they could on their own. Disintermediation includes social media, of course, which allows for the sharing of personal information in digital form. But it also includes companies like eBay and Amazon, which let anyone open an international retailer, and those like Uber, which connect riders and deliveries with drivers. The common thread is that these companies let individuals or small enterprises do things that used to be possible only for big (or bigger) businesses.

Is “Bigness” a Problem?

While there may be valid arguments against simple concentration of power in any industry, it’s impossible to make these arguments in a coherent way that suggests Amazon, Twitter, and Facebook should be broken up or subject to utility-like regulation, while Procter & Gamble, JPMorgan Chase, and ExxonMobil should remain unmolested. It isn’t Big Tech’s revenues, market valuations, or even the firms’ market shares within limited universes that are at the heart of the issues policymakers wish to confront. Some parts of Big Tech aren’t big, and the parts that are big do not produce the harms to consumer welfare generally contemplated by current antitrust standards. Taking on bigness in tech would therefore require taking on bigness everywhere.

In many markets, Big Tech’s supposed monopolistic dominance is overstated or misunderstood. Take Twitter, which President Trump has attacked for calling him out on untruths and invocations of violence. By definition, it controls the entire market for “tweets,” but its share of the markets for person-to-person communication, news, and even Internet advertising is infinitesimal. Similarly, Uber may have a very large share of the app-based, ride-hailing business (although it has a significant competitor in Lyft), but it represents only a small fraction of the ground transportation market. Indeed, it has disrupted—and sometimes even destroyed—the previously regulated taxicab monopolies of most major cities. Any of these companies may pose legitimate concerns, and there may be a need to regulate aspects of their operations (for example, setting safety standards for app-based transportation platforms). However, hardly any antitrust theory suggests they have overall market power. Trying to break them up or subject them to special regulation on the basis of their size could only be justified by animus against them.

Of course, some Big Tech firms—like Amazon—certainly are big and do have significant market power. Amazon, along with Apple and Alphabet (which owns Google), ranks among the top 20 companies by revenue, and these players can have a significant impact on other parts of the market. For example, Google’s search advertising revenue exceeds the ad revenue of all daily newspapers in the United States combined, which has contributed to the near financial collapse of many of them. And Wikimedia Foundation’s free product has sidelined the entire market for print encyclopedias.

Under the consumer welfare standard (the predominant antitrust paradigm of the past 40 or so years), the main question for regulators is not whether these companies are big, but whether they hurt consumers by overcharging and underproducing. The danger that monopolies will do exactly that in order to maximize profits is the classical justification for antitrust laws. As long as markets continue to produce consumer surplus, American antitrust enforcers have, in recent years, tended to leave them alone.

Wikimedia Foundation’s Wikipedia offers a stark example. It provides an extremely useful product because it’s so widely used, extensive, and easy to correct; it’s also free and better than its dead-tree counterparts. The platforms offered by Google and Facebook also famously provide almost all of their services at no charge to consumers. Technology scholar Will Rinehart has estimated these services to be worth $9,900 annually per user.

Even the biggest Big Tech firms still face real competition and can never be confident in their permanent dominance. As of the summer of 2020, Google controls more than 90 percent of the U.S. search market, but its leading competitors are Microsoft and Verizon (through its Yahoo subsidiary), both of which brought in more than $100 billion in revenue in 2019 and also rank among America’s 50 largest companies. Both, then, are potentially huge competitors. If Amazon’s concentration of retail power is concerning to regulators, why shouldn’t Wal-Mart, which sells more than Amazon in nearly every category, raise the same concerns? All of this is to say that concerns about concentration of power cannot be limited to Big Tech in any logical way, and there’s little evidence that Big Tech causes the kinds of harms on which regulators are currently focused.

Indeed, opponents of Big Tech, largely from the left, have made precisely this point. Testifying before Congress in late 2017, Barry Lynn of the Open Markets Institute, the leading advocate for abandoning the consumer welfare standard, made the point forcefully by drawing together a wide range of industries in which he saw a case for what he argues would be beneficial regulatory intervention. In fact, it is impossible to sustain a logical critique of Big Tech’s market power without applying it to nearly every sizable enterprise and thereby abandoning the consumer welfare standard.

The consumer welfare standard does have its downsides. But, even if another standard were desirable, any decision to radically change the focus of antitrust law in any sensible way would have significant economy-wide implications. Those who want to subject Big Tech to heightened antitrust scrutiny should have the courage of conviction possessed by many on the left and a few on the paleo-conservative right: a desire to break up or heavily regulate all big companies and thereby significantly remake all sectors of the economy around principles different from those that currently guide it.

Is Censorship a Problem?

President Trump, conservative movement stalwart L. Brent Bozell III, and a bevy of other figures on the right accuse Big Tech of censoring conservative views. Certainly, platforms’ content-moderation rules have sometimes led to opinions being de-platformed and information being labeled as false. But changes to the legal and regulatory framework that has shaped social media and other revolutionary communications tools would set back the very principles of free expression that Big Tech’s critics claim to value. Indeed, the ability of major platforms to set these rules for themselves is the key to maintaining free expression online. Government efforts intended to advance “free speech” under any significantly different framework will inevitably end up either squelching freedom or reducing an already unruly information environment to true anarchy.

The irony here is that the current framework was designed both to empower companies to moderate content and to allow them to maximize freedom of expression for content producers, including individual citizens. The key law covering Internet content regulation is Section 230 of the Communications Decency Act of 1996. It states that Internet platforms can’t be held liable for “any action voluntarily taken in good faith” to restrict or ban user-generated “material that the provider or user considers to be obscene . . . or otherwise objectionable, whether or not such material is constitutionally protected.” This allows platforms to tailor content moderation to social and market demands, so that everyone can find a place to speak, without the government having to draw one-size-fits-all lines. This system, unique to the United States among the G-7 countries, is a major reason why essentially all global Big Tech companies make their homes here.

To understand why Section 230 has served as a vanguard for the protection of free speech online, it’s worth looking at the major alternatives to Section 230-style intermediary liability protection: outright elimination of that protection, or a regime (as proposed by several U.S. Senators under the EARN IT Act, which has gone through several iterations) that grants the protection only to providers that act in what those lawmakers view as socially desirable ways. Both are seriously flawed.

An outright elimination of intermediary liability protection would result in an Internet that is either anarchic or sterile. If moderating content exposed providers to publisher-style liability (as it did before Section 230), many would simply adopt an “anything goes” attitude and allow any otherwise legal content. With posting essentially free and easy to automate, public forums would be clogged with commercial messages, many of them fraudulent or suspect in nature. (This happened to Usenet in the 1990s.) Such an Internet would also have to tolerate nearly unlimited amounts of sexually explicit material, racial prejudice, and extremist views, which are legal but undesirable in many or all contexts. The alternative, avoiding liability by strictly moderating content, would reduce public Internet forums to the equivalent of a print newspaper’s letters-to-the-editor page and render the platforms’ current business models impossible. YouTube, for example, adds 500 hours of content every minute of every day. There is no plausible way to review all of it while keeping the platform open and free to use. A “free speech” standard alone would end up limiting speech.
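To see the scale involved, consider a rough back-of-the-envelope calculation. The 500-hours-per-minute figure comes from the paragraph above; the eight-hour reviewing shift and real-time viewing speed are assumptions made purely for illustration:

```python
# Back-of-the-envelope estimate of what human pre-screening of YouTube uploads
# would require. Assumptions (illustrative only): 500 hours of video uploaded
# per minute, and reviewers watching at real-time speed for an 8-hour shift.
UPLOAD_HOURS_PER_MINUTE = 500
MINUTES_PER_DAY = 60 * 24
REVIEW_HOURS_PER_SHIFT = 8

hours_uploaded_per_day = UPLOAD_HOURS_PER_MINUTE * MINUTES_PER_DAY          # 720,000
reviewers_needed_per_day = hours_uploaded_per_day / REVIEW_HOURS_PER_SHIFT  # 90,000

print(f"{hours_uploaded_per_day:,} hours of video uploaded per day")
print(f"~{reviewers_needed_per_day:,.0f} reviewers needed just to watch it all once")
```

Even on these generous assumptions, a platform would need tens of thousands of full-time reviewers merely to watch each video once, before making any judgment about it.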

The prospect of setting clear standards for platforms and providing them with liability protection only when those standards are met is more attractive in some quarters, but it would be equally problematic in practice. Indeed, 2018’s anti-sex-trafficking FOSTA-SESTA law has already done this in a small way by carving websites that might facilitate sex trafficking out of Section 230’s protections. Doubling down on a similar approach—requiring platforms to engage in whatever other conduct politicians desire in order to “earn” liability protection—is an invitation for political mischief. Even if the initial standards seemed reasonable, it’s easy to imagine left-wing groups demanding broad “hate speech” rules that bar expression of orthodox religious views on sexuality, and political figures of all stripes using such measures to reward friends and punish enemies. Broad efforts might be struck down by courts, but the uncertainty alone would chill free expression.

Of course, none of this answers the question at the top of many people’s lists: Are current content-moderation policies biased against conservatives? Author and talk-radio host Dennis Prager sued Alphabet after a handful of his PragerU videos were demonetized or restricted to adult viewers for violating content policies. (Merely “demonetized” content is not itself removed and may continue to link to a content producer’s monetized offerings.) He lost that suit. Likewise, Donald Trump has claimed that Google rigs search results against him, although he has neither provided evidence that this is happening nor explained exactly what is rigged. But the overwhelming bulk of the evidence indicates that the main considerations behind moderation decisions are commercial, not ideological. In any case, if the major tech platforms are set on censoring conservatives, they are failing dismally. While Prager’s lawsuit against the company went forward, YouTube generated more than one billion views for PragerU’s videos (and more than 3.5 billion as of mid-2020). PragerU was also able to post videos attacking YouTube for censorship and explaining its own point of view on the issues.

The best way to ensure a free Internet is to have one: to allow owners of private platforms to enforce the standards they deem fit on their private electronic property. The regime created by Section 230, in the main, optimizes content regulation in a way that benefits almost everyone. Doing away with supposed “censorship” would make things worse and restrict freedom of expression.  

Disinformation and Custody of Information: The Major Issues

Even if platforms manage to moderate prudently, disinformation will still creep in. Unlike supposed “censorship” by the platforms, the amount of inaccurate and misleading information circulating online genuinely is disturbing. U.S. intelligence agencies—as well as subsequent investigations by both houses of Congress—concluded that during the 2016 election campaign Russia sought to influence the election against Hillary Clinton and in favor of Donald Trump. Ample evidence also exists that Russians associated with the Internet Research Agency (the “troll farm”), the GRU (military intelligence), and the SVR (civilian intelligence) engaged in a variety of potentially disruptive conduct, including hacking attacks on voter-registration systems. China has launched similar disinformation campaigns, most prominently a 2020 effort (taken down by Twitter) that created thousands of fake accounts to push Beijing’s side of the ongoing protests in Hong Kong. Sites like Alex Jones’s Infowars (nominally conservative) and the Palmer Report (nominally progressive) have no obvious foreign connections but mix factual information with conspiracy theories, exaggerations, and falsehoods. And some ongoing Internet memes, such as the QAnon conspiracy theory popular on the far right, have gained real influence.

The impact of all of this is hard to measure and may not always be large. The Internet Research Agency had a budget of $1.25 million a month, much of it devoted to things other than the American election, at a time when the Clinton campaign and its associated super PACs were spending about $100 million a month. Alex Jones’s broadcast audience topped out at fewer than 1.5 million, about the same as Fox’s Lego Masters reality show, and fell to less than half that after the major platforms dropped him. Formal efforts to measure the effects of these campaigns have typically found them to be modest.

Even if hard to quantify, however, disinformation can rightly be considered a problem. Small increases in the number of people taken in by falsehoods about vaccination have resulted in outbreaks of nearly eradicated diseases like measles. A few people voting differently than they otherwise would have, or staying home, on the basis of falsehoods can swing an election between political leaders who pursue diametrically opposed policies. Even if most disinformation campaigns fail, they are hardly harmless. As Judy Estrin and Sam Gill write in Washington Monthly, the problem resembles pollution: something that cannot practically be banned altogether, but is certainly harmful if too much is allowed:

What are the various digital pollutants, and at what level are they dangerous? As with environmental sciences, we must take an interdisciplinary approach, drawing not just from engineering and design, law, economics, and political science but also from fields with a deep understanding of our humanity, including sociology, anthropology, psychology, and philosophy.

Taking similarly broad considerations into account, individual privacy is also worthy of legal protection. Most American adults have shown they’re willing to publish on social media a great deal of information about their lives, some of it on topics, like sexual preferences, that were once considered shameful. This alone shows that standards of privacy do change and have relaxed. But almost everyone wants some aspects of their lives to remain outside public view. The label “privacy” really covers two sets of concerns: those about the collection and aggregation of data, and those about custody of that data.

Resistance to the mere collection of data has long been popular on the academic left, but more recently it has attracted attention from mainstream political figures like presidential candidate Andrew Yang and California Governor Gavin Newsom. Their central conceit is that it is wrong or unfair for firms to make money from consumers’ personal data, or to use such data to target advertisements and sales. This theory is absurd and impossible to implement. Nearly every business tries to build relationships with customers, and those relationships depend on remembering facts about them. If Facebook were required to pay people for data they voluntarily provide in order to share pictures of their pets, why shouldn’t pharmacies have to pay customers to store the records needed to avoid potentially dangerous drug interactions? As the San Jose Mercury News pointed out in an editorial after Newsom floated his proposal for “data dividends,” distributing all of Facebook’s revenues to its user base would result in a payment of just more than 50 cents a month to each of the two billion people with accounts around the world. And if Facebook wanted to retain its staff and operations, the payout would have to be vastly smaller. The fact that Big Tech collects data and uses it to serve consumers, as businesses in every other line of work do, is not extraordinary. Mandatory disclosures of what data is collected and how it will be used, coupled with an opportunity to opt out, are already nearly universal and required under the European Union’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA), and they do make sense. But treating the mere collection of data that people voluntarily provide as suspect is neither logical nor, for that matter, possible in an economy built on ongoing customer relationships.

While the mere collection of data that users provide voluntarily should not be considered a problem, the custody of that data certainly is a concern, and an area where many companies could do better. A list compiled by CSO Magazine in April 2020 shows that 3.5 billion accounts (many of them belonging to the same people) have been compromised in various harmful ways in the 21st century alone. One of the largest of these breaches, involving Adobe, affected 153 million accounts, equal to roughly half of America’s adult population. Nearly every economically engaged adult in the democratic world has likely had his or her data compromised.

The consequences of data breaches aren’t usually significant for individual consumers, but they certainly can be. As the Electronic Frontier Foundation has argued: “Your location data may cost a company less than a penny to buy, yet cost you your physical safety if it falls into the wrong hands.” Likewise, disclosure of data to government authorities without strong legal protections presents a grave risk to freedom. Some data losses impose costs in time rather than dollars: a compromise of a customer’s banking information, for example, might take hours or even weeks to correct as direct-withdrawal arrangements are changed, even if no funds are actually stolen.

Data collection itself is not a problem public policy can solve without rethinking the way businesses interact with customers; data custody, however, certainly deserves some sort of policy response.

Ethical Codes

To deal with these problems, Yale Law School professor Jack Balkin and my former R Street colleague Mike Godwin have drawn analogies between the obligations tech giants should assume and existing professional ethics codes. These codes may prove particularly valuable as models because they have endured for so long. Upon graduating from medical school, most doctors still commit to a code of medical ethics that draws on the one developed by Hippocrates in the fifth century BCE. Legal ethics, likewise, can trace a straight line back to Greek and Roman models. Drawing on the standards of legal ethics, Godwin suggests that three major duties should govern tech platforms’ handling of user data: care, loyalty, and confidentiality. According to Godwin, big tech platforms:

[S]hould also be duty-bound to treat [users] with care (don’t allow individuals’ data to be used in ways that harm them; don’t serve them content or ads that are false or misleading), loyalty (don’t put company interests ahead of the well-being of the individuals whose data you hold) and, perhaps most important, confidentiality. That last duty means, at a minimum: Don’t share individuals’ data with companies without their knowing, particular consent.

This would require large parts of Big Tech to serve as “information fiduciaries,” to borrow Balkin’s phrase, with a high-level obligation—preempting other laws and standards—to take trusteeship of personal information over which they retain custody. In principle, of course, this is not an objectionable or difficult concept: Nearly every part of Big Tech would say “we already do that” if asked about their willingness to ensure privacy or strive for accuracy. If enforced systematically, many of the recommendations that Godwin lists would provide reasonable solutions for the problems of disinformation and data custody.

The Comics Code

This leads to the obvious question of how to develop such a code. While the millennia-long history of the now highly specific and well-developed codes of legal and medical ethics speaks to their authority and usefulness, it is also a reason not to treat them as models in isolation. Fortunately, there is another model for developing a code: the comic book industry.

Like Internet platforms today, comic books in the early 1950s were pervasive and carried plenty of troubling “pollution” in the form of violence, racism, and sexual content. They also carried advertising and engaged in data collection and sales practices clearly inappropriate for the youthful readers they targeted. Serious, thoughtful people—both prominent members of Congress and the era’s preeminent psychiatrist, Fredric Wertham—believed they caused significant social harm. Following congressional hearings on the topic, the industry’s response was a voluntary code of ethics that initially drew heavily on Hollywood’s Motion Picture Production Code (aka the Hays Code). The Comics Code was carefully drawn and ahead of its time: for example, it forbade racial prejudice in comics at a time when children’s counting rhymes used racial language now considered obscene. Unlike other codes, including the Hays Code on which it was based, it proved possible to update. When comic book publishers realized that the total prohibition on depicting drug use made it impossible to tell anti-drug stories, and that the prohibitions on sex made even depictions of teenage dating impossible, the code was revised in 1971 and again in 1989. (A revision mechanism for a tech code, perhaps modeled on the United Nations’ Internet Governance Forum, could be a good idea.) For most of its history, the code was enforced largely because many companies refused to advertise in comics that were not code-approved, and most newsstands, pharmacies, supermarkets, and candy stores refused to stock them. Publishers could still sell comics that lacked code approval, but the market was limited to specialty shops. The code eventually fell into disuse as comic books and graphic novels began targeting adults and taking on serious literary ambitions, but while it lasted it preserved the industry, curbed the content critics found objectionable, adapted over time, and standardized practices in ways Big Tech could learn from.

Defining a Code

But what exactly should a code for Big Tech include? The major duties Godwin summarizes suggest a few elements: a definition of fiduciary duties, coupled with a clear right of “exit” for users when information custodians do not fulfill them; universal confidentiality standards; and flexible, industry-wide standards to police disinformation.

First, the duty of “loyalty”—perhaps the heart of a fiduciary standard—will likely require the most work and will need to be enforced through some sort of right of exit. Lawyers, doctors, and financial advisors already have specific codes of conduct that define their fiduciary duties, and detailed standards for the custody, protection, and use of data should certainly be part of any code for Big Tech. As the issues grow more complex, however, such standards become subjective and can produce tension when fiduciaries and clients have good-faith disagreements. This is one reason nearly all professional ethics codes allow clients to take their data and business elsewhere without undue burden.

Every major part of Big Tech has already made high-minded statements about taking confidentiality seriously. But what confidentiality actually entails is subjective. Beyond simply protecting themselves from hackers (an ethical responsibility everyone would endorse), Big Tech firms should set standards for how they sell individual data. Platforms clearly need some ability to do so if they are to function as ordinary economic players and, for example, sell themselves in toto. While specific standards will necessarily differ and should always be modifiable by contract, one universal principle could solve many problems: confidentiality standards should be clear, and the standards in force at the time data is provided should travel with the data unless explicitly modified by a contract that users can decline.
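To make the “travels with the data” idea concrete, here is a minimal sketch, in hypothetical and purely illustrative code (not any platform’s actual system), of how a record could carry the confidentiality policy its owner originally accepted, so that later transfers are checked against that policy rather than against whatever terms happen to be current:

```python
# A minimal, hypothetical sketch of the "terms travel with the data" idea:
# each record carries the confidentiality policy in force when it was collected,
# and any transfer is checked against that policy. All names are illustrative.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass(frozen=True)
class ConfidentialityPolicy:
    version: str
    allows_third_party_sale: bool
    allowed_purposes: frozenset  # e.g., {"service", "security"}


@dataclass
class UserRecord:
    user_id: str
    data: dict
    policy: ConfidentialityPolicy  # attached at collection time, not looked up later
    collected_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


def may_transfer(record: UserRecord, purpose: str, to_third_party: bool) -> bool:
    """Permit a transfer only under the policy the user originally accepted."""
    if to_third_party and not record.policy.allows_third_party_sale:
        return False
    return purpose in record.policy.allowed_purposes


# Data collected under a no-sale policy cannot later be sold, even if the
# platform's current policy is more permissive.
v1 = ConfidentialityPolicy("v1", allows_third_party_sale=False,
                           allowed_purposes=frozenset({"service", "security"}))
record = UserRecord("user-123", {"email": "user@example.com"}, policy=v1)
assert not may_transfer(record, purpose="advertising", to_third_party=True)
```

The design choice the sketch illustrates is simply that the policy is attached to the record at collection time, so a later, more permissive corporate policy cannot silently apply to older data.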

Just as anti-pollution laws evolve to deal with previously unknown pollutants, anti-disinformation and “harm” standards must evolve over time. A few already near-universal practices should apply to everyone who agrees to an ethical code: clear, even-handed labeling of material generated by state actors; action against material that sexually exploits children; and continued crackdowns on bot and troll armies. Platforms should likewise have an affirmative obligation to ensure that content targeted at users on the basis of their personal information is accurate. But some efforts will prove tricky. For example, is it “disinformation” to point out (accurately) that vaccination does very occasionally result in adverse outcomes? That information has clearly done harm by discouraging a practice vital to public health, but it is also true. Striking the right balance will require experimentation and judgment, which means Big Tech and its users will need some freedom to experiment with the tactics, standards, filters, and tools that best optimize the mix of information so that ideas are freely and usefully exchanged. In some cases, platforms might, under a code, find ways to empower users to choose fact-checkers that they themselves trust (one such arrangement is sketched below). Whatever the case, a code should err in favor of letting users find almost any otherwise legal content they affirmatively want to see, while also providing ample resources to protect children and help users check facts. In this context, the protections of Section 230 and the ability to develop varying content-moderation regimes, and even to let users choose among them, are vital tools in the fight against disinformation.
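As a purely hypothetical illustration of the user-chosen fact-checker idea mentioned above (no real platform’s API is being described, and every name below is invented), the arrangement might work something like this: the platform gathers labels from many fact-checking organizations, but each user sees only the labels from the sources he or she has opted to trust.

```python
# Hypothetical sketch of user-selected fact-checking: the platform collects
# labels from many fact-checking sources, but each user sees only labels from
# the sources he or she has chosen to trust. Names are invented for illustration.
from dataclasses import dataclass


@dataclass
class FactCheckLabel:
    source: str   # e.g., "checker-a.example.org"
    verdict: str  # e.g., "disputed", "missing context"
    note: str


def labels_for_user(post_labels: list[FactCheckLabel],
                    trusted_sources: set[str]) -> list[FactCheckLabel]:
    """Return only the labels from fact-checkers this user has opted to trust."""
    return [label for label in post_labels if label.source in trusted_sources]


# Two checkers flagged the same post; this user trusts only one of them.
labels = [
    FactCheckLabel("checker-a.example.org", "disputed", "Contradicts public health data."),
    FactCheckLabel("checker-b.example.org", "missing context", "Omits base rates."),
]
print(labels_for_user(labels, trusted_sources={"checker-a.example.org"}))
```

Under a code, the hard questions would be governance rather than mechanics: who may register as a fact-checking source, and how even-handedly its labels are applied.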

When it comes to making the code effective, peer pressure is the most obvious immediate enforcement mechanism. Companies that publicly agreed to a code would have strong competitive and reputational reasons to abide by it. The cyber-ethics code, like the Comics Code, could also be enforced by an industry-founded self-regulatory organization created for that purpose. As an incentive to uphold the code, firms that failed to follow it could be barred from any role in revisions and required to pay private fines for violations. While far from perfect, this type of self-regulation does have an effect in many industries and is the first remedy the Consumer Product Safety Commission turns to under existing law when confronted with a product that causes accidents or injuries. If an industry fell down on enforcing its code, however, the Federal Trade Commission in the United States and analogous consumer protection agencies in other countries could also act. To date, courts have been reluctant—with good reason—to read too much into what are essentially advertising claims made by social media platforms about accuracy, freedom, or privacy, particularly in light of Section 230. An ethical code, prominently proclaimed as a universal standard, would give courts and the FTC something substantive to look at. The most potent enforcers, however, might be other economic players. Just as newsstand retailers refused to sell comic books that lacked code approval and companies refused to advertise in them, major advertisers and governments could agree to do business only with Big Tech firms that abided by the new code.

Of course, these non-legislative mechanisms may not be enough. A widely adopted and advertised code would ultimately work its way into court cases, and new law would emerge from judicial readings of it. Similarly, public entities could exercise discretion in how they spend their own advertising dollars and promote their own efforts; platforms that did not comply with the code could be made ineligible for public contracts, for instance. Eventually, as with legal and medical ethics, some aspects of an ethical code for Big Tech will inevitably be incorporated into law as wisdom accumulates and clarity becomes necessary. Laws like the GDPR—far from perfect but at least far clearer than the patchwork that governs data protection in the United States—may be a first example of this happening. But norms should be allowed to evolve first. Imposing a specific statutory framework at the outset could well entrench incumbents and squelch continued innovation.

Like successful upstarts in nearly every major new industry, the Internet giants have acquired power and reach that have produced a widespread desire for the state to “do something” about them. In riven political times, indeed, this “techlash” is one of the few issues on which major leaders of the right and left agree. A lack of clear plans for action (and even of objectives for “dealing with” Big Tech) has so far prevented major new federal reforms to the Internet’s fundamental governing laws, standards, and procedures. In the absence of federal legislation, others have begun imposing rules of their own: the European Union’s GDPR and the California Consumer Privacy Act (CCPA) are examples of the patchwork of laws coming to govern the Internet. Major platforms have also begun creating structures—Facebook’s independent “supreme court” for content moderation most prominent among them—that amount to privately run, government-like mechanisms. One way or another, the status quo will not hold.

While some issues associated with “Big Tech” (particularly its bigness) cannot properly be considered tech issues at all—and addressing them would have implications far beyond tech—legitimate concerns do exist around disinformation and data custody. Public policy will inevitably have to confront them.

As this article has argued, the best way to deal with these problems is to draw on the wisdom accumulated in professional ethics codes and in the content code the comic book industry drew up in the 1950s. That code preserved the industry’s economic vitality and solved what were then perceived as significant social problems. While creating and writing such a code for Big Tech will necessarily be complex, it could do a tremendous amount to confront the industry’s legitimate problems.

Because the Internet began as a government project and has grown and advanced with public support, there has never been a libertarian golden age of a wholly unregulated Internet. Big Tech arose within one regulatory paradigm, and, in its maturity, certain aspects of that paradigm have become unstable and produced genuine externalities. The task now facing both policymakers and Big Tech itself is to confront these problems without destroying a major economic engine or restricting the freedoms that have made the Internet so vibrant.

Considered and constructed properly, a code of professional ethics can solve real problems, protect free expression, and harmonize with a regulatory framework grounded in CDA Section 230 that, despite its flaws, has created a vibrant and important industry.
