If Mark Zuckerberg and Sheryl Sandberg are hunkered down this week strategizing how to handle a devastating cascade of bad press, you can understand why. Wednesday’s in-depth New York Times story documents how Facebook has consistently taken a “delay, deny and deflect” approach to addressing its critics—rather than, you know, fixing Facebook’s real problems. A raft of other negative stories has followed, most picking up on selected details from the Times story; the most absurd of those details include the attempt to blame criticism of Facebook on George Soros and Zuckerberg’s reported insistence, after Apple CEO Tim Cook criticized social media companies, that top execs abandon their iPhones.

But instead of trying to spin the company’s way out of this week’s new PR problem, Zuck and Sandberg should pivot right now and focus on creating a new public policy and legal framework for the company that would comprehensively restore public trust. What does that new framework look like? A series of journal articles by Yale law professor Jack Balkin, culminating in a forthcoming article for the Buffalo Law Review called “The First Amendment in the Second Gilded Age,” suggests some important steps we can take to reframe our relationships with both companies and government when it comes to today’s internet. The basic idea is to create a new legal category—he calls it “information fiduciaries”—that is a better fit for the roles Facebook and other companies play as facilitators of our free-speech rights and trustees of our personal data.

Balkin’s approach has the advantage of being holistic rather than piecemeal. In that sense, it’s better than the reactive steps Facebook has already been taking, such as crowdsourcing the identification of fake news, deplatforming particularly noxious speakers like Alex Jones, or renewing promises to monitor how its business partners handle user data. Properly implemented, Balkin’s framework would strengthen our free speech on the platforms while also strengthening protections for our privacy and autonomy.

Balkin, who holds the Knight professorship of constitutional law and the First Amendment at Yale, is one of the legal academics who recognized early on how the internet was going to change public discourse in this century. So it’s no surprise that he is the founder and director of the Yale Information Society Project, an interdisciplinary program at the law school that covers a full range of issues at the intersection of law, technology, and society. (Full disclosure: I was a fellow at Yale ISP from late 2005 until early 2007, years before Balkin developed the legal approach I’m discussing here.)

In “The First Amendment in the Second Gilded Age,” Balkin writes that the “grand bargain of twenty-first century media” mostly doesn’t directly raise First Amendment issues, because the First Amendment constrains speech regulation by federal and state governments, not by private companies. Under the terms of that “grand bargain,” Balkin writes,

Privately-owned infrastructure companies will provide you with many different valuable services. They will provide you with a search engine that is nothing short of miraculous—that allows you to find anything you want virtually instantaneously. They will provide you with social media that allow you to publish and express almost anything your heart could desire. Indeed, they will encourage you to publish, and to communicate with others, repeatedly and incessantly. End users get all of these services, all of this stuff—and they get it all for free. And in return, media owners get to collect their data, analyze it, and use it to predict, control, and nudge what end-users do.

That last bit, of course, is the problem. On the one hand, the private companies obtain our data freely (that is, we consent to give it to them, even though we may not always, or often, think through the consequences of this choice). In return, we get cool free services that let us stay in touch with our high school friends and even conduct research or political organizing. On the other hand, the companies need to make money in order to offer us these free services, and the easiest way to do that is to sell advertising. Ideally, that advertising is for stuff we are likely to be interested in, which is the central reason for the companies to gather data about our interests from how we use these services.

The need for the companies to make money in order to offer us free services also creates a potential hazard: Algorithmically enhanced advertising is used, critics like Tristan Harris insist, to deprive us of agency. In effect, the critics say, online advertising, together with the addictive aspects of social media, robs us of free will. (This is an argument about advertising that predates search engines and social media. As I’ve written in Lawfare, its major wellspring is a 1950s book by Vance Packard, The Hidden Persuaders.)

Unsurprisingly, one leading narrative in the debates about how social media platforms and other big internet and technology companies handle our personal information is that maybe they just shouldn’t gather it at all. It’s a narrative that touches not just social media platforms and search engines but also the makers of devices like smartphones, which, because we constantly carry them with us, become repositories of where we’ve been, whom we know, and who we are. That view has the advantage of simplicity—it’s easy to understand and easy to state. And it informs some suggestions for reform. For example, early Facebook investor Roger McNamee has argued that Facebook and other services should simply cease gathering user data and move to a strict subscription model.

But ending the data gathering altogether (and maybe ending advertising altogether) would have consequences. If the New York Times or the Washington Post quit using targeted online advertising—or cut advertising entirely—that would indeed prevent ads from manipulating us, but it would also increase the papers’ direct cost to readers, maybe doubling it, maybe raising it much more. The same is true for social networking and search companies. As Balkin points out, “A subscription model has disadvantages for social media companies, because it might produce a far smaller user base, and therefore less interesting and intriguing content that would keep end users coming back for more.”

Of course, we know already that data gathering definitely can be limited by law, as with the European Union’s General Data Protection Regulation, which went into effect in May. But the GDPR’s prescriptions for handling personal data are built around the presumption that any gathering of personal data is bad for users and citizens. That’s not the only model for protecting users, and it might not be the best one.

Balkin’s alternative, the “information fiduciaries” model, says the companies that gather our data in order to subsidize services for users could adopt—or Congress or the courts could impose on them—a legal and professional relationship with users as, in effect, trustees of our personal data. Just as doctors and lawyers gather information about us in order to serve us better, the companies might be constrained by the creation of similar professional relationships based on the services they offer and the individual users they are serving. As “fiduciaries,” Balkin argues, the companies would have “three basic duties: a duty of care, a duty of confidentiality, and a duty of loyalty.” These are the same duties that doctors and lawyers have with regard to their clients. Care and confidentiality mean the companies holding your data need to keep it secure and not use it negligently in ways that might hurt you, even accidentally. A duty of loyalty—again, the same duty that doctors and lawyers are bound by—means that the company you trust with your data can’t use it in ways that benefit the company while hurting your interests, as when Facebook contracted to share user data with Cambridge Analytica.

Here’s what Balkin’s model would mean for us as users. We might still agree, in return for access to useful free services like Facebook and Twitter, to abide by the companies’ terms-of-service agreements—the agreements that allow the companies to use the personal data we give them in limited ways (for example, to serve us ads or other content we’re interested in). But if the terms-of-service agreements are a contract between the users and the companies, that contract doesn’t define the whole set of duties the companies owe to us. Apart from the agreements, the companies would be bound to be careful with our data, to keep it confidential, and to never use it (or allow it to be used) against us. (A company like Facebook, which has also gathered information about individuals who don’t use the service, might even be bound to use the data it’s gathered according to these ethical and professional obligations.)

At least with regard to subscribers, these three duties—care, confidentiality, and loyalty—would take precedence over contracts or terms-of-service agreements. Per Balkin, “This fiduciary duty arises out of a contractual relationship—the terms of service or end-user license agreement—that digital companies require of their end-users. But duties of an information fiduciary are not limited to the specific terms of Facebook’s privacy policy—a complicated contract that few people have actually read.” Balkin points out that if Facebook’s duties were only the specific obligations spelled out in the terms-of-service agreement, the company “could make those duties vanish simply by changing its privacy policy, to take effect the next time an end user logs in.” A better approach, he says, is to recognize that “these fiduciary obligations exist on top of the contractual rights of the parties.” In other words, the agreements wouldn’t, and couldn’t, allow you to waive your right to expect the services to honor these duties.

The concept of companies as “fiduciaries”—trustees of our personal information—underscores the limits on what the companies should be able to do. But it also could empower these successful, well-capitalized enterprises to be powerful, effective advocates of protecting our personal privacy.

Companies might not want to take this role on—but then again, they might. Think how much stronger Apple’s arguments in the San Bernardino, California, shooter iPhone litigation might have been if Apple could have said meaningfully that it has standing as a trustee of its users’ privacy interests to argue against any obligation to crack iPhone security. At the very least, this would enable Apple to counter any government argument that its opposition to cracking iPhone security is merely a “deliberate marketing decision to engineer its products so the government can’t search them, even with a warrant.”

How do we get to Balkin’s fiduciary framework? One way would be a federal statute with the right incentives built in (similar to the Communications Decency Act’s Section 230, which conditions the immunity of internet services on whether they originated the content). Another might be professional associations working with state governments (analogous to state bar associations and medical associations). Or it could be a combination of both. However it’s implemented, the framework can rest on the principle that if doctors and lawyers are legally and ethically required to do no harm to patients and clients, it isn’t too much to ask that the companies—professional experts in how our digital data can be used—be bound by analogous requirements. (New data-gathering companies, like new doctors and lawyers, might consider buying malpractice insurance.) Plus, this framework need not be limited to internet companies—it could include Apple, cable companies, Netflix, cellphone providers, and so on.

This leads to one obvious legal benefit: It could diminish or eliminate the third-party doctrine, which allows the government to easily seize or subpoena your data if it’s held by a service provider. Carpenter v. United States, decided this summer, imposed Fourth Amendment limits on when the government can seize your cellphone location data and called into question the whole idea that once some company is holding your information, you have no “reasonable expectation of privacy” in it and therefore no Fourth Amendment privacy rights. Justices in both the majority and the dissents recognized that, in the digital age, the third-party doctrine is a fig leaf covering massive government prerogatives to capture our personal data—but they disagreed, of course, about what to do about that.

Balkin’s “information fiduciary” model could turn the third-party doctrine, already weakened by the Carpenter decision, into an irrelevant legal relic of a simpler analog era. With a statutory and/or professional framework of fiduciary obligations for tech companies, we could argue that even when the companies hold a ton of data about us, we’re protected not only by the Fourth Amendment but also by the professional codes of ethical conduct that bind the companies with fiduciary duties to us. (Our courts disfavor giving the government access to lawyer-held client information or doctor-held medical records. They ought to disfavor giving it access to Facebook-held user data as well.)

Another possible benefit is that Apple, Google, Facebook, Twitter, and others might have standing as fiduciaries or trustees to defend our fundamental rights of speech and privacy. After all, as we know from a 60-year-old Supreme Court case, NAACP v. Alabama, our ability to speak collectively may depend on our privacy and anonymity. In that case, the Alabama state government tried to compel the organization to disclose lists of its members (including addresses, phone numbers, and the like). The NAACP resisted, and the Supreme Court concluded that (a) the NAACP had standing to assert fundamental speech and associational rights on behalf of its anonymous members, since compelled disclosure of its membership lists would have real-world consequences for those members, and (b) those fundamental rights are deeply grounded in the First Amendment. The case is obviously relevant in this century because our ability to speak freely online and our privacy online are so intimately related. Properly understood, the NAACP was a “platform” for political speech and action in 1958, just as Facebook and Twitter are today. The companies might have standing, just as the NAACP did, to assert that the free-speech rights of their subscribers depend on their fiduciary obligation to keep user data confidential.

The fact is, whatever our reservations about big tech companies, or internet advertising, or privacy, we use the advertising-subsidized platforms to engage in more frequent, and more public, speech than preceding generations of Americans could ever have hoped to. If we’re going to let the companies have this kind of intimate contact with our information—which often serves us well, as when we’re shown advertising that genuinely informs and helps us, or when our documents and contacts are synced among linked devices—can’t we also empower them to be advocates for our privacy, just as the NAACP has been for more than half a century? If the tech companies have fiduciary relationships—not just contractual relationships—with their users, the companies arguably have stronger legal standing to resist government demands to disclose user data.

Taken together, this reframing suggests that maybe it’s OK after all for Facebook or Google to use our info to serve us ads based on our likely interests, or to suggest new potential members of our social networks—but not to manipulate us, hand our info over to untrusted and unsupervised third parties like Cambridge Analytica, gather info on us even if we’re not users of their services, or be reckless about serving fake news to us. And Balkin’s “information fiduciaries” framework may also give the companies an incentive to actively engage, as Apple and others sometimes have, as advocates for our privacy and free speech—provided we give them legal standing to do so.

At the very least, Balkin’s framework addresses our current concerns over Facebook’s and other companies’ gathering and use of our personal data. But I think Balkin offers more than that. His articles have charted a possible path back to the common ground we all shared in Reno v. ACLU, back in 1997, a Supreme Court decision that tech companies and users came together to celebrate. The Reno case didn’t ignore the likelihood that the internet and other digital technologies would be disruptive and disturbing as we lived through the technologies’ growing pains. The court’s optimistic recognition of First Amendment protections for expression on the internet signified hope that we all would survive the bumpy ride of the internet’s adolescence on the way to a maturity based on mutual trust among stakeholders. And if some companies’ shortsighted behavior regarding our personal data has eroded that trust, Balkin has shown us one path to enable them, and us, to restore it.
