I’ve written two installments in this series (part 1 is here and part 2 is here). And while I could probably turn itemizing complaints about social-media companies into a perpetual gig somewhere — because there’s always going to be new material — I think it’s best to list only a few more for now. After that, we ought to step back and weigh what reforms or other social responses we really need. The first six classes of complaints are detailed in Parts 1 and 2, so we begin here in Part 3 with Complaint Number 7.

(7) Social media are bad for us because they’re so addictive to us that they add up to a kind of deliberate mind control.

As a source of that generalization we can do no better than to begin with Tristan Harris’s July 28, 2017 TED talk, titled “How a handful of tech companies control billions of minds every day.”

Harris left Google in 2015 to start a nonprofit organization called Time Well Spent, an effort that has since been renamed the Center for Humane Technology (http://www.timewellspent.io now resolves to https://humanetech.com). Harris says his new effort — which also has the support of former Mozilla interface designer Aza Raskin and early Facebook funder Roger McNamee — represents a social movement aimed at making us more aware of the ways in which technology, including social media, other internet offerings, and our personal devices, is continually designed and redesigned to be more addictive.

Yes, there’s that notion of addictiveness again — we looked in Part 2 at claims that smartphones are addictive and talked about how to address that problem. But regarding the “mind control” variation of this criticism, it’s worth examining Harris’s specific claims and arguments to see how they compare to other complaints about social media and big tech generally. In that TED talk, Harris begins with the observation that social-media notifications on your smart devices may lead you to have thoughts you otherwise wouldn’t think:

“If you see a notification it schedules you to have thoughts that maybe you didn’t intend to have. If you swipe over that notification, it schedules you into spending a little bit of time getting sucked into something that maybe you didn’t intend to get sucked into.”

But, as I’ve suggested earlier in this series, this feature of continually tweaking content to attract your attention isn’t unique to internet content or to our digital devices. This is something every communications company has always done — it’s why ratings services for traditional broadcast radio and TV exist. Market research, together with attempts to deploy that research and to persuade or manipulate audiences, has been at the heart of the advertising industry for far longer than the internet has existed, as Vance Packard’s 1957 book THE HIDDEN PERSUADERS suggested decades ago.

One major theme of Packard’s THE HIDDEN PERSUADERS is that advertisers came to rely less on consumer surveys (derisively labeled “nose-counting”) and more on “motivational research” — often abbreviated by 1950s practitioners as “MR” — to look past what consumers say they want, study how they actually behave, and then gear advertising content to shape or leverage consumers’ unconscious desires. Packard’s narratives in THE HIDDEN PERSUADERS are driven by revelations of the disturbing and even scandalous agendas of MR entrepreneurs and the advertising companies that hire them. Even so, Packard is careful, in the book’s penultimate chapter, to address what he calls “the question of validity” — that is, the question of whether the hidden persuaders’ strategies and tactics for manipulating consumers and voters are actually scientifically grounded. Quite properly, Packard acknowledges that the claims of the MR companies may have been oversold, or may have been adopted by companies that simply lacked any other strategy for figuring out how to reach and engage consumers.

In spite of Packard’s scrupulous efforts to make sure that no claims of advertising’s superpowers to sway our thinking are accepted uncritically, our culture has nevertheless accepted, at least provisionally, the idea that advertising (and its political cousin, propaganda) affects human beings at pre-rational levels. It is this acceptance of the idea that content somehow takes us over that Tristan Harris invokes consistently in his writings and presentations about how social media, the Facebook newsfeed, and internet advertising work on us.

Harris prefers to describe these online phenomena as affecting us in deterministic ways:

“Now, if this is making you feel a little bit of outrage, notice that that thought just comes over you. Outrage is a really good way also of getting your attention. Because we don’t choose outrage — it happens to us.”

“The race for attention [is] the race to the bottom of the brainstem.”

Nothing Harris says about the Facebook newsfeed would have seemed foreign to a Madison Avenue advertising executive in, say, 1957. (Vance Packard includes commercial advertising as well as political advertising as centerpieces of what he calls “the large-scale efforts being made, often with impressive success, to channel our unthinking habits, our purchasing decisions, and our thought processes by the use of insights gleaned from psychiatry and the social sciences.”) Harris describes Facebook and other social media in ways that echo time-honored criticisms of advertising and of mass media generally.

But remember that what Harris says about internet advertising or Facebook notifications or the Facebook news feed is true of all communications. It is the very nature of communications among human beings that they give us thoughts we would not otherwise have. It is the very nature of hearing things or reading things or watching things that we can’t unhear them, or unread them, or unwatch them. This is not something uniquely terrible about internet services. Instead it is something inherent in language and art and all communications. (You can find a good working definition of “communications” in Article 19 of the United Nations’ Universal Declaration of Human Rights, which states that individuals have the right “to seek, receive and impart information.”) That some people study and attempt to perfect the effectiveness of internet offerings — advertising or Facebook content or anything else — is not proof that they’re up to no good. (They arguably are exercising their human rights!) Similarly, the fact that writers and editors, including me, study how to make words stick in your brain is not an assault on your agency.

It should give us pause that so many complaints about Facebook, about social media generally, about internet information services, and about digital devices actively (if perhaps unconsciously) echo complaints that have been made about every new mass medium (or mass-media product). What’s lacking in modern efforts to criticize social media in particular — especially when it comes to big questions like whether social media are damaging to democracy — is any real effort by most critics to examine their own hypotheses skeptically, seeking falsification (which philosopher Karl Popper rightly identified as a better test of a theory’s robustness) rather than verification.

As for all the addictive harms that are caused by combining Facebook and Twitter and Instagram and other internet services with smartphones, isn’t it worth asking critics whether they’ve considered turning notifications off for the social-media apps?

(8) Social media are bad for us because they get their money from advertising, and advertising — especially effective advertising — is inherently bad for us.

Harris’s co-conspirator Roger McNamee, whose authority to make pronouncements on what Facebook and other services are doing wrong derives primarily from his having gotten richer from them, is blunter in his assessment of Facebook as a public-health menace:

“Relative to FB, the combination of an advertising model with 2.1 billion personalized Truman Shows on the ubiquitous smartphone is wildly more engaging than any previous platform … and the ads have unprecedented effectiveness.”

There’s a lot to make fun of here — the presumption that 2.1 billion Facebook users are just creating “personalized Truman Shows,” for example. Only someone who fancies himself part of an elite that’s immune to what Harris calls “persuasion” would presume to draw that conclusion about the hoi polloi. But let me focus instead on the second part — the bit about the ads with “unprecedented effectiveness.” Here the idea is, obviously, that advertising may be better for us when it’s less effective.

Let’s allow for a moment that maybe that claim is true! Even if that’s so, advertising has played a central role in Western commerce for at least a couple of centuries, and in world commerce for at least a century, and the idea that we need to make advertising less effective is, I think fairly clearly, a criticism of capitalism generally. Now, capitalism may very well deserve that sort of criticism, but it seems like an odd critique coming from someone who’s already profited immensely from that capitalism.

And it also seems odd that it’s focused particularly on social media when, as we have the helpful example of THE HIDDEN PERSUADERS to remind us, we’ve been theoretically aware of the manipulations of advertising for all of this century and at least half of the previous one. If you’re going to go after commercialism and capitalism and advertising, you need to go big — you can’t just say that advertising suddenly became a threat to us because it’s more clearly targeted to us based on our actual interests. (Arguably that’s a feature rather than a bug.)

In responding to these criticisms, McNamee says “I have no interest in telling people how to live or what products to use.” (I think the meat of his and Harris’s criticisms suggests otherwise.) He explains his concerns this way:

“My focus is on two things: protecting the innocent (e.g., children) from technology that harms their emotion development and protecting democracy from interference. I do not believe that tech companies should have the right to undermine public health and democracy in the pursuit of profits.”

As is so often the case with entrepreneurial moral panics, the issue ultimately devolves to “protecting the innocent” — some of whom surely are children but some other proportion of whom constitute the rest of us. In an earlier part of his exploration of these issues on the venerable online conferencing system The WELL, McNamee makes clear, in fact, that he really is talking about the rest of us (adults as well as children):

“Facebook has 2.1 billion Truman Shows … each person lives in a bubble tuned to their emotions … and FB pushes emotional buttons as needed. Once it identifies an issue that provokes your emotions, it works to get you into groups of like-minded people. Such filter bubbles intensify pre-existing beliefs, making them more rigid and extreme. In many cases, FB helps people get to a state where they are resistant to ideas that conflict with the pre-existing ones, even if the new ideas are demonstrably true.”

These generalizations wouldn’t need much editing to fit 20th-century criticisms of TV or advertising or comic books or 19th-century criticisms of dime novels or 17th-century criticisms of the theater. What’s left unanswered is the question of why this new mass medium is going to doom us when none of the other ones managed to do it.

(9) Social media need to be reformed so they aren’t trying to make us do anything or get anything out of us.

It’s possible we ultimately may reach some consensus on how social media and big internet platforms generally need to be reformed. But it’s important to look closely at each reform proposal to make sure we understand what we’re asking for and also that we’re clear on what the reforms might take away from us. Once Harris’s TED talk gets past the let-me-scare-you-about-Facebook phase, it gets better — Harris has a program for reform in mind. Specifically, he calls for “three radical changes to our society,” which I will paraphrase and summarize here.

First, Harris says, “we need to acknowledge that we are persuadable.” Here, unfortunately, he elides the distinction between being persuaded (which involves evaluation and crediting of arguments or points of view) and being influenced or manipulated (which may happen at an unconscious level). (In fairness, Vance Packard’s THE HIDDEN PERSUADERS is guilty of the same elision.) But this first proposition isn’t radical at all — even if we’re sticks-in-the-mud, we normally believe we are persuadable. It may be harder to believe that we are unconsciously swayed by how social media interact with us, but I don’t think it’s exactly a radical leap. We can take it as a given, I think, that internet advertising and Facebook’s and Google’s algorithms try to influence us in various ways, and that they sometimes succeed. The next question then becomes whether this influence is necessarily pernicious, but Harris passes quickly over this question, assuming the answer is yes.

Second, Harris argues, we need new models and accountability systems that guarantee transparency about the ways in which our internet services and digital devices try to influence us. Here there’s very little to argue with. Transparency about user-experience design that makes us more self-aware is all to the good. So that doesn’t seem like a particularly radical goal either.

It’s in Harris’s third proposal — “We need a design renaissance” — that you actually do find something radical. As Harris explains it, we need to redesign our interactions with services and devices so that we’re never persuaded to do something that we may not initially want to do. He states, baldly, that “the only form of ethical persuasion that exists is when the goals of the persuader are aligned with the goals of the persuadee.” This is a fascinating proposition that, so far as I know, is not particularly well-grounded in fact or in the history of rhetoric or in the history of ethics. It seems clear that sometimes it’s necessary to persuade people of ideas that they may be predisposed not to believe, and that, in fact, they may be more comfortable not believing.

Given that fact, it seems odd that anyone worried about whether Facebook’s algorithms lead to “filter bubbles” would call for (or design) a system built around the idea of never persuading anyone whose goals aren’t already aligned with ours. Arguably, such a social-media platform might be more prone to filter bubbles rather than less so. One doesn’t get the sense, reviewing Harris’s presentations or the public writings and statements of allies like Roger McNamee, either that they’ve compared current internet communications with previous revolutions driven by new mass-communications platforms, or that they’ve analyzed their theories in light of the centuries of philosophical inquiry regarding human autonomy, agency, and ethics.

Moving past Harris’s TED talk, we next must consider McNamee’s recent suggestion that Facebook move from an advertising-supported model to a for-pay model. In a February 21 Washington Post op-ed, McNamee wrote the following:

“The indictments brought by special counsel Robert S. Mueller III against 13 individuals and three organizations accused of interfering with the U.S. election offer perhaps the most powerful evidence yet that Facebook and its Instagram subsidiary are harming public health and democracy. The best option for the company — and for democracy — is for Facebook to change its business model from one based on advertising to a subscription service.”

In a nutshell, the idea here is that the incentives of advertisers, who want to compete for your attention, will necessarily skew how even the most well-meaning version of advertising-supported Facebook interacts with you, and not for the better. So the fix, he argues, is for Facebook to get rid of advertising altogether. “Facebook’s advertising business model is hugely profitable,” he writes, “but the incentives are perverse.”

It’s hard to escape the conclusion that McNamee believes either (a) advertising is inherently bad, or (b) advertising made more effective by automated internet platforms is particularly bad. Or both. And maybe advertising is, in fact, bad for us. (That’s certainly a theme of Vance Packard’s THE HIDDEN PERSUADERS, as well as of more recent work such as Tim Wu’s 2016 book THE ATTENTION MERCHANTS.) But it’s also hard to escape the conclusion that McNamee, troubled by Brexit and by President Trump’s election, wants to kick the economic legs out from under Facebook’s (and, incidentally, Google’s and Bing’s and Yahoo’s) success. Algorithm-driven serving of ads is bad for you! It creates perverse incentives! And so on.

It’s true, of course, that some advertising algorithms have created perverse incentives (so that Candidate Trump’s provocative ads were seen as more “engaging” and therefore were sold more cheaply — or, alternatively, more expensively — than Candidate Clinton’s). I think the criticism of that particular algorithmic approach to pricing advertising is valid. But there are other ways to design algorithmic ad service, and it seems to me that the companies that have been subject to the criticisms are being responsive to them, even in the absence of regulation. This, I think, is the proper way to interpret Mark Zuckerberg’s newfound reflection (and maybe contrition) over Facebook’s previous approach to its users’ experience, and his resolve — honoring without mentioning Tristan Harris’s longstanding critique — that “[o]ne of our big focus areas for 2018 is making sure the time we all spend on Facebook is time well spent.”
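To see how that kind of perverse incentive can arise, consider a minimal sketch of an engagement-weighted ad auction. This is a toy illustration under my own assumptions, not Facebook’s actual system, and the ads, bids, and engagement numbers are invented: when placements are ranked by bid multiplied by predicted engagement, a more provocative ad can win the slot with a much lower bid, which is another way of saying it is sold more cheaply.

```python
# Toy sketch (not any real platform's auction): rank ads by bid times
# predicted engagement. A highly "engaging" ad can then win placement
# with a lower bid, i.e., provocative content is effectively cheaper.
from dataclasses import dataclass
from typing import List


@dataclass
class Ad:
    name: str
    bid: float                   # dollars offered per impression
    predicted_engagement: float  # platform's estimate of clicks/reactions

    @property
    def rank_score(self) -> float:
        # Engagement-weighted score used to pick the winner.
        return self.bid * self.predicted_engagement


def winning_ad(ads: List[Ad]) -> Ad:
    """Return the ad with the highest engagement-weighted score."""
    return max(ads, key=lambda ad: ad.rank_score)


ads = [
    Ad("calm policy explainer", bid=1.00, predicted_engagement=0.02),
    Ad("provocative outrage bait", bid=0.40, predicted_engagement=0.08),
]
print(winning_ad(ads).name)  # the lower-bidding, more provocative ad wins
```

The perverse incentive lives entirely in what the score rewards; changing that weighting is one way a platform can respond without abandoning advertising altogether.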

Some Alternative Suggestions for Reform and/or Investigation

It’s not too difficult, upon reflection, to wonder whether the problem of “information cocoons” or “filter bubbles” is really as terrible as some critics have maintained. If hyper-addictive filter bubbles have historically unprecedented power to overcome our free will, surely they would have this effect even on the most assertive, independent-thinking, strong-minded individuals — like Tristan Harris or Roger McNamee. Even six-sigma individualists might not escape! But the evidence that this is, in fact, the case is less than overwhelming. What seems more likely (especially in the United States and in the EU) is that people who are dismayed by the outcome of the Brexit referendum or the U.S. election are trying to find a Grand Unifying Theory to explain why things didn’t work out the way they’d expected. And social media are new, and they seem to have been used by mischievous actors who want to skew political processes, so it follows (the reasoning goes) that the problem is rooted in technology generally, or in social media, or in smartphones in particular.

But nothing I write here should be taken as arguing that social media definitely aren’t causing or magnifying harms. I can’t claim to know for certain. And it may well be the case, in fact, that some large subset of human beings create “filter bubbles” for themselves regardless of what media technologies they’re using. That’s not a good thing, and it’s certainly worth figuring out how to fix that problem if it’s happening, but treating it as a phenomenon specific to social media focuses on a symptom of the human condition rather than on a disease grounded in technology.

In this context, then, the question is, what’s the fix? There are some good suggestions for short-term fixes, such as the platforms’ adopting transparency measures regarding political ads. That’s an idea worth exploring. Earlier in this series I’ve written about other ideas as well (e.g., using grayscale on our iPhones).

There are, of course, more general reforms that aren’t specific to any particular platform. To start with, we certainly need to address more fundamental problems — meta-platform problems, if you will — of democratic politics, such as teaching critical thinking. We actually do know how to teach critical thinking — thanks to the ancient Greeks we’ve got a few thousand years of work done already on that project — but we’ve lacked the social will to teach it universally. It seems to me that this is the only way by which a cranky individualist minority that’s not easily manipulated by social media, or by traditional media, can become the majority. Approaching all media (including radio, TV, newspapers, and other traditional media — not just internet media, or social media) with appropriate skepticism has to be part of any reform policy that will lead to lasting results.

It’s easy, however, to believe that education — even the rigorous kind of education that includes both traditional critical-thinking skills and awareness of the techniques that may be used in swaying our opinions — will not be enough. One may reasonably believe that education can never be enough, or that, even when education is sufficient to change behavior (consider the education campaigns that reduced smoking or led to increased use of seatbelts), education all by itself simply takes too long. So, in addition to education reforms, there probably are more specific reforms — or at least a consensus as to best practices — that Facebook, other platforms, advertisers, government, and citizens ought to consider. (It seems likely that, to the extent private companies don’t strongly embrace public-spirited best-practices reforms, governments will be willing to impose such reforms in the absence of self-policing.)

One of the major issues that deserve more study is the control and aggregation of user information by social-media platforms and search services. It’s indisputable that online platforms have potentiated a major advance in market research — it’s trivially easy nowadays for the platforms to aggregate data as to which ads are effective (e.g., by inspiring users to click through to the advertisers’ websites). Surely we should be able to opt out, right?
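As a sense of how low the technical bar is, here is a minimal sketch of the kind of aggregation involved, using an invented event log and invented field names rather than any real platform’s pipeline: counting impressions and click-throughs per ad is a few lines of code once the events are being recorded.

```python
# Toy sketch with an invented event log; real platforms do this at vastly
# larger scale, but the aggregation itself is this simple.
from collections import Counter

events = [  # (ad_id, event_type) pairs as a platform might log them
    ("ad_A", "impression"), ("ad_A", "click"),
    ("ad_A", "impression"),
    ("ad_B", "impression"), ("ad_B", "impression"),
    ("ad_B", "impression"), ("ad_B", "click"),
]

impressions = Counter(ad for ad, kind in events if kind == "impression")
clicks = Counter(ad for ad, kind in events if kind == "click")

for ad in sorted(impressions):
    ctr = clicks[ad] / impressions[ad]
    print(f"{ad}: {impressions[ad]} impressions, {ctr:.0%} click-through rate")
```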

But there’s an unsettled public-policy question about what opting out of Facebook means or could mean. In his testimony earlier this year at Senate and House hearings on Facebook, Mark Zuckerberg consistently stressed that individual users have a high degree of control over the data (pictures, words, videos, and so on) that they’ve contributed to Facebook, and that users can choose to remove the data they’ve contributed. Recent updates to Facebook’s privacy policy seem to underscore users’ rights in this regard.

It seems clear that Facebook is committing itself at least to what I call Level 1 Privacy: you can erase your contributions from Facebook altogether and “disappear,” at least when it comes to information you have personally contributed to the platform. But does that also mean that other people who’ve shared my stuff can no longer share it (in effect, allowing me to punch holes in other people’s sharing of my stuff when I depart)?

If Level 1 Privacy relates to the information (text, pictures, video, and so on) that I’ve posted, that’s not the end of the inquiry. There’s also what I have called Level 2 Privacy, centering on what Facebook knows about me, or can infer from my having been on the service, even after I’ve gone. Facebook has a proprietary interest in drawing inferences from how we interact with its service and in using those inferences to inform what content (including but not limited to ads) Facebook serves to us. That’s Facebook’s data, not mine, because FB generated it, not me. If I leave Facebook, surely Facebook retains some data about me based on my interactions on the platform. (We also know, in the aftermath of Zuckerberg’s testimony before Congress, that Facebook manages to collect data about people who themselves are not users of the service.)

And then there’s Level 3 Privacy, which is the question of what Facebook can and should do with this inferential data that it has generated. Should Facebook share it with third parties? What about sharing it with governments? If I depart and leave a resulting hole in Facebook content, are there still ways to connect the dots so that not just Facebook itself, but also third-party actors, including governments, can draw reliable inferences about the now-absent me? In the United States, there arguably may be Fourth Amendment issues involved, as I’ve pointed out in a different context elsewhere. We may reasonably conclude that there should be limits on how such data can be used and on what inferences can be drawn. This is a public-policy discussion that needs to happen sooner rather than later.
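To make the three levels concrete, here is a minimal sketch in code, using entirely hypothetical field names rather than anything Facebook actually exposes: Level 1 is content the user contributed and can erase, Level 2 is what the platform has inferred and treats as its own data, and Level 3 is the policy question of who else may see those inferences.

```python
# Illustrative only: hypothetical field names, not Facebook's data model.
from dataclasses import dataclass, field
from typing import List


@dataclass
class UserRecord:
    # Level 1: content the user contributed and can erase on departure.
    contributed_posts: List[str] = field(default_factory=list)
    # Level 2: inferences the platform generated from the user's behavior;
    # the platform treats these as its own data, and they may persist.
    inferred_interests: List[str] = field(default_factory=list)
    # Level 3: the policy question of who else may receive the inferences,
    # expressed here as simple flags.
    share_inferences_with_advertisers: bool = True
    share_inferences_with_governments: bool = False

    def delete_contributions(self) -> None:
        """Level 1 erasure: removes what the user posted, not what was inferred."""
        self.contributed_posts.clear()


record = UserRecord(
    contributed_posts=["vacation photo", "group post"],
    inferred_interests=["travel", "local politics"],
)
record.delete_contributions()
print(record.contributed_posts)   # [] -- the user's own content is gone
print(record.inferred_interests)  # the platform's inferences remain
```

Framed this way, the open public-policy questions are mostly about the defaults and limits for the Level 2 and Level 3 pieces, which a user-facing delete control may not reach.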

Apart from privacy and personal-data concerns, we ought to consider what we really think about targeted advertising. If the historical criticism of targeted advertising, “motivational research,” and the like has been that the ads are pushing us, then the criticism of internet advertising seems to be that internet-based ads are pulling us or even seducing us, based on what can be inferred about our inclinations and preferences. Here I think the immediate task has to be to assess whether the claims made by marketers and advertisers regarding the manipulative effects ads have on us are scientifically rigorous and testable. If the claims stand up to testing, then we have some hard public-policy questions to ask about whether and how advertising should be regulated. But if they aren’t — if, in fact, our individual intuitions are correct that we retain freedom and autonomy even in the face of internet advertising and all the data that can be gathered about us — then we need to assert that freedom and autonomy and acknowledge that, just maybe, there’s nothing categorically oppressive about being invited to engage in commercial transactions or urged to vote for a particular candidate.

Both the privacy questions and the advertising questions are big, complex issues that don’t reduce easily to traditional privacy talk. If in fact we need to tackle these questions proactively, I think we must begin by defining the problems in ways that all of us (or at least most of us) agree on. Singling out Facebook for a single-root-cause theory of what’s wrong with our culture today may appeal to us as human beings — we all like straightforward storylines — but that doesn’t mean it’s correct. Other internet services harvest our data too. And non-internet companies have done so (albeit in more primitive ways) for generations. It is difficult to say they never should do so, and it’s difficult to frame the contours of what best practices should be.

But if we’re going to grapple with the question of regulating social-media platforms and other internet services, thinking seriously about what best practices should be, generally speaking, is the task that lies before us now. Offloading the public-policy questions to the platforms themselves — by calling on Facebook or Twitter or Google to censor antisocial content, for example — is the wrong approach, because it dodges the big questions that we need to answer. Plus, it would likely entrench today’s well-moneyed internet incumbents.

Nobody elected Mark Zuckerberg or Jack Dorsey (or Tim Cook or Sundar Pichai) to do that for us. The theory of democracy is that we decide the public-policy questions ourselves, or we elect policymakers to do that for us. But that means we each have to do the heavy lifting of figuring out what kinds of reforms we think we want, and what kind of commitments we’re willing to make to get the policies right.
