In the heat of today’s debate about the ethics—and possibly anti-democratic effects—of targeted advertising on Facebook and on other internet platforms, it’s easy to forget that this debate about advertising is an old one. Do commercial or political advertisers aim to push our psychological buttons in ways we’re unaware of? Does advertising really have the power to compel us to buy products or to choose candidates?

In their modern manifestations, these questions hinge on the power of big data, operating together with the presumed accuracy with which our individual tastes and preferences can be tracked and profiled by services like Facebook. But when these same questions were raised in the middle of the last century, they centered on so-called “motivational research” and the social scientists who purported to know more about our tastes and preferences than we might know ourselves.

The mid-20th-century debate about the ethics and aims of advertising accelerated after the publication of Vance Packard’s seminal 1957 bestseller, “The Hidden Persuaders.” Packard’s narratives are driven by revelations of the disturbing and even scandalous agendas of motivational-research entrepreneurs and the advertising companies that hired them. As Packard writes in his introduction:

This book is an attempt to explore a strange and rather exotic new area of American life. It is about the large-scale efforts being made, often with impressive success, to channel our unthinking habits, our purchasing decisions, and our thought processes by the use of insights gleaned from psychiatry and the social sciences. Typically these efforts take place beneath our level of awareness; so that the appeals which move us are often, in a sense, ‘hidden.’ The result is that many of us are being influenced and manipulated, far more than we realize, in the patterns of our everyday lives.

Some of the manipulating being attempted is simply amusing. Some of it is disquieting, particularly when viewed as a portent of what may be ahead on a more intensive and effective scale for us all. Cooperative scientists have come along providentially to furnish some awesome tools.

Packard’s book sparked a version of the debate about the ethics and morality of advertising that we are having today. The emerging story about Russia’s use of Facebook to spread doubt and discord during the 2016 election raises the larger question of whether advertising itself—and especially the targeted, data-driven and data-gathering advertising that subsidizes journalism’s internet-based outlets as well as the most successful social-media platforms—is more of a benefit or more of a blight.

Facebook in particular has been criticized as a platform that has empowered advertisers to target us in ways that affect us unconsciously, or in ways we can’t prevent. Some of Facebook’s more prominent critics—notably Tristan Harris and Roger McNamee of the recently formed Center for Humane Technology—argue that algorithmically shaped online ads, tweaked to our individual tastes and prejudices, overcome our independence and free will in ways that undermine democracy. They’ve urged that free-to-the-user social-media platforms abandon advertising altogether, switching to a paid-subscription model.

But it’s hard to see how the criticisms of advertising on social-media platforms don’t apply at least as much to advertising that subsidizes the traditional press, which increasingly relies both on the internet as a conduit and on internet advertising to reach its audiences. (Of course, today’s major newspapers, which reach an increasing number of subscribers in their online editions, also take advantage of the data-gathering, audience-shaping technologies deployed by advertisers on the internet—for example, using platforms like Facebook to push out their stories to new readers.) Let’s say the New York Times or the Wall Street Journal cut out advertising altogether. This might reduce the risk that either paper is biased in favor of advertisers’ interests, and it would erase the possibility that their ads are manipulating us, but it would also increase the direct cost to readers—perhaps doubling it, perhaps raising it much more. Yet nobody seriously suggests that our major newspapers quit carrying ads.

McNamee, an early investor in Facebook, has recently become highly critical of what he sees as the platform’s role in facilitating the election of Donald Trump by enabling Russia’s meddling in the 2016 election. But even before that election, McNamee has explained, he was concerned that Facebook had shaped the United Kingdom’s surprising 2016 vote for Brexit:

I’m no expert on British politics, but it seemed likely that Facebook might have had a big impact on the vote because one side’s message was perfect for the algorithms and the other’s wasn’t. The “Leave” campaign made an absurd promise—there would be savings from leaving the European Union that would fund a big improvement in the National Health System—while also exploiting xenophobia by casting Brexit as the best way to protect English culture and jobs from immigrants. It was too-good-to-be-true nonsense mixed with fearmongering.

Meanwhile, the Remain campaign was making an appeal to reason. Leave’s crude, emotional message would have been turbocharged by sharing far more than Remain’s. … [T]he price of Facebook (and Google) ads is determined by auction, and the cost of targeting more upscale consumers gets bid up higher by actual businesses trying to sell them things. As a consequence, Facebook was a much cheaper and more effective platform for Leave in terms of cost per user reached. And filter bubbles would ensure that people on the Leave side would rarely have their questionable beliefs challenged. Facebook’s model may have had the power to reshape an entire continent.

When news reports emerged showing that the Russian Federation had sponsored efforts to interfere in the U.S. election, including efforts using Facebook as a platform, McNamee says, he “formed a simple hypothesis: the Russians likely orchestrated some of the manipulation on Facebook that I had observed back in 2016.” He has made clear that he thinks Russian interventions led to the unexpected outcomes in the U.K. and in the United States. Partnering with Harris, McNamee has decried Facebook (and by implication other platforms) for “brain hacking” that overcomes voters’ judgment by seducing them into self-imprisonment in “filter bubbles.”

As Packard’s book reminds us, this concern about advertisers’ (and propagandists’) potential to overcome the free will, rationality, and independence of the public is not new. But what makes “The Hidden Persuaders” even more useful in the current moment is that Packard is careful to address what he calls “the question of validity”—that is, the question of whether the “hidden persuaders’” strategies and tactics for manipulating consumers and voters are actually scientifically grounded. He acknowledges that the claims of the motivational-research companies may have been oversold, or may have been embraced by companies that simply lacked any other strategy for figuring out how to reach and engage consumers.

Has the advent of the internet, with algorithms that both measure what engages us and suggest what will please us, turned “hidden persuasion” into an actual science? McNamee and Harris think so, especially thanks to our current dependence on handheld phones:

Smartphones changed the advertising game completely. It took only a few years for billions of people to have an all-purpose content delivery system easily accessible sixteen hours or more a day. This turned media into a battle to hold users’ attention as long as possible. And it left Facebook and Google with a prohibitive advantage over traditional media. … [Their algorithms] appear value neutral, but the platforms’ algorithms are actually designed with a specific value in mind: maximum share of attention, which optimizes profits. They do this by sucking up and analyzing your data, using it to predict what will cause you to react most strongly, and then giving you more of that.

But despite what McNamee and Harris argue, there’s little independent research supporting their case against “filter bubbles,” and there is plenty of research that calls those fears into question. Likewise, Tim Wu’s 2016 book “The Attention Merchants”—which may be regarded as a kind of 21st-century evolution of “The Hidden Persuaders”—carefully dodges the question of scientific validity with the qualitative argument that our attention as human beings is necessarily finite, so that anything that successfully demands our attention does so in a zero-sum game that necessarily eclipses something else we might have attended to. Even so, Wu acknowledges early on that human beings are extraordinarily gifted when it comes to screening out things we have no interest in:

The neuroscience of attention, despite having greatly advanced over the past few decades, remains too primitive to explain comprehensively the large-scale harvesting of attention. … But there is one thing scientists have grasped that is absolutely essential to understand about the human brain before we go any further: our incredible, magnificent power to ignore.

It’s true that to some extent today’s internet platforms signify a change: They create more opportunities to tailor ads (and other services) to our interests, not least because computational power and the ability to collect data have increased rapidly. But it’s hardly a settled scientific fact that the new platforms have somehow triumphed over our “magnificent power to ignore.” And I don’t think we can debate the privacy issues raised by this advertising model as if nobody before now had the ability to exploit the data we generate through our buying habits and public behaviors. What Facebook has actually done, albeit unintentionally, is make it more obvious what things can be known about us. Properly, we should view this development less as a threat than as an opportunity to think more rigorously about the bargains we strike with companies like Facebook as individuals, and the bargains we collectively strike with such companies as a society.

To take one obvious example: Consider the #deletefacebook movement. (And for the sake of this part of the discussion, let’s use “Facebook” as a proxy for any social-media company or internet-search giant that may be collecting data about us.) We don’t yet have data about how many people are actually following through with their declared interest in opting out of Facebook, but we do know—ironically, thanks to a virtual-private-network-based harvesting of search queries!—that interest in this topic has grown in recent months.

But there’s actually an unsettled public-policy question about what opting out of Facebook means or could mean. In his recent testimony before Congress, Facebook CEO Mark Zuckerberg consistently stressed that individual users have a high degree of control over the data (pictures, words, videos, and so on) they’ve contributed to Facebook, and that users can choose to remove the data they’ve contributed. Recent updates to Facebook’s privacy policy underscore users’ rights in this regard.

Think of this as Level 1 privacy: You can erase your contributions from Facebook altogether and “disappear,” at least as an affirmative presence on the platform. But it’s worth seeking further clarification as to what such opting out (or opting in and then withdrawing) may mean. Certainly it should mean I can pull down my own content (the pictures and words I posted, for example) and forbid others to use it. It may also reasonably mean that even other people who’ve shared my content can no longer share it (in effect, allowing me to punch holes in other people’s shared content when I depart). All this is Level 1.

Level 2 is what Facebook knows about me, or can infer from my having been on the service, even after I’ve gone. Facebook has a proprietary interest in drawing inferences from how we interact with its service and in using those inferences to inform what content (including but not limited to ads) it serves us. That’s Facebook’s data (the inferences, that is), because Facebook generated it. (I certainly didn’t, and Facebook may in some sense know things about me that I don’t know myself.) If I leave Facebook, surely it retains some data about me based on my interactions on the platform. We also know, as a result of Zuckerberg’s testimony before Congress, that Facebook manages to collect data even about people who are not users of the service.

And then there’s Level 3: the question of what Facebook can and should do with the inferential data it has generated. Should it share it with third parties? Should it share it with governments? It seems reasonable to guess that if I depart and leave a Mike Godwin-sized hole in Facebook content, there are still ways to connect the dots so that not just Facebook itself, but also third-party actors, including governments, can draw reliable inferences about the now-absent me. We may reasonably conclude that there should be limits on how such data can be used and on what inferences can be drawn. This is a public-policy discussion that needs to happen sooner rather than later.

Apart from privacy and personal-data concerns, we ought to consider what we really think about targeted advertising. Historically, criticisms of advertising have resembled criticisms of propaganda—in sum, that advertising may somehow manipulate us into wanting things we shouldn’t want and feeling things we shouldn’t feel. (This is certainly what we can glean from critics like Packard and Wu.) But the arguments arising from tailored internet-service advertising seem fundamentally different—here the issue isn’t so much that we’re being influenced to want things we shouldn’t want as that we’re being offered ads that, more and more, are based on larger amounts of data reflecting our actual conscious or unconscious preferences. If the historical criticism of advertising, “motivational research,” and the like has been that ads push us to desire what we otherwise wouldn’t have, then the criticism of internet advertising seems to be that targeted ads pull us, or even seduce us, toward what we already desire, based on what can be inferred about our inclinations and preferences.

As a first step, we should assess whether the claims made by marketers and advertisers regarding the effects ads have on us are scientifically rigorous and testable. (As Packard noted, it’s not unheard of for advertising agencies to oversell the effectiveness of their marketing and advertising strategies.) If the claims stand up to testing, then we have some hard public-policy questions to ask about whether and how advertising should be regulated.

But if the claims of the new “hidden persuaders” don’t stand up to scrutiny, then we should consider that there might be nothing categorically oppressive about being invited to engage in commercial transactions or to embrace a particular political point of view. That’s a simpler answer—maybe too simple—but it lines up well with what we have traditionally assumed in democratic societies to be true about freedom of speech and public debate and persuasion in general. In democracies we don’t try to control citizens’ thoughts, but we do try to persuade citizens, and also to allow ourselves as citizens to be persuaded. So the question that arises with regard to advertising—both consumer advertising and politically driven advertising—is whether we’re being subjected to attempts at persuasion or instead to overwhelming tactics of control.

The questions raised about both privacy and advertising are big, complex issues that don’t reduce easily to pro- or anti-privacy rhetoric (or pro- or anti-social-media rhetoric). If in fact we need to tackle these questions proactively, we must begin by defining the problems in ways that all of us (or at least most of us) can agree on. It may be tempting to focus on Facebook, but other internet services harvest our data too, and non-internet companies have done so (albeit in more primitive ways) for generations. It is difficult to say these services should never do so, and it’s difficult to frame the contours of what best practices should be.

That said, if we’re going to grapple with the question of regulating social-media platforms and other internet services, we need to dig in and think seriously about what those best practices should be. Offloading these important public-policy questions to the platforms themselves—for example, by calling on Facebook or Twitter or Google to apply their own judgment and censor what they’ve internally judged to be antisocial content—is a dereliction of Americans’ civic duty to craft public policy publicly. If we believe democracy is something worth preserving—and if we have faith in our ability as individuals to resist the pull of advertising and to think skeptically about it—then even if new technologies truly threaten or undermine democracy, we ought nonetheless to use the tools of democratic engagement to save it.

Image credit: sitthiphong
