What If Widespread Disinformation Is the Solution to Fake News?
These questions get raised in an early chapter of Neal Stephenson’s new novel, Fall; or, Dodge in Hell, and the author’s answers are eerily persuasive. They’re also a weird echo of creative thinking pioneered by the cypherpunks more than 20 years ago—a group that Stephenson, then working on his encryption-centric opus Cryptonomicon, frequently hung out with and consulted. (Cypherpunk creativity, which nowadays deserves credit for things like cryptocurrency, is a gift that keeps on giving.)
I confess I haven’t yet finished Stephenson’s latest 800-plus-page tome, which so far might be characterized, although not necessarily captured, by the term “near-future dystopia.” But when I came across Stephenson’s depiction of how automated disinformation could actually remedy the damage that internet-based “doxxing” and fake news inflict on an innocent private individual, I paused my reading and jumped down the rabbit hole of tracing this idea to its 1990s roots.
What caught me was a passage about the character Maeve, an Australian who becomes involved with Corvallis, a quasi-famous cloud-computing engineer. When both appear in video news coverage of a presidential speech, they become objects of obsessive internet gossip, which Stephenson refers to with some justice as “Crazytown”: “For he had been identified by name, on national television, by the president of the United States, and had been a reasonably well-known person to begin with. And she had been standing next to him.”
The result, per Stephenson, is predictable:
“Crazytown was repelled by facts and knowledge, as oil fled from water, but was fascinated by the absence of hard facts, since it provided vacant space in which to construct elaborate edifices of speculation. Toward power it felt some combination of fear and admiration, and Corvallis was powerful. Toward vulnerability it was drawn, in the same way that predators would converge on the isolated and straggling. Within a week, Maeve—who suffered from the fatal combination of being mysterious, vulnerable, and female—had been doxxed.”
All that could have happened yesterday, or anytime in the last decade. But then Corvallis’s friend Pluto shows up, joins Corvallis and Maeve on a private jet flight to Australia, and offers Maeve a solution:
“It came to my attention that you were being abused on the Internet,” Pluto said, “and so I am here to destroy it.”
“Destroy what?” Corvallis inquired.
“The Internet,” Pluto said. “Or what Dodge referred to as the Miasma. Does your jet have Wi-Fi?”
“Yes, but it doesn’t work over the Pacific Ocean.”
Pluto sighed. “Then it will have to wait until we have reached Australia.”
“I didn’t like your friend at first,” Maeve said, “but I’m warming up to him.”
“That is convenient, Maeve, if I may take the liberty of addressing the lady by her Christian name, because I will require your permission. Your complicity in utterly destroying your reputation.”
“It’s already destroyed, haven’t you seen a bloody thing?”
“It is not sufficiently destroyed yet,” Pluto said. He glanced at the screen of his laptop.
“The total number of unique slanderous and defamatory statements that have been made about you, on all of the blogs, boards, and social media networks being tracked by my bots, currently stands at a little more than seventy-three thousand.”
Pluto’s solution is to release a swarm of bots designed to defame Maeve randomly and comprehensively.
“This kind of thing has to be gone about in a systematic way, so that nothing is missed,” he said, now staring out the window at a fuel truck. “Partly through direct study of dictionaries, thesauri, and so on, and partly through brute-forcing archives of defamatory Miasma postings, I have compiled what I think is a pretty comprehensive ontology of execration. A mere lexicon doesn’t get us anywhere because it’s language-specific. Both in the sense of relating to only one language, such as English, and in the sense that it only covers defamation in a textual format. But many defamatory posts are now made in the form of images or videos. For example, if you want to call someone a slut—”
“We don’t need to go there right now,” Corvallis said.
“‘Slut,’ ‘bitch,’ ‘hag,’ ‘fatty,’ all the bases need to be covered [but if] it’s all skewed toward, say, ‘feminazi,’ then the impression will be created in the minds of many casual users that the subject is indeed a feminazi. But if an equal amount of traffic denounces the subject as a slut, a bitch, a whore, an attention seeker, a gold digger, an idiot […] then even the most credulous user will be inoculated with so many differing, and in many cases contradictory, characterizations as to raise doubts in their mind as to the veracity of any one characterization, and hence the reliability of the Miasma as a whole.”
Later, Pluto explains how Maeve would take advantage of the disinformation efforts:
“It’s an open campaign. We would announce it. Publish statistics on how it’s going. You could do press interviews, if you wanted. The sheer magnitude of it would make it obvious, even to the most credulous user of the Miasma, that it was all a bunch of nonsense. Afterward, no one in their right mind would ever believe anything negative about you that had ever been posted on the Miasma. But because it is all technically slanderous, you would have to promise not to sue me.”

Maeve asks him how this particular campaign would “destroy the Internet,” and Pluto explains that he’s going to “open-source all the tools” and combine them with “an easy-to-use graphical interface.”
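Pluto’s logic can be made concrete with a toy simulation (my own sketch, not anything from the novel; the epithet list and post counts are invented for illustration). A skewed smear campaign lets one characterization dominate and look like consensus; a deliberately uniform flood of contradictory smears leaves no single label with enough share to believe:

```python
import random
from collections import Counter

# Hypothetical labels standing in for Pluto's "ontology of execration."
EPITHETS = ["slut", "bitch", "hag", "gold digger", "idiot",
            "attention seeker", "feminazi", "liar"]

def max_share(posts):
    """Fraction of posts carrying the single most common label —
    a crude proxy for how much a casual reader sees 'consensus'."""
    counts = Counter(posts)
    return max(counts.values()) / len(posts)

random.seed(42)

# Organic pile-on: 80% of posts repeat one label, so it reads as a verdict.
skewed = ["feminazi"] * 800 + random.choices(EPITHETS, k=200)

# Pluto-style campaign: bots spread posts evenly across all labels,
# so no characterization ever approaches majority share.
uniform = random.choices(EPITHETS, k=1000)

print(f"skewed campaign, top label share:  {max_share(skewed):.2f}")
print(f"uniform campaign, top label share: {max_share(uniform):.2f}")
```

The skewed pile produces a top-label share above 0.8, while the uniform flood keeps every label near 1/8 of the traffic—the “inoculation” effect Pluto is counting on.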
This whole chapter rang many bells for me, not least because it paralleled a discussion I had with a law professor at a conference last year when I pitched the idea of a “libel service.” Basically, you’d hire a “libel service” to randomly defame you on the internet, so that whenever anyone says something bad about you on Twitter or Facebook, or in the comments area of some newspaper, you could just say “that’s probably my libel service.” No one would know whether the defamatory statements were true or not, and people would be predisposed to doubt anything too terrible that’s said about you.
The professor was skeptical—why would anyone actively seek to be defamed?—but I said, wait, the cypherpunks were talking about this idea 25 years ago, and there’s no reason to think it wouldn’t work. I’d first heard the notion explained by Eric Hughes—a mathematician and programmer who, along with John Gilmore and Tim May, was a founding member of the cypherpunks movement—at some conference or other back in the mid-1990s. Hughes’s idea, as he spelled it out back then, was remarkably like Pluto’s exposition in Stephenson’s novel, except that Stephenson, of course, turned the idea’s volume up to 11.
Did Stephenson get the idea from Hughes? I hadn’t spoken to Hughes in at least ten years, but I’m in contact with Stephenson from time to time—I’ve reviewed him and interviewed him for Reason—so I sent him a query about it. At the same time, I asked around to see if anyone I know had contact information for Hughes; it turns out he’s now in Salt Lake City, the principal of a contract-programming company. Hughes got back to me first.
I asked him if he remembered the conversation we had in the mid-1990s about the libel service. Was it his idea?
“It could have been mine,” Hughes told me. “I know I talked about it to different people, but the idea of a disinformation service to protect people from damaged reputations was in the air. It’s possible I came up with it and then Neal heard it from someone who heard it from me.”
Stephenson got back to me next, and I asked him if he’d gotten his character’s deliberate-defamation scheme from Hughes. “It’s linked in my mind with Matt [Blaze] and Encyclopedia Disinformatica,” Stephenson said, “but now that you mention it, I do remember Eric talking about similar ideas around the same time.”
I know Matt Blaze, but I didn’t remember Encyclopedia Disinformatica, so I had to query him about that idea and its connection to Stephenson’s novel. (It bears mentioning that, like me, neither Hughes nor Blaze has finished the novel, but both recognized the cypherpunk roots of Pluto’s disinformation scheme.)
The theoretical encyclopedia in question, as Blaze describes it, would contain lots of true information but also plenty of false information. Its function would be to demonstrate that even content that appears to be mostly true needs to be questioned and independently verified. Blaze’s idea was that this would be a kind of perverse media-education project, one that (one hopes) would seed some skepticism about what we encounter on the internet and elsewhere. But it’s not quite the same as the project that appears in Stephenson’s novel.
In tracing this idea back to its roots, I realized there was an early recognition, at least among people who were thinking about the implications of a wide-open internet, that disinformation—sometimes computer-assisted or computer-enhanced—was going to be a problem we’d need to think about before it became, well, a problem.
But Stephenson’s new book adds another takeaway: In the novel, Pluto’s automated-defamation scheme does actually work for a large fraction of the population, who learn to think more critically about the stuff they come across on the internet and elsewhere in our media culture. (In the near future, they hire their own editors to cull the digital information overload for them.) But there’s also an irreducible fraction of people who continue to cherry-pick narratives, whether true or false, solely on the criterion of whether the narratives confirm their cherished beliefs. They won’t be newly sophisticated media skeptics or discriminating news consumers—instead they’ll commit to the path of confirmation bias, which Cato’s Julian Sanchez described a few years back as “epistemic closure.” (And, yes, these people will hire their own editors too, picked to serve up content that confirms their biases rather than challenging them.)
In the novel, we see a far more fragmented United States, in which different populations use their digital tools and networked devices to protect themselves from other American subcultures. It will be weird and dangerous, and even successful cypherpunk hacks like Pluto’s defamation scheme ultimately will be only stopgap measures. The problem isn’t disinformation, defamation, or deepfakes—it’s self-deception, a deep-rooted and likely ineradicable human vice. All this sounds like a near-future dystopia, all right, but right now I’m only at page 200. I’ve got to press on to page 883 to find out if that’s a fair assessment after all.
Image credit: cbies