Online sexual abuse may become harder to prevent, thanks to a bill making its way through Congress. Ironically, the purpose of that bill is to curb online sexual abuse.

“Deepfake porn,” which involves using artificial intelligence software to swap faces in pornographic videos, is quickly emerging as a troubling new method of sexual exploitation. Motherboard has reported extensively on the growth of this worrying phenomenon, by which celebrities, exes, or classmates can be made to look like they’ve participated in porn.

As experts have noted, the law isn’t well-equipped to handle deepfake porn—the videos do not always fall neatly into existing legal prohibitions, and free speech concerns may prevent new laws from specifically restricting such material.

As a result, the job of policing this obviously harmful content has fallen to the private companies that operate the servers and platforms where deepfakes are hosted and traded. Reddit and Pornhub, for example, have both announced that they will not allow deepfakes and have begun deleting them. Researchers, meanwhile, are studying automated image-processing techniques for detecting faked videos and other manipulated media.

But new legislation could get in the way of these anti-deepfake efforts. The House of Representatives passed the Allow States and Victims to Fight Online Sex Trafficking Act, otherwise known as FOSTA. That bill, which now moves to the Senate for consideration, targets sex traffickers who post advertisements for illegal services on social media websites.

There’s no question that sex trafficking is a problem that demands powerful solutions. But the solution FOSTA offers is troubling in its own right. Traditionally, websites that allow outside users to post messages or information, like Craigslist and Facebook, do not share legal responsibility when those users post illegal or improper content, unless the website itself had a hand in creating that content.

This allocation of legal liability, embodied in a law called section 230 of the Communications Decency Act, has been critical to the growth of new internet services. Section 230 allows those services to moderate undesirable content without fear that a mistake in content moderation could land them in court. To be sure, section 230 does not give websites license to ignore egregious postings—it does not apply to violations of federal criminal law, for example—but the legal buffer that section 230 offers has allowed online services to experiment proactively with different ways of policing their services.

FOSTA, in the version that passed the House (substantially different from earlier, less problematic versions), changes all that: Under it, websites that have “knowledge” that posted material relates to illegal sex trafficking can be deemed legally responsible for that material. What it means for a website to have “knowledge” remains an open question, especially if the site uses automated or artificial intelligence systems to review user posts. This language thus opens the door to a potentially wide range of lawsuits and prosecutions.

The worst-case scenario is that, to avoid having “knowledge” of sex trafficking, internet services will stop moderating content entirely. This scenario, which some experts call the “moderator’s dilemma,” would most likely affect smaller websites—including message boards and forums that serve special interests—that can’t afford the advanced filtering systems or armies of content editors that the big sites use. These smaller sites already face difficult content-moderation problems, and they would be even less likely to spend resources cleaning up after their users if doing so might invite a lawsuit.

But even if websites don’t stop monitoring their content entirely, laws like FOSTA pressure them to focus on Congress’s issue of the day—in this case sex trafficking—at the expense of other problems online. That’s where things like deepfake porn come into play.

People are, sadly, remarkably good at coming up with awful uses for new technology: deepfake porn, fake news, cyberbullying, revenge porn, you name it. The gatekeepers of the internet need to be nimble enough to tackle these difficult problems quickly as they arise. Public pressure on internet companies is necessary to push them to do everything they can, but the sluggish federal legislative process will be a drag on solving tomorrow’s online problems. A law requiring websites to take down deepfake porn, for example, might take years to pass (and could face court challenges), by which point some new and unpredictable abusive practice will likely have arisen.

The right way for Congress to get internet companies to deal with serious online problems like sexual abuse is, counterintuitively, to leave those companies alone—to leverage public backlash and economic pressure to get them to take action against bad actors.

The internet has faced challenges in the past and has emerged intact to face those of today. It will move past today’s challenges to meet those of the future, but not if Congress starts dictating what the internet should be.
