Last month, a bipartisan group of U.S. senators unveiled the much-discussed EARN IT Act, which would require tech platforms to comply with recommended best practices designed to combat the spread of child sexual abuse material (CSAM) or lose their Section 230 protections. While the goal is commendable, the bill would cause significant problems.

Most notably, the legislation would create a commission, led by the Attorney General, with the authority to draw up a list of recommended best practices. Many have rightly warned that Attorney General Barr will likely use this new authority to effectively prohibit end-to-end encryption through those best practices. Less discussed, however, is the recklessness standard the bill adopts. The bill would drastically reduce free speech online because it eliminates the traditional moderator’s dilemma and creates a new one: either comply with the recommended best practices, or open the legal floodgates.

Prior to the passage of the Communications Decency Act in 1996, under common law intermediary liability, platforms could be held liable only if they had knowledge of the infringing content, and courts treated a platform’s decision to moderate as grounds for imputing that knowledge. This meant that a platform that couldn’t survive litigation costs could simply choose not to moderate at all. While not always a desirable outcome, this did provide legal certainty for smaller companies and start-ups that they wouldn’t be litigated into bankruptcy. This dilemma was eventually resolved by Section 230’s protections, which prevent companies from having to make that choice.

However, the EARN IT Act changes that equation in two key ways. First, it amends Section 230 by allowing civil and state criminal suits against companies that do not adhere to the recommended best practices. Second, for the underlying federal crime (which Section 230 doesn’t affect), the bill would change the scienter requirement from actual knowledge to recklessness. What does this mean in practice? Currently, under existing federal law, platforms must have actual knowledge of CSAM on their service before any legal requirement goes into effect. So if, for example, a user posts material that could be considered CSAM but the platform is not aware of it, the platform cannot be guilty of illegally transporting CSAM. Platforms must remove and report content once it is identified to them, but they are not held liable for any and all content on the site. A recklessness standard turns this dynamic on its head.

What actions are “reckless” is ultimately up to the jurisdiction, but the Model Penal Code provides a general idea of what the standard entails: a person acts recklessly when he or she “consciously disregards a substantial and unjustifiable risk that the material element exists or will result from his conduct.” Worse still, the bill opens a platform’s actions to civil suits. Federal criminal enforcement normally targets the worst actors, and companies that comply with reporting requirements are generally immune from liability. With these changes, however, if a user posts material that could potentially be considered CSAM, even without any knowledge on the part of the platform, civil litigants could argue that the company’s moderation and detection practices, or lack thereof, constituted a conscious disregard of the risk that users would share CSAM.

When the law introduces ambiguity into liability, companies tend to err on the side of caution. In this case, that means removing potentially infringing content to ensure they cannot be brought before a court. For example, in the copyright context, the Digital Millennium Copyright Act provides a safe harbor for internet service providers (ISPs) that “reasonably implement” policies for terminating repeat infringers on their service in “appropriate circumstances.” Yet courts have refused to apply that safe harbor when a company didn’t terminate enough subscribers. This uncertainty about whether the safe harbor applies will undoubtedly lead ISPs to act on more complaints, ensuring they cannot be held liable for the infringement. A recklessness standard invites the same line-drawing problems: Is it “reckless” for a company not to investigate postings from an IP address if other postings from that IP address were CSAM? What if the IP address belongs to a public library with hundreds of daily users?

This ambiguity will likely force platforms to moderate user content aggressively and over-remove legitimate content to ensure they cannot be held liable. Large firms that have the resources to moderate more heavily, and that can survive an increase in lawsuits, may pour the majority of their moderation resources into CSAM out of an abundance of caution. As a result, fewer resources would be left to target and remove other problematic content such as terrorist recruitment or hate speech. Mid-sized firms may end up over-removing user content that in any way features a child, or limiting posting to trusted sources, to insulate themselves from potential lawsuits that could cripple the business. And small firms, which likely can’t survive an increase in litigation, could ban user content entirely, ensuring nothing appears on the site without vetting. These consequences, and the general burden on the First Amendment, are exactly the type of harms that drove courts to adopt a knowledge standard for online intermediary liability, ensuring that the free flow of information was not unduly limited.

Yet the EARN IT Act ignores this. Instead, the bill assumes that companies will simply adhere to the best practices and therefore retain Section 230 immunity, avoiding these bad outcomes. After all, who wouldn’t want to comply with best practices? In reality, it could force companies to choose between vital privacy protections like end-to-end encryption and litigation. The fact is, there are better ways to combat the spread of CSAM online that don’t require platforms to remove key privacy features for users.

As it stands now, the EARN IT Act solves the moderator’s dilemma by creating a new one: comply, or else.