Concern Abroad, Censorship at Home: The Contradictions in U.S. Digital Policy
Last week’s Republican-led House Judiciary Committee hearing, “Europe’s Threat to American Speech and Innovation,” highlighted a growing contradiction in the American government’s evolving stance on free speech and its role in regulating online content.
The hearing took place after committee members returned from the United Kingdom, where they studied the country’s new online safety regime, most notably the Online Safety Act 2023, along with the reach of the European Union’s Digital Services Act and Digital Markets Act and the enforcement of longstanding U.K. statutes like the Malicious Communications Act 1988 and Section 127 of the Communications Act 2003.
These laws empower the U.K. government to monitor and regulate online speech, creating criminal liability for offenses like disseminating “grossly offensive” content, posting obscene material, or spreading so-called hate speech. Critics argue that these measures severely curtail civil liberties by letting the state define, and then penalize, inherently subjective categories of expression. While the United Kingdom lacks a constitutional equivalent to the U.S. First Amendment, the chilling effect on public discourse was not lost on American lawmakers, who decried the British approach as invasive and antithetical to democratic principles.
Members of Congress expressed concern about the broad and potentially repressive nature of these laws during last week’s hearing, warning that foreign efforts to curb online expression could set precedents that undermine freedom in democracies worldwide. But the outrage on display in the committee chamber masked a quieter but growing domestic trend: the increasing willingness of American officials to involve the federal government in speech regulation under the guise of protecting users, securing elections, or combating misinformation.
This contradiction lies at the heart of U.S. digital policy. While some lawmakers criticize foreign efforts to regulate content, others in Washington are pursuing initiatives that mirror many of the same speech-curbing tendencies—albeit dressed in different rhetoric.
For instance, the Federal Trade Commission (FTC) recently initiated an inquiry into the content moderation practices of major tech companies. FTC Chair Andrew Ferguson framed the move as a way to ensure platforms are not “bullying” their users or suppressing lawful expression on political or ideological grounds. Yet by launching a formal investigation into the editorial decisions of private companies, the FTC is effectively asserting government oversight over what speech can and cannot be elevated online. This raises constitutional questions about the agency’s authority to police viewpoint discrimination, and about whether such investigations themselves risk becoming a form of de facto censorship.
Meanwhile, the Federal Communications Commission (FCC) has also sought to expand its influence over digital speech, particularly under the banner of “public interest” obligations for internet service providers and platforms. Although the FCC’s traditional mandate focuses on technical standards and spectrum allocation, recent proposals suggest a growing interest in regulating online harms, another potentially slippery slope toward the federal policing of expression. Yet even as bureaucrats and policymakers devise new ways to regulate speech, the courts have already closed off most of those avenues, handing down rulings that clarify the government should have no role in the content decisions of private platforms.
That question is at the heart of Murthy v. Missouri, a case that reached the U.S. Supreme Court in 2024. The lawsuit accused the White House and several federal agencies, including the Centers for Disease Control and Prevention and the Federal Bureau of Investigation, of improperly pressuring social media companies to suppress dissenting views on topics ranging from COVID-19 to election integrity. A lower court found that the government had likely violated the First Amendment by “coercing or significantly encouraging” platforms to remove content, though the Supreme Court ultimately reversed on the ground that the plaintiffs lacked standing, leaving the underlying First Amendment question unresolved.
A companion case, NetChoice v. Paxton, centers on a Texas law restricting how large social media platforms moderate user content, ostensibly to prevent viewpoint discrimination. The Supreme Court ultimately remanded the case to the lower courts, but its opinion made clear that the First Amendment protects platforms’ editorial judgment and that the government cannot compel private companies to host speech against their will. By defending the editorial rights of private platforms, the decision reinforces a uniquely American safeguard against government overreach in digital speech, preserving a freer and more competitive online marketplace of ideas.
Even informal government pressure, or “jawboning,” is not merely theoretical. Internal documents released through the “Twitter Files” and other disclosures show high-level communications between federal officials and platform content moderators, often containing requests, or “flags,” to remove specific posts and narratives. While defenders argue these were merely recommendations, the imbalance of power between the state and private companies makes such “recommendations” difficult to refuse, especially when they arrive alongside veiled regulatory threats.
In response, lawmakers like Sen. Eric Schmitt (R-Mo.) have introduced legislation to curb backchannel communication between government agencies and tech firms. Schmitt’s bill would require federal watchdogs to report any censorship-related contacts with platforms, creating a transparency mechanism that could help prevent covert influence. While the bill faces long odds in the Senate, it reflects a broader reckoning with how deeply entangled the government has become in moderating digital speech.
Taken together, these developments paint a troubling picture. U.S. officials rightly criticize foreign censorship laws for stifling public discourse, but many of those same officials back policies that lead to a similar outcome domestically.
Such an approach risks undermining both constitutional freedoms and market-based solutions. Content moderation is an inherently subjective task best left to private companies that can set their own community standards and bear the consequences of those choices in the marketplace. By inserting itself into that process—whether through direct mandates, regulatory threats, or covert pressure—the government threatens to politicize digital platforms, erode public trust, and violate constitutional rights.
What happened in the House Judiciary hearing was not just a critique of foreign laws; it was a mirror held up to our own evolving posture. If American policymakers truly believe in free speech, they must apply its principles consistently, at home as well as abroad.