After a prolonged engagement with the government over the moderation decisions it made during the COVID-19 pandemic, YouTube recently announced a “Second Chance” program for creators who were moderated off the platform.

While we could debate the merits of this decision for the information ecosystem writ large, one troubling thread is that the government continues to pressure platforms over their moderation decisions.

Since 2023, the House Judiciary Committee—particularly Rep. Jim Jordan (R-Ohio)—has hounded YouTube over its actions during the pandemic. Ironically, while the Committee sought to uncover inappropriate jawboning by the government, it continued to engage in jawboning of its own long after the pandemic ended. After YouTube released a formal statement in September stating, among other things, that it would “never use third-party ‘fact-checkers’,” the Committee gloated that these admissions “come after Chairman Jordan’s subpoena to Google and a years-long investigation into the company.” This is exactly the type of pressure that both sides of the aisle should work to prevent.

The Committee and Rep. Jordan are free to disagree with YouTube’s decisions publicly, but they have also voted for, and publicly voiced support for, changing the liability laws that protect moderation decisions. That implicit threat looms over any pressure the government applies to these decisions.

Rep. Jordan and other members of the Committee continue to confuse platforms’ legitimate First Amendment right to moderate content on their own terms with citizens’ (non-existent) First Amendment right to say anything wherever they want online. Rather than using government to shape platform decision-making, users should discipline platforms in the marketplace by switching to alternatives like Bluesky, Truth Social, and other services that offer different moderation approaches.

From a free-market perspective, the Second Chance program may or may not be a good thing. Platforms are dynamic, network-driven services: they compete not just on features and price, but also on speech norms, safety expectations, and creator opportunity. A program that increases speech at the margin—especially by reconsidering permanent bans—lets users, advertisers, and creators “vote with their feet” in real time. If the policy proves useful, YouTube will reap traffic, watch time, and goodwill; if it fails, audiences will defect and advertisers will demand tighter controls. Either outcome disciplines the platform without deputizing the state as a speech referee. (This applies to all platforms as they court different audiences.)

But that private feedback loop only works if government actors keep their thumbs off the scale. Consider the recent episode in which Federal Communications Commission Chair Brendan Carr publicly threatened ABC affiliates over Jimmy Kimmel’s comments following the death of Charlie Kirk. Even prominent Republicans like Sen. Ted Cruz (R-Texas) called those statements “dangerous,” warning that such threats resemble mob-style coercion rather than principled oversight. Whether one thinks Kimmel’s commentary was wise or reckless is irrelevant—what matters is that a regulator invoked licensing powers to influence editorial decisions. That is exactly the kind of government pressure that distorts private moderation choices and chills speech.

YouTube’s new program may be the right course of action, but the fact that it comes at the behest of a congressional committee should trouble everyone who cares about a healthy information environment and the right to free speech.

Our Technology and Innovation program focuses on fostering technological innovation while curbing regulatory impediments that stifle free speech, individual liberty, and economic progress.