Colorado wants to tell AI developers what their models are allowed to say, but a new federal lawsuit argues that the First Amendment prevents this interference.

On April 9, xAI filed a constitutional challenge to Colorado’s Consumer Protections for Artificial Intelligence Act, which is scheduled to take effect on June 30. The complaint raises a question with implications far beyond a single state or a single company. Can a government compel an AI developer to embed the state’s preferred viewpoints into the architecture of its models? The answer should be no, regardless of which government is asking and regardless of which viewpoint it prefers.

Colorado’s law requires developers of “high-risk” AI systems to exercise “reasonable care” to protect consumers from “algorithmic discrimination.” In the abstract, that sounds like unobjectionable consumer protection. But the law defines algorithmic discrimination in a way that embeds a particular ideological viewpoint into the regulatory framework itself.

It covers any AI output that results in “unlawful differential treatment or impact” disfavoring individuals based on protected characteristics, while simultaneously exempting AI systems designed to “increase diversity or redress historical discrimination.” The statute does not neutrally target all disparate impacts; it targets only the kind Colorado disfavors and blesses the kind it approves. The regulatory burden falls on developers whose models produce outputs the state disagrees with, while developers whose models produce outputs aligned with the state’s preferred position on equity and discrimination receive a carve-out.

The complaint argues that this framework constitutes both content-based and viewpoint-based discrimination under the First Amendment. xAI contends that every stage of AI development constitutes an expressive editorial judgment: which data sources to include in a training corpus, how to structure supervised learning datasets with prompt-response pairs that reflect the developer’s priorities, how to calibrate reinforcement learning to reward or penalize certain types of outputs, and what behavioral instructions to encode in system prompts.

These are not neutral engineering decisions. They are choices that reflect a developer’s values and vision for how AI should engage with the world, no different in principle from the editorial judgments that newspapers, parade organizers, and social media platforms make when deciding what content to present and how to present it. The complaint draws on 303 Creative v. Elenis, Hurley v. Irish-American Gay Group of Boston, and Moody v. NetChoice to establish that these design choices carry constitutional protection.

The divergent behavior of competing AI models reinforces the point. Developers at different companies make different editorial choices, and those differences produce meaningfully different outputs when identical prompts are run across competing platforms. Colorado’s law would require all developers to conform those editorial decisions to the state’s preferred position: recalibrating training data, re-weighting model outputs, or hard-coding additional guardrails that prevent the “wrong” kind of disparate impact while permitting the kind the state favors. The result would be an artificial ceiling on the ideological diversity of AI systems.

The complaint also advances a listener’s-rights argument. By forcing developers to alter their models’ outputs, the law ensures that users receive only government-approved versions of what the AI would otherwise produce, burdening not just the developer’s right to speak but also the user’s right to receive information and ideas.

The principle at stake is not limited to Colorado or to progressive regulatory ambitions. Last summer, the R Street Institute argued that the Trump administration’s “Preventing Woke AI” executive order raised similar concerns from the opposite direction. When the federal government conditions procurement contracts on AI models being “free from top-down ideological bias” and adhering to “Unbiased AI Principles,” it is acting as a buyer, not a regulator. But without clearly defining such terms, it sets a precedent that allows governments to use their buying power to sway how technology companies handle questions of ideology. Colorado’s law takes the government’s role even further, into more constitutionally murky waters: the state acts as a regulator rather than a buyer, forcing its own judgment about expressive design on developers. If Colorado can mandate that AI models reflect its views on equity and discrimination, the next state, or the next administration, can mandate that they reflect something else entirely.

The free-market position offers both a constitutionally sound and practically superior alternative. AI developers should make their own editorial choices, and users should decide which models best serve their needs. The marketplace of ideas works in AI just as it works in publishing, broadcasting, and every other medium of expression. The rapid growth of multiple competing frontier models, each with distinct approaches to contested topics, demonstrates that the market is already producing the diversity of perspectives that regulators claim to want. If a model’s outputs are perceived as biased in ways that alienate users, the competitive market will correct that far more efficiently than any government mandate.

The xAI lawsuit will test whether courts agree that AI development is expressive activity protected by the First Amendment. Whatever the outcome, the underlying principle is one that policymakers on both sides should internalize. The government does not belong in the business of telling AI developers what to think, what to build, or what viewpoints to embed in their models. That is true whether the mandate comes from a state legislature or a White House, and it may signal that it is time for Congress to enact federal legislation preempting unconstitutional state AI laws, which are proliferating rapidly.