Anthropic, the Pentagon, and the AI Innovation Ecosystem
The Pentagon’s dispute with Anthropic recently ended with President Donald J. Trump directing every federal agency to cease using the company’s technology immediately and Defense Secretary Pete Hegseth designating Anthropic a supply-chain risk to national security. The designation is typically reserved for companies from adversarial nations; applying it to a domestic company in apparent retaliation for a failed contract negotiation is without precedent. The episode warrants a close examination of what the federal government actually did, whether it had the legal authority to do it, and what the consequences will be going forward.
What the Law Requires
The supply-chain risk designation is a serious legal authority grounded in 10 U.S.C. § 3252 and implemented through the Defense Federal Acquisition Regulation Supplement Subpart 239.73. It was designed for a specific and narrow purpose: to protect the defense industrial base from foreign adversary infiltration. The statute defines supply-chain risk as “the risk that an adversary may sabotage, maliciously introduce unwanted function, or otherwise subvert the design, integrity, manufacturing, production, distribution, installation, operation, or maintenance of a covered system.”
Previous supply-chain risk designations have targeted companies with documented ties to foreign adversaries (e.g., Huawei, ZTE, Kaspersky, Hikvision, Dahua). Anthropic, a San Francisco-based company with no foreign adversary nexus, is the first domestic company to receive this designation.
Legal experts told DefenseScoop that the statute requires the government to demonstrate a risk of sabotage, subversion, or manipulation of operations by an adversary and that it is not clear how Anthropic’s usage restrictions on Claude could satisfy that standard. Others noted in press coverage that the designation requires a completed risk assessment and congressional notification before action, neither of which appears to have occurred.
The Week’s Incoherence
The specific sequence of events reveals that the supply-chain risk designation was not a considered policy judgment. Earlier in the week, the Pentagon proposed invoking the Defense Production Act (DPA) to compel Anthropic to accept its terms, treating the company’s technology as so vital to national defense that emergency industrial authority was warranted. By Friday, the Pentagon had declared the company a threat to national security.
These characterizations cannot both be true. Invoking the DPA presupposes that Anthropic’s technology is indispensable to the national defense and that losing access to it would harm national security. The supply-chain risk designation presupposes that the same technology poses a direct threat to national security. The administration held both positions within the same week, suggesting that neither resulted from a genuine policy assessment.
The chaos behind the scenes reinforces this reading. According to Axios, the under secretary of defense was on the phone offering Anthropic a deal at the same moment the secretary of defense was posting about the designation on X. A source familiar with the negotiations said the deal would have permitted the government to collect and analyze Americans’ geolocation data, web-browsing history, and personal financial information. That detail matters: the Trump administration’s public position disclaims any interest in mass surveillance and has even warned against violating the Fourth Amendment. The terms offered privately tell a different story.
OpenAI’s CEO announced Friday evening that his company had reached a deal with the Pentagon and that the Department of Defense (DOD) had agreed to prohibitions on mass surveillance and the creation of autonomous weapons—the same two restrictions Anthropic sought.
The Chilling Effect
Former artificial intelligence (AI) officials from the Trump administration and legal experts described the designation as unprecedented and legally unsound; some even suggested it amounted to an attempt to destroy a domestic company. The underlying concern is serious.
The supply-chain risk designation carries consequences that extend beyond the government’s own contracts. Hegseth’s announcement stated that “no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.” The downstream effects could reach Amazon, which has invested approximately $8 billion in Anthropic and integrated Claude deeply into its web services platform. It could also affect defense software firms that use Claude to power sensitive military work. Should it survive legal challenge, the designation will function less like a procurement decision and more like a commercial blacklist imposed by government fiat.
The week’s lesson for frontier AI labs is pointed. Anthropic’s two red lines—no mass surveillance of Americans, no fully autonomous lethal weapons without human oversight—were not radical positions. The Pentagon had publicly stated that it did not intend to cross either line; the dispute was over whether those assurances would be contractually binding or whether Anthropic would have to trust the government’s stated intentions. Anthropic reasonably concluded that contract language matters. For that judgment, the company lost its federal contracts and gained a national security threat designation.
The message to the rest of the industry is that safety constraints on AI usage are negotiable under sufficient pressure and that companies that decline to negotiate face existential legal and commercial consequences. This is a destructive signal at a moment when the United States is actively competing with China for dominance in frontier AI development. The innovation ecosystem that produces the capabilities the Pentagon wants to buy depends on companies being able to make durable commitments about how their technology will and will not be used. Undermining those commitments through coercion does not make American AI more capable. In fact, it makes American AI companies less trustworthy as partners—the precise quality that differentiates them from their Chinese counterparts in the eyes of allied nations and international customers.
The Broader Economic Distortion
What happened last week will not be forgotten in boardrooms or by investment committees. Frontier AI development is a capital-intensive, long-horizon enterprise, and the conditions that make those investments possible rest on two foundational assumptions: that contracts will be honored and that assets will not be expropriated. When the government undermines those assumptions, private companies will adjust their behavior to account for the added risk. As firms offset that risk, the AI ecosystem may shift in ways that not only stunt the development of the very capabilities the Pentagon wishes to acquire but also erode the competitive edge the United States currently enjoys over China and other geopolitical rivals.
Beyond adjusting their investment behavior, companies will expend time and money cultivating favor with politicians and public officials to protect existing assets and secure contracts. Influence seeking by developers is already underway in Congress and at executive agencies, and it will likely keep expanding if the federal government continues down its current antagonistic course. Simply put, from the standpoint of societal good, resources expended acquiring political influence are wasted.
Beyond rent seeking, a politicized ecosystem risks distorting the evolution of AI itself. In an environment where political considerations loom larger, AI companies will divert time and resources away from the investments and opportunities that maximize consumer benefit or improve model capabilities and toward projects favored by politicians and officials. In practice, this could mean less investment in new medical applications and other societally beneficial uses of AI and more political “pet projects” that bend to the whims of whoever holds power at the time.
If AI proves to be the next general-purpose technology, the considerable costs of these distortions could drastically reduce the realized benefits of AI for growth, innovation, and productivity. These costs are largely invisible: they appear not in budgetary expenditures but in forgone opportunities, in the improvements and innovations that never occur.
What Comes from a Failure of Restraint
The government had adequate tools to address the underlying procurement tensions without resorting to emergency industrial authority and national security designations. It could have declined to renew the contract or pursued the alternatives it was already developing: the xAI agreement was in place, and OpenAI and Google were close to deployment on classified systems.
Instead, the administration escalated through a sequence of increasingly coercive threats and ended the week having granted one competitor the same contract protections it had previously denied another. For the first time in history, the U.S. government has applied a foreign-adversary national security designation to a domestic company in apparent retaliation for a contract dispute. The integrity of that authority—which the government genuinely needs to protect the defense industrial base from actual adversary infiltration—has been diminished. Every technology company watching the week’s events is now attempting to determine whether the risks of government work are worth accepting.
The costs will not appear on any ledger. They will accumulate in investments not made, capabilities not developed, and innovations that never occur.
That chilling effect will outlast this dispute by years.