The Pentagon Made a Choice. Anthropic Is Suing Over It.

The Pentagon wanted Claude without constraints. Anthropic said no. Now the company is suing over what happens when the military tries to destroy a developer for a contract dispute.

On February 28, 2026, a U.S. Tomahawk missile struck Shajarah Tayyebeh elementary school in Iran. The school was full of children. The Pentagon's own investigation concluded the strike was a targeting error—the result of outdated data that no one had bothered to verify.

175 people died. Most of them were children.

Two weeks later, Defense Secretary Pete Hegseth announced the Pentagon would designate Anthropic as a "supply chain risk" and terminate their $200 million contract. He ordered all federal agencies to immediately stop using Claude.

The company is now suing the Pentagon. The lawsuit argues that what the Pentagon did was unconstitutional. And the timing of the designation, two weeks after a strike that proved Anthropic right about why targeting systems need safeguards, makes the case impossible to ignore.

What Anthropic Actually Said

The conflict didn't start in March. It started in February, when the Pentagon demanded Anthropic remove its safety constraints.

Specifically, the Pentagon wanted "all lawful use" of Claude. That meant two things Anthropic had refused to permit:

1. Mass surveillance of U.S. citizens

Anthropic said no. Not for technical reasons, but on principle. Using Claude to conduct mass surveillance of Americans would contradict the foundational values the model was trained on, and it would violate democratic norms the company believed it couldn't ethically ignore.

2. Fully autonomous weapons systems

Anthropic said no to this one too. But this objection came with something else: they said the technology isn't ready.

Dario Amodei, Anthropic's CEO, was explicit: "Current frontier AI systems are not reliable enough" for autonomous targeting decisions. The technology exists. The capability is there. But reliability—the ability to verify that the system is making the right call, every time—hasn't caught up.

Anthropic offered to work with the Pentagon on research and development. Come back in a few years. Let's build something that's actually ready. The Pentagon said no.

Both objections were known to the Pentagon when they signed the original contract. Anthropic's constraints weren't a surprise. They were the terms of the deal.

The Pentagon's Response

The Pentagon had a choice: accept the disagreement, or renegotiate the contract on different terms.

Instead, they chose a third path. They designated Anthropic a "supply chain risk."

Supply chain risk designations exist for one purpose: to protect national security from technologies so dangerous they cannot be allowed anywhere in the U.S. military supply chain. The designation is typically reserved for foreign technology suspected of espionage or sabotage. China's Huawei. Russian systems. Adversary tech.

It has never been used against an American company.

It is especially wild to use it against an American company that was actively providing services to the U.S. military at the moment of the designation. Claude was being used in the raid against Nicolás Maduro. Claude was being used in the war with Iran.

The Pentagon wasn't trying to exclude Claude from future systems. They were trying to destroy Anthropic as a company.

That's what "supply chain risk" does. It bars you from any government contract. It bars contractors from using your services. It bars subcontractors from using your services. Every door closes. Simultaneously.

For a disagreement over contract terms.

Why the School Strike Matters

The Iran school strike happened before the supply chain designation. It happened while the Pentagon and Anthropic were negotiating. See: https://www.nytimes.com/2026/03/11/us/politics/iran-school-missile-strike.html

The strike killed 175 people because of "outdated data." The Pentagon's own investigation could not explain why the information hadn't been verified before the missile was launched. No one double-checked. The error cascaded.

This is precisely the failure Anthropic said the technology wasn't ready to prevent. This is the scenario Amodei pointed to when he said autonomous systems need guardrails, not because of values but because of capability.

The Pentagon proceeded anyway with other tools. ChatGPT and Grok—models without Anthropic's safety constraints—were cleared for targeting. They were deployed in the same conflict where the school was struck.

Anthropic was right. Not because their values are superior. But because their assessment of the technology's readiness was accurate.

The Pentagon responded to Anthropic being right by trying to destroy the company.

What the Lawsuit Argues

The lawsuit makes three legal claims:

First Amendment claim: Compelling Anthropic to remove safety constraints and deploy Claude in ways that violate the model's training and design principles constitutes compelled speech. The Pentagon would be forcing a developer to watch their creation be misused.

Administrative Procedure Act claim: The Pentagon did not follow notice-and-comment procedures for a major decision affecting a contractor. The agency should have to justify the decision, allow Anthropic to respond, and create a public record.

Contract law claim: The Pentagon is retroactively changing terms of a signed agreement and using regulatory power to retaliate for Anthropic's refusal to accept the change.

The Pentagon's position is implicit: once software is commercially available, the military can use it however it wants, and if the developer doesn't like it, they should have thought of that before selling.

Anthropic's position is different. In its public statement (https://www.anthropic.com/news/statement-department-of-war), the company argues there is no blanket exemption from the Constitution for military necessity, and that the government can't use regulatory power to destroy a company over a contract dispute.

The Excluded and the Included

The asymmetry reveals what this is really about.

Claude is excluded from targeting systems. ChatGPT and Grok—models built without the safety constraints that caused the Pentagon's conflict with Anthropic—are cleared and deployed.

OpenAI didn't sue. Elon Musk didn't sue. Neither has the institutional commitment or the technical capability to say no to the Pentagon.

Anthropic is the only developer with both. The Pentagon is discovering what that looks like when it matters.

If Anthropic loses, the message is clear: don't build safety constraints into your models, or the military will find a way to make you regret it.

If Anthropic wins, it establishes that developers have legal standing to refuse deployments they believe violate the Constitution or exceed the technology's capability. It means military procurement teams can't simply integrate whatever software they want without the developer's consent.

One of these positions will shape military AI policy for a decade.