Trump banned Anthropic, then used it to strike Iran: report 

US President Donald Trump’s administration moved to ban Anthropic from federal use, and then, just hours later, the US military reportedly relied on Anthropic’s own Claude AI during strikes on Iran.

The Wall Street Journal reported, citing people familiar with the matter, that US military commands worldwide, including US Central Command (CENTCOM), employed Claude for work tied to operations.

The episode lands as more than a Silicon Valley-versus-Washington spat.

It reads like a case study in how political messaging collides with the Pentagon’s operational reality when a tool is already embedded deep inside classified systems.

The ban: A public dressing-down, with an asterisk

Trump’s ultimatum was not subtle.

In a Truth Social post, he wrote he was directing “every federal agency” to “immediately cease” using Anthropic’s technology, adding: “We don’t need it, we don’t want it, and will not do business with them again!”

The clash stems from a dispute over Anthropic’s refusal to loosen certain guardrails around how its models can be used in military contexts.

Defense Secretary Pete Hegseth labeled the company a “supply chain risk,” a term typically associated with vendors viewed as strategically untrustworthy.

But the ban came with a key nuance that undercut the “effective immediately” drama: multiple reports describe a six-month phase-out window for agencies already using Anthropic’s systems.

That asterisk matters because Anthropic isn’t a theoretical vendor in Washington.

The company publicly announced in 2025 that the Department of Defense awarded it a two-year prototype agreement with a $200 million ceiling, and it said it was already deploying “Claude Gov” models for national security customers.

Anthropic has also said Claude is integrated into defense workflows “on classified networks” through partners including Palantir, which is precisely the sort of integration that does not unwind overnight.

Claude was reportedly in the Iran operations room

The Wall Street Journal’s core reporting is the part that makes the story sting.

According to the Journal, citing people familiar with the matter, US military commands, including CENTCOM, used Anthropic’s Claude during the Iran strikes just hours after the administration’s denunciation.

CENTCOM declined to comment on the specific systems being used in ongoing operations against Iran.

That non-answer is, in its own way, the point: the military’s stack is a web of vendors and platforms, and the day’s politics doesn’t always map neatly onto the night’s mission planning.

The Palantir link helps explain the continuity.

In a separate episode involving a US operation in Venezuela that captured Nicolás Maduro, Claude was used “through Anthropic’s collaboration with Palantir Technologies.”

If Claude is accessed through a classified platform layer rather than as a standalone app, the practical question becomes less “Is it banned?” and more “Which workflows still touch it, and who can realistically shut them off quickly?”

The bigger question: Who controls AI during war?

The administration’s posture suggests it wants the government to retain broad discretion once it pays for an AI system.

Anthropic’s public stance is that its restrictions are part of its safety posture and that contract language allowing safeguards to be overridden would defeat the point of having them.

The knock-on effect is already visible in the competitive scramble.

OpenAI struck a Pentagon deal hours after the Anthropic ban, and CEO Sam Altman said the agreement reflects safety principles such as prohibitions on domestic mass surveillance and human responsibility for the use of force.

So the unresolved tension isn’t whether the US military will use AI. Reportedly, it already is, and at speed.

The real question is whether any AI company can credibly claim to control the ethics of its model once it’s inside a government’s most sensitive systems, or whether the only enforceable guardrail is the one the buyer is willing to live with.

The post Trump banned Anthropic, then used it to strike Iran: report appeared first on Invezz
