
Ten days ago, Dario Amodei did something almost no AI executive of his standing has ever done: he sued the United States Department of Defense.
On March 9, 2026, Anthropic filed twin complaints, one in the Northern District of California and one in the District of Columbia, against the DoD, Defense Secretary Pete Hegseth, and more than a dozen other federal agencies and officials. The claim, in plain English: the government is punishing Anthropic for refusing to let the Pentagon use Claude for "any lawful purpose," including domestic mass surveillance and fully autonomous lethal targeting. The legal language is starker. Anthropic's San Francisco complaint calls the Pentagon's actions "unprecedented and unlawful" and frames the supply-chain-risk designation as First Amendment retaliation: punishment for speech the government does not like.
The decision is striking on its own. It becomes more interesting when you set it against the worldview Amodei has spent the last eighteen months articulating in public.
In October 2024, Amodei published Machines of Loving Grace, a long essay arguing that powerful AI could compress decades of biomedical, economic, and democratic progress into a handful of years. The piece is utopian in tone, but its operational core is hard-edged. Amodei proposes what he calls an entente strategy: a coalition of democracies, led by the United States, secures the AI supply chain, scales models faster than authoritarian rivals, and uses that lead to lock in "robust military superiority" while bargaining the rest of the world into the democratic camp.
The essay is, among other things, a justification for Anthropic doing business with the U.S. national-security state. If democracies must dominate AI to keep authoritarians from doing it first, then a frontier lab that refuses to engage with the Pentagon is a frontier lab that has chosen the wrong side of history. That logic is what produced the $200 million DoD contract Anthropic signed in mid-2025. It is also the logic Hegseth invoked in February when he gave Anthropic a 5:01 p.m. Friday deadline to drop its usage restrictions or be designated a supply-chain risk.
Amodei did not drop them. On February 26, he told CBS that the company's red lines on mass surveillance and autonomous weapons stood firm: "We do not change our position." On February 27, President Trump directed federal agencies to cease using Anthropic's products, and Hegseth made the supply-chain-risk designation official. Ten days after that, Anthropic was in federal court.
The two complaints are coordinated, filed in jurisdictions friendlier to administrative-law and First Amendment claims respectively. Together they argue that the supply-chain-risk designation is retaliation for protected speech and that it was imposed without lawful process.
If any of those theories survive, Anthropic gets its contracts back and a precedent that constrains how a future administration can weaponize procurement. If none of them survive, Anthropic is locked out of the largest single buyer of frontier AI in the world for the foreseeable future.
That is the bet.
Here is where Amodei's record gets harder to read cleanly.
On the same week the Pentagon dispute escalated, Anthropic revised its Responsible Scaling Policy. The previous version committed the company to pause training of more powerful models if their capabilities outstripped Anthropic's ability to control them. That commitment, framed for two years as a load-bearing safety promise, was loosened in the new version on the grounds that it could "hinder" Anthropic's ability to compete. Critics noted, fairly, that the rewrite arrived in the middle of a national-security fight in which Anthropic was being pushed to scale faster, not slower.
Meanwhile, the model the entente strategy implicitly depends on already exists internally. Anthropic's next-generation system, known inside the company by the codename that industry watchers and a handful of early-access partners have been quietly discussing, is being prepared for a constrained rollout to security partners under what Anthropic calls Project Glasswing. The pitch is that a lab capable of building dangerous cyber-offensive AI is the same lab best positioned to defend against it. Hold the rhetoric and the operational reality side by side and the picture sharpens: a company that talks like a safety lab, scales like a frontier lab, sells like a defense contractor, and litigates like a civil-liberties plaintiff, sometimes all in the same week.
None of these are necessarily contradictions. They are, however, choices, and Amodei is making all of them simultaneously.
What does all this tell us about Amodei? Three things, with reasonable confidence.
One: Amodei believes the red lines are the product. Anthropic's commercial differentiation among large enterprise buyers has been "the safer frontier lab." Surrendering the mass-surveillance and autonomous-weapons restrictions under government pressure would have collapsed that differentiation overnight. The lawsuit is, in part, brand defense, but a brand he appears to actually believe in.
Two: he is willing to spend political capital he has not yet earned. Suing a sitting administration is a near-irreversible act. It forecloses backchannel negotiation, locks the company into a posture other agencies will read as adversarial, and invites retaliation across the regulatory surface: export controls, antitrust, tax, immigration. Amodei is betting that legal vindication, if it comes, will be worth more than the relationships he is burning to pursue it.
Three: the entente strategy has a problem he has not yet solved publicly. Machines of Loving Grace assumes the democratic state and the frontier lab are aligned partners. The last three weeks suggest the alignment is conditional, contested, and, when the state defines "lawful use" maximally, potentially incompatible with the safety commitments the lab claims as its identity. A worldview that depends on a partnership now needs a backup plan for what happens when the partner stops behaving like one.
For the next sixty days, three things are worth tracking. First, the preliminary-injunction motion: a ruling either way will signal how the courts read the retaliation theory. Second, whether other frontier labs (OpenAI, Google DeepMind, Meta) file amicus briefs, sit it out, or quietly take the contracts Anthropic vacated. Third, whether the next Responsible Scaling Policy revision tightens or loosens further. The rhetoric will keep saying democracies must lead. The filings, the contracts, and the policy edits will tell you what Amodei actually means by it.