A federal appeals court in Washington, DC, on Wednesday rejected Anthropic’s request to temporarily block the Pentagon’s blacklisting of its AI tech while its lawsuit against the Department of War plays out in court.
The San Francisco-based AI firm is suing the department for designating it a supply-chain risk last month – alleging retaliation after Anthropic sought to prevent the government from using its tech for mass surveillance and weaponry.
“In our view, the equitable balance here cuts in favor of the government,” the appeals court said in its decision Wednesday.
“On one side is a relatively contained risk of financial harm to a single private company. On the other side is judicial management of how, and through whom, the Department of War secures vital AI technology during an active military conflict.”
In a separate decision last month, a judge in San Francisco federal court granted Anthropic a preliminary injunction during its legal challenge.
After the split court decisions, Anthropic is unable to work with the Department of War, and defense contractors are barred from using its Claude chatbot in their work with the Pentagon.
Anthropic can still work with other government agencies, and defense contractors are allowed to use Claude outside of their interactions with the Pentagon.
In its ruling Wednesday, the appeals court said the company “will likely suffer some degree of irreparable harm absent a stay,” but that its concerns “seem primarily financial in nature.”
Despite Anthropic’s argument that the Pentagon’s blacklisting restricts its free speech, the court wrote that “Anthropic does not show that its speech has been chilled during the pendency of this litigation.”
But the court also said “substantial expedition is warranted” in the case because of the financial harm Anthropic will face during ongoing litigation.
“We’re grateful the court recognized these issues need to be resolved quickly and remain confident the courts will ultimately agree that these supply chain designations were unlawful,” an Anthropic spokesperson told The Post in a statement.
“While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI.”
Acting US Attorney General Todd Blanche, meanwhile, called the appeals court decision a “resounding victory” in a post on X late Wednesday.
“Military authority and operational control belong to the Commander-in-Chief and Department of War, not a tech company,” he wrote.
Anthropic signed a $200 million contract with the Department of War in July that made it the sole provider of AI models on the government’s classified networks.
But talks soured during contract negotiations as War Secretary Pete Hegseth blasted the firm for seeking exemptions on the use of its models for mass surveillance of citizens and weaponry, insisting that the Pentagon should have free rein over the tools “for all lawful purposes.”
Hegseth slapped Anthropic with a supply-chain risk label – a designation previously reserved for foreign businesses that present national security threats, like Chinese firm Huawei Technologies.
If the designation remains in place, it will force all defense contractors to prove that they do not use Anthropic’s AI models in their work with the military.
Tensions heated up after a fiery 1,600-word missive from Anthropic CEO Dario Amodei bashing the Trump administration was leaked to the public last month. In it, the exec accused the Pentagon of targeting his company for not giving “dictator-style praise to Trump.”
Amodei apologized for the “tone” of his message, noting that his comments came hours after President Trump blasted Anthropic staff as “leftwing nut jobs” and Hegseth revealed his plans to label the company a supply-chain risk.
After the Pentagon effectively blacklisted Anthropic, Sam Altman’s OpenAI swooped in with a deal to provide AI services to the government.
In his leaked memo, Amodei wrote that Anthropic was being punished because he didn’t “donate to Trump,” while “OpenAI/Greg have donated a lot,” referring to OpenAI president Greg Brockman, the Information reported.
Amodei – who donated to Democratic former Vice President Kamala Harris’ failed presidential campaign – also mocked OpenAI’s “straight up lies,” adding that the platform’s safeguards are “maybe 20% real and 80% safety theater.”
During a tech conference in March, Altman took a jab at Amodei, saying that it’s “bad for society” if companies start abandoning their commitment to the democratic process because “some people don’t like the person or people currently in charge.”