This is a partial setback for the Californian artificial intelligence (AI) star. On Wednesday, April 8, a federal court in Washington refused to suspend the Pentagon’s ban on Anthropic, the company that had won its first legal victory against the Trump administration in San Francisco at the end of March.
The decision, seen by Agence France-Presse (AFP), keeps in force a measure requiring Pentagon subcontractors to certify that they do not use technologies from Anthropic, the creator of the Claude chatbot.
The move, which designated Anthropic a “supply chain risk” to the Pentagon, was taken on February 27 in response to the company’s refusal to let its artificial intelligence tools be used for mass surveillance of US citizens or to make weapons fully autonomous.
Read also | Article reserved for our subscribers Anthropic Blocks Pentagon’s Use of Its AI for ‘Mass Domestic Surveillance’ and ‘Fully Autonomous Weapons’
Until now, only non-American companies had been targeted by such a designation.
The effectiveness of this sanction, which falls under the federal public procurement code, is debated, however: some lawyers argue that the regulatory texts needed to implement it are still lacking.
“The best marketing investment”
On March 26, in a parallel case, Anthropic won a first victory against a similar sanction, this one taken under the military code: a judge in San Francisco suspended the directive from Defense Secretary Pete Hegseth designating Anthropic a “risk” to the Pentagon’s classified operations. The suspension will last until the case goes to trial in the coming months. The California judge also suspended a directive from Donald Trump ordering all federal agencies to stop using Anthropic technologies.
Read also | Article reserved for our subscribers San Francisco judge puts Pentagon in difficulty against Anthropic
The government has appealed the California decision, but it is the Washington proceedings that will return to the forefront first: the federal appeals court agreed to examine the merits on an expedited basis and set the hearing for May 19.
In their decision on Wednesday, the three judges found that the balance of interests tilted in favor of the government: while Anthropic suffers “likely irreparable harm”, that harm is “primarily financial”, whereas what is at stake for the government is the security of the Pentagon’s operations “in the context of an active military conflict”.
The court also suggests that the company has been able to benefit from the standoff, citing statements from its CEO, Dario Amodei, to his employees: “the general public and the media see us as the heroes (we are number 2 in the App Store!)”. It also cited an article from the trade publication Digiday estimating that Anthropic’s opposition to the Pentagon “could turn out to be Silicon Valley’s best marketing investment in years”.
Read also | Article reserved for our subscribers Anthropic restricts launch of its latest AI model to prevent cyberattack risks
On Monday, Anthropic announced “exponential” revenue growth, with revenues tripling in one quarter to a claimed $30 billion annualized, surpassing for the first time the figures posted by its great rival OpenAI.
Originally published at Almouwatin.com