U.S. appeals court lets Pentagon blacklist of Anthropic stand for now

FILE PHOTO: U.S. Department of War and Anthropic logos are seen in this illustration created on March 1, 2026. REUTERS/Dado Ruvic/Illustration/File Photo
Reuters

A federal appeals court in Washington, D.C., on Wednesday declined to block the Pentagon’s national security blacklisting of Anthropic for now, handing a win to the Trump administration after a separate appeals court reached the opposite conclusion.

Anthropic, developer of the popular Claude AI assistant, alleges that Defense Secretary Pete Hegseth overstepped his authority when he designated the company a national security supply-chain risk, a label that bars Anthropic from Pentagon contracts and could trigger a government-wide blacklist.

Anthropic executives have said the designation could cost the company billions of dollars in lost business and reputational damage.

A panel of judges on the U.S. Court of Appeals for the District of Columbia Circuit denied Anthropic’s bid to pause the designation while the case proceeds. The decision is not a final ruling.

Lawsuits against Hegseth’s orders

The lawsuit is one of two Anthropic has filed over Hegseth’s unprecedented move, which came after the company refused to allow the military to use its AI chatbot, Claude, for U.S. surveillance or autonomous weapons, citing safety and ethical concerns.

Hegseth issued orders designating Anthropic under two different laws, and the company is challenging each separately.

A federal judge in California blocked one of the orders on March 26, saying the Pentagon appeared to have unlawfully retaliated against Anthropic for its views on AI safety.

Anthropic’s designation marks the first time a U.S. company has been publicly labelled a supply-chain risk under obscure government procurement statutes aimed at protecting military systems from sabotage or infiltration.

In its lawsuits, Anthropic says the government violated its right to free speech under the First Amendment of the Constitution by retaliating against its views on AI safety. The company said it was not given an opportunity to challenge its designation, in violation of its Fifth Amendment right to due process.

The lawsuits argue the designations were unlawful, unsupported by evidence and inconsistent with the military’s previous praise of Claude.

The Justice Department says Anthropic’s refusal to lift the restrictions could create uncertainty within the Pentagon over how Claude may be used and risk disabling military systems during operations, according to a court filing.

The government said its decision stemmed from Anthropic’s refusal to accept contractual terms, not its views on AI safety.

The Washington, D.C., case concerns a law that could expand the blacklist across the wider civilian government following an interagency review.

The California case deals with a narrower statute that excludes Anthropic from Pentagon contracts related to military information systems.

British proposal

On Sunday, the Financial Times reported that Anthropic’s dispute with the U.S. Department of Defense has prompted Britain to consider expanding the Claude developer’s presence in the country.

Proposals range from expanding Anthropic’s London office to pursuing a dual stock market listing, the newspaper said, citing people familiar with the plans.

Anthropic and the Department for Science, Innovation and Technology did not immediately respond to Reuters requests for comment.

Prime Minister Keir Starmer’s office has supported the department’s work, which is expected to be presented to Anthropic chief executive Dario Amodei during a visit in late May, the FT said.

Australian agreement

On April 1, Anthropic said it would sign an agreement to share its economic index data with the Australian government to help track artificial intelligence adoption across the economy and its impact on workers and jobs.

Under the agreement, the Claude developer will share findings on emerging AI model capabilities and risks, participate in joint safety evaluations, and collaborate on research with Australian universities.

Anthropic said it would also target investment in data centre infrastructure and energy across Australia.

The deal mirrors similar agreements with safety institutes in Japan.

Australia currently has no specific AI legislation. The centre-left Labor government has said it will rely on existing laws to manage emerging AI risks while introducing voluntary guidelines amid privacy and safety concerns.
