A federal judge sharply criticized the Pentagon’s ban on Anthropic during a hearing Tuesday, as the standoff between the artificial intelligence giant and the Trump administration played out in federal court.
The hearing gave the AI company the chance to argue that the Pentagon blacklisted it as a national security risk in retaliation for its safe-use requirements.
Anthropic drew the ire of President Donald Trump and Defense Secretary Pete Hegseth in February after it refused to allow the Pentagon to use its Claude AI model for autonomous lethal warfare and the mass surveillance of Americans, citing safety and responsibility concerns.
The startup’s attempt to restrict the military’s use of its products led Trump to order the federal government to cut ties with Anthropic. The Pentagon formally designated the company a “supply-chain risk” to national security — a serious designation typically reserved for companies with ties to America’s foreign adversaries.
Now, Anthropic is asking the court to bar the Pentagon and the White House from enforcing the supply-chain risk designation while its request for reconsideration is litigated; the company argues the Pentagon unlawfully interpreted the underlying statute. It is also seeking to prove the designation violates its First Amendment rights.
U.S. District Judge Rita Lin questioned whether the Trump administration’s actions are a punitive tactic and expressed concern that the restrictions violate Anthropic’s free speech protections.
“I don’t know if it’s murder, but it looks like an attempt to cripple Anthropic,” Lin said. “Specifically, my concern is whether Anthropic is being punished for criticizing the government’s contracting position in the press.”
Lin also called the restrictions “troubling,” adding that “they don’t really seem to be tailored to the stated national security concern.”
“If the worry is about the integrity of the operational chain of command, the Pentagon could just stop using Claude,” she said.
The judge said she expects to issue a ruling in the next few days. Anthropic requested a decision by March 26, but the court is not obligated to adhere to that date.
The case spotlights a question that will carry significant weight as Washington grapples with how to regulate AI: Who gets to decide the limits, risks and potential misuse of the rapidly evolving technology — the innovators themselves or the federal government, which has paid private companies millions to harness AI’s capabilities?
“That tension is going to shape the next phase of AI regulation in Washington,” said Joe Hoefer, the chief AI officer at Monument Advocacy, a Washington-based lobbying firm.
“The real significance here isn’t just the action against Anthropic, but the precedent it sets for how Washington will arbitrate tensions between AI developers and the national security community,” Hoefer said. “That dynamic will shape how the entire industry approaches government partnerships going forward.”
If the Pentagon’s classification stands, Hoefer said, it would create a more cautious environment for AI developers, leaving them less room to define and mitigate risk on their own terms.
Dario Amodei, CEO of Anthropic and a former founding member of OpenAI, said the company had “no choice but to challenge” the Pentagon’s actions in court.
“Anthropic was founded based on the belief that AI technologies should be developed and used in a way that maximizes positive outcomes for humanity,” Anthropic’s attorneys wrote in the lawsuit, filed March 9 in the Northern District of California. “Anthropic brings this suit because the federal government has retaliated against it for expressing that principle.”
In 2025, Anthropic inked a $200 million contract with the Pentagon for the use of its AI technology in classified defense systems. As part of the agreement, the startup was to prototype AI capabilities that advanced U.S. national security, according to a news release announcing the contract.
But Anthropic argued that negotiations over the development of a military-specific Claude AI broke down after it refused to allow the Pentagon to use the technology for “all lawful uses,” a standard that would have included mass surveillance of Americans and the deployment of lethal autonomous weapons without human oversight, both of which Amodei has said the technology is not ready for.
In a post on X announcing the supply-chain risk designation, Hegseth called Anthropic’s red line a “master class in arrogance and betrayal” and said Amodei had decided to “strong-arm the United States military into submission.”
In a Truth Social post written entirely in capital letters, Trump said, “The United States will never allow a radical left, woke company to dictate how our great military fights and wins wars,” and called on federal agencies to “immediately cease” their use of Anthropic AI.
Hours after the Anthropic ban in February, OpenAI CEO Sam Altman, Amodei’s longtime rival, announced that OpenAI had struck a deal with the Pentagon for its AI tools to be used in the military’s classified systems.