A battle of wills between Anthropic and Defense Secretary Pete Hegseth concluded Friday with the AI company effectively banned from all federal government systems. It was something of a deus ex machina, courtesy of President Donald Trump, to a moral standoff over how the Pentagon could use Anthropic’s AI model, Claude. Hegseth’s hardline stance — and Trump’s dudgeon over being challenged — showcase how right CEO Dario Amodei was to try to put (some) limits on the way his company’s technology was deployed.
Anthropic first won a major $200 million contract with the Defense Department last year, granting access to the company’s AI models and allowing them to be deployed on work involving classified material. But Anthropic, which was founded over concerns about the lack of safeguards at other AI startups like OpenAI, is one of the few companies in the AI space calling for more federal regulation over the emerging technology. It’s telling then that OpenAI swooped in soon after Anthropic’s unceremonious dumping to fill the vacuum, despite pledging to (somehow) prevent the Pentagon from abusing its technology.
Amodei soon found himself defending his company from claims that it is building “woke AI,” as White House AI czar David Sacks put it last year, and at odds with an executive order demanding that AI models used by the federal government be “free from ideological bias.” But the tensions between the Pentagon and Anthropic really exploded after Amodei published an essay in January listing his concerns about how AI could be used by governments:
“There need to be limits to what we allow our governments to do with AI, so that they don’t seize power or repress their own people. The formulation I have come up with is that we should use AI for national defense in all ways except those which would make us more like our autocratic adversaries.”
In brief, Amodei named two major red lines for Claude’s use: conducting mass surveillance domestically and developing autonomous weapons — that is, weapons that don’t need a human to operate them. The Pentagon says it doesn’t intend to do either of those things — but also won’t let someone else say that it can’t. Defense officials have said that Anthropic must instead accept that its services can be used “for all lawful purposes” regardless of what the terms and conditions of its contract say. Hegseth likewise said last month at SpaceX headquarters that the Pentagon will not “employ AI models that won’t allow you to fight wars.”
The extremely heated rhetoric from the Pentagon toward Anthropic over the last week was well outside the norm. Hegseth warned Amodei in a meeting Tuesday that without carte blanche access to Claude, the administration could invoke the Defense Production Act or label the company a “supply-chain risk” on par with foreign companies. Both are wild threats to make against an American company, particularly one already integrated into the DOD’s systems. And Amodei rightly pointed out in a statement on Thursday that the two threats “are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.”
Despite Hegseth’s warnings, Amodei concluded in his statement that “we cannot in good conscience accede to their request.” However, before Hegseth’s 5 p.m. Friday deadline could arrive, Trump came out on Truth Social to declare that over the next six months every federal agency will cease use of Anthropic’s technology. “We don’t need it, we don’t want it, and will not do business with them again!” Trump wrote, calling Anthropic an “out-of-control, Radical Left AI company.”
Hegseth soon thereafter made good on his promise, writing on X that he’d instructed his department to name the company a “Supply-Chain Risk to National Security,” effectively making it persona non grata for the Defense Department and its suppliers.
The Pentagon’s inability to accept constraints isn’t necessarily unique to Trump or Hegseth. The defense community has pushed back hard against anything seen as a constraint on potential actions under GOP and Democratic administrations alike. History is littered with examples, from America’s refusal to join the International Criminal Court to rejecting the international treaty banning land mines. Whether or not the Pentagon intends to use Claude in the ways Anthropic rejects is in many ways secondary to the idea that the military would accept guardrails on its actions from outside the chain of command.
At the same time, calling out Hegseth and the Pentagon is not intended to be a sweeping defense of Anthropic or Claude’s generative AI model, much less the AI industry writ large. Let’s not forget that it was only recently that Anthropic softened its own internal safeguards on responsibly scaling up its models in order to better compete with other companies.
Meanwhile, the red lines set by Amodei sound weighty, but still allow the company’s AI to be deployed in any number of potentially lethal ways. One hypothetical example: sifting through reams of surveillance data to determine targets for the military strikes against alleged drug boats off the coast of South America. Whether or not any profile generated on the targets of these signature strikes is accurate isn’t at issue here, only whether a human drone operator is the one to pull the trigger.
It’s easy to cast Amodei as a champion of responsibility when Elon Musk’s Grok is already in the process of being granted access to the same classified systems as Claude. The AI chatbot from Musk’s xAI does little to hide the right-wing, archconservative view of the world it’s been programmed to espouse. But with Musk’s total capitulation to the DOD’s access requirements, there have been no similar concerns from Hegseth or the White House about incorporating it into federal systems.
Given the competition’s amoral mindset towards products that have the potential to cause massive harm, the bar for assessing Anthropic’s self-regulation is so low as to truly be in hell. It means the Trumpist charges that Claude is somehow “woke AI” are both an overblown attack and an unearned badge of honor.
As with almost every major technological leap, America’s laws are deeply lagging when it comes to policing the rapid growth of AI. Without real safeguards and regulations, there’s little stopping the Pentagon from blacklisting a company that dares draw the line at having Americans’ data siphoned up rather than foreigners’, or at having a robot being the one pulling the trigger.
The post Anthropic was right not to trust Pete Hegseth appeared first on MS NOW.