US Defense Secretary Pete Hegseth’s threat to cut AI group Anthropic from government supply chains, or possibly compel it to prioritize government orders, raises several serious questions.
It’s the latest example of Washington’s strong-arm tactics in the corporate sector, and it shows how control over AI models is becoming a new battleground.
Hegseth has reportedly given Anthropic until Friday to give the US military full access to its applications, the latest escalation of an ongoing row between one of the world’s top AI startups and the US government.
So far, Anthropic has refused to give Washington complete access to its models for classified military use, including for potentially lethal missions carried out without human control and for domestic mass surveillance.
Sources close to Anthropic say the company has no intention of easing its usage restrictions, according to Reuters, despite Hegseth’s ultimatum.
What exactly has Hegseth threatened and why?
Hegseth called Anthropic CEO Dario Amodei to Washington for a meeting on Tuesday. According to media reports quoting people familiar with the talks, Hegseth made two direct threats to Amodei if Anthropic did not comply.
One was to cut the company out of the Pentagon’s supply chain; the other was to invoke the Defense Production Act, a Cold War-era measure that gives the US president the power to control domestic industry in the supposed interest of national defense.
“If they don’t get on board, [Hegseth] will ensure the Defense Production Act is invoked on Anthropic, compelling them to be used by the Pentagon regardless of if they want to or not,” the Financial Times quoted an unnamed senior Pentagon official as saying.
Hegseth wants the Pentagon to have unrestricted access to Anthropic’s generative AI chatbot Claude, but Anthropic, which has long billed itself as a safety-oriented AI company, is resisting.
The company is believed to oppose its Claude technology being used in operations where final military targeting decisions are taken without human intervention, or for mass surveillance within the United States.
What has been the relationship between Anthropic and the US military?
Since November 2024, Anthropic has been providing the Claude model to US intelligence and defense agencies.
According to the Wall Street Journal, the US military used Claude during the 2026 raid on Venezuela which resulted in the capture of Nicolas Maduro. Neither Anthropic nor the US defense department commented on the claims, and it is not clear precisely how the AI system was used in the raid.
The threat by Hegseth to remove Anthropic from Pentagon supply chains would have a serious financial impact on the company.
In July 2025, the US defense department awarded Anthropic a $200 million contract to “prototype frontier AI capabilities that advance US national security.”
Anthropic hailed the arrangement, with Thiyagu Ramasamy, the company’s head of public sector, saying it opened “a new chapter in Anthropic’s commitment to supporting US national security.”
However, at the time, it also emphasized its commitment to “responsible AI deployment.”
“At the heart of this work lies our conviction that the most powerful technologies carry the greatest responsibility,” it said in a statement. “We’re building AI systems to be reliable, interpretable, and steerable precisely because we recognize that in government contexts, where decisions affect millions and stakes couldn’t be higher, these qualities are essential.”
Is Anthropic as safety-oriented as it says?
Anthropic was founded in 2021 by seven former employees of OpenAI. According to CEO Dario Amodei, it was built “on a simple principle: AI should be a force for human progress, not peril.”
However, despite the row with the Pentagon, there are signs that Anthropic is reconsidering that commitment in pursuit of commercial ambitions.
On Tuesday (February 24), the same day as the Hegseth meeting, the company announced it was softening its core safety policy in order to remain competitive with other leading AI models.
“The policy environment has shifted toward prioritizing AI competitiveness and economic growth, while safety-oriented discussions have yet to gain meaningful traction at the federal level,” Anthropic said in a blog post announcing the changes.
Anthropic faces intense competition from AI rivals such as OpenAI and Google and is making the policy pivot as a result of what it sees as a lack of AI regulation at the federal level.
The Trump administration has resisted AI regulation at both the state and federal levels. A spokesperson for Anthropic told the Wall Street Journal that the policy shift was unrelated to the Pentagon negotiations.
What ethical questions are at stake?
If Anthropic were to submit to Hegseth’s demands, or if the defense department were to take control of Anthropic by invoking the Defense Production Act, it would inevitably lead to accusations that the company’s AI was no longer being used with a safety-first mindset.
The issue also shines a light on the Trump administration’s willingness to intervene directly in corporate decision-making and in sectors it deems of critical importance.
In August 2025, the Trump administration announced it had made an $8.9 billion investment in Intel, part of a series of moves to directly intervene in US chipmaking.
It has also intervened directly in the rare-earth sector, making major investments in firms such as Vulcan Elements, MP Materials and USA Rare Earth.
Edited by: Ashutosh Pandey
