Pentagon gives AI firm ultimatum: lift military limits by Friday or lose $200M deal

Pentagon gives Anthropic until Friday to remove military restrictions on its Claude AI model or face contract termination and possible invocation of the Defense Production Act.

The Pentagon has given artificial intelligence firm Anthropic until Friday to lift restrictions on how its Claude AI system can be used by the military, warning it could cancel a $200 million contract or take other punitive steps if the company refuses, according to multiple sources familiar with the discussions.

The skirmish broke out after the Pentagon said Anthropic had asked whether its product was used in the January military operation to capture Venezuelan leader Nicolás Maduro, phrasing the inquiry in a way that suggested the company might not approve if it had been. The Pentagon insists AI companies must allow their products to be used for all lawful military use cases, without company oversight or approval.

Anthropic has indicated its red lines are uses involving fully autonomous weapons or mass surveillance of Americans.


War Secretary Pete Hegseth delivered an ultimatum during a Tuesday meeting at the Pentagon with Anthropic CEO Dario Amodei, even as Hegseth praised the company’s technology and said the department wants to continue working with the firm, sources said.

Hegseth told Amodei that if the company did not allow Claude to be used for all lawful purposes, it could face termination of its Pentagon contract, designation as a supply chain risk — potentially limiting its ability to work with defense vendors — or possible invocation of the Defense Production Act to compel access to the technology, according to sources familiar with the meeting.


Claude is currently the only advanced, commercial AI model of its kind operating inside the Pentagon’s classified networks, under a $200 million contract awarded in summer 2025, significantly raising the stakes of the dispute.

Pentagon officials argue the Department of Defense cannot depend on a private company that maintains categorical restrictions on certain uses of its technology, even if those uses are lawful. During the meeting, Hegseth compared the situation to being told the military could not use a specific aircraft for a mission, according to a source familiar with the exchange.

The dispute represents an early test of who controls the guardrails on advanced AI inside U.S. defense systems — private companies or the Pentagon. The outcome could shape how the military partners with leading AI developers as it moves to integrate more powerful machine learning tools into national security operations.

Anthropic, which has branded itself as a safety-oriented AI company, has said its policies are meant to reduce the risk of misuse as advanced AI systems become more powerful.

During the meeting, Amodei walked through those restrictions and argued they would not interfere with lawful, legitimate War Department operations, according to a source familiar with the meeting.

A senior Pentagon official claimed the department's position "has nothing to do with mass surveillance or autonomous targeting" because "there's always a human involved and the department always follows the law."

Even as tensions rose, officials on both sides indicated that fully autonomous weapons are not currently contemplated under the department’s lawful use framework, suggesting the clash is as much about control as about battlefield applications.



During Tuesday’s meeting, Hegseth explicitly referenced potential use of the Defense Production Act, termination of Anthropic’s existing contract and the possibility of designating the company a supply chain risk if it does not agree to allow its products to be used for all lawful purposes, sources said.

Such steps reflect very different forms of federal leverage.

A supply chain risk designation could restrict Anthropic's ability to work with federal vendors and contractors by signaling that the company poses reliability or governance concerns. Invoking the Defense Production Act, by contrast, would represent a rare attempt to use national security authorities to compel access to frontier AI systems deemed critical to defense needs.

Terminating the contract would carry consequences beyond ending a vendor relationship. Because Claude is currently embedded inside the Pentagon's classified networks under the $200 million agreement, cancellation could disrupt existing workflows and force the department to transition sensitive systems to an alternative provider.

Pentagon officials also said Elon Musk's xAI has agreed to allow its Grok chatbot to be used for all lawful purposes, including potential integration into classified systems, and that other frontier AI firms are "close" to similar arrangements. xAI did not immediately respond to a request for comment.

Anthropic, in a statement attributed to a company spokesperson, said: “Anthropic CEO Dario Amodei met with Secretary Hegseth at the Pentagon this morning. During the conversation, Dario expressed appreciation for the Department’s work and thanked the Secretary for his service. We continued good-faith conversations about our usage policy to ensure Anthropic can continue to support the government’s national security mission in line with what our models can reliably and responsibly do.”
