Pentagon threatens to cut off Anthropic in AI safeguards dispute: Report | Technology News

New Delhi | Feb 15, 2026, 09:35 AM IST | 2 min read

The Pentagon is considering ending its relationship with artificial intelligence company Anthropic over its insistence on keeping some restrictions on how the U.S. military uses its models, Axios reported on Saturday, citing an administration official.

The Pentagon is pushing four AI companies to let the military use their tools for “all lawful purposes,” including weapons development, intelligence collection and battlefield operations, but Anthropic has not agreed to those terms, and the Pentagon is losing patience after months of negotiations, according to the Axios report.

The other three companies are OpenAI, Google and xAI.

An Anthropic spokesperson said the company had not discussed the use of its AI model Claude for specific operations with the Pentagon. The spokesperson said conversations with the U.S. government so far had focused on a specific set of usage policy questions, including hard limits around fully autonomous weapons and mass domestic surveillance, none of which related to current operations.

The Pentagon did not immediately respond to Reuters’ request for comment.

Anthropic’s AI model Claude was used in the U.S. military’s operation to capture former Venezuelan President Nicolas Maduro, with Claude deployed via Anthropic’s partnership with data firm Palantir, the Wall Street Journal reported on Friday.

Reuters reported on Wednesday that the Pentagon was pushing top AI companies, including OpenAI and Anthropic, to make their artificial intelligence tools available on classified networks without many of the standard restrictions that the companies apply to users.
