Featured

Pentagon labels Anthropic ‘supply chain risk’ to national security — but will the designation last?

The Defense Department on Thursday reportedly designated artificial intelligence company Anthropic a “supply chain risk” to U.S. national security — even as the firm’s AI models are being used to support the U.S. war against Iran.

The Pentagon hit the San Francisco-based company, maker of the popular Claude AI tool, with the label after the two sides failed to agree on how the military could use the company’s AI models. The supply chain risk designation, previously reserved for foreign adversaries and associated companies, comes after the company voiced concern that its technology might be used for mass domestic surveillance or developing fully autonomous weapons. 

The Pentagon denied planning to use Claude AI for either of those purposes.

Bloomberg first reported the formal designation, but the move came as little surprise. Defense Secretary Pete Hegseth last Friday said the Pentagon intended to take the extraordinary step. 

President Trump on Thursday expressed his own frustration with the company because it refused to give the military unlimited access to its technology.

“Well, I fired Anthropic. Anthropic is in trouble because I fired [them] like dogs, because they shouldn’t have done that,” Mr. Trump told Politico in an interview.

The Defense Department did not immediately respond to The Washington Times’ request for comment.

Anthropic says its models currently are the only ones approved for use in any classified settings, but rival company OpenAI announced its own deal with the Pentagon last week. Notably, the OpenAI statement on the issue says its deal addresses concerns about domestic surveillance and autonomous weapons.

“We think our agreement has more guardrails than any previous agreement for classified AI deployments, including Anthropic’s,” the company said.

The Pentagon said it intended to wind down Claude AI’s use for military applications over the next six months. 

But the Financial Times reported this week that Anthropic is back in negotiations with the Pentagon, raising the possibility that the “supply chain risk” label may be temporary. That designation means that other companies cannot do business with Anthropic if they want government contracts. 

The latest developments come as Anthropic’s models are being used in the planning and support for U.S. military operations in Iran, according to one defense official who spoke on the condition of anonymity due to potential security risks.

The Trump administration’s public actions have built toward this moment over the past week.

Analysts suggest the targeting of Anthropic may present legal issues.

“You can’t just ban a company from doing business unless there’s some reason to do it,” Dan Meyer, the national security law partner at the law firm Tully Rinckey, told The Times. “That’s why they’re reaching for the supply chain arguments. That’s why they’re reaching for the national Defense Production Act arguments — because they can see the debarment case coming.”

The Pentagon had threatened to invoke the Defense Production Act, which would have essentially compelled Anthropic to give the government unlimited access to Claude AI. 

Anthropic says it had agreed to the vast majority of the Pentagon’s requested use cases. But the company drew a line in the sand, according to Anthropic CEO Dario Amodei, because laws and regulations have not caught up with the technology.

“The technology is advancing so fast that it’s out of step with the law,” Mr. Amodei said in a recent interview with CBS News. “Taking data collected by private firms, having it bought by the government, and analyzing it en masse via AI — that actually isn’t illegal.”

Mr. Amodei also stressed that while AI is advancing quickly, it is “nowhere near reliable enough to make fully autonomous weapons” and life-and-death choices.

The public nature of the dispute may have its own consequences, according to analysis by experts at the Special Competitive Studies Project.

“Putting yourself in the shoes of a small defense startup, I do wonder what kind of chilling effect this may have on companies wanting to do business with the Department of War,” David Lin, a senior director at SCSP, said Friday during the organization’s live President’s Tech Brief show.

The risk, according to Mr. Lin and his colleagues, is that the dispute could undermine years of relationship-building between Silicon Valley and the Pentagon, and give Anthropic’s competitors an opening at a moment when the U.S. needs its most capable AI systems integrated into defense operations. One analyst on the panel noted the dispute was “reinforcing a narrative of doomerism about AI” at a time when American adversaries are accelerating their own autonomous weapons programs.

Mr. Meyer, the Tully Rinckey attorney, said that even if both sides move toward a settlement, the timeline will be long.

“I think it would be optimistic if this would be resolved before Christmas,” he said.
