OpenAI’s Sam Altman: ‘Never say never’ about building new weapons for the Pentagon

OpenAI CEO Sam Altman won’t rule out building weapons for the U.S. military, but he said he doesn’t expect to manufacture new attack capabilities soon.

The artificial intelligence maker that started as a nonprofit a decade ago has since pursued a for-profit structure and rewritten its rules to work with the Defense Department.

OpenAI added retired Army Gen. Paul Nakasone, the former National Security Agency director, to its board last year, and the company has courted new allies in Washington.

Asked by Mr. Nakasone on Thursday if OpenAI would help make new weapons systems for the Pentagon, Mr. Altman punted in remarks at Vanderbilt University’s Summit on Modern Conflict and Emerging Threats.

“I will never say never because the world could get really weird, and at that point, you sort of have to look at what’s happening and say, ‘Let’s make a trade-off among some really bad options,’ which you all have to do all the time; thankfully, we don’t,” Mr. Altman said. “I think in the foreseeable future we would not do that.”

Mr. Altman said he sees other opportunities for him to work closely with America’s national security establishment.

In January 2024, observers noticed that OpenAI had rewritten its usage rules to allow its work with the Pentagon to proceed. The earlier rules had prohibited the use of its AI models for military purposes and weapons development.

OpenAI told The Washington Times last year that even under the updated rules, its tools could not be used to make weaponry.

“Our policy does not allow our tools to be used to harm people, develop weapons, for communications surveillance, or to injure others or destroy property,” the company said in a statement. “There are, however, national security use cases that align with our mission.”

Some of those use cases became public last year. Defense tech company Anduril said in December it was partnering with OpenAI to develop and deploy AI solutions for national security missions, particularly to protect U.S. military personnel from attacks by unmanned drones.

Mr. Altman said at the time that the partnership would help the national security community understand and responsibly use its AI tools.

In 2023, the Defense Advanced Research Projects Agency acknowledged running a program that bypassed security constraints in OpenAI’s ChatGPT and got the model to provide bomb-making instructions.

As AI developers pursue advanced models that surpass human intelligence, interest has grown in applying new tech tools to augment militaries’ offensive and defensive capabilities.

Mr. Altman indicated on Thursday that he wanted to work with the U.S. government, but he noted people probably don’t want his company’s products having authority over military decisions.

“I think there are really wonderful things we can and are doing together,” Mr. Altman said. “I don’t think most of the world wants AI making weapons decisions.”
