The artificial intelligence revolution in warfare is well underway, and the Pentagon is scrambling to get its troops to the front.
The military has failed to fully communicate its AI problems and the solutions it is seeking, even as the hot new technology promises to rewrite the rules for preparing armies and fighting wars, Craig Martell, the Defense Department’s chief digital and artificial intelligence officer, acknowledged this week.
Mr. Martell assembled top AI minds from around the world for a brainstorming conference in Washington this week, frankly telling the tech experts, “We really need your help.”
“We were too quiet, we should have let you know a little bit more loudly,” Mr. Martell told the gathering on Tuesday. “We’re going to fix that going forward and we’re going to start at this symposium.”
The Department of Defense’s symposium is putting Big Tech companies and the founders of smaller firms in the same rooms with defense and intelligence officers to work on the complex challenges posed by AI-powered weaponry. For starters, Mr. Martell promised to open up access to government data for private industry over the next year.
The Department of Defense, he said, will create opportunities for AI developers to sit next to government users to get immediate feedback on how their tools can help the military do a better job.
This push will include collecting new testing and evaluation tools so the developers can create responsible AI algorithms for projects such as Replicator, a Department of Defense initiative to infuse AI products into its guns, bombs and other weaponry.
Deputy Secretary of Defense Kathleen H. Hicks noted last year that the department identified more than 180 instances where generative AI tools could add value to U.S. military operations.
But she noted at the time that most commercially available AI systems were not sufficiently mature to comply with the government’s ethical rules, a problem highlighted when a Defense Advanced Research Projects Agency program bypassed security constraints for OpenAI’s ChatGPT to deliver bomb-making instructions.
The department’s development of AI is driven by a new policy enacted last year for autonomous weapons, according to Defense Innovation Unit Deputy Director Aditi Kumar.
Ms. Kumar said in Silicon Valley last week that the policy means a human is no longer needed in the loop for various weapons systems, but the government instead requires “human judgment in the deployment of these weapons systems.”
The human touch
Precisely how the military replaces human beings with human judgment is still a major question mark, but the Pentagon is forming a partnership with the tech experts at Scale AI to test and evaluate its AI systems.
Scale AI, led by 27-year-old entrepreneur Alexandr Wang, said Tuesday it would build benchmark tests for the Department of Defense to scrutinize so-called large language models, or powerful algorithms that can process and analyze massive streams of data at a speed and level of sophistication never before imagined.
“The evaluation metrics will help identify generative AI models that are ready to support military applications with accurate and relevant results using DoD terminology and knowledge bases,” Scale AI said on its blog. “The rigorous [test and evaluation] process aims to enhance the robustness and resilience of AI systems in classified environments, enabling the adoption of LLM technology in secure environments.”
The department’s Task Force Lima is hard at work figuring out precisely where to use those large language models within the Department of Defense. The department has charged adversarial “red teams” with testing AI models for vulnerabilities, according to Capt. Manuel Xavier Lugo, the task force’s mission commander.
Asked at the symposium whether large language models could control autonomous weapons systems such as militarized drones, Capt. Lugo demurred, citing the need to protect sensitive information.
He also explained that unmanned aerial vehicles are among the machines that can take orders from AI models.
“If you think about it, the most basic use case in any of this stuff is plain language talking to machines,” Capt. Lugo said. “And that’s not a use case as UAV-specific, that’s a use case for anything.”
Mr. Martell said he was not sold on the Department of Defense building its own AI models. Paying private industry to make models instead is better and more efficient, he argued, because businesses would always be on the cutting edge of advanced technology.
U.S. adversaries, notably China, are rushing to apply artificial intelligence to their own militaries, and some analysts say the Pentagon has lagged behind. The Washington symposium will hold a classified session on Friday, featuring briefings from the National Security Agency and the National Geospatial-Intelligence Agency.
Cybersecurity professionals studying hackers’ expanding use of generative AI tools are growing concerned. The cybersecurity firm CrowdStrike said Wednesday it observed state-sponsored cyberattackers and activist hackers experimenting with generative AI in 2023 to carry out ever more sophisticated intrusions into sensitive networks.
“Rapidly evolving adversary tradecraft honed in on both cloud and identity with unheard-of speed, while threat groups continued to experiment with new technologies like GenAI to increase the success and tempo of their malicious operations,” said CrowdStrike’s Adam Meyers in a statement on his company’s discoveries.
CrowdStrike noted that the speed of cyberattacks is continually accelerating, with the fastest breakout time recorded by an attacker last year clocking in at two minutes and seven seconds.
The Pentagon’s AI officials are also intent on moving faster.
While Mr. Martell and Capt. Lugo detailed ambitious plans in the coming year for the Pentagon to adopt cutting-edge AI, both men acknowledged struggling with the challenge of maintaining public-facing websites for their work.
Mr. Martell said at the symposium that he has tried for a year to get the Department of Defense to change his group’s website to better identify his office.
“Things move slowly in government,” Mr. Martell said.