The military’s ardent push for an artificial intelligence overhaul is taking center stage in Washington this week, where government officials will huddle with top tech minds at a gathering organized by the Department of Defense.
The Chief Digital and Artificial Intelligence Office’s symposium is designed to address difficult questions involving the military use of powerful AI models and the ethics of replacing and augmenting warfighters with machines.
The symposium starting Tuesday assembles defense and intelligence officers, Big Tech powerhouses and smaller companies, academics and foreign government officials. The event culminates in a classified session in Virginia featuring briefings from the National Security Agency and the National Geospatial-Intelligence Agency.
A major focus of the conference will be the Department of Defense’s use of generative AI. Attendees will learn about the work of Task Force Lima, a team charged with determining where to implement large language models, the technology underpinning generative AI chatbots, within the Department of Defense.
Deputy Secretary of Defense Kathleen Hicks told reporters last year that the department identified more than 180 instances where generative AI tools can add value to its operations.
She said most commercially available AI systems were not ready then for her department’s use because they were not mature enough to comply with the government’s ethical rules. That problem was underscored by a Defense Advanced Research Projects Agency program that bypassed the safety guardrails of OpenAI’s ChatGPT to get the popular chatbot to deliver bomb-making instructions.
Private businesses are hoping that is all about to change. Panels focused on Task Force Lima’s work at the symposium will feature representatives from major tech companies such as Amazon and Microsoft.
Many different companies are eager to work with the government on AI for military and intelligence purposes. Earlier this year, OpenAI rewrote its rules prohibiting work with the military and on weapons development, which allowed the AI-maker to continue partnering with the Department of Defense.
The buildup of AI tools and munitions will also get a fresh look at the symposium from foreign government officials concerned with setting standards. A panel on the responsible use of AI in the military will feature representatives from the U.K., the South Korean army and Singapore.
“Given the significance of responsible AI in defense and the importance of addressing risks and concerns globally, the internationally focused session at the symposium will be focused on these critical global efforts to adopt and implement responsible AI in defense,” the symposium’s agenda said.
The meetings in Washington this week come on the heels of a major summit organized by the Defense Innovation Unit in Silicon Valley last week focused on the buildup of AI and autonomous weaponry.
The Silicon Valley meetings provided a rare glimpse into the progress of the Department of Defense’s “Replicator” initiative that the federal government hopes will remake the military through the infusion of AI into weapons systems.
Department of Defense adviser Joy Angela Shanaberger told the gathering of hundreds of tech entrepreneurs, funders and government personnel that the military has set a goal of fielding several thousand all-domain autonomous systems by August 2025.
DIU Deputy Director Aditi Kumar said the buildup of autonomous systems will happen quickly, safely and responsibly under the guidelines of the Pentagon’s policy for autonomous weapons, DoD Directive 3000.09, updated in January 2023.
The policy does not require that a human remain in the loop for every action; instead, it requires that human judgment govern the deployment of autonomous weapons systems.
“The policy, what it says is not that there needs to be a human in the loop, but that there needs to be human judgment in the deployment of these weapons systems,” Ms. Kumar told the gathering in Silicon Valley.
The ethics of AI tools replacing humans will feature prominently at this week’s symposium in Washington, and defense officials are well aware of people’s concerns about battlefields dominated by killer robots.
While doomsday scenarios of AI gone rogue have panicked some in Silicon Valley, military officials want to make sure the tech sector does not discount opportunities afforded by AI.
Adm. Samuel J. Paparo told the Silicon Valley audience that replacing human beings with new machines will save American lives.
“It really ought to be a dictum to us that we should never send a human being to do something dangerous that a machine can do for us,” Adm. Paparo said at the summit. “That when doing so, we should never have human beings making decisions that can’t be better aided by machines.”