The U.S. and Chinese governments are meeting in Geneva on Tuesday to discuss artificial intelligence risks amid the rapid development of new tech tools upending work and war around the world, according to senior Biden administration officials.
The first meeting on AI between the two governments is intended to identify areas of concern and share the nations’ domestic approaches to tackling AI problems. Both sides have targeted AI as a key technology to dominate as commercial and military applications have exploded in recent years.
“The goals of the talks are scoped to be focused on risk and safety with an emphasis on advanced systems,” a senior administration official told reporters. “The talks are not going to be focused on any particular deliverables but rather an exchange of views on the technical risks of AI and an opportunity to directly communicate on respective areas of concern.”
Officials declined to answer reporters’ questions about the specific risks that are expected to be raised, but both U.S. and Chinese officials have previously made their fears about AI known.
President Biden announced plans for the U.S. and China to huddle on AI risks and safety when he met with Chinese President Xi Jinping in November. Ahead of that meeting, rumors swirled that Mr. Biden would reveal a deal to ban the use of AI in various weaponry, including nuclear warheads. No such agreement has yet emerged.
The plethora of AI tools and the resulting potential threats have American officials closely monitoring the development of emerging technologies.
U.S. national security officials have expressed concern about AI-powered deepfakes and “hallucinations” produced by powerful models, and they are working to determine the proper level of human involvement in automated systems expected to redefine military and intelligence power, among many other things.
Deepfakes manipulate images, audio and text to trick audiences into believing false information and can be used to provoke fear, uncertainty and doubt. U.S. lawmakers fear that deepfakes of politicians spreading online shortly before Election Day could upend the voting process.
Such manipulated content is easily created by generative AI tools, which produce images, audio, text and video in response to users’ queries via platforms such as ChatGPT. Sophisticated knowledge of the underlying large language models, the powerful algorithms that drive the tools, is not needed to use them successfully.
Sometimes, the tools respond to users’ prompts with inexplicably false information, which technologists have dubbed “hallucinations.”
America’s intelligence community is paying close attention to the problem of hallucinations as it incorporates generative AI tools into its work. The Office of the Director of National Intelligence and CIA published an open-source intelligence strategy in March that said tradecraft and training must get updated to ensure hallucinations do not corrupt the spies’ work.
Human review of the AI tools’ output is critical to spotting such errors, and U.S. officials are still determining the proper level of human involvement in AI systems.
As the Department of Defense moves forward with adopting thousands of autonomous systems, it is implementing a 2023 policy that the department interprets as allowing it to replace a human in the loop with human judgment.
The department’s former Chief Digital and Artificial Intelligence Officer Craig Martell told The Washington Times in February that someone will always be accountable for the military’s tech, but not every autonomous decision will involve a human.
Such changes have raised questions about whether adversaries’ push for advanced AI capabilities will affect U.S. officials’ tolerance for letting machines take charge.
Earlier this month, Paul Dean, the principal deputy assistant secretary in the State Department’s Bureau of Arms Control, Deterrence and Stability, told reporters that American decisions involving nuclear weapons would only be made by humans, but so far China and Russia have made no such commitments.
“We would never defer a decision on nuclear employment to AI,” Mr. Dean said. “We strongly stand by that statement and we’ve made it publicly with our colleagues in the U.K. and France. We would welcome a similar statement by China and the Russian Federation.”
China has its own concerns about generative AI and has undertaken a new regulatory effort to label and restrict the flow of information. Chinese regulators are focused on internet trolls and rumors that the Chinese Communist Party regime dislikes, among other things.
While the U.S. and China will look to identify common concerns at the coming meeting, a senior administration official said there was no plan to discuss technical collaboration or research cooperation.
The official said that, alongside identifying risks, the U.S. and China would also “discuss respective domestic approaches to addressing those risks.” The U.S. plans to explain its approach to AI safety and the role of international governance in addressing AI.