By Toby Sterling and Stephanie van den Berg
AMSTERDAM (Reuters) – Delegations from both the United States and China are set to attend a summit on the “responsible” use of artificial intelligence (AI) in the military this week in the Netherlands, the first of its kind.
Though it is not clear whether the 50 countries attending will agree to endorse even a weak statement of principles being drafted by the Netherlands and co-host South Korea, the conference comes as interest in AI more broadly is at an all-time high thanks to the launch of OpenAI's ChatGPT program two months ago.
Organizers did not invite Russia because of the conflict in Ukraine, which will be a major topic of discussion at the summit, held Feb. 15-16 in The Hague.
“This is an idea for which the time has come,” Dutch Foreign Minister Wopke Hoekstra told members of the foreign press in the run-up to the event. “We’re taking the first step in articulating and working toward what responsible use of AI in the military will be.”
The event may be an early step toward someday developing an international arms treaty on AI, though that is seen as far off.
Leading nations have so far been reluctant to agree to any limitations on its use, for fear that doing so might put them at a disadvantage.
Some 2,000 people, including experts and academics, are attending a conference alongside the summit, with discussion topics including killer drones and "slaughterbots."
The U.S. Department of Defense will discuss where it sees potential for international cooperation at a presentation on Thursday.
A spokesperson for the Chinese Embassy in the Netherlands referred to a position paper in which China underlined the need to avoid “strategic miscalculations” with AI and to ensure it does not accidentally escalate a conflict.
U.N. countries that belong to the 1983 Convention on Certain Conventional Weapons (CCW) have been discussing possible limitations on lethal autonomous weapons systems – which can kill without human intervention – since 2014.
Hoekstra said the summit will not replace that debate but will look at other aspects of military AI.
Examples include defining terms, how AI could safely be used to accelerate decision-making in a military context, and how it could be used to identify legitimate targets.
“We are moving into a field that we do not know, for which we do not have guidelines, rules, frameworks, or agreements. But we will need them sooner rather than later,” Hoekstra said.
(Reporting by Toby Sterling; Editing by David Holmes)