High-level Roundtable on the Interface Between Artificial Intelligence and Nuclear Command and Control


Co-Convenors’ Summary

The Normandy P5 Initiative on nuclear risk reduction, launched by the Strategic Foresight Group and the Geneva Centre for Security Policy (together the Co-Convenors), is inspired by the Normandy Manifesto for World Peace. The Manifesto, issued by a group of Nobel Peace Laureates and social thinkers in June 2019, drew attention to the existential risk to humanity of a global nuclear war arising from intent, incident, or accident. In follow-up discussions it was decided to engage the five permanent members of the UN Security Council (the P5) in finding solutions to this catastrophic risk through dialogue among strategic experts. Since 2022, four experts' roundtables have taken place in Caen, Normandy, and Geneva, Switzerland.

The following is a summary of the most recent roundtable, held on 5-6 December 2024 in Geneva. Several conflicting views were expressed during the dialogue. This summary, prepared by the Co-Convenors, represents their best effort to capture the key observations and recommendations; it is not a consensus document.

The Co-Convenors would like to thank the Future of Life Institute for their support for this initiative. 

The focus of the December roundtable was the interface between artificial intelligence (AI) and nuclear command, control, and communication (NC3). 

In general, the experts welcomed recent affirmations of the principle of human control at the interface between AI and NC3, as seen in the Blueprint for Action from the REAIM Summit in September 2024 and the bilateral declaration by Presidents Biden and Xi Jinping in November 2024.

The experts made the following key observations: 

  • Decisions surrounding the use of nuclear weapons should always be made by a human; such decisions should never be delegated to AI. More work is needed to define the characteristics of the human element, so that the human factor is not diminished and human agency and judgment remain meaningful.
  • Connectionist AI, i.e. neural networks of the kind used for computer vision, self-driving cars, and large language models, should not be used in military systems, especially not in nuclear systems, as it is unpredictable, non-deterministic, and therefore not safe for use in warfare.
  • If symbolic or rules-based AI, such as that used in route planners, is used for decision-making support, there should be other sources of input for comparative assessment, to avoid over-reliance and to ensure robust interrogation of the accuracy of the data produced.
  • Automation of nuclear launch is extremely risky and must not be pursued.
  • Existing oversight mechanisms for the use of nuclear weapons should ensure that there is no risk of accidental detonation. 
  • Training the people who use AI, not just training the AI itself, is vital.
  • Transparency over the use of AI is critical. Whilst acknowledging the sensitivities involved, the P5 should uphold their long-standing commitments to transparency in their nuclear doctrines and discuss with one another how those doctrines and policies are developing to integrate AI into their nuclear systems.
  • AI also creates opportunities to enhance regulation and compliance with existing agreements, for example by detecting nuclear testing or other nuclear-related activity.

The following suggestions were made to help mitigate the risks associated with the use of AI in NC3: 

National-Level Voluntary Measures

  • Development of failsafe review systems by all the P5 countries. 
  • Regular inspections and audits of AI systems used in NC3 to ensure compliance with safety standards. 
  • Robust cybersecurity and other measures to protect AI systems from unauthorised access, hacking, manipulation, spoofing, and jamming of signals.
  • Development of Ethics Committees and Centres of Excellence on AI in each P5 country. 

International Collaboration 

  • Development of transparent international standards for test, evaluation, validation and verification of AI used in NC3. 
  • Expansion of the P5 Glossary of Key Nuclear Terms to include the AI/NC3 dimension. Discussions on terminology would be a practical way to open dialogue on the issue.
  • Establishment of a crisis communication mechanism to enable rapid contact in case of any doubt about, or malfunction of, AI systems.
  • Joint training and capacity building programmes for developers and enablers of AI systems to enhance understanding of ethics, safety issues, best practices and International Humanitarian Law. 
  • P5 joint guidelines to prevent the development and use of AI systems that might automatically initiate pre-emptive strikes based on predictive algorithms.

Political Measures 

  • All P5 countries should continue to publish and discuss their nuclear doctrines, including the AI/NC3 dimension. 
  • The P5 should convene regular meetings of senior military leaders and diplomats to discuss concerns at the AI/nuclear interface.
  • The recently established experts track of the official P5 Process should be tasked with taking up the issue of AI/NC3.
  • Education on the issue is needed for governments as well as the public, to fill the significant knowledge gap on AI.