P5 Experts Roundtable – Online Meeting on the AI nuclear nexus on 24 June 2024
Co-Convenors’ Summary
The Normandy P5 Initiative on nuclear risk reduction, launched by the Strategic Foresight Group and the Geneva Centre for Security Policy (together the Co-Convenors), has been running since 2021. The Initiative seeks to engage the five permanent members of the UN Security Council (the P5) in finding solutions to catastrophic nuclear risk through dialogue with non-government experts from China, France, Russia, the United Kingdom and the United States.
At our last roundtable, held in Geneva in December 2023, the experts devoted one session to the nexus between artificial intelligence (AI) and nuclear command, control, and communications (NC3). That session produced an excellent and timely discussion, and the Co-Convenors therefore decided to focus the Initiative’s work in 2024 on this topic.
The following is a summary of an online meeting held on Monday, 24 June 2024, to discuss the AI/NC3 nexus. The summary has been prepared by the Co-Convenors; it represents their best effort to capture the key recommendations and is not a consensus document.
The Co-Convenors would like to thank the Future of Life Institute and the Silicon Valley Community Foundation for their support for this initiative.
The context
Overall, the experts believed that any type of agreement to restrict or limit the use of AI in nuclear systems would be difficult to reach in the current geopolitical context. It had not even been possible for the P5 to agree a high-level declaration on retaining human responsibility for decisions on nuclear use.
Many stressed that the AI/NC3 nexus could not be discussed in isolation. It should be discussed in connection with other threats to strategic stability, particularly cyber security and space security.
The P5 were cautious about discussing the nexus. Reasons included confidentiality, competition, a belief that the technology was not mature enough, and even denial that there were any risks.
The risks
AI and the psychology of decision-making were a major concern. Many identified the compression of the decision loop (the time leaders would have to decide whether to use a nuclear weapon) as the main problem with integrating AI into nuclear systems. Near-autonomous delivery systems already existed, leaving extremely short reaction times.
Committing not to use AI in nuclear decision-making was not just about keeping a human in the loop. The humans briefing a leader could themselves be using AI to compile their information. Research showed that the younger generation could be overconfident in AI outputs and lack the reasoning skills to challenge them.
Related to this was the issue of AI and the information environment, particularly the risks around disinformation at times of crisis. In addition, there was the cross-cutting issue of cyber security and concerns around data quality and integrity.
The increasing use of AI for intelligence, surveillance, and reconnaissance (ISR) could also affect postures and strategic stability. If a state perceived that AI had given its adversary a first-mover advantage by enabling it to locate all of that state’s nuclear weapons, that perception would undermine its deterrent. It could also lead to nuclear weapons being used earlier in a crisis than they otherwise would be.
Threats to associated infrastructure, such as fibre-optic cables and satellites, also needed to be considered: interference with these communications channels could affect early warning systems. Similarly, the interoperability of the different national NC3 sub-systems, and their potential vulnerability to adversarial attacks, warranted attention.
The lack of knowledge about what others were doing in this area was contributing to rising tensions. Overcoming this would require understanding how these systems were being built, which in turn would mean making information publicly available. At a time of heightened tensions and competition, the incentives for doing so were low.
The way forward
Strong direction from the P5 leaders would be needed if there was to be any agreement to control the use of AI in nuclear systems. A joint high-level declaration by the P5 committing to retain human responsibility over nuclear decision-making ought to be achievable, as the five had already made such statements individually or as the P3. If the issue were compartmentalised, agreement could be reached.
The P5 Process would be the obvious venue for discussions on the nexus. If talks began, it would be important that the P5 send the right people to ensure a substantive exchange.
Such discussions could include an exchange on what assurances the P5 could give each other about their use of AI in nuclear systems. They could also discuss their threat perceptions and their fears about how the technology might be used. Such exchanges could lead to a common understanding not to use the most dangerous systems.
It would be important to learn lessons from other processes, such as the UN cyber security working groups.
P5 states should consider how they could mitigate the risks nationally, for example by carrying out fail-safe reviews and practising crisis management.
Further research should be done into whether privacy-enhancing technologies could enable more data sharing, which would help build trust and confidence.
Although the nexus was primarily an issue for the nuclear weapon states, all countries saw it as a concern and should therefore be involved in discussions on reducing the risks, if not in multilateral fora, then in small and/or regional groupings.
Improving understanding of the issues was clearly needed. Government officials needed to become more technically literate, and AI experts needed to understand the legal and political issues involved in developing the technology. Education should also extend to the public, who needed to be better informed and less inclined to believe the hype.