On September 6–7, 2018, the Stockholm International Peace Research Institute (SIPRI) and the China Institutes of Contemporary International Relations (CICIR) held a joint conference in Beijing on the impact of AI technologies and autonomous systems on nuclear safety and strategic stability. It was the second event held as part of a two-year SIPRI research project studying the impact of autonomous systems and artificial intelligence technologies on international relations. The report on the first conference is available on the RIAC website.

The New Race Is Inevitable

The Beijing conference focused on security issues in East Asia. It brought together experts from Russia, China, the United States, France, the United Kingdom, Japan, South Korea, India and Pakistan, who described their countries’ positions on the use of new technologies in matters of strategic stability.

An interesting feature of the event was the consensus among the participants that nuclear powers would soon use new technologies (primarily machine learning) to modernize their strategic weapons. Using narrow (“weak”) artificial intelligence in early-warning systems to assess the probability of a missile launch could give the military command of a nuclear power extra time to decide on a retaliatory strike and its scale. New technologies could also increase the precision of nuclear weapons and the effectiveness of missile defence, improve the security of nuclear facilities, and provide better intelligence.

At the same time, faster decision-making for one party will inevitably prompt its potential adversaries to search for faster nuclear weapons delivery systems. Such an “acceleration race” between nuclear powers poses a significant threat to global stability, since it would leave progressively less time to assess whether the threat of an attack is real and whether retaliation is expedient. Ultimately, countries may be forced to automate decisions on a retaliatory strike, with unpredictable consequences. Meanwhile, weaker nuclear powers will feel vulnerable and could yield to the temptation to introduce an automated retaliatory nuclear strike system (along the lines of the Soviet Perimeter (“Dead Hand”) system or the U.S. Operation Looking Glass airborne command post).

The participants in the discussion noted that even machine learning professionals do not always fully understand how their models work. Even though AI technologies are developing rapidly, the “black box” problem, that is, the situation in which the logic behind a model’s decisions remains opaque even to its developers, is still relevant. Thus, before entrusting decisions on the use of lethal weapons to artificial intelligence, we need to make AI itself far more transparent. A contradiction inevitably arises, however, between making machine learning mechanisms comprehensible and protecting them from an adversary, since the data used to train neural networks can be “poisoned” by deliberate manipulation. It is also important to note that, owing to the specifics of their work, militaries have far less data available for machine learning than civilian companies working on AI.
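To make the “poisoning” concern concrete, here is a minimal toy sketch (in Python with scikit-learn; a purely illustrative assumption, not a description of any system discussed at the conference). It flips a fraction of training labels, the crudest form of data poisoning, and shows test accuracy degrading as the fraction grows:

```python
# Toy illustration of training-data "poisoning": flipping a fraction of
# labels before training degrades a simple classifier.
# Purely illustrative; not a model of any real military or civilian system.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# A synthetic binary classification task stands in for "training data".
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def accuracy_after_poisoning(flip_fraction: float) -> float:
    """Flip `flip_fraction` of the training labels, retrain, score on clean test data."""
    rng = np.random.default_rng(0)
    y_poisoned = y_train.copy()
    n_flip = int(flip_fraction * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # corrupt the selected labels
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    return model.score(X_test, y_test)

for frac in (0.0, 0.1, 0.3, 0.45):
    print(f"poisoned {frac:.0%} of labels -> test accuracy {accuracy_after_poisoning(frac):.2f}")
```

Real poisoning attacks are far subtler than label flipping, but the underlying vulnerability is the same: a model faithfully learns whatever its training data contain.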

The conference attendees also discussed North Korea’s work on artificial intelligence. The participants noted that, despite the significant effort Pyongyang has channelled into AI, North Korean machine learning projects are still in their infancy and are unlikely to pose a threat in the foreseeable future.

Autonomous, Lethal, Yours

The conference focused in particular on Lethal Autonomous Weapon Systems (LAWS), since some countries, primarily the United States and China, already make active use of them. Most conference participants agreed that incidents involving autonomous systems could provoke conflicts between great powers in East Asia. Possible scenarios include collisions of unmanned vehicles, the loss of control over them, and even the theft of an enemy drone.

Such incidents are most likely in the South China Sea, home to a number of disputed territories to which Beijing lays claim, such as the Paracel Islands and the Spratly Islands. The United States, in turn, has alliance obligations to the Philippines, where it has access to five military bases, and has traditionally maintained a large-scale presence in the region.

Drone-related incidents in the South China Sea are not purely theoretical; there are recent precedents. In December 2016, China seized a U.S. Navy underwater drone that was collecting research data in international waters near the Philippines. Beijing returned the drone to Washington but accused the United States of threatening China’s sovereignty. Commenting on the incident, experts noted that China had probably examined the drone thoroughly before returning it.

Other areas fraught with potential collisions of autonomous vehicles are the Taiwan Strait, the waters around the South Kuril Islands, the Senkaku Islands (administered by Tokyo, whose sovereignty Beijing and Taipei dispute) and the Liancourt Rocks (controlled by South Korea and claimed by Japan).

In the near future, drone incidents may become more frequent both in East Asia and elsewhere, since border control is one of the most promising applications for unmanned vehicles, both airborne and underwater. The European Union, in particular, is actively developing autonomous patrolling of its land borders.

Even drones that carry no lethal weapons can cause a conflict if control over them is lost and they inadvertently cross into another state’s territory, or if they collide with another state’s autonomous vehicles. Moreover, it is not entirely clear how drones running different control systems will behave when they approach one another.

The fact that autonomous weapons and artificial intelligence still belong to a “grey area” of international law further complicates the situation. A group of UN governmental experts convened under the Convention on Certain Conventional Weapons (the “Inhumane Weapons Convention”) recently put forward recommendations on resolving this problem. On August 31, 2018, the group published a report on possible principles for regulating autonomous combat systems. In particular, the experts propose making humans responsible for the actions of autonomous vehicles at all stages.

Poseidon’s Wrath

Foreign participants in the conference also expressed concern about Russia’s unmanned nuclear submarine, whose development Vladimir Putin mentioned in his Presidential Address to the Federal Assembly on March 1, 2018:

“As concerns Russia, we have developed unmanned submersible vehicles that can move at great depths (I would say extreme depths) intercontinentally, at a speed multiple times higher than the speed of submarines, cutting-edge torpedoes and all kinds of surface vessels, including some of the fastest. …

“Unmanned underwater vehicles can carry either conventional or nuclear warheads, which enables them to engage various targets, including aircraft groups, coastal fortifications and infrastructure.”

Later, in July 2018, the Ministry of Defence of the Russian Federation announced the start of tests of the Poseidon unmanned underwater vehicle (also known as Status-6). The system is included in the State Armament Programme to 2027, and the Russian Navy is expected to receive the weapon by then. According to media reports, Poseidon will be able to carry a 2-megaton nuclear warhead.

Some conference participants believe that the use of drones with nuclear warheads could radically change the strategic balance of power and provoke a new arms race. Moreover, if an autonomous vehicle were launched by mistake, the marine environment would make it impossible to contact it and abort its deadly mission. This problem is not new, however: submerged nuclear submarines long found themselves in the same situation, unable to receive orders to abort a launch.

Recipes for Détente

At the conference, working groups also held discussions aimed at developing proposals for mitigating the risks and negative effects of new technologies on strategic stability.

The participants’ proposals included the following steps:

  • Using new technologies for the mutual monitoring of nuclear facilities.
  • De-alerting.
  • Bilateral and multilateral dialogue between nuclear powers on using AI in the military sector.
  • Parties committing (for instance, in a declaration) to preserving human control of nuclear weapons.
  • Countries exchanging information on national AI research.
  • Continued discussion on the parameters of human control of autonomous systems (supporting the work of the specialized group of UN government experts).
  • Developing a code of conduct for possible incidents involving autonomous combat systems and unmanned vehicles.
  • Establishing “hotlines” between countries regarding incidents with autonomous systems.
  • Separating early warning systems from systems that make decisions on launching strikes.
  • Developing safety requirements for autonomous systems, including options to abort missions.
  • Stepping up AI technology exchanges and greater openness about innovations.

It should be noted that real restrictions on the military use of autonomous systems and artificial intelligence, including for improving the effectiveness of strategic weapons, no longer appear feasible. Judging by the reports presented at the conference, the arms race in this area has already begun, and the temptation to gain an edge in new weapons is too great for countries to take humanitarian considerations into account in anything more than a declarative manner.

The current situation underscores the need for the rapid development of a legal framework for the use of autonomous systems and artificial intelligence. It is also highly desirable that a prohibition on the automated use of nuclear weapons be set forth at the interstate level, at least in the form of a declaration. Otherwise, the minutes “gained” may prove too costly for humanity as a whole.

