Vadim Kozyulin

Ph.D. in Political Science, Research Fellow at the Diplomatic Academy of the Ministry of Foreign Affairs, Director of the Emerging Technologies and Global Security Project at PIR-Center, RIAC expert

The use of autonomous technologies, artificial intelligence and machine learning in the military sphere gives rise to new threats, and it is crucial that we identify them in time.

Over the last decade, the development of technologies that can endow conventional weapons with the unique capabilities of “killer robots” has been accelerating. The UN designates such weapons lethal autonomous weapons systems (LAWS): weapons capable of engaging targets on land, in the air and at sea without human participation.

AI-based LAWS create threats that can be divided into three groups:

1. The first group comprises risks associated with removing human agents from the decision to use weapons, the so-called “meaningful human control” problem. The global public (NGOs such as Stop Killer Robots, Article 36 and the International Committee for Robot Arms Control, as well as businesspeople and scientists, notably Stephen Hawking, Elon Musk and Steve Wozniak) believes it highly probable that fully autonomous weapons will be unable to comply with international humanitarian law and human rights law, and that they will make it difficult to identify the persons to be held liable for illegal acts committed by autonomous units. “Killer robots” are also said to be incapable of sympathy, a human feeling that often acts as a deterrent to the use of weapons. Another argument against LAWS is that their use contradicts the principle of humanity and the dictates of public conscience.

2. The second group of threats relates to breaches of strategic stability. Elements of autonomy and AI are appearing in every domain of military confrontation. In the nuclear sphere, these are high-precision tactical nuclear bombs and hypersonic vehicles with new nuclear warheads; in outer space, unmanned space drones and low-orbit surveillance and satellite communications systems; in missile defence, new surveillance and tracking systems linked to communications and control systems; and in the cyber sphere, cyber weapons and automated hacking-back systems. Some of these weapons, for instance hypersonic missiles and cyberattacks, could serve as instruments of tactical deterrence alongside nuclear weapons. In other words, even non-nuclear countries can now sharply increase their deterrence and attack potential. These trends entail a series of risks:

— the risk of one country establishing technological and military global superiority;

— a new arms race;

— increased regional and international tensions;

— reduced transparency of military programmes;

— a disregard for international law;

— the spread of dangerous technologies among non-state actors.

Based on the experience of using military and commercial drones, researchers conclude that the technologies for manufacturing LAWS, as well as their components and software, will proliferate widely, giving rise to another arms race and, with it, instability and an escalation of risks.

Some experts believe that maintaining strategic stability in the coming decades will require revising the foundations of deterrence theory in a multipolar world.

3. The third group of threats stems from the drastically reduced time available for making strategic decisions within Intelligence, Surveillance and Reconnaissance (ISR) and military Command, Control and Communications (C3) systems. The principal drawback of a human compared to a machine is that the human mind requires too much time to assess a situation and make the right decision. An entire series of military programmes in the leading states (in particular, the Pentagon’s Maven, COMPASS and Diamond Shield) aims to have supercomputers take over the work of analysing data and developing scenarios for the political and military leadership.

That entails, as a minimum, the following risks:

— The shortage of time to make meaningful decisions.

— Insufficient human control over the situation.

— Making strategic decisions on the basis of mathematical algorithms and machine learning systems, not human logic.

— The lack of mutual understanding between the machine and the human. Neural networks are thus far incapable of explaining the regularities of their work in human language, as the sketch below illustrates.
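
A minimal sketch of why this is so, using only NumPy and entirely hypothetical toy data: even a tiny trained network “explains” its behaviour only as matrices of weights, not as rules a commander could audit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: two sensor readings per observation, binary label.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

# A tiny one-hidden-layer network, trained by plain gradient descent.
W1, b1 = 0.5 * rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = 0.5 * rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(2000):
    h = np.tanh(X @ W1 + b1)              # hidden activations
    p = sigmoid(h @ W2 + b2).ravel()      # predicted probability
    d_out = (p - y)[:, None] / len(X)     # gradient of the cross-entropy loss
    dW2, db2 = h.T @ d_out, d_out.sum(axis=0)
    dh = (d_out @ W2.T) * (1.0 - h ** 2)  # backpropagate through tanh
    dW1, db1 = X.T @ dh, dh.sum(axis=0)
    W1 -= 0.5 * dW1; b1 -= 0.5 * db1
    W2 -= 0.5 * dW2; b2 -= 0.5 * db2

p = sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2).ravel()
print("accuracy:", ((p > 0.5) == (y > 0.5)).mean())
# The only "explanation" the trained network can offer is its raw weights:
print(W1)  # a 2x8 block of floats with no human-readable meaning
```

The network classifies well, yet nothing in `W1` or `W2` can be translated into a justification a human operator could check before acting on the output.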

To be fair, it should be noted that globalization and the development of cross-border projects, social networks, transnational corporations, international cooperation, surveillance satellites and radio-electronic surveillance equipment have made the world more transparent. The world now has a huge number of sensors that report new threats before they even materialize.

Let us consider these three groups of threats in more detail.

The Meaningful Human Control Problem

In December 2016, the Fifth Review Conference of the Convention on Certain Conventional Weapons (CCW) adopted a decision to establish a Group of Governmental Experts authorized to “explore and agree on possible recommendations on options related to emerging technologies in the area of LAWS.” Commentators believe that, despite obvious terminological discrepancies, those who attended the conference agreed that the use of force should always take place under “meaningful human control.”

Some experts see four components in the problem:

1. The risks that LAWS carry for civilians.

2. The risks of human rights and human dignity violations.

3. The inability of LAWS to comply with the laws of war.

4. The uncertainty concerning legal liability for intentional and unintentional consequences of using LAWS.

It would be a mistake to think that the emergence of LAWS has laid bare gaps in international law that need to be filled immediately. States and their citizens must comply with the norms and principles of international law currently in effect, and these norms and principles already contain an exhaustive list of rules and restrictions on warfare.

International humanitarian law (IHL) was designed to protect human values, and a number of experts believe that some of its documents have direct bearing on the LAWS problem:

— The Martens Clause: the rule formulated by the Russian lawyer and diplomat Friedrich Martens in 1899, stating that even if a given provision is not directly included in the articles of the law in force, in situations of military hostilities the parties will be guided by the principles of the laws of humanity and the dictates of public conscience.

— The “laws of humanity” stemming from the 1948 Universal Declaration of Human Rights and the 1966 International Covenant on Civil and Political Rights.

— Article 36 of the 1977 Additional Protocol I to the 1949 Geneva Conventions, which requires the legal review of new weapons.

— The various documents constituting the law of armed conflict, with its basic principles:

1. The distinction between civilians and combatants.

2. The principle of proportionality (commensurability) of the use of force.

3. The principle of military necessity.

4. Restricting the means and methods of warfare (prohibition of excessive destruction or causing excessive suffering).

Since the international instruments currently in effect give national governments the responsibility of interpreting their obligations, international experts fear that governments will interpret them in their own favour, neglecting moral considerations and human dignity. From this, they conclude that the norms of IHL need to be elaborated in greater detail as applied to LAWS.

Whatever the case may be, the latest consultations on the future of LAWS, held within the CCW framework in Geneva on August 27–31, 2018, resulted in the approval of ten potential principles that could serve as the foundation for the international community’s future approach to LAWS. The key principles are that all work on military AI should be conducted in compliance with international humanitarian law and that liability for the use of such systems will always lie with a human. The final decision on the future of the Group of Governmental Experts will most likely be made on November 23, 2018 at the meeting of CCW signatory states.

LAWS and Strategic Stability

At the Washington Summit between the Soviet Union and the United States in June 1990, the parties made a joint declaration on nuclear and space weapons. In it, they outlined the theoretical foundations of strategic stability, defined as a state of strategic relations between two powers in which neither has an incentive to deliver a first strike. The parties distinguished two notions within strategic stability: crisis stability and arms race stability. Crisis stability was taken to mean a situation in which, even in a crisis, neither party had serious opportunities or incentives to deliver a first nuclear strike. Arms race stability was determined by the presence or absence of incentives to build up one’s own strategic potential.

The principles of strategic stability enshrined in the 1990 Declaration were considered the guidelines for weapons control. Later, the notions of “first strike stability” and even “cross-domain strategic stability” emerged.

Military AI has the potential to upset stability under any of these concepts. Some high-ranking Pentagon strategists have already stated that autonomous robots could ensure global military dominance. They believe that combat drones will replace nuclear weapons and high-precision munitions and make it possible to implement the so-called “third offset strategy.”

Obviously, machine learning and autonomy technologies open up new opportunities for using nuclear munitions (for instance, the high-precision, reduced-yield B61-12 nuclear bomb) for tactical missions and, vice versa, for handling strategic tasks with non-strategic weapons.

For instance, the development of hypersonic vehicles with a high capability to penetrate defences lowers the threshold of nuclear conflict.

Space drones such as the Boeing X-37B Orbital Test Vehicle and the XS-1 spaceplane, or hypersonic drones such as the X-43A Hypersonic Experimental Vehicle, will change the model of confrontation in space. Combining the Space Tracking and Surveillance System (STSS) with the Command and Control, Battle Management, and Communications (C2BMC) system demonstrates entirely new capabilities against ballistic missiles. The strategy of neutralizing missiles at their launchers with cyber and radio-electronic “Left-of-Launch” means opens a new roadmap for missile defence. The QUANTUM programme and automated hacking-back cyber weapons can place destructive software in “fire” mode.

The rapid spread of drone technologies around the world and the budding competition for the global market among major manufacturers of strike drones are causes for alarm. Today, the United States has over 20,000 unmanned vehicles, including several hundred combat strike drones. Small strike drones that may in the future deliver strikes as an autonomous swarm, distributing functions among themselves without an operator’s input, are now in development. China does not officially disclose the number of drones in service with the People’s Liberation Army; however, some experts believe it is roughly equal to the Pentagon’s. China both manufactures and actively exports strategic drones capable of both reconnaissance and strike missions. Following the United States with its MQ-25 Stingray programme, China is developing ship-based drones and unmanned vehicles capable of interacting with manned aircraft.
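
To make the swarm idea concrete, here is a deliberately simplified sketch; all positions, names and the greedy rule are assumptions for illustration, and real swarm coordination relies on far more sophisticated auction- or consensus-based algorithms. The point is that target distribution requires no central operator.

```python
from math import dist  # Python 3.8+

# Hypothetical drone and target positions (x, y); purely illustrative.
drones = {"d1": (0, 0), "d2": (5, 5), "d3": (9, 1)}
targets = [(1, 1), (6, 4), (8, 0)]

# Greedy self-allocation: each drone claims the nearest unclaimed target.
# No operator and no central node take part in the assignment.
assignment = {}
unclaimed = list(targets)
for name, pos in drones.items():
    best = min(unclaimed, key=lambda t: dist(pos, t))
    assignment[name] = best
    unclaimed.remove(best)

print(assignment)  # {'d1': (1, 1), 'd2': (6, 4), 'd3': (8, 0)}
```

In a real distributed swarm, each drone would run this negotiation locally over a radio link, which is precisely what removes the operator from the loop.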

The United Kingdom, Israel, Turkey, Iran and Japan also lay claim to a place among the world’s leading drone manufacturers. Military strategists in states both small and large believe that unmanned vehicles will form the backbone of their future air forces. Back in 2015, then United States Secretary of the Navy Ray Mabus said that the F-35 would likely be the last manned strike fighter and that unmanned systems would be “the new normal in ever-increasing areas.”

C3ISR Outsourcing and Strategic Time Pressure

The use of artificial intelligence (AI) in the military sphere is gaining momentum. As a rule, a programme of automatic data collection and analysis opens up possibilities for new projects in related areas. The use of so-called artificial intelligence in the military sphere will probably grow exponentially going forward.

In the near future, AI will also drive the emergence of new weapons and the army units built around them: cyber commands, missile defence, AI-based intelligence, information warfare, electronic warfare (EW) systems, laser weapons, autonomous transportation, robotics units, drones and anti-drone weapons, hypersonic aircraft, unmanned underwater vehicles and aquanaut teams.

In the future, conventional service branches will change shape, forming new combinations that exploit the advantages of AI-based systems. Studies have demonstrated a twofold increase in the effectiveness of air and missile defence when it operates in conjunction with EW systems.

The use of AI in the military sphere will bring the gradual introduction of robotics and automation into every possible area, first and foremost into matériel and logistics. The logistics of the future could seriously affect strategic stability: logistical processes may be automated to the point of autonomous delivery of munitions to the battlefield.

Information exchange between service branches will develop both vertically and horizontally, from aircraft pilots in the air to platoon leaders on the ground and back, with AI filtering the stream so that each party receives only the data useful to it, with information noise removed. That is the idea behind the Diamond Shield air and missile defence system currently being developed by Lockheed Martin. Data collected on land, in the air and in space, including through the Pentagon’s Maven programme, will be processed by neural networks and distributed in real time to commanding officers at every level. AI will orchestrate the actions of military units, creating so-called algorithmic warfare.
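
A toy sketch of the filtering idea just described (report contents, role tags and the relevance rule are all assumptions, not Diamond Shield’s actual design): one shared stream of reports, with each echelon seeing only what is tagged as relevant to it.

```python
# One shared stream of battlefield reports; "level" tags are hypothetical.
reports = [
    {"level": "platoon", "text": "movement spotted at grid 31U"},
    {"level": "air",     "text": "hostile radar active in sector 7"},
    {"level": "platoon", "text": "bridge intact at checkpoint 4"},
    {"level": "air",     "text": "refuelling track open on corridor B"},
]

def feed_for(role: str) -> list[str]:
    # "Noise removal" reduced to its essence: a relevance predicate per role.
    return [r["text"] for r in reports if r["level"] == role]

print(feed_for("platoon"))  # what the platoon leader sees
print(feed_for("air"))      # what the pilot sees
```

The hard part in a real system is, of course, the relevance predicate itself, which is exactly where the neural networks mentioned above come in, together with their opacity.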

AI will track clandestine action in peacetime, too. The COMPASS (Collection and Monitoring via Planning for Active Situational Scenarios) programme is one such example. Its goal is to analyse a situation and its participants’ behaviour in the “grey zone,” understood as a limited conflict on the border between “regular” inter-state competition and what is traditionally deemed war. Strategic time pressure will lead to assessments of national threats, and even the use of weapons, being automated and outsourced to AI-based command and analytical systems.

The symbiosis of analytical and command programmes built on neural networks increases the risk that the human-machine interaction model will leave little room for the human, whose role shrinks to pressing a single button to approve decisions already made by the machine.
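
A minimal sketch of that interaction pattern; every name and value here is hypothetical, a stand-in for whatever a real analytical pipeline would produce. The machine assembles a complete recommendation, and “meaningful human control” collapses into a single yes/no prompt.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str
    confidence: float  # produced by an opaque analytical system

def machine_recommend() -> Recommendation:
    # Stand-in for the neural-network pipeline; hypothetical output.
    return Recommendation(action="intercept track 42", confidence=0.97)

def human_gate(rec: Recommendation) -> bool:
    # The entire residue of human control in this model: one approval,
    # under time pressure, with no explanation behind the confidence score.
    answer = input(f"Approve '{rec.action}' (confidence {rec.confidence})? [y/n] ")
    return answer.strip().lower() == "y"

if __name__ == "__main__":
    rec = machine_recommend()
    print("Executed." if human_gate(rec) else "Vetoed.")
```

Note that the human can veto, but cannot interrogate: nothing in the interface lets the operator ask why the machine is 97 per cent confident.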

The configurations of AI-based analytical and control systems will be highly classified, which will only add to public concerns.

Figuratively speaking, human civilization is standing at the door to a world where the military pursues its objectives using AI and autonomous “killer robots.” Thus far, we do not know for certain how dangerous that world is. Perhaps our worst expectations will not come true. In the worst-case scenario, however, that door opens Pandora’s box, letting out fear and suffering. Preventing such a scenario in advance is the proper course of action.
