Search: Autonomous systems (8 materials)

Autonomous Weapons Pose a Threat to International Security

Interview with Professor Noel Sharkey, chair of the International Committee for Robot Arms Control. Noel Sharkey is Emeritus Professor of AI and Robotics at the University of Sheffield, co-director of the Foundation for Responsible Robotics, and chair of the International Committee for Robot Arms Control (ICRAC). Noel has worked in AI, robotics, machine learning and related disciplines for more than four decades. He held research and teaching positions in the US (Yale and Stanford) and the UK (Essex...


Analysis of Future Warfare

At the strategic level, war will mostly be waged in cyberspace; tactically, we will witness the widespread use of autonomous weapons systems. This study presents the results of an analysis of future warfare. As the paper states, cyber warfare will be waged at the strategic level. The operational level will be characterized by the use of long-range precision weapons against economic infrastructure, and the tactical level by the massive use of autonomous ground-based, air and sea weapons...


Artificial Intelligence and Nuclear Weapons

Artificial intelligence in military affairs. Earlier this year, the author had an opportunity to participate in a workshop held under the auspices of SIPRI and the Pathfinder Foundation on the introduction of machine learning and autonomy into nuclear forces-related systems. The interaction of new technologies (including artificial intelligence in the broadest sense of the word) with the means of preventing global conflict (as well as of ensuring Armageddon if necessary) is one of the most...


Task Force on Cooperation in Greater Europe Meeting in Istanbul

On April 8–9, 2019, Istanbul hosted a regular meeting of the international Task Force on Cooperation in Greater Europe. The meeting was organized by the European Leadership Network (ELN) together with the Turkish Global Relations Forum (GRF). The following issues were discussed at the meeting: new threats to security in the Euro-Atlantic region, including confrontation in cyberspace, an arms race...


International and Social Impacts of Artificial Intelligence Technologies

Working Paper No. 44 / 2018. The Working Paper focuses on the possible impacts of AI-related technologies, such as machine learning and autonomous vehicles, on international relations and society. The authors also examine the ethical and legal aspects of the use of AI technologies. The present Working Paper of the Russian International Affairs Council (RIAC) includes analytical materials prepared by experts in the field of artificial intelligence, machine learning and autonomous systems, as well as by lawyers...


Three Groups of Threats from Lethal Autonomous Weapons Systems

Some experts believe that maintaining strategic stability in the coming decades will require revising the foundations of deterrence theory in a multipolar world. The use of autonomous technologies, artificial intelligence and machine learning in the military sphere gives rise to new threats, and it is crucial that we identify them in time. Over the last decade, the development of technologies that can provide conventional weapons with the unique capabilities typical of “killer robots”...


RIAC at SIPRI and CICIR Conference on Mapping the Impact of Machine Learning and Autonomy on Strategic Stability and Nuclear Risk

... Learning and Autonomy on Strategic Stability and Nuclear Risk. Experts from Russia, China, the United States, France, Britain, Japan, South Korea, India and Pakistan attended the event to discuss the possible impact of machine learning technologies, autonomous systems and artificial intelligence on the development of weapons and the possibility of their use in conflicts. As a result of the conference, joint recommendations were developed to reduce the risk of escalation of relations between nuclear ...


The Ethical and Legal Issues of Artificial Intelligence

... interests of humans); 2) non-maleficence (robots should not harm humans); 3) autonomy (human interaction with robots should be voluntary); and 4) justice (the benefits of robotics should be distributed fairly). *** There are no fundamental reasons why autonomous systems should not be legally liable for their actions. The examples provided in this article thus demonstrate, among other things, how social values influence the attitude towards artificial intelligence and its legal implementation. Therefore,...


Poll

In your opinion, what are the US long-term goals for Russia?
  1. The U.S. wants to establish partnership relations with Russia on condition that it meets U.S. requirements: 33 (31%)
  2. The U.S. wants to deter Russia’s military and political activity: 30 (28%)
  3. The U.S. wants to dissolve Russia: 24 (22%)
  4. The U.S. wants to establish alliance relations with Russia, on U.S. conditions, to rival China: 21 (19%)