Interview with Professor Noel Sharkey, chair of the International Committee for Robot Arms Control
Noel Sharkey is Emeritus Professor of AI and Robotics at the University of Sheffield, co-director of the Foundation for Responsible Robotics and chair of the International Committee for Robot Arms Control.
Working Paper No. 44 / 2018
The Working Paper focuses on the possible impacts of related technologies, such as machine learning and autonomous vehicles, on international relations and society. The authors also examine the ethical and legal aspects of the use of AI technologies. The present Working Paper of the Russian International Affairs Council (RIAC) includes analytical materials prepared by experts in the fields of artificial intelligence, machine learning and autonomous systems, as well as by lawyers and ...
Some experts believe that maintaining strategic stability in the coming decades will require revising the foundations of deterrence theory in a multipolar world.
The use of autonomous technologies, artificial intelligence and machine learning in the military sphere leads ...
... the Stockholm International Peace Research Institute (SIPRI) and the China Institutes of Contemporary International Relations (CICIR) on Mapping the Impact of Machine Learning and Autonomy on Strategic Stability and Nuclear Risk.
Experts from Russia, China, the United States, France, Britain, Japan, South Korea, India and Pakistan attended the event to discuss the possible impact of machine learning technologies, autonomous systems and artificial intelligence on the development of weapons and the possibility of their use in conflicts.
As a result of the conference, joint recommendations were developed to reduce the risk of escalation in relations between nuclear ...
... 1) beneficence (robots should act in the best interests of humans); 2) non-maleficence (robots should not harm humans); 3) autonomy (human interaction with robots should be voluntary); and 4) justice (the benefits of robotics should be distributed fairly).
***
There are no fundamental reasons why autonomous systems should not be legally liable for their actions.
The examples provided in this article thus demonstrate, among other things, how social values influence attitudes towards artificial intelligence and its legal implementation. Therefore, ...