... Paper No. 44 / 2018
The present Working Paper of the Russian International Affairs Council (RIAC) includes analytical materials prepared by experts in the fields of artificial intelligence, machine learning and autonomous systems, as well as by lawyers and sociologists. The Working Paper focuses on the possible impacts of related technologies, such as machine learning and autonomous vehicles, on international relations and society. The authors also examine the ethical and legal aspects of the use of AI technologies. The materials presented here are intended to contribute to the public dialogue on issues of artificial intelligence and the possible consequences ...
Some experts believe that maintaining strategic stability in the coming decades will require a revision of the foundations of deterrence theory in a multipolar world.
The use of autonomous technologies, artificial intelligence and machine learning in the military sphere gives rise to new threats, and it is crucial that we identify them in time.
Over the last decade, the development of technologies that can provide conventional weapons with ...
... Research Institute (SIPRI) and the China Institutes of Contemporary International Relations (CICIR) on Mapping the Impact of Machine Learning and Autonomy on Strategic Stability and Nuclear Risk.
Experts from Russia, China, the United States, France, Britain, Japan, South Korea, India and Pakistan attended the event to discuss the possible impact of machine learning technologies, autonomous systems and artificial intelligence on the development of weapons and the possibility of their use in conflicts.
The conference produced joint recommendations on reducing the risk of escalation between nuclear powers and preventing ...
... effect. Legal personality determines what is important for society and allows a decision to be made as to whether “something” is a valuable and reasonable object for the purposes of possessing rights and obligations.
Due to the specific features of artificial intelligence, suggestions have been put forward regarding the direct responsibility of certain systems [11]. According to this line of thought, there are no fundamental reasons why autonomous systems should not be legally liable for their actions. The question remains, however, whether introducing this kind of liability is necessary or desirable (at least at the present stage). It is also related to the ethical issues mentioned ...