Noel Sharkey

Emeritus Professor of AI and Robotics at the University of Sheffield, co-director of the Foundation for Responsible Robotics and chair of the International Committee for Robot Arms Control (ICRAC)

Tatyana Kanunnikova

Independent journalist, RIAC expert

Noel Sharkey is Emeritus Professor of AI and Robotics at the University of Sheffield, co-director of the Foundation for Responsible Robotics and chair of the International Committee for Robot Arms Control (ICRAC). Noel has worked in AI/robotics/machine learning and related disciplines for more than four decades. He held research and teaching positions in the US (Yale and Stanford) and the UK (Essex, Exeter and Sheffield).

Since 2006, Noel has focused on ethical/legal/human rights issues in AI and robot applications in areas including military, child care, elder care, policing, autonomous transport, robot crime, medicine/surgery, border control, sex, surveillance and algorithmic gender and race bias. Much of his current work is in lobbying policymakers internationally and in advocacy at the United Nations about prohibiting autonomous weapons systems. He writes both academically and for national newspapers and frequently appears in the media. Noel is probably best known to the public for his popular TV work, such as head judge for every series of BBC2 Robot Wars from 1998.


Why should we prohibit the development and use of autonomous robot weapons?

There are three main lines of argument against the use of autonomous weapon systems.

First, their technology cannot be guaranteed to comply with the laws of war. Despite significant advances in artificial intelligence over the past decade, algorithms still show substantial inaccuracy when making decisions in real-world conditions, and this would be greatly exacerbated in the theatre of armed conflict. We could not guarantee compliance with the principles of distinction or proportionality, and we have no formal way to test such systems to ensure that their programs would operate correctly under fog-of-war conditions.

Second, there are moral arguments that delegating life and death decisions to a machine is an affront to human dignity.

Third, what worries me most is the threat that autonomous weapons pose to international security. All of the major powers are developing versions of these weapons, and much of the discussion is about using swarms (massive numbers of weapons) to multiply military force. This makes it more likely that they could initiate accidental conflicts in border regions that would be fought at high speed and inflict massive devastation on civilian populations. No one will give away how their combat programs or algorithms work, and when two unknown algorithms meet, no one can predict what will happen. They could crash to the ground or into buildings, causing humanitarian crises.
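To make the escalation risk concrete, here is a minimal toy sketch (entirely hypothetical rules and numbers, not a model of any real weapon system): two automated systems each respond to the other's last action slightly more strongly. Neither rule looks aggressive in isolation, yet the coupled feedback loop escalates exponentially at machine speed.

```python
# Hypothetical toy model of two coupled response policies.
# Neither gain looks dangerous alone, but the feedback loop explodes.
RESPONSE_GAIN_A = 1.05   # system A replies 5% harder than it observed
RESPONSE_GAIN_B = 1.08   # system B replies 8% harder

action_a = action_b = 1.0          # a small initial provocation
for exchange in range(40):         # 40 machine-speed exchanges
    action_a = RESPONSE_GAIN_A * action_b
    action_b = RESPONSE_GAIN_B * action_a

print(f"after 40 exchanges: A={action_a:.0f}, B={action_b:.0f}")
# Each round multiplies intensity by about 1.13, so escalation is
# exponential: roughly 150-fold before any human could intervene.
```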


Some experts say that AI can be programmed never to attack certain flagged targets. How can AI distinguish civilians from militias? Is there a risk of mistakes?

A machine cannot be relied upon to distinguish civilians from military personnel. It is not just a visual task; it relies on context, especially in modern armed conflicts where many combatants do not wear uniforms. Machines could easily be tricked into killing the wrong people. Sensing systems, including cameras, identify targets far better in the laboratory than in the real 3D world of shadows and shifting light, and worse still in the fog of war.
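As a hedged illustration of that laboratory-versus-field gap (a toy model, not any fielded targeting system): fit a trivial classifier to clean "lab" data and watch its accuracy fall as real-world perturbation is added to the inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "laboratory" data: two well-separated classes in a 2-D feature space.
n = 200
X = np.vstack([rng.normal([0, 0], 0.5, (n, 2)),
               rng.normal([3, 3], 0.5, (n, 2))])
y = np.array([0] * n + [1] * n)

def predict(queries):
    """1-nearest-neighbour classifier fitted to the clean lab data."""
    d = np.linalg.norm(queries[:, None, :] - X[None, :, :], axis=2)
    return y[d.argmin(axis=1)]

# Fresh test points from the same distributions, then perturbed to mimic
# field conditions (noise, occlusion, lighting, collapsed into one sigma).
test = np.vstack([rng.normal([0, 0], 0.5, (100, 2)),
                  rng.normal([3, 3], 0.5, (100, 2))])
labels = np.array([0] * 100 + [1] * 100)

for sigma in (0.0, 1.0, 2.0):
    noisy = test + rng.normal(0, sigma, test.shape)
    acc = (predict(noisy) == labels).mean()
    print(f"perturbation sigma={sigma:.1f}  accuracy={acc:.2f}")
```

Accuracy is near-perfect on clean inputs and degrades steadily as the perturbation grows, even though the classifier itself never changed.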

How do you assess the risk of terrorists getting hold of robot weaponry? What kind of weapons could these be?

It is highly likely that insurgent groups will use autonomous drones. Drones have already been used extensively by ISIS, and it would not be difficult for them to deploy autonomous drones, since they have no qualms about targeting civilians.

Is there effective counter-drone or anti-AI expertise to defend against such attacks by terrorists?

There are a number of counter-weapons available against attacks by aerial drones. Still, at present these can easily be overwhelmed by swarms of drones, as was seen when a swarm of 18 bomb-laden drones attacked a Saudi Arabian oil refinery. The facility had sophisticated air defences, but they were overwhelmed by sheer numbers.
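A rough sketch of why numbers alone defeat such defences (the channel counts and kill probabilities below are invented for illustration, not real system parameters): a battery can only service a fixed number of fire-control channels per engagement window, so anything beyond that saturation point leaks through.

```python
import random

# All figures are hypothetical, chosen only to show the shape
# of the saturation problem.
CHANNELS = 4      # simultaneous engagements per window
WINDOWS = 3       # engagement windows before the swarm arrives
KILL_PROB = 0.8   # chance one engagement downs its target

random.seed(1)

def leakers(swarm_size: int) -> int:
    """How many drones survive every engagement window."""
    alive = swarm_size
    for _ in range(WINDOWS):
        engaged = min(alive, CHANNELS)
        kills = sum(random.random() < KILL_PROB for _ in range(engaged))
        alive -= kills
    return alive

for size in (4, 8, 18):
    print(f"swarm of {size:2d} -> about {leakers(size)} leak(s) through")
```

A small swarm is mostly destroyed, but once the swarm exceeds what the channels can service in the available windows, the surplus arrives untouched.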

Which AI technologies are used by government counterterrorism agencies in their fight against terrorism?

There are many surveillance techniques available to security services for monitoring online activity and phones, as well as a considerable number of biometric tools, like automated facial recognition.
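For a sense of how accessible the building blocks are, here is a minimal face-detection sketch using OpenCV's bundled Haar-cascade model. This is only the detection stage of a recognition pipeline, not the proprietary tooling agencies actually deploy, and the input filename is hypothetical.

```python
import cv2

# OpenCV ships a pre-trained Haar-cascade frontal-face detector.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(image_path: str):
    """Return bounding boxes (x, y, w, h) for faces found in an image."""
    image = cv2.imread(image_path)
    if image is None:
        raise FileNotFoundError(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # scaleFactor / minNeighbors trade detection rate against false alarms.
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

if __name__ == "__main__":
    boxes = detect_faces("crowd.jpg")   # hypothetical input image
    print(f"found {len(boxes)} face(s)")
```

A full recognition system would add an embedding model and a watch-list comparison on top of this detection step.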

The AI revolution is having an impact on different areas of security and intelligence. AI capabilities are believed to enable collecting more intelligence, more accurately and faster. In your opinion, what will be the role of artificial intelligence in security in the future?

I believe that it would be a grave error of judgement to rush into using AI for military purposes. The commercialisation of AI is still in its infancy, and rushing to deploy it before its downsides are thoroughly understood could cause militaries serious problems and defeats, especially against more conventional, low-tech forces using tried and trusted methods.

Is the “rise of the machines” scenario possible in the future?

Some people believe so, but I see no evidence that this will happen. Then again, the future is still a while away.
