Olga Afanasieva

GoodAI, Chief Operating Officer


We call the AI applications we see today “narrow AI” because the range of tasks they can perform is very limited. Essentially, all of the AI’s heuristics are “stored” in the brains of the human researchers and engineers who design it to perform pre-determined tasks on specific datasets.

The values, or biases, of such narrow AIs are likewise stored in human knowledge. A good example is a narrow AI system that acquires a racial bias from the data it is trained on, data that is, of course, generated by humans. A simple narrow AI does not make broad assumptions about the world or develop its own values the way humans do.

We do not yet trust AI with decision-making in crucial areas; for example, AI assists medical personnel by merely giving recommendations. However, research into more general AI systems is advancing, and we strive to develop algorithms that can make even better decisions and augment, not merely automate, our own human intelligence. This would allow us to optimize scientific research and its practical applications in a range of fields.


To be able to solve problems in completely new domains, a general AI system should be able to learn autonomously and also improve its own learning and problem-solving skills. Such general AI has not been developed yet, and one of the most important questions we need to answer along the way is: what kind of values will such systems acquire and propagate?

At GoodAI we believe that complex human values cannot simply be hard-coded into the AI; they have to be learned through a gradual and guided process. It is very much like the way we humans learn, through curricula and in environments specially designed for us by our families, institutions and society at large. Rather than memorizing hard rules in a world that is rarely black and white, we learn to understand the underlying principles of our culture; that way we can generalize these principles to situations we have never encountered before and act consistently with our values, which is a much more robust strategy.

Another argument against hard-coded values in complex adaptive systems (such as a future general AI) is that we want the values to evolve. For example, we would not want to live according to the values of past centuries that accepted slavery or limited the freedom of groups based on their skin color or gender. It would be unwise to limit the AI’s ability to discover even better value systems than we have today. To achieve this, however, we need to make sure we set the AI on the right value-learning trajectory.

How can we teach the AI the right values? First, we need to make sure we have a mechanism for AI to efficiently learn to understand our world. To be able to teach general problem-solving capabilities to the AI, we would first need to create a system with an innate ability to learn gradually. This means a system capable of learning new skills on top of previously learned ones, effectively re-using acquired knowledge and generalizing it to new domains. While trivial for humans, this is still an unsolved problem in AI.
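
To make the idea of gradual learning more concrete, here is a minimal Python sketch of one way to read it: new skills are built on top of, and re-use, previously learned ones instead of being learned from scratch. The names and structure below are purely illustrative assumptions, not GoodAI’s actual architecture.

```python
# Toy sketch of gradual skill learning (illustrative only, not GoodAI's system):
# each new skill is composed from previously acquired skills rather than
# being learned from scratch.

class SkillLibrary:
    def __init__(self):
        self.skills = {}  # skill name -> callable that may re-use earlier skills

    def learn(self, name, build):
        """Register a new skill; `build` receives the library so the new
        skill can call anything that was learned before it."""
        self.skills[name] = build(self)

    def use(self, name, *args):
        return self.skills[name](*args)


library = SkillLibrary()

# Stage 1: a primitive skill learned first.
library.learn("count", lambda lib: lambda items: len(items))

# Stage 2: a more complex skill that re-uses "count" instead of relearning it.
library.learn(
    "compare_sizes",
    lambda lib: lambda a, b: lib.use("count", a) - lib.use("count", b),
)

print(library.use("compare_sizes", [1, 2, 3], [4, 5]))  # prints 1
```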


The beauty of gradual learning is in its efficiency: human culture represents an accumulated body of knowledge that each new individual does not have to reinvent from scratch; it can be passed on by teachers and mentors and built upon. The gradual learning mechanism goes hand in hand with guided learning: as AI researchers, we will be hand-crafting curricula in virtual environments of gradually increasing complexity, where the AI will learn the skills and heuristics that we deem useful and beneficial. We will guide the AI through the learning process much like we guide our children, presenting them with the right information and challenges at the right time and not letting them venture into the real world, out of the sandbox, before they are ready.
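
As a loose illustration of such guided, curriculum-based training, here is a short Python sketch of the kind of loop it implies: environments of increasing difficulty, with the learner allowed to leave the "sandbox" only after it clears a success threshold on the current stage. The agent, environments, thresholds and function names are hypothetical placeholders, not an actual GoodAI API.

```python
# Toy curriculum loop (illustrative placeholders only): the agent advances to a
# harder environment only after it is "ready" on the current one.
import random

class Agent:
    def __init__(self):
        self.skill = 0.0

    def train_on(self, difficulty):
        # Stand-in for real training: skill gradually grows toward the difficulty level.
        self.skill += 0.1 * (difficulty - self.skill) + random.uniform(0.0, 0.05)

    def evaluate_on(self, difficulty):
        # Stand-in for evaluation: success rate depends on skill relative to difficulty.
        return max(0.0, min(1.0, 1.0 - (difficulty - self.skill)))

curriculum = [
    ("sandbox: single object, no distractions", 0.2),
    ("simple world: a few objects and rules", 0.5),
    ("complex world: many agents and goals", 0.9),
]
READY_THRESHOLD = 0.8  # do not let the agent move on before this success rate

agent = Agent()
for stage_name, difficulty in curriculum:
    while agent.evaluate_on(difficulty) < READY_THRESHOLD:
        agent.train_on(difficulty)
    print(f"graduated from: {stage_name}")
```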

Teaching, training, and especially testing the AI might be a much more tedious process than developing the base algorithm. Whoever creates general AI will have a direct influence on the kind of values the resulting system propagates. Given the transformative potential of such a general AI technology, these values could influence the future of humanity. And given the competitive advantage such an AI could give its possessors, these would be the values that would ultimately dominate our universe.

So, how can we make sure general AI is developed for the greater good of humanity as a whole, propagates the good values we have in common across cultures, and does not benefit just a select few who happen to be the first to deploy it?

To try to answer this complex question, at GoodAI we launched a worldwide challenge titled Solving the AI Race. AI has an unprecedented potential to shift the existing balance of power in the world and give an undisputed competitive advantage to a completely new or otherwise weak player. It is therefore desirable to mitigate a winner-takes-all scenario in the development of transformative AI, as we cannot rule out the possibility of rogue actors. We should also plan for how to protect against negligence in the development process or a lack of proper safety testing before the AI is deployed (hasty deployment might be caused by competitive pressure).

Finding a strong incentive to cooperate, and creating a robust model of cooperation among AI developers and stakeholders around the world, would pave the way towards good AI. Co-developing the next generation of AI could allow the parties involved to reap the benefits together, mitigating the winner-takes-all scenario and driving more extensive work on value learning and AI safety in general. How exactly would that work in practice, or is there an even better way to mitigate the risks associated with a global race for AI? Would you like to help develop a framework with the capacity to maximize the benefits of AI for society? If so, I invite you to try to find answers to these questions together with us by joining our challenge.
