Anton Kolonin

PhD in Technical Sciences; Founder of Aigents; Architect at SingularityNET; RIAC Expert

Two years after our last review of the state of the art in artificial intelligence, the gap has continued to widen between the seeming omnipotence of the neural network models based on “deep learning” offered by market leaders and society’s demand for “algorithmic transparency.” In this review, we try to probe this gap, discussing which trends and solutions could help resolve the problem and which could exacerbate it further.

Key issues: consciousness, multimodality, super-deep learning, foundation models, transparency/opacity, ubiquitous use of AI systems, energy (in)efficiency, militarization.

Pivotal points of growth: probing what the “depth” of AI systems actually delivers, comparing network parameters with biological synapses, transitioning from explainability to interpretability, and formalizing approaches to the ethics of AI.

Developments of Recent Years

First of all, what we know as strong or general AI (AGI) has become a well-established item on the global agenda. A team of Russian-speaking researchers and developers has published a book on the topic, providing a thorough analysis of the technology’s possible prospects. For the last two years, the Russian-speaking community of AGI developers has also been holding open weekly seminars.

Consciousness. One of the key problems concerning AGI is the issue of consciousness, as was outlined in our earlier review. Controversy surrounds both the very possibility of imbuing artificial systems with consciousness and the extent to which it would be prudent for humanity to do so, if it is possible at all. As Konstantin Anokhin put it at the OpenTalks.AI conference in 2018, “we must explore the issue of consciousness to prevent AI from being imbued with it.” According to the materials of a round table held at the AGIRussia seminar in 2020, one of the first prerequisites for the emergence of consciousness in artificial systems is the capacity for “multimodal” behaviour: integrating information from various sensory modalities (e.g., text, image, video and sound) and “grounding” it in the surrounding reality, which would enable such systems to construct coherent “images of the world,” just as humans do.

Multimodality. It is here that a number of promising technological breakthroughs took place in 2021. For example, having been trained on a multimodal dataset of text–image pairs, OpenAI’s DALL-E system can now generate images of various scenes from text descriptions. In the meantime, the Codex system, also developed by OpenAI, has learnt to generate software code from an algorithm described in plain English.
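
As an illustration, below is a minimal sketch of how such text-to-code generation could be invoked through the OpenAI Python client of that period. The engine name and prompt are illustrative assumptions; access to Codex required a separate beta programme at the time.

```python
# Hypothetical sketch: asking a Codex-style model to turn a plain-English
# algorithm into Python code via the openai client (v0.x API style).
import openai

openai.api_key = "YOUR_API_KEY"  # assumed to be provided by the reader

prompt = (
    "# Python 3\n"
    "# Write a function that returns the first n Fibonacci numbers.\n"
    "def fibonacci(n):"
)

# The engine name "davinci-codex" is illustrative; actual availability
# depended on OpenAI's Codex beta program.
response = openai.Completion.create(
    engine="davinci-codex",
    prompt=prompt,
    max_tokens=128,
    temperature=0,   # deterministic output is preferable for code
    stop=["\n\n"],   # stop at the end of the generated function body
)

print(prompt + response.choices[0].text)
```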

Super-deep learning. The race for the “depth” of neural models, long dominated by the American giants Google, Microsoft (jointly with OpenAI) and Amazon, has now been joined by China’s tech giants Baidu, Tencent and Alibaba. In November 2021, Alibaba created the M6 multimodal network, which boasts a record number of parameters, or connections (10 trillion in total), just a tenth of the number of synapses in the human brain, as the latest estimates suggest.
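
To put this scale in perspective, a back-of-the-envelope comparison is sketched below. The synapse count of roughly 10^14 is a common textbook estimate, and the two-byte parameter size assumes half-precision (fp16) storage; both are assumptions for illustration.

```python
# Rough scale comparison between the M6 model and the human brain.
m6_parameters = 10e12   # 10 trillion parameters (Alibaba M6)
brain_synapses = 1e14   # ~100 trillion synapses (common estimate)

print(f"Parameters/synapses ratio: {m6_parameters / brain_synapses:.2f}")
# -> 0.10, i.e. one order of magnitude short of the synapse count

# Memory needed just to store the weights, assuming fp16 (2 bytes each).
bytes_per_param = 2
weight_storage_tb = m6_parameters * bytes_per_param / 1e12
print(f"Weight storage at fp16: {weight_storage_tb:.0f} TB")  # -> 20 TB
```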

Foundation models. Super-deep multimodal neural network models have been termed “foundation models.” Their potential capabilities and related threats are analysed in a detailed report prepared by the world’s leading AI specialists at Stanford University. On the one hand, the further development of these models can be seen as the most immediate route towards AGI, with a system’s intelligence expected to grow by virtue of an increasing number of parameters (eventually more than in the human brain), of perceived modalities (including modalities inaccessible to humans) and of training data (more than any individual person could ever process). The latter leads some researchers to speculate that a “super-human AI” could be built on such systems in the not-too-distant future. On the other hand, serious issues remain, both those raised in the report and others discussed below.

Algorithmic transparency/opacity. The further “deepening” of deep models exacerbates the conflict between this approach and the requirements of “algorithmic transparency,” which are increasingly imperative for AI-based decision-making systems as they proliferate. Restrictions on the applicability of “opaque” AI in areas that concern the security, rights, life and health of people are being adopted and discussed around the world. Interestingly, such restrictions can seriously hinder the use of AI in contexts where it could be most helpful, such as the ongoing COVID-19 pandemic, where it could support mass diagnostics amid a mounting wave of examinations and a catastrophic shortage of skilled medical personnel.

Ubiquitous AI. AI algorithms and applications are becoming ubiquitous, encompassing all aspects of daily life, be it movement of any kind or financial, consumer, cultural and social activities. It is global corporations, and the states that exert control over them, that control and derive the benefits from this massive use of AI. As we have argued earlier, the planet’s digitally active population is divided into unequal spheres of influence between American (Google, Facebook, Microsoft, Apple, Amazon) and Chinese (Baidu, Tencent, Alibaba) corporations. Since these corporations and states seek to maximize the profits of majority shareholders while preserving the power of the ruling elites, the scope for manipulation on their part will objectively only increase as the AI power at their disposal grows. It is symptomatic that OpenAI, initially intended as an open, public-oriented project, has shifted to closed source and is becoming ever more financially dependent on Microsoft.

Energy (in)efficiency. As with cryptocurrency mining, which has long drawn criticism for its detrimental environmental impact, the power consumption of “super-deep” learning systems and the associated carbon footprint are becoming another matter of concern. In particular, the paper presenting the OpenAI Codex system, developed jointly with Microsoft, discusses the environmental impact of this technology in a separate section. Given that the number of parameters in the largest neural network models is still orders of magnitude smaller than the number of synapses in the human brain, growth in the number of such models and in their parameters will lead to an exponential increase in their negative environmental impact. The efficiency of the human brain, which consumes incomparably less energy for the same number of connections, remains unattainable for existing computing architectures.
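
A rough sense of why scale translates into footprint can be conveyed with the commonly used 6·N·D approximation for transformer training compute. All input figures below (token count, hardware efficiency, grid carbon intensity) are illustrative assumptions, not measured values.

```python
# Back-of-the-envelope training footprint, using the common approximation
# that training a transformer costs ~6 * N * D floating-point operations
# (N = parameters, D = training tokens). All inputs are illustrative.
n_params = 10e12   # 10 trillion parameters
n_tokens = 300e9   # assumed training set of 300 billion tokens

train_flops = 6 * n_params * n_tokens  # ~1.8e25 FLOPs

effective_flops_per_joule = 1e10       # assumed accelerator efficiency
energy_joules = train_flops / effective_flops_per_joule
energy_mwh = energy_joules / 3.6e9     # 1 MWh = 3.6e9 J

carbon_kg_per_mwh = 400                # assumed grid carbon intensity
print(f"Energy: {energy_mwh:,.0f} MWh")                          # ~500,000
print(f"CO2: {energy_mwh * carbon_kg_per_mwh / 1000:,.0f} t")    # ~200,000
```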

Militarization. With no significant progress towards an international ban on the creation of Lethal Autonomous Weapon Systems (LAWS), such systems are already being employed by special services. As the successful use of attack drones has become a decisive factor in local military conflicts, wider use of autonomous systems in military operations may become a reality in the near future, especially since human pilots are no longer able to compete with AI systems in simulations of real air battles. The poor explainability and predictability of such systems at the very moment of their proliferation, and of their possible expansion into space, needs no further comment. Unfortunately, the aggravated strategic competition between the world leaders in both the AI and arms races leaves little hope for consensus, as the Aigents review noted back in 2020.

Prospects for Development

Given the insights shared above, let us briefly discuss the possible “growth zones,” including those where further development is critical.

What can the “depth” reveal? As shown by expert discussions, such as the September 2021 workshop with leading computational linguists from Sberbank and Google, “there is no intelligence there,” to quote one of the participants. The deepest neural network models are essentially high-performance and high-cost associative memory devices, albeit ones operating at speeds and information volumes that exceed human capabilities in a large number of applications. However, by themselves they fail to adapt to new environmental conditions unless manually tuned, and they are unable to generate new knowledge by identifying phenomena in the environment and connecting them into causal models of the world, let alone share such knowledge with other actors in that environment, be they people or other similar systems.

Can parameters be reduced to synapses? Traditionally, the power of “deep” neural models is compared to the resources of the human brain on the basis of the total number of neural network parameters, proceeding from the assumption that each parameter corresponds to a synapse between biological neurons, as in the classical graph model of the human brain connectome that neural networks have reproduced since the invention of the perceptron over 60 years ago. However, this leaves out of account the ability of dendritic branches to process information independently, the hypergraph and metagraph structures formed by axons and dendrites, the possibility of different neurotransmitters acting in the same synapse, and the interference effects between neurotransmitters from different axons in receptive clusters. Failing to reflect even one of these factors in full means that the complexity and capacity of existing “super-deep” neural network models may be removed by dozens of orders of magnitude from the actual human brain, which in turn calls into question the fitness of their architectures for the task of reproducing human intelligence “in silico.”
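
The structure of this argument can be made concrete with purely hypothetical multipliers: if each neglected biological factor adds complexity per synapse, the corrections compound multiplicatively. None of the factor values below are measured quantities; they only illustrate how quickly orders of magnitude accumulate.

```python
# Purely illustrative: how several biological factors, each adding
# complexity per synapse, compound multiplicatively. The factor values
# are hypothetical placeholders, not measurements.
factors = {
    "independent dendritic computation": 1e2,
    "hypergraph/metagraph connectivity": 1e3,
    "multiple neurotransmitters per synapse": 1e1,
    "neurotransmitter interference in clusters": 1e2,
}

combined = 1.0
for name, value in factors.items():
    combined *= value
    print(f"{name}: x{value:.0e}")

print(f"Combined correction: x{combined:.0e}")  # -> x1e+08 here
# Even these modest placeholders add 8 orders of magnitude on top of
# the raw parameter-to-synapse comparison.
```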

From “explainability” to interpretability. Although advances in Explainable AI technologies make it possible to “close” legal problems in cases related to the protection of civil rights, allowing companies to generate formally satisfactory “explanations” where the law requires them, the problem as a whole cannot be considered solved. It remains an open question whether trained models can be “interpreted” before they are put to use, so as to avoid situations where belated “explanations” can no longer bring back human lives. In this regard, the development of hybrid neuro-symbolic architectures, both “vertical” and “horizontal,” appears promising. A vertical neuro-symbolic architecture employs artificial neural networks at the lower levels for low-level processing of input signals (for example, audio and video), while “symbolic” systems based on probabilistic logic (Evgenii Vityaev’s Discovery, Ben Goertzel’s OpenCog) or non-axiomatic logic (Pei Wang’s NARS) handle high-level processing of behavioural patterns and decision-making. A horizontal neuro-symbolic architecture implies that the same knowledge can be represented either in a neural network implementing an intuitive approach (what Daniel Kahneman calls System 1) or in a logical system (System 2) operating on the basis of the abovementioned probabilistic or non-axiomatic logic. It is assumed that “models” implicitly learned and implicitly applied in the former can be transformed into “knowledge” explicitly deduced and analysed in the latter, and that both systems can act independently on an adversarial basis, sharing their “experience” with each other in the process of continuous learning.

A vertical neuro-symbolic architecture of explainable AI. Source: Society of Mind (Marvin Minsky)
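
The vertical scheme can be illustrated with a minimal sketch: a neural perception layer (stubbed here) emits symbolic assertions with confidences, and a small rule layer combines them into an auditable decision. Everything below, including the rule weights, symbol names and the decision threshold, is an illustrative assumption rather than a depiction of any of the cited systems.

```python
# Minimal sketch of a vertical neuro-symbolic pipeline (illustrative only).
# A neural network would normally produce these percepts from raw
# video/audio; here it is stubbed with fixed outputs.

def neural_perception(frame) -> dict[str, float]:
    """Stub for a low-level neural network: symbol -> confidence."""
    return {"pedestrian_ahead": 0.92, "road_clear": 0.15}

# Hand-written probabilistic rules: (premise, conclusion, strength).
RULES = [
    ("pedestrian_ahead", "must_brake", 0.95),
    ("road_clear", "may_accelerate", 0.8),
]

def symbolic_decision(percepts: dict[str, float]) -> list[tuple[str, float]]:
    """High-level layer: derive conclusions with traceable confidences."""
    conclusions = []
    for premise, conclusion, strength in RULES:
        confidence = percepts.get(premise, 0.0) * strength
        if confidence > 0.5:  # decision threshold (assumed)
            # Each conclusion is auditable: the rule and premise
            # confidence that produced it are known explicitly.
            conclusions.append((conclusion, confidence))
    return conclusions

print(symbolic_decision(neural_perception(frame=None)))
# -> [('must_brake', 0.874)], together with the rule that produced it
```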

Can ethics be formalized? As the ethics of applying AI in various fields is increasingly discussed at the governmental and intergovernmental levels, it is becoming apparent that the related legislation has national peculiarities, first of all in the United States, the European Union, China, Russia and India. Research shows significant differences in the intuitive understanding of ethics among people belonging to different cultures. Asimov’s Three Laws of Robotics prove particularly unhelpful here, as critical situations force people to choose whether, and how, their action or inaction will cause harm to some in favour of others. If AI systems continue to be applied, as they already are in transport, in fields where automated decisions lead to the death and injury of some in favour of others, legislation governing such systems will inevitably develop, reflecting different ethical norms across countries. AI developers working in international markets will then have to adapt to local laws on AI ethics, just as is now happening with personal data processing, where IT companies have to comply with the legislation of each individual country.

Further Steps

From a humanitarian perspective, it seems necessary to intensify cooperation between the states leading in both the AI and arms races (Russia, the United States and China) within the UN framework in order to effect a complete ban on the development, deployment and use of Lethal Autonomous Weapon Systems (LAWS).

When entering international markets, developers of universal general AI systems will have to ensure that their AI decision-making can be pre-configured to account for the ethical norms and cultural patterns of the target markets. For systems based on “interpretable AI,” this could be done, for example, by embedding the “core values” of the target market into the foundational layer of the system’s “knowledge graph.”
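
One conceivable shape for such pre-configuration is sketched below: a base “core values” layer in a knowledge graph that later domain rules must not contradict. The layer contents, value names and checking logic are entirely hypothetical, offered only to make the idea concrete.

```python
# Hypothetical sketch: a knowledge graph whose foundational layer holds
# market-specific "core values" that higher layers are checked against.

CORE_VALUES_BY_MARKET = {   # illustrative placeholders, not real norms
    "EU": {"privacy_first": True, "human_override_required": True},
    "US": {"privacy_first": False, "human_override_required": True},
}

class KnowledgeGraph:
    def __init__(self, market: str):
        # Layer 0 is immutable: the target market's core values.
        self.core_values = CORE_VALUES_BY_MARKET[market]
        self.rules: list[dict] = []   # higher layers: domain rules

    def add_rule(self, rule: dict) -> bool:
        """Accept a domain rule only if it respects the core values."""
        if self.core_values.get("human_override_required") and rule.get("fully_autonomous"):
            return False              # rejected: conflicts with layer 0
        self.rules.append(rule)
        return True

kg = KnowledgeGraph(market="EU")
print(kg.add_rule({"action": "approve_loan", "fully_autonomous": True}))   # False
print(kg.add_rule({"action": "approve_loan", "fully_autonomous": False}))  # True
```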

Russia cannot aspire to global leadership given its current lag in the development and deployment of “super-deep” neural network models. The country needs to close the gap on the leaders (the United States and China) by bringing its own software developments to the table, along with its own computing hardware and data for training AI models.

However, keeping in mind the fundamental problems, limitations and opportunities identified above, there may still be potential for a breakthrough in the field of interpretable AI and hybrid neuro-symbolic architectures, where Russia’s mathematical school remains a leader, as demonstrated by the Springer prize for best cognitive architecture awarded to a group of researchers from Novosibirsk at the AGI 2020 International Conference on Artificial General Intelligence. In terms of practical applicability, this area is today roughly where deep neural network models stood some 10–15 years ago; any delay in its practical development can therefore lead to a strategic lag.

Finally, an additional opportunity to dive into the problems and solutions in the field of strong or general AI will be presented to participants in the AGI 2022 conference, which is expected to take place in St. Petersburg next year and which certainly deserves the attention of all those interested in this topic.

