Global Governance of Artificial Intelligence: The Birth of a New Architecture or Fragmentation of the Digital Agenda?
Researcher, Institute of Contemporary International Studies, Diplomatic Academy of the Ministry of Foreign Affairs of the Russian Federation; Head of the International Information Security School
Short version
The emerging system of global artificial intelligence (AI) governance is being formed in the context of an asymmetric distribution of digital power. According to UNCTAD’s Technology and Innovation Report 2025, the dominance of a limited number of multinational corporations is acquiring an oligopolistic character: a situation has formed in which several major players concentrate in their hands the bulk of resources, infrastructure and technologies, and their decisions effectively define the rules of the game for the entire sector.
For example, Alphabet, Amazon, and Microsoft control more than two-thirds of the global cloud services and data storage market, while in the segment of graphics processing units, critical for high-performance computing, Nvidia held around 90% of the global market in the third quarter of 2024.
Another indicator of imbalance is the distribution of investments: around 40% of global private AI investment is concentrated in roughly 100 companies, located primarily in the United States and China. This shows that the global centers of technological power are already shifting in favor of a handful of states.
Against this background, regulatory competition in AI acquires strategic significance. The accelerating race to write the rules for AI reflects a struggle by states and political blocs for decisive influence over the regulatory parameters governing the development and use of AI. Several states and politically consolidated blocs seek to advance their own regulatory models, each reflecting specific political-economic interests, cultural values, and strategic visions for the development of this cross-cutting technology.
The technological imbalance is aggravated by the transfer of competition from the "applied" AI domain, related to its development and practical use, to the sphere of international relations. Taken together, these factors increase the risk of fragmentation in the field of international AI regulation, which, in turn, calls into question the formation of a universal and coherent architecture for global AI governance.
Full version
The emerging system of global artificial intelligence (AI) governance is being formed in the context of an asymmetric distribution of digital power. According to UNCTAD’s Technology and Innovation Report 2025, the dominance of a limited number of multinational corporations is acquiring an oligopolistic character: a situation has formed in which several major players concentrate in their hands the bulk of resources, infrastructure and technologies, and their decisions effectively define the rules of the game for the entire sector.
For example, Alphabet, Amazon, and Microsoft control more than two-thirds of the global cloud services and data storage market, while in the segment of graphics processing units, critical for high-performance computing, Nvidia held around 90% of the global market in the third quarter of 2024.
Another indicator of imbalance is the distribution of investments: around 40% of global private AI investment is concentrated in roughly 100 companies, located primarily in the United States and China. This shows that the global centers of technological power are already shifting in favor of a handful of states.
Against this background, regulatory competition in AI acquires strategic significance. The accelerating race to write the rules for AI reflects a struggle by states and political blocs for decisive influence over the regulatory parameters governing the development and use of AI. Several states and politically consolidated blocs seek to advance their own regulatory models, each reflecting specific political-economic interests, cultural values, and strategic visions for the development of this cross-cutting technology.
In recent years, various international forums and organizations have produced numerous AI-related documents and initiatives: the OECD Council Recommendation on AI (2019), the Hiroshima AI Process, and the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (2024), among others. However, none of these initiatives is comprehensive; rather, they exhibit predominantly regional or bloc-based orientations. Moreover, there is a persistent trend towards initiatives formed with the active participation of G7 member states, while, according to UNCTAD, 118 countries, primarily from the Global South, remain outside these processes.
Thus, the technological imbalance is aggravated by the transfer of competition from the "applied" AI domain, related to its development and practical use, to the sphere of international relations. Taken together, these factors increase the risk of fragmentation in the field of international AI regulation, which, in turn, calls into question the formation of a universal and coherent architecture for global AI governance.
The First Bricks of Accelerated Institutionalization: The Advisory Body and Contours of the Global AI Architecture
In the context of continuing technological imbalance, regulatory fragmentation, and a multiplicity of initiatives, the United Nations has gradually sought to consolidate the AI agenda and to assume the role of a universal platform for developing coordinated approaches to global technology governance. Earlier steps, including the International Telecommunication Union's global "AI for Good" summit, held since 2017, and the adoption in 2021 of the UNESCO Recommendation on the Ethics of Artificial Intelligence, the first universal set of principles for the ethical use of AI, laid the foundation for this process.
The efforts of the UN Secretary-General proved particularly important. On October 26, 2023, at a press conference in New York, he announced the establishment of a High-Level Advisory Body on Artificial Intelligence. The Body comprised 39 experts and was tasked with developing recommendations in three main areas:
- International AI governance
- Analyzing AI-related risks and challenges and ways to mitigate them
- Using AI to accelerate progress towards the achievement of the Sustainable Development Goals
Already in September 2024, less than a year after its establishment, the Advisory Body presented its final report "Governing AI for Humanity." It proposed seven key recommendations for managing AI-related risks:
- Establishing an independent international scientific panel
- Launching an intergovernmental dialogue on AI governance
- Creating a hub for AI standards exchange
- Forming a global capacity-building network
- Developing a worldwide AI data system
- Establishing a global AI fund
- Opening an office dedicated to AI issues in the UN system
The report also called for the creation of the first "globally inclusive and distributed architecture for AI governance" based on international cooperation.
Despite its advisory nature, the document had a direct normative impact on the AI agenda: its key provisions were reproduced in a number of UN General Assembly resolutions and reflected in the Global Digital Compact. This marked the transition from expert initiatives to the consolidation of a multi-level framework for a global AI governance architecture. Notably, this transition took place in an unprecedentedly short time: in less than a year, the Advisory Body not only prepared the draft but also presented the final report, laying the foundation for further progress. It is possible that this accelerated pace reflected a desire to advance key recommendations in an expert format, bypassing at the first stage the intergovernmental mechanism, which could have become a source of major contention.
Normative Consolidation of the AI Agenda: UNGA Resolutions and the Pact for the Future
The year 2024 was marked not only by the publication of the final report of the High-Level Advisory Body, but also by a concentration of significant international AI initiatives, primarily within the framework of the United Nations. In a short period, the General Assembly adopted a number of documents that, on the one hand, outlined regulatory frameworks and, on the other, reflected the diversity of approaches and differences in strategic interests in the AI domain.
In March 2024, largely influenced by criticism of the Bletchley Park Summit's outcome, the United States, with the support of its allies, submitted draft resolution A/78/L.49, "Seizing the opportunities of safe, secure and trustworthy artificial intelligence systems for sustainable development," to the General Assembly for consideration. The document was adopted by consensus and became the first UNGA resolution on the civilian application of AI.
Already in July, a second resolution, "Enhancing international cooperation on capacity-building of artificial intelligence," initiated by China, was approved. It shifted the focus to the equitable distribution of resources and technologies.
Thus, the first two resolutions set different vectors for the development of the global AI agenda: one emphasized the need for trust-based and secure systems, while the other focused on fairness and inclusivity in access to AI capabilities.
The next step was the adoption in September 2024 of the Pact for the Future, which included the Global Digital Compact as an annex. The document outlined the contours of new global AI governance mechanisms, largely reproducing the provisions of the High-Level Advisory Body's report. However, the Compact did not receive unanimous support: it contained several controversial provisions and drew widespread criticism, including from the Russian Federation, which pointed to the lack of transparency in the document's preparation and the risk of supplanting existing multilateral negotiating formats. Thus, on the one hand, its adoption reflected deep differences in individual states' approaches to AI; on the other, it consolidated the framework trajectory for the further development of AI mechanisms under the auspices of the United Nations.
The military dimension of AI is another aspect that deserves particular attention. On October 16, the First Committee of the General Assembly approved draft resolution A/C.1/79/L.43, later adopted on December 24 as Resolution 79/239, "Artificial intelligence in the military domain and its implications for international peace and security." It confirmed the applicability of international law, including the UN Charter, international humanitarian law, and human rights law, to the use of AI in the military sphere. Overall, 165 member states voted for the draft, while two opposed and six abstained. In its statement, the Russian Federation pointed to the risk of undermining ongoing multilateral efforts within the Group of Governmental Experts and also expressed disagreement with the inclusion of criteria for the "responsible use" of AI and with references to regional initiatives, which, in Moscow's view, should not set standards for negotiations on military AI governance.
A logical continuation of bringing AI into the international security sphere was the high-level, U.S.-initiated meeting of the UN Security Council on AI, held on December 19, 2024, under the "Maintenance of International Peace and Security" agenda item. The stated purpose of the event was to elevate artificial intelligence to the status of a strategic category in the context of global peace and security.
In 2025, the dynamics continued: on July 25, 2025, the UN General Assembly unanimously adopted the resolution "The role of artificial intelligence in creating new opportunities for sustainable development in Central Asia" (submitted as draft A/79/L.94), initiated by the Republic of Tajikistan. The document established mechanisms for using AI to achieve the Sustainable Development Goals in the region. One of its central provisions is a proposal to establish a regional AI center in Dushanbe to coordinate educational, research, and infrastructure initiatives in Central Asia.
Taken together, the adopted documents indicate the formation of norms in the AI sphere and the gradual expansion of the civilian AI agenda: from sustainable development and trust to international security concerns and regional initiatives. They simultaneously lay the groundwork for future codification. At the same time, there is a tendency towards localized forms of cooperation that can complement the global mechanism, which likely points to the strengthening role of regional centers as platforms for testing and adapting future norms.
Culmination: UNGA Resolution A/RES/79/325 and New Mechanisms of Global AI Governance
On August 26, 2025, the UN General Assembly adopted Resolution A/RES/79/325, establishing two new mechanisms for global AI governance—the Independent International Scientific Panel on AI and the Global Dialogue on AI Governance. The document established their terms of reference and operating modalities:
- The Scientific Panel will consist of forty experts appointed in their personal capacity for three-year terms on the basis of equitable geographical distribution, with a mandate to prepare annual analytical reports, "policy-relevant but not policy-prescriptive," summarizing global research on AI risks, opportunities, and impacts.
- The Global Dialogue will become a multilateral platform for policy discussions and consensus-building on AI regulation among governments and stakeholders. Its first meeting will take place in September 2025, during the 80th session of the UN General Assembly. In 2026, the meeting will be held within the framework of the ITU Global Summit "AI for Good," and in 2027 during the Multi-Stakeholder Forum on Science, Technology and Innovation for the SDGs.
In effect, UNGA Resolution A/RES/79/325 forms the institutional core of global AI governance in the UN system and marks a transition from framework declarations to the creation of a unified architecture. At the same time, the UN has secured its role as coordinator of a multi-level AI governance system. Nevertheless, it should be borne in mind that there remains a risk of "soft polycentricity," in which the global architecture does not become unified but turns into a network of partially compatible regimes.
Civilian and Military Tracks of AI Agenda
It is of fundamental importance that the resolution limits the mandate of the institutions created under the auspices of the United Nations exclusively to the civilian application of AI. The military aspect, which remains sensitive in international practice, is deliberately excluded and continues to be the subject of separate discussion. Thus, a dual-track structure is emerging:
- Civil application of AI—institutionalized in new mechanisms (Scientific Panel and Global Dialogue).
- The military use of AI has been placed on a separate negotiating track (primarily within the Group of Governmental Experts on Lethal Autonomous Weapons Systems under the Convention on Certain Conventional Weapons, as well as in the UN Disarmament Commission), which avoids blocking negotiations on the civilian dimension of AI.
Possible Scenarios for Evolving Global AI Governance and Future "Rules of the Game"
The years 2024–2025 marked a period of accelerated institutionalization of the AI agenda within the UN. The adopted documents not only signified the transition from declarative principles to specific procedural and organizational mechanisms, but also demonstrated the increasing politicization of the issue. On the one hand, the UN decisions consolidated shared guidelines and the need to develop coordinated approaches to AI governance. On the other hand, disagreements over the content of individual documents and the multiplicity of parallel initiatives indicate a continuing risk of regulatory fragmentation and competition between regimes.
At the same time, it cannot be excluded that the emergence of a separate "AI track" represents an attempt to disperse negotiations on information and communication technologies as a whole, partially removing politically sensitive issues from the central debates. In this context, the establishment of the Independent Scientific Panel and the Global Dialogue signals both consolidation and potential further stratification of the negotiations.
The process of forming a global AI governance architecture remains in the early stage of institutionalization. Nevertheless, the prospects for global AI governance after the adoption of Resolution 79/325 should be considered not as abstract hypotheses, but as scenarios for the development of an already defined institutional framework. The creation of the Scientific Panel and the Global Dialogue establishes the minimum infrastructure required for coordination—regular expert reports of a non-prescriptive nature and a cyclical intergovernmental track.
Based on these parameters, three basic scenarios for the development of the global AI governance architecture can be distinguished: optimistic (accelerated unification), intermediate (soft polycentricity) and pessimistic (contour fragmentation).
Scenario 1: Accelerated unification through the UN’s institutional core
Under this scenario, regular reports by the Scientific Panel form a consolidated evidence base and shared concepts, while the Global Dialogue gradually moves from exchanging positions to agreeing on minimum mandatory regulatory parameters. The result may be a framework political and legal agreement under the auspices of the United Nations (in the format of a universal declaration with a line of protocols) or a package of harmonized standards for national and regional implementation.
Scenario 2: Soft Polycentricity
The Global Dialogue and the reports of the Scientific Panel serve as compatibility mechanisms among regional and national regimes without moving towards binding norms. A network of coordination emerges: a set of agreed guidelines, voluntary codes, and technical standards recognized through practice and through references in national acts. "Soft polycentricity" becomes a stable state in which different centers of power maintain their own models but agree on compatibility parameters. The result is the strengthening of soft law, the formation of new customary norms (for example, on mandatory risk assessment and human oversight), and increased transparency through regular reporting, without the use of coercion. The UN's role in this scenario is reduced to that of coordinator and "referee," seeking compatibility among different approaches to AI.
Scenario 3: Contour fragmentation while maintaining the support of the United Nations
Several quasi-independent regulatory zones take shape, each with its own mandatory requirements. However, the Scientific Panel and the Global Dialogue function as platforms for information exchange, norm stress-testing, incident de-escalation, and the development of "security protocols" on narrow issues (for example, notification of high-risk AI incidents). In this context, the UN's role would be limited to maintaining a "subtle connection" through regular dialogue and reports, preventing the complete disintegration of the regime.
To summarize, the evolution of the global AI governance architecture is not predetermined; much will depend on political will and on the recognition of a shared human destiny in the face of technological progress. At the same time, the further development of the AI architecture will largely depend on the dynamics of the newly created mechanisms.
In this regard, the upcoming international events are of particular importance:
- The informal high-level meeting during the High-level week of the 80th UN General Assembly session (September 2025)
- The Global Dialogue within the framework of the Global Summit of the International Telecommunication Union "AI for Good" (Geneva, 2026)
- The Multi-Stakeholder Forum on Science, Technology and Innovation for the SDGs (New York, 2027)
These events will serve as indicators of whether the emerging architecture moves towards consolidation or towards a multi-level, partially fragmented system. In any case, the process of codification and institutionalization in the AI sphere has entered an irreversible phase, and the international community faces the task not only of agreeing on the "rules of the AI game," but also of ensuring their universal applicability, minimizing the risk of "AI fault lines."