From February 16 to 20, 2026, New Delhi hosted the India AI Impact Summit—the fourth and largest in a series of global artificial intelligence summits launched in 2023. The event brought together over 100,000 participants, around 20 heads of state and government, as well as delegations from more than 110 countries and 30 international organizations. In total, more than 400 sessions were held—such scale in itself amounted to a political statement.
Many hoped that the summit would serve as a starting point for shaping a sustainable trajectory that would enable middle powers to participate in defining the rules of AI development and prevent the concentration of its benefits in the hands of just a few companies from the United States and China. However, instead of producing a unified architecture of global governance, the summit confirmed what had previously been only a trend: the global consensus on AI is fragmenting into competing regulatory philosophies. These disagreements are not technical but fundamental in nature—they reflect differing conceptions of the roles of the state, the market, and international institutions.
From “Safety” to “Influence”: The Shift in the Global AI Agenda
The evolution of artificial intelligence summits reflects an ongoing search for shared interests. While the early meetings—the “safety” summits in Bletchley Park (2023) and Seoul (2024)—focused on risk mitigation and prevention measures, the “action” summit in Paris (2025) brought geopolitical and geoeconomic competition to the forefront, including the struggle for standards, markets, and national champions. In turn, the “influence” summit in New Delhi (2026) shifted the emphasis toward the practical deployment of technologies.
If three years ago concerns about the existential risks of AI dominated the agenda and calls for preventive regulation were widespread, today the focus has shifted toward pragmatism: the decisive question is who will be able to integrate AI into public services, the economy, and infrastructure—and thereby shape the rules of the game.
Equally symbolic was the choice of venue. For the first time, an event of this scale was held in a Global South country. AI governance is no longer the exclusive domain of Western elites and technology giants. The question, however, is whether this shift is matched by real capabilities.
The New Delhi Declaration: Symbol of Unity, Reality of Divergence
The culmination of the summit was the New Delhi Declaration on AI Impact, endorsed by 91 countries, including China, Russia, and the United States. Notably, Washington had pointedly refused to sign the declaration of the 2025 Paris summit, deeming its emphasis on AI risks and inclusivity excessive.
The declaration covers issues such as the democratization of access to computing power, data, and AI models; the expanded role of AI in healthcare, education, agriculture, and public services; as well as principles of accountability and human oversight. At the same time, the document has drawn justified criticism for its conceptual redundancy: in essence, these principles largely reproduce positions already articulated by the OECD, the G20, UNESCO, and previous summits.
This is precisely where the systemic problem lies. The document contains neither financial commitments, nor mechanisms for the creation and use of shared computing capacity, nor binding standards. As one commentator aptly put it: “Non-binding declarations are the international equivalent of LinkedIn likes—generous, free, and quickly forgotten.”
This skepticism is hardly an exaggeration. Behind the declaration lies a fundamental asymmetry: approximately 90% of the world’s AI computing infrastructure is controlled by just two countries—the United States and China. While the document calls for the equitable use of technology, it sidesteps the reality of the concentration of computing power, data, and know-how in the hands of a small number of states and corporations.
Nevertheless, it would be a mistake to dismiss the New Delhi Declaration altogether. The very fact that 91 participants endorsed it creates a platform, however tentative, where China, Russia, and the United States have come together for the first time, opening up the possibility of discussing the coexistence of the largely independent AI systems already taking shape.
The Clash of AI Paradigms
By the spring of 2026, three distinct models had taken shape, each grounded in its own underlying philosophy.
The United States: Technological Dominance and Minimal Regulation
The Trump administration has prioritized minimal regulation of the sector in order to preserve U.S. global leadership in AI. Washington’s position was articulated with particular clarity by Michael Kratsios, Director of the White House Office of Science and Technology Policy, who stated: “We categorically reject global AI governance. The deployment of AI cannot lead to a bright future if it is subordinated to bureaucracy and centralized control.”
At the same time, leaders of major U.S. tech companies have spoken in favor of “reasonable regulation,” while the White House has taken a more cautious stance and has not rushed to advance regulatory measures.
The European Union: A Risk-Based Framework
The European approach is based on mandatory transparency requirements within a risk-based model, whereby the degree of state oversight, the obligations imposed on developers, and the strictness of regulation are directly determined by the level of risk that a given AI system poses to individuals.
The European Union was the first in the world to adopt a comprehensive AI law (the AI Act), which can be considered one of the most ambitious regulatory experiments in the history of technology policy. It provides for mandatory conformity assessments of high-risk AI systems prior to their placement on the market and substantial turnover-based fines for violations; most importantly, it has extraterritorial reach: its requirements apply to any provider serving clients in the EU, regardless of where its headquarters is located.
The logic of such frameworks is based on managing risks before harm occurs. However, the cost of this precautionary approach is higher compliance burdens for businesses and the potential flight of investors to more permissive jurisdictions.
Flexible Pragmatism
The third approach is not tied to a specific region, but is characterized by a reliance on voluntary frameworks and situational adaptation. This is not an absence of strategy, but a different logic—one in which the speed of technological deployment is valued more highly than the completeness of its regulation.
For example, on January 22, 2026, Singapore launched the world’s first governance framework for agentic AI—autonomous systems capable of planning and executing tasks with minimal human involvement. China’s model is also notable: it combines the creation of favorable conditions for rapid deployment with strict political oversight, including fines and stringent data localization requirements.
Russia’s approach, in turn, balances soft regulation—through an ethical code for the AI industry—with a risk-based framework that differentiates the scope for experimentation depending on the domain of AI application. Notably, though it often escapes the attention of external observers, the Russian approach in many respects aligns with the model India presented to the international community at the recent summit.
The Indian Model: A Fourth Path or a Tactical Compromise?
The key feature of the Indian model is that governance evolves in parallel with deployment, rather than preceding it. Rules are not rejected, but their sequencing is different: institutional adoption of technology comes before the consolidation of regulatory frameworks.
Instead of competing in the development of cutting-edge models, India is focusing on:
- institutional deployment of AI
- adaptation of open models to national priorities
- integration of legal mechanisms into the architecture of technologies
For example, the Sarvam AI model was presented at the summit, developed by fine-tuning an open-source Mistral foundation model on local language data. This is not a breakthrough at the scientific frontier, but rather a deployment strategy: take existing technologies, adapt them, and embed them in the institutional fabric. For a country that cannot match the investment scale of companies like Google, this may be the only rational approach. Such a model may well prove the most relevant for the majority of countries worldwide.
The United Nations: A Platform for Global Consensus
The United Nations seeks to position itself as a platform for global AI governance, aiming to bring together fragmented and rapidly evolving national regulatory practices within a single framework.
In particular, a number of initiatives have been launched across UN platforms to promote international scientific exchange and the development of AI talent. On the sidelines of the summit in India, the creation of an Independent International Scientific Panel on AI, composed of 40 experts, was announced. The panel’s mandate is to produce an annual, evidence-based report synthesizing existing research on the capabilities, risks, and implications of AI, helping states ground their positions on AI development and regulation in expertise and data.
UN Secretary-General António Guterres also announced a further initiative as part of the continued institutionalization of international cooperation—the creation of a dedicated platform: the Global Dialogue on AI Governance, scheduled to take place in Geneva in May 2026. As the Secretary-General noted, “Without a common baseline, fragmentation prevails—different regions will operate under incompatible policies and technical standards. This will increase costs, weaken safety, and deepen divisions.”
Two factors give serious cause for concern: the U.S. position, which treats the very idea of multilateral governance as a threat, and the European bureaucracy’s firm conviction that the practices of the AI Act should be extended without regard for the specificities of other countries. Under such conditions, UN initiatives risk failing to deliver the declared goal of a common global strategy for AI development. At the same time, the United Nations remains the only international institution that enjoys universal recognition.
Conclusions and Outlook
AI today is not only a technology, but also an instrument of sovereignty, economic competition, and global influence. A range of regulatory models is emerging, and their interaction and rivalry will shape the global AI architecture over the coming decade.
Regulatory fragmentation thus appears not as a temporary deviation, but as a new normal. Efforts to forge a universal global consensus are running up against fundamental differences in political philosophies, economic interests, and views on the role of the state. Under these conditions, a more realistic strategy lies in forming coalitions of countries to harmonize technical standards, share implementation practices, and gradually develop common data markets.
At the same time, a structural gap is widening between rhetoric and actual capabilities. For countries of the Global South, the issue is not a lack of ambition, but a deficit of “hard” capacity—proprietary models, computing infrastructure, skilled personnel, and reliable energy resources. Without overcoming these constraints, ambitious declarations risk remaining symbolic gestures that do not affect the distribution of technological power.
The situation is further complicated by the pace at which AI is spreading. The technology is evolving at an unprecedented rate, with cycles of development now measured in months. Under such conditions, detailed regulation risks becoming obsolete even before it is implemented. The AI Act is a clear example: the framework was developed prior to the widespread adoption of generative models and AI agents, forcing the European Union to repeatedly delay its full implementation in order to undertake substantial revisions.
At the same time, halting the deployment of AI is not an option. States that fail to integrate AI into their institutional fabric in a timely manner will face high costs of structural economic adjustment, institutional inertia, and increased dependence on external providers. In this context, participation in the increasingly discussed global AI race implies not only ambitious goals, but also concrete steps toward practical implementation. Ultimately, the key question is not who will draft the most comprehensive rules, nor even who will develop the most advanced foundational model, but how effectively a country can integrate AI into its institutions and economy.
Consequently, the country that succeeds in designing the most practical and effective model for AI development will gain the greatest advantages from the coming “AI-ization” of the world.