The Race for AI and the New Paradox of International Security

Note: The views expressed in this text are solely those of the author and do not necessarily reflect the position of this website.


This analysis expands Sullivan and Feldman's thesis to a global perspective, examining how Artificial Intelligence reconfigures the balance of power, national sovereignty, and the stability of the international system.


Photo: Meer

The history of geopolitics is marked by the struggle for tangible resources such as land, trade routes, and energy sources. The 21st century, however, introduces a variable that alters this logic: Artificial Intelligence (AI). Sullivan and Feldman’s text suggests that AI is not just a sectoral advancement but a structural force that will determine the global hierarchy. In the international arena, the struggle centers on national survival and strategic autonomy, in which control over data defines the weight of each State on the global chessboard.


The reconfiguration of forces


The geopolitics of AI can be analyzed through three critical dimensions that affect all global actors, beyond the Washington-Beijing axis:


The Sovereignty of Technical Layers


World power is now distributed across a hierarchical "stack."

  • At the base lies hardware and energy. Countries that control the manufacturing of advanced semiconductors or possess energy matrices capable of sustaining massive data centers become "anchor-States."

  • In the middle layer, software and models, a struggle unfolds between proprietary models and open source. The former functions as an "enclosure" of intelligence: powers and corporations maintain control via black boxes and APIs, guarding against the proliferation of risks but creating a structural dependence of "digital vassalage" for those who consume them. Open source, by contrast, emerges as a tool for sovereignty and democratization, allowing nations to pursue technological autonomy and adapt AI to their local contexts. However, it also removes security "filters," letting cutting-edge innovations be quickly replicated or misused by rivals and non-state actors, accelerating the erosion of the creators' competitive advantages.

  • Finally, the data layer generates a new type of extractivism: populous nations provide the raw material, but the added value is captured by the powers that hold the processing capacity.


Eight Worlds


The "Eight Worlds" matrix proposed by the authors reveals a new global stratification, in summary:

  • The Powers: Those who seek superintelligence and could create a "singularity" of power that makes it impossible for rivals to catch up. Example: the United States and China.

  • The Adopters: States that, although they do not create the base models, integrate AI deeply into their industries and military systems, gaining relevance through efficiency. Example: South Korea and Israel.

  • The Global South and the risk of dependence: For many countries, the danger is digital colonization. If AI is difficult to copy, the world could see an impassable gap between nations that think with their own algorithms and those that depend on foreign infrastructures to govern and produce. Example: Brazil and India.


The Security Dilemma


Unlike nuclear weapons, artificial intelligence does not depend on rare physical inputs or easily traceable infrastructures, making its proliferation potentially rapid, diffuse, and difficult to control. Its dual character, simultaneously civil and military, and its essentially invisible nature amplify risks to international stability. In an "easy proliferation" scenario, capabilities previously restricted to great powers fall within the reach of small States and even non-state actors, especially in areas such as cyberwarfare, large-scale disinformation, autonomous systems, and sensitive applications in biosecurity. This profoundly alters the distribution of power in the international system, favoring asymmetric strategies and reducing the structural advantage associated with traditional industrial or military power.


Furthermore, the speed of AI-based systems compresses the time for political and military decision-making, incentivizing the automation of strategic responses and increasing the risk of accidental escalations without clear political intent. Unlike nuclear deterrence, based on unequivocal attribution, immediate costs, and relatively stable red lines, AI operates in an environment of opacity, gradual effects, and diffuse responsibility, which weakens classic containment mechanisms. Added to this is the fact that many of the central actors in developing these technologies are private companies and laboratories, which fragments governance, dilutes state authority, and hinders the construction of effective international regimes.


In this context, while international cooperation is necessary to mitigate systemic and existential risks, strategic incentives push States toward a continuous technological race motivated by the fear of falling behind. The result tends to be an unstable, suboptimal equilibrium in which everyone recognizes the risks, but few are willing to slow down. Thus, AI inaugurates not only a new dimension of power but also a pattern of chronic instability, in which global governance chases a technology that advances faster than the political capacity to regulate it.


Conclusion


In short, if AI reaches the level of superintelligence, the world may move toward an unprecedented technical unipolarity. If it becomes a common utility technology with easy diffusion, the advantage will belong to those with the best capacity for execution and social integration. The challenge for the international community will be to prevent the race for the technological frontier from resulting in a total fragmentation of the global system, where intelligence becomes the ultimate weapon of exclusion and control.


RI Talks All rights reserved ©
