The Real Risk of AGI: Geopolitical Disequilibria

Ultimately, the risks of AGI are not technical but human

The existential risks of artificial intelligence have long been framed in terms of rogue superintelligence: a godlike machine deciding that humanity is irrelevant or an obstacle to its goals. This narrative, rooted in cautionary science fiction, has dominated public discourse. Yet the more immediate and arguably greater threat posed by artificial general intelligence lies not in the machines themselves, but in the consequences of their creation for us—their human stewards. As the United States and China solidify their positions as the two dominant powers in artificial intelligence, their competition for supremacy introduces a game-theoretic dilemma, one that threatens to upend the fragile balance of our global order.

Game theory emerged as a framework to analyze precisely these kinds of high-stakes international competitions. At its core, game theory explores how rational actors navigate situations where their success depends not only on their own choices, but on the anticipated actions of their rivals. John von Neumann, one of the field's pioneers, developed these ideas in the shadow of World War II, when the need to predict and manage strategic conflict felt both urgent and existential. Von Neumann's Theory of Games and Economic Behavior formalized ideas that would later prove crucial in understanding—and preventing—catastrophic conflict between great powers.

The most famous concept in game theory is the prisoner's dilemma, which illustrates the tension between individual and collective rationality. In this scenario, two prisoners, interrogated separately, face a choice: cooperate with each other by remaining silent, or defect by betraying the other to reduce their own sentence. Rational actors, distrustful of their counterpart's intentions, tend to defect, leaving both worse off than if they had cooperated.

This dynamic eerily mirrors the current state of US-China relations, where mutual distrust threatens to override shared interests in safe AGI development. Each party seeks to develop AGI first, knowing that whoever succeeds could gain overwhelming economic, military, and strategic advantages. Yet this very competition raises the probability of catastrophic outcomes. Rushing to deploy AGI without sufficient safeguards could lead to accidents, misuse, or a destabilizing imbalance of power. Cooperation would yield a safer path forward, with shared oversight and slower development. But the fear that the other side will defect and seize an insurmountable lead makes cooperation precarious. The logic of the prisoner's dilemma pushes both Washington and Beijing toward an arms race.
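
To see the trap formally, here is a minimal sketch in Python. The payoff numbers are purely hypothetical, chosen only to reproduce the classic ordering in which racing ahead always looks better for each side individually even though mutual restraint leaves both better off; they are not estimates of real-world stakes.

```python
# Illustrative prisoner's-dilemma payoffs for an AGI "race".
# The numbers are hypothetical; only their ordering matters.
#
# Strategies: "cooperate" = joint safety regime, slower development
#             "defect"    = race ahead unilaterally
PAYOFFS = {
    # (US strategy, China strategy): (US payoff, China payoff)
    ("cooperate", "cooperate"): (3, 3),   # shared, safer progress
    ("cooperate", "defect"):    (0, 5),   # the rival gains a decisive lead
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),   # costly, riskier arms race
}

STRATEGIES = ("cooperate", "defect")


def best_response(player: int, rival_move: str) -> str:
    """Return the strategy maximizing `player`'s payoff (0 = US, 1 = China)
    against a fixed rival move."""
    def payoff(own: str) -> int:
        profile = (own, rival_move) if player == 0 else (rival_move, own)
        return PAYOFFS[profile][player]
    return max(STRATEGIES, key=payoff)


def nash_equilibria():
    """Yield pure-strategy profiles where neither side gains by deviating."""
    for us in STRATEGIES:
        for cn in STRATEGIES:
            if best_response(0, cn) == us and best_response(1, us) == cn:
                yield (us, cn)


if __name__ == "__main__":
    print(list(nash_equilibria()))  # [('defect', 'defect')]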

Beyond the prisoner's dilemma, AGI introduces a variation of the security dilemma. In the traditional formulation, one nation's efforts to enhance its security—through military buildup, for example—unintentionally threaten other nations, prompting them to respond in kind. We see this playing out today: when China invests heavily in AI chips or in scaling up its models, Washington views it as a potential threat, responding with export controls and increased domestic investment. Similarly, Chinese leaders interpret American AI advances through a lens of strategic competition, accelerating their own development programs.
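
The action-reaction mechanism behind the security dilemma can be sketched as a simple feedback loop. The update rule and coefficients below are illustrative assumptions rather than a model of actual budgets; the point is only that when each side sets its next investment partly in proportion to the rival's last observed capability, purely defensive moves compound into an escalation spiral.

```python
# Toy action-reaction model of the security dilemma.
# All coefficients are illustrative assumptions, not empirical estimates.

def escalation_spiral(rounds: int = 5,
                      baseline: float = 1.0,
                      reaction: float = 0.5) -> list[tuple[float, float]]:
    """Each side adds a baseline investment plus a reaction term proportional
    to the rival's capability in the previous round, so steps taken for
    security by one side register as threats to the other."""
    us, cn = 1.0, 1.0  # starting capability levels (arbitrary units)
    history = [(us, cn)]
    for _ in range(rounds):
        us_next = us + baseline + reaction * cn  # US responds to China's last level
        cn_next = cn + baseline + reaction * us  # China responds to the US's last level
        us, cn = us_next, cn_next
        history.append((us, cn))
    return history


if __name__ == "__main__":
    for step, (us_level, cn_level) in enumerate(escalation_spiral()):
        print(f"round {step}: US={us_level:.1f}, China={cn_level:.1f}")
```

Even with modest reaction coefficients, both trajectories grow without bound: neither side intends aggression, yet each round of "defensive" investment gives the other a reason to accelerate.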

AGI's potential to deliver a decisive advantage lies in its ability to redefine foundational domains of power—military strategy, cyberwarfare, and weapons development—in ways that traditional systems will find difficult to counter. AGI could revolutionize strategic planning, extending AI's superhuman achievements in board games like chess and Go to the real world. An AGI secretary of defense could simulate thousands of war-game scenarios in real time and devise preemptive strategies that exploit vulnerabilities no human strategist could anticipate. For example, an AGI system might coordinate simultaneous disruptions of supply chains, communications networks, and infrastructure in an adversary's territory, destabilizing its ability to mount an effective defense without firing a single shot. Such capabilities would tilt the balance of power decisively toward the nations that field the most capable systems first.

Cyberwarfare will likely become a primary mode of engagement for nations wielding advanced AI systems. A troubling near-term possibility is the emergence of governments with fully autonomous software engineering and AI research capabilities, eliminating the talent gap that has long constrained them. These tools could enable new forms of cyber operations, in which AI systems continuously probe networks for vulnerabilities, devise novel exploits, and coordinate complex multi-vector attacks far more quickly and creatively than human operators. The real danger may lie not in catastrophic attacks, which risk devastating retaliation, but in an AI's ability to find and exploit subtle vulnerabilities that slowly erode system integrity and institutional trust over time. Such capabilities could reshape cyber conflict while staying beneath traditional escalation thresholds.

Weapons innovation could see the most transformative impacts. AGI could accelerate the development of autonomous drone swarms and hypersonic interceptors, rendering existing missile arsenals obsolete. More profoundly, it could invent entirely new classes of weapons—from directed energy weapons that neutralize threats at the speed of light to space-based kinetic strike systems. These capabilities would fundamentally alter the logic of deterrence, potentially making preemptive strikes or unconventional warfare more likely as nations scramble to prevent their rivals from achieving unassailable dominance.

The consequences of non-cooperation are not hypothetical. A runaway AGI arms race could push nations to cut corners, accelerating the proliferation of advanced AI capabilities to non-state actors. Without shared safeguards and monitoring systems, terrorist groups, criminal syndicates, or even lone actors could acquire or replicate AGI systems, wielding unprecedented destructive power without the constraints that govern nation-states. Even more concerning, competitive pressure between major powers could lead them to deliberately arm proxy groups with AI capabilities, much as nuclear proliferation spread through covert state sponsorship. The resulting landscape—where superintelligent systems could fall into the hands of actors with apocalyptic ideologies or purely destructive aims—might pose an even greater threat than state-level competition. This new form of proliferation risk could rapidly destabilize the international order in ways that even nuclear weapons did not.

The scale of this challenge demands we expand our analytical frame. The advent of AGI isn't just another technological race—it likely heralds the emergence of a new world, one previously confined to science fiction. The sheer transformative power of superintelligence makes traditional great power rivalry not just dangerous, but potentially suicidal. This new reality demands new thinking about international relations.

Fortunately, the two key players—the United States and China—are not locked into an inevitable conflict. Unlike most rivals in history, they share no contested borders and are separated by a vast ocean. Their economies are deeply intertwined through trade relationships that create mutual prosperity. While their competition is real, both nations share an existential interest in ensuring AGI's safe development. This shared stake in humanity's future could transcend their significant economic and strategic differences, much as the nuclear threat created space for US-Soviet cooperation despite deep ideological divides.

This brings us to a crucial point: before pursuing formal agreements or institutional frameworks, we must begin with personal diplomacy. While game theory typically models nations as abstract rational actors, history shows that international relations often turn on personal relationships between leaders. During the Cold War, the personal rapport between Reagan and Gorbachev proved crucial in creating openings for substantive cooperation.

This is particularly relevant given Chinese political culture, where personal relationships and face-to-face respect are foundational to serious cooperation. Rather than viewing this as peripheral to the game theory of AGI competition, it should be understood as a prerequisite to shifting the underlying payoff matrix. When leaders have personal trust and understanding, they can more effectively sell cooperation to their domestic constituencies and create space for their bureaucracies to work together constructively.

Beyond leader-to-leader dynamics, a fundamental shift in public narrative is required. The current discourse often frames China as a monolithic adversary, overlooking the richness and diversity of its people and culture. American leaders and institutions must actively reshape this narrative, fostering a genuine appreciation for the Chinese people and their contributions to the world. Social media, academic exchanges, cultural programs, and storytelling platforms can all serve as vehicles for building bridges between societies. When the public discourse shifts from caricatures and suspicion to mutual respect and understanding, leaders gain more latitude to pursue collaborative approaches rooted in shared humanity.

With this foundation of mutual understanding in place, the hard work of developing governance mechanisms must begin. AGI's unprecedented capabilities will likely require oversight systems that go well beyond existing regulatory frameworks. While nuclear arms control offers some lessons, AGI poses unique challenges: it is a dual-use technology that develops incrementally through commercial activity rather than emerging from a single military breakthrough. Any approach would need to balance innovation, safety, and national security across technical, diplomatic, and research domains.

A comprehensive framework for AGI governance might include several key elements. First, a permanent US-China AGI Safety Council, composed of senior technical experts and policymakers from both nations, could establish shared safety standards and verification protocols. Unlike nuclear inspectors, these teams might need sophisticated tools to monitor AI development, including shared access to training data, model architectures, and testing protocols.

Second, effective crisis prevention might require mechanisms beyond traditional hotlines. Joint monitoring facilities, staffed by technical experts from both nations, could track rapid capability advances, novel emergent behaviors, and potential security vulnerabilities in real time. Early warning systems might detect signs of unintended AI behaviors or capability jumps before they destabilize the strategic balance.

Third, creating mechanisms for selective research collaboration on AGI safety could prove valuable. This might begin with sharing research findings on narrow technical challenges, establishing common standards for testing and evaluation, and developing shared protocols for addressing specific safety risks. A framework for limited technical exchange could promote progress on safety while respecting each nation's security interests.

While governance frameworks are important, building a genuinely positive relationship with China is the essential task. Within nations, we've evolved sophisticated systems of law and dispute resolution that make violence between states or regions virtually unthinkable. California and Texas compete vigorously but would never dream of military buildups against each other. This achievement—replacing force with law—stands as one of humanity's great advances. Yet in the international arena, we still largely operate in violent anarchy. Until nations achieve among themselves what American states have achieved with one another, friendship is our primary safeguard.

Ultimately, AGI represents not just a technological breakthrough but the dawn of a new era—one where great power rivalry is no longer tenable. Our continued existence is not guaranteed. Humanity may have survived the threat of nuclear annihilation so far, but we are not out of the woods. The development of AGI will be a point of bifurcation, where the fate of our species hinges on the choices of a handful of leaders. One path risks collapse, while the other offers the potential for unimaginable abundance. The collision between our Paleolithic emotions and godlike technology has been building for over a century—its crescendo is now upon us.
