
Founders of a New World

We’re not just inventing a new technology with AI; we’re birthing a new civilization

In conversations about artificial intelligence, there’s often an underlying sense of relief tied to various timeframes or milestones. If we can just stay safe for 5 or 10 years, things will stabilize, and the long-term challenges of AI will become more manageable. But the arrow of time keeps going. It goes on past 10 years, past 100 years, even past 1,000 years. We don’t just have to maintain our civilizational balance with AI for 10 more years; we have this partner for life.

It would be natural to question whether we should even go ahead with this plan. Have we really asked ourselves if we want to enter a sci-fi book? Unfortunately, I’m not sure there’s an emergency brake available to pull. We are both empowered and disempowered. Do we have societal free will? Sometimes our society seems less like a group to be led and more like a growing embryo on a preordained path of development. Nick Bostrom captures this concept beautifully in his new book Deep Utopia:

Humanity is riding on the back of some chaotic beast of tremendous strength, which is bucking, twisting, charging, kicking, rearing. The beast does not represent nature. It represents the dynamics of the emergent behavior of our own civilization. The technology-mediated, culture-inflected, game-theoretic interactions between billions of individuals, groups, and institutions. No one is in control. We cling on as best we can for as long as we can, but at any point, perhaps if we poke the juggernaut the wrong way or for no discernible reason at all, it might toss us into the dust with a quick shrug or possibly maim or trample us to death.

But one thing we can certainly do is think about it. We can treat the far-flung future as real. This means wrestling with questions that feel almost absurd in their scope. Can we socially function with a fully automated economy? Is geopolitical equilibrium possible with ASI? Can cyborg diversity unravel social cohesion? If nothing else, maybe this long-view perspective can give us the right degree of humility about the gravity of our decisions.

The contemporary discourse around AI safety focuses on immediate technical challenges: preventing hallucinations, ensuring robust alignment, maintaining human control over critical systems. These are crucial concerns, but they’re only the opening moves in a game that will unfold across millennia.

It goes without saying that the stakes are extremely high. Either we successfully integrate human and artificial intelligence, or we fail catastrophically in the attempt. The science fiction future is coming, and soon.

We are unwittingly founding a new civilization. One that must safely coexist alongside an intelligence that will far exceed our own. Just as James Madison and Alexander Hamilton had to imagine how their constitutional framework would adapt to circumstances they could never predict, we must design for a future we can barely comprehend. The magnitude of this challenge demands that we think on their scale—not as engineers solving technical problems, but as architects of institutions that must outlive us by centuries.

Give this period the same weight you give America’s founding. We are birthing a new world whether we like it or not. Write your version of the Federalist Papers. Hold a constitutional convention. History is calling; meet the moment.
