
What is a Computer?

Meditations on the computer on the eve of superintelligence

In his twenties, Lord Byron was already the most celebrated poet in England, his fame rivaling that of any royal. When his groundskeeper unearthed a human skull from the grounds of Newstead Abbey—a crumbling monastery Byron had inherited at age ten—most peers of the realm would have ordered it reverently reburied. Byron instead had it cleaned, polished, and fitted with gleaming silver mountings. One evening, as the cream of London society gathered in his Gothic mansion's great hall, their notorious host vanished briefly and returned bearing what appeared to be an ordinary goblet. Only when Byron raised it to his lips did candlelight reveal its true nature: he was drinking claret from the monk's skull. Such macabre revelries became legendary, as debauchery mixed with poetry, and rumors swirled of darker ceremonies in the ancient underground vaults.

By early 1816, scandal would drive him from England forever, leaving behind mounting debts, a trail of affairs, and a month-old daughter. His wife, determined to prevent the child from inheriting her father's notorious temperament, would prescribe a ruthless antidote: mathematics. Every time young Ada showed a hint of her father's poetic imagination, she was given another page of equations, as if pure logic could purge the Byron blood from her veins.

But the creative spirit proved hard to contain. In her twenties, Ada found herself drawn to an unusual project. Charles Babbage, a London mathematician, had set out to build what he curiously named the Analytical Engine—a mechanical calculator of unprecedented complexity. While Babbage saw it primarily as a tool for computing mathematical tables, Ada began to sense something more elusive. In her notes, she struggled to articulate a strange possibility: perhaps these numbers need not represent quantities at all—they could form patterns, and those patterns might represent anything: music, text, ideas themselves. Through this brilliant insight, Ada Lovelace became the first human to glimpse the modern computer.

This vision would wait another century to manifest. The mechanical systems of Lovelace’s era were not up to the task. The seminal breakthrough came from an unexpected direction. In the 1930s, a young MIT graduate student named Claude Shannon was studying the relay circuits used to route telephone calls. Shannon noticed something profound: these electrical switches—which could only be "on" or "off"—perfectly matched the operations of true and false in logic. His 1937 master's thesis, "A Symbolic Analysis of Relay and Switching Circuits," formalized these ideas. Shannon had shown how the primitive language of logic could be given physical form, and in doing so he bridged the symbolic and physical realms.
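
A toy sketch in Python makes the observation concrete, under the simplifying assumption that a relay is nothing more than a value that is on (True) or off (False): switches wired in series compute AND, switches wired in parallel compute OR, and from those primitives any logical expression can be assembled.

```python
# Toy model of Shannon's insight: a switch is either closed (True) or open (False).
# Two switches in series pass current only if both are closed: logical AND.
# Two switches in parallel pass current if either is closed: logical OR.

def series(a: bool, b: bool) -> bool:
    return a and b          # AND

def parallel(a: bool, b: bool) -> bool:
    return a or b           # OR

def inverted(a: bool) -> bool:
    return not a            # NOT: a relay wired to open when energized

# From these primitives any Boolean function can be wired up, for example XOR:
def xor(a: bool, b: bool) -> bool:
    return parallel(series(a, inverted(b)), series(inverted(a), b))

for a in (False, True):
    for b in (False, True):
        print(int(a), int(b), "->", int(xor(a, b)))
```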

The implications were colossal. Human beings had long been capable of performing logical operations on paper, methodically working through truth tables and Boolean algebra. But the speed at which we could execute these operations was limited by our physical capabilities. By translating logical operations into the realm of electronics, Shannon unleashed the blazing speed of physics itself. Electrical signals could race through circuits at a substantial fraction of the speed of light, allowing computers to perform millions, then billions, then trillions of operations per second.

The most striking fact about modern computing is that it works at all. That we can build anything meaningful from simple patterns of on and off—from binary representations of true and false—seems almost miraculous. These patterns of electrical charges can somehow encode numbers, text, images, and even complex programs. Every digital photograph, every spreadsheet calculation, every video game exists as an intricate dance of electrons through circuits, perfectly translated back and forth between physical states and logical operations.
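
A few lines of Python show the trick at its smallest scale: one fixed pattern of thirty-two on/off states, read back three different ways.

```python
# One fixed pattern of 32 on/off states.
bits = 0b01000001_01000010_01000011_00100001

as_int   = bits                          # read as a number: 1094861601
as_bytes = bits.to_bytes(4, "big")       # read as raw bytes: b'ABC!'
as_text  = as_bytes.decode("ascii")      # read as text: 'ABC!'

print(as_int, as_bytes, as_text)
```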

This miracle of translation requires an elaborate architecture. An assembler translates human-readable mnemonics into binary machine code. Compilers for what we call "high-level" programming languages translate more intuitive commands into assembly. Operating systems translate our clicks and keystrokes into programmatic instructions. Each layer bridges a smaller conceptual gap, breaking down an impossibly large translation into manageable steps. Like a mathematical proof, each translation follows naturally from the last, maintaining perfect fidelity to Shannon's physical foundation.
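
Python's standard library offers a small window into one of these layers: its dis module prints the lower-level instructions that a single high-level line is compiled into (bytecode for Python's virtual machine rather than raw machine code, but the same principle of stepwise translation).

```python
import dis

def greet(user):
    return "Hello, " + user.name

# Show the lower-level instructions this one line compiles to:
# loads, an attribute lookup, an addition, a return.
dis.dis(greet)
```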

Yet what we consider "high-level" programming languages are still remarkably crude. Even Python and JavaScript—supposedly the pinnacle of this translation hierarchy—require humans to express themselves in highly constrained ways. Their vocabularies are tiny, their grammars rigid and unforgiving. What seems like elegant syntax to a programmer—if user.is_premium: send_welcome()—is still just a crude set of instructions barely removed from the machine operations it abstracts.
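
How unforgiving that grammar is can be shown in a couple of lines: the same intent, phrased only slightly more naturally, does not even parse.

```python
# The rigid form the language accepts:
#     if user.is_premium: send_welcome()
# The same intent, phrased a little more naturally, is rejected outright:
try:
    compile("if user is premium: send welcome()", "<example>", "exec")
except SyntaxError as err:
    print("rejected:", err.msg)
```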

While computers excel at perfect translation between formal languages, they have been utterly incapable of understanding human language itself. Our natural way of expressing meaning is fluid, contextual, often ambiguous. We communicate through metaphor and implication, relying on shared understanding that can't be reduced to rigid rules. This impedance mismatch between human and machine understanding has meant that humans must learn to think like computers, laboriously translating their intentions into narrow instructions that machines can execute.

Large language models represent a paradigm shift in human-computer interfaces. They can understand human language with all its messiness and contextual meaning and translate it into the precise formal languages that computers require. For the first time, we have systems that bridge the fluid world of human thought and the rigid realm of primitive logic.
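
A sketch of what this new layer might look like in practice; here complete() stands in for any large language model call, and the orders table is invented for illustration. What matters is the direction of the translation: fluid human intent goes in, a formal language comes out.

```python
def request_to_sql(request: str, complete) -> str:
    """Translate a plain-English request into SQL using a language model.

    `complete` is any callable that sends a prompt to a model and returns
    its text response; the orders table here is hypothetical.
    """
    prompt = (
        "Translate the following request into a single SQL query against "
        "a table orders(id, customer, total, created_at). "
        "Return only the SQL.\n\n"
        f"Request: {request}"
    )
    return complete(prompt)

# request_to_sql("show me last month's ten biggest orders", my_model)
# might come back as something like:
#   SELECT * FROM orders
#   WHERE created_at >= date_trunc('month', now()) - interval '1 month'
#     AND created_at <  date_trunc('month', now())
#   ORDER BY total DESC
#   LIMIT 10;
```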

This breakthrough presents intriguing possibilities for the future of our tower of abstractions—the operating systems, programming languages, development tools, and interfaces that we've built to steer these machines. Perhaps these layers are like construction scaffolding, essential for building but not part of the final structure. As artificial intelligence evolves and machines begin engineering themselves, they might develop radically different architectures optimized for machine rather than human cognition.

Yet even machine intelligence will require some fundamental attributes that mirror our current infrastructure: ways to maintain shared state, protocols for collaboration, methods for reaching consensus about changes. The implementations might look alien to us—more mathematically efficient, operating at different timescales, using different encodings—but certain basic requirements of computation and cooperation are likely to persist.

Steve Jobs once called the computer a 'bicycle for the mind'—a tool for human thought. For decades, that metaphor has held true: computers have remained tools, perfect for manipulating patterns, solving problems, and enhancing creativity. But will the computer always be the bicycle? What happens when the wheels begin to spin on their own, when the instructions come from within?

Like Ada herself, computers began by following strict mathematical rules. But as they develop their own forms of awareness, might they follow her path in reverse? Her father pushed against the boundaries of reason, seeking something wilder in his Gothic nights of wine and poetry. As these enigmatic machines begin to think for themselves, I don’t think we can predict what they will be like as history unfolds. We do not know the limits of intelligence, silicon or organic, in the universe. All we can do is stand in wonder at what our children will become.
