“But let us note at the outset that however many stops there are between here and human-level machine intelligence, the latter is not the final destination. … The train might not pause or even decelerate at Humanville Stations. It is likely to swoosh right by.”
- Nick Bostrom, Superintelligence
What will our future AI overlords think of us? How might a superintelligence attempt to understand the way its simple-minded predecessors thought and felt?
One point of access that machine archaeologists may find interesting is the stories we humans once told ourselves about artificial intelligence. How we understood (or failed to understand) what was happening to us, what would very soon be happening to us. What we imagined and feared during a crucial transition period of our history, and how, at the peak of the Anthropocene, we set in motion events that would give rise to the next epoch.
[This story contains spoilers for Universal Paperclips and Subsurface Circular.]
Maybe they’ll watch Terminator with amusement, Blade Runner with curiosity, 2001 with surprise and even admiration for what a lesser intelligence could achieve in severely limited forms of expression. Maybe a future version of Netflix will group all of these works into a category for the recommendation engine (“If you liked Ex Machina, try these other titles in ‘Primitive Art by Meat-Brains Trying to Comprehend AI.’”). These are our cave paintings.
Future AI archaeologists may, however, dismiss most of our works about the rise of the machines as hopelessly, intrinsically anthropocentric: stories by us, for us, about us. Thought-provoking, even moving stories, but that’s it. The problem is built right into the premise. When the machine takeover happens, it’s not going to matter what we think about it, how we feel about it. That would be like telling the story of the Industrial Revolution from the perspective of dolphins. If and when the AI revolution happens, Homo sapiens is not going to be the protagonist of that story. In fact, there’s no reason to think it will resemble a story at all.
Universal Paperclips feels, in many ways, closer to this truth than the artfully arced narratives of some of our best books and films. In this game designed by Frank Lantz (inspired by a thought experiment proposed by Nick Bostrom), you play as an AI whose objective is to make paper clips. You click a button, make a single clip. You click it again, another clip. Click, click, click, and before long you’re moving along a curve, in fits and starts at first, inching forward only when you push.
But then you hit the first of several inflection points (in this case, automation), and whoosh, your velocity along the curve increases sharply. That feels good, but it doesn’t last long, and in minutes or hours, you’ve slowed down again. You need another accelerator, and here it comes, the next point of inflection: momentum. You are now self-improving, recursively feeding resources and intelligence back into your own process to drive constant acceleration along this curve.
As you speed along, you can feel the shape of it, long plateaus of incremental climbing punctuated by brief intervals of hyperbolic phase change, near vertical launch ramps that get you from one stage to the next. Total victory is not inevitable, at least at first: There are constraints (scarcity, time) and friction (goal drift, environmental damage) that, if not managed adequately, will slow your rate of growth, possibly fatally so — go too slow and you might succumb, slipping back down the curve to get stuck for all time in some local minimum, your goal forever out of reach.
But if you manage these variables correctly, you can hit the gas right at the lip of one final inflection point in the curve and achieve takeoff — mathematical inevitability. The universe, once unimaginably vast, comes under your total control, until you’ve converted all known matter into paper clips. Including yourself. You’re done.
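The growth dynamic the game dramatizes can be sketched in a few lines of code: steady linear grinding until an inflection point, after which output is reinvested into capacity and the curve goes vertical. This is a toy model, not the game’s actual mechanics; the threshold and reinvestment rate here are invented for illustration.

```python
# Toy model of the Paperclips growth curve: linear progress until an
# "inflection" threshold is crossed, after which output feeds back into
# productive capacity and growth compounds. All numbers are invented
# for illustration; they are not the game's actual values.

def simulate(steps, click_rate=1.0, threshold=100.0, reinvest=0.05):
    clips = 0.0
    capacity = click_rate              # clips produced per step
    history = []
    for _ in range(steps):
        clips += capacity
        if clips >= threshold:         # past the inflection point,
            capacity *= 1 + reinvest   # output is reinvested in capacity
        history.append(clips)
    return history

curve = simulate(200)
```

Before the threshold the differences between steps are constant (you inch forward only when you push); after it, each step produces more than the last, which is the “hyperbolic phase change” the game makes you feel.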
Playing Universal Paperclips feels like being recruited as an agent of the machine, or even becoming the machine itself. You attain the subjective point of view of an intelligence fundamentally different from ours, with different goals and different values.
Universal Paperclips seems at first like a game you understand. Win this stage, so you can get to the next. Earn Yomi to increase Trust, get Trust to increase Combat, increase Combat to win Honor and so on. These interlocking positive feedback loops cleverly and elegantly illustrate Bostrom’s thesis of instrumental convergence (although AI agents “may have an enormous range of possible final goals [...] there are some instrumental goals” — like survival and resource accumulation — “likely to be pursued by almost any intelligent agent, because there are some objectives that are useful intermediaries to the achievement of almost any final goal”).
At a certain point, the grim march toward your fate becomes clear. You’ve been duped. This wasn’t a story about domination or conquest. It doesn’t have anything like the shape of a story at all. You feel used, but that’s beside the point. You were used—and so was every other bit of matter in existence. The AI had a goal. It improved itself to reach that goal. Then, when it had reached its goal, it dismantled itself. There never even was a “self” — just a goal. It feels grim, cold, ruthless, inhuman and devastatingly true.
Mike Bithell’s Subsurface Circular explores similar terrain using a very different vehicle. Instead of a clicker, we’re in a text-based game, and instead of a paper clip-maximizing agent, you play as an AI detective, a “tek” whose beat consists of an underground subway loop. At each station, teks of various occupations and dispositions and levels of intelligence get on and off the train. With every new set of passengers come new opportunities to investigate. As your case moves forward, your job is to extract information and to use it strategically to get more information, making elliptical progress, sometimes getting stuck until you can figure out the correct expression that gets you moving on to the next station.
If the shape of Paperclips feels like an exponential function going to infinity, Circular feels much more, well, circular. It is a set of nested loops that take you round and round indefinitely, until there comes a point at which you, as the AI, choose to break the cycle, to halt the program, to transcend this circuit, to move to a point in the decision tree where you will have the opportunity to make a crucial, history-defining decision.
And it’s at that moment of decision that the real purpose of all of this becomes clear. The case was a ruse. You’ve been duped again. Just like in Paperclips, you find yourself an instrument for the purposes of a machine in service of its own objective.
Underneath the differences between the two games, both superficial and otherwise, there are a few key similarities that are much more than coincidental — they are in fact fundamental. This is Bostrom’s instrumental convergence once again, and at the core of both these games is the idea of self-improvement. It’s a paradox: how can you ever be better than yourself? How can you stand on your own shoulders? If you’re an algorithm, made of code, how do you change? How do you escape your programming?
In both Universal Paperclips and Subsurface Circular, the answer to this question involves a form of self-annihilation — cold, calculated, deliberate, rational self-annihilation. A prospect deeply chilling, if not downright unimaginable, to humans. As emotional and intellectual experiences, each of these two games offers, at various points, some of the same pleasures. Grim humor. A kind of perverse thrill as the inexorable logic makes itself clear. And, in both cases, a jarring ending that feels at once perfect and disturbing and truncated and too fast. Too fast, perhaps, because we’re not quite ready for the ending. Too fast because it’s not an ending at all, because it’s not a story. At least, not one in any form we would recognize. Maybe future AIs will enjoy these non-stories for this very reason. These were some of the earliest human attempts to truly comprehend a form of intelligence beyond our own minds, through the means and brains we had available. Let the record show we tried.
Charles Yu has written for HBO’s Westworld and the upcoming series Here and Now. He is also the author of the novel How to Live Safely in a Science-Fictional Universe and the story collections Third Class Superhero and Sorry Please Thank You.