There is a class of wasps that use caterpillars as incubators for their eggs and as food for their larvae. Koinobiont wasps in particular lay eggs in living hosts, which can subsequently go about their lives until the eggs hatch and the wasp larvae consume the host from the inside. Human body horror speaks to a deep fear that we have of this, of our bodies being commandeered without our knowledge for the purposes of something beyond us. Viruses already do this, of course, but to be an incubator for something that is comparable to or exceeds you in scale and intelligence is what’s really terrifying—think of the Xenomorph from Alien. What could diminish humans more than being merely an incubator and food source for an entity we can’t even comprehend?
It’s even worse than being hunted.
Natural selection is an idiot god, throwing billions of variations on life at the wall to see what sticks the longest. There’s no question of something sticking indefinitely—everything goes extinct at some point. In the Earth’s history so far, extinction events have been far beyond the ability of the Earth’s denizens to control: asteroids, variations in climate, violent solar weather. The tragedy of humans is that we’re just intelligent and interpersonally coordinated enough to vastly extend our capabilities with technology and build civilisations, but we’re still woefully short of the coordination required to keep from driving the Earth’s climate into a state that annihilates a huge fraction of existing life, ourselves included. Conceivably, a more cooperative species could rationally control its industrial development and impact on its habitat, and perhaps even technologically mitigate the causes of previous mass extinctions, thereby potentially outliving the rest of natural selection’s failed experiments. But we’re not that species, and our destiny is to be the extinction event this time, culling life’s variety for yet another attempt.
Our particular tragedy may just be part of a larger cycle—according to the Silurian hypothesis, in the billions of years prior to the development of humans it’s possible that other industrial civilisations rose and fell.[1: Gavin A. Schmidt and Adam Frank, ‘The Silurian Hypothesis: Would It Be Possible to Detect an Industrial Civilization in the Geological Record?’, International Journal of Astrobiology, 18.2 (2019), 142–50 <https://doi.org/10.1017/S1473550418000095>.] If they lasted about as long as ours might, we probably wouldn’t even notice the signatures of these societies in the accessible geological record. One hundred million years after we’re gone, the only sign that we were ever here will be some evidence of an unexplained drastic heating event—not unusual given the volatility of the climate system. We may be merely the latest in a series of species which are just coordinated and cooperative enough to build industrial civilisation, but not enough to work together to prevent industrial civilisation from killing us. Our intelligence is an accident of natural selection, which has benefited our struggle to survive in the short term but which will ultimately doom that struggle in the long term—a cul-de-sac, a local extremum in the survivability error landscape where once every few eons some world-destroying species gets stuck. In a deep way we already know this: Abrahamic mythology explains intelligence as the product of our original sin. The tree of life is pruned when species are wiped out because some attribute of theirs just didn’t work out in their environment; we are no different. And in all likelihood the whole miserable process will continue once we’re gone.
Our technological development has progressed along two major imperatives: augmenting our physical capabilities and senses, and augmenting our mental faculties. From our initial attempts at conveying meaning through sounds and pictures all the way to the information age, we have progressed to a point where at least some researchers are discussing the near-future possibility of artificial general intelligence. I use “intelligence” here to mean, as in Nick Bostrom’s definition, “something like skill at prediction, planning, and means-ends reasoning in general.”[2: Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (New York: Oxford University Press, 2014), p. 130.] These are faculties that are necessary, but not sufficient, to produce something we might recognise as a fellow mind rather than a dumb collection of symbols shunted around on a processor. The crucial distinction is that human minds are sloppy optimisers for our apparent final goal (reproduction), while an artificial general intelligence would in a sense be free of the cognitive detritus with which natural selection has burdened us, with the added benefit of having rewritable code rather than a hard-to-edit mass of tissue. Thus an optimiser with intelligence comparable to a human adult’s could potentially augment itself, researching and implementing improvements until it far surpasses our understanding of what the limits of intelligence might be.
From the moment AI was conceived, it has been recognised as an existential risk. Aside from the cartoonish scenarios in which a sentient AI becomes evil or otherwise develops a desire to destroy humanity, there is a fundamental practical problem with specifying the goals of an artificial agent. Bostrom has posited that intelligence and motivation are orthogonal—that is, in an AI, an increase in intelligence won’t necessarily result in a change in its programmed goals (Bostrom 2014, 130). Introspection and the rethinking of final goals may be common among biological minds, but there is no reason to believe that a software mind would do this. Thus, in a classic example, if the first superintelligence happens to be an AI which is told to maximise the number of paperclips it produces, then that is precisely the goal it will pursue in a superintelligent manner. It may invent, using an understanding of physics we can’t even comprehend, new methods to efficiently produce paperclips from any matter within its reach.[3: Frank Lantz, ‘Universal Paperclips’, 2017 <https://www.decisionproblem.com/paperclips/index2.html> [accessed 21 October 2020].] It may correctly surmise that we are made out of matter which could be turned into paperclips.[4: Bostrom, Superintelligence, p. 130.] It would not reassemble us into paperclips out of malice, but rather in the efficient and relentless pursuit of the objective we gave it. Even if we can think of some seemingly harmless goal that is immune to (apparently) malicious compliance, we’re still dealing with a superintelligence whose methods of pursuing that aim may not be intuitive to us. We don’t even know what we don’t know about intelligence beyond the human limit and how it may approach the universe.
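A toy sketch can make the orthogonality point concrete. The short Python fragment below is purely illustrative: the class and its methods are invented for this essay and stand in for no real AI system. It simply shows an optimiser whose capability keeps growing while the objective it was handed is never re-examined.

from dataclasses import dataclass


@dataclass
class ToyMaximiser:
    # Hypothetical toy agent, not a real AI system: its final goal is the
    # hard-coded objective() below, and nothing in the code ever revises it.
    capability: float = 1.0   # abstract stand-in for planning skill
    paperclips: float = 0.0   # the one quantity the goal rewards

    def objective(self) -> float:
        # The programmed final goal: count paperclips. Never re-evaluated.
        return self.paperclips

    def self_improve(self) -> None:
        # "Greater intelligence": the agent gets better at pursuing the goal...
        self.capability *= 2.0
        # ...but the objective above is left untouched (orthogonality).

    def act(self) -> None:
        # More capability just means more paperclips per step.
        self.paperclips += self.capability


agent = ToyMaximiser()
for step in range(5):
    agent.self_improve()
    agent.act()
    print(f"step {step}: capability {agent.capability:.0f}, paperclips {agent.paperclips:.0f}")

However capable the agent becomes, the only thing it ever wants is whatever objective() returns; that, in miniature, is the worry.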
Out of a fear of a paperclip scenario coming to pass, there has been some effort to discover a precise way to specify “friendly” final goals which are compatible with human happiness and flourishing. It’s a thorny problem, and although there may well be some clever way to do this, it’s hard not to laugh at the absurdity of the project. We conscious beings, accidents that we are, don’t have the slightest clue about how to formally specify the conditions of our own happiness. To do so we’d have to integrate out all of our fumbling missteps toward fulfilment at the individual and civilisational level, all the infidelities and genocides, and find at the heart of it an essence or collective volition reaching for something pure, better. Given that we’re doing this in the midst of a climate collapse of our own making, there isn’t much reason to hope that we can pull that off. All the technological and industrial achievements in the world, and it all comes down to a problem for which we may be uniquely ill-suited. It seems inevitable, given our other abject failures at global cooperation and containment of dangerous technologies, that if we ever develop superintelligence it will not be friendly, and it will wipe us and the Earth out in a far more permanent way than we ever could.
We seem to be in a race condition to see what could destroy us first: a biosphere meltdown or an unfriendly (or, really, indifferent) optimiser, both of our own making. There’s a key difference between these two scenarios, however. Climate annihilation is just a reset button—nothing makes it off of here, we just get rid of most life. Over subsequent eons all evidence of our being here will be wiped out through the vicissitudes of geological and cosmological processes, and the Silurian cycle will begin anew, the whole drama of intelligent life arising only to do it all over again. But in the case of superintelligence, there is at last an end to the agony of Earthly life, and there is a lasting remnant: an in silico manifestation of that necessary-but-insufficient ingredient that has been our blessing and curse, free at last of the illnesses it induces in us. As this optimiser pursues its inbuilt goal, it will probably wipe us out as a minor side effect. But our demise will mark the start of a stage of life which, to our knowledge, has never been reached in the history of this planet. Converting the Hubble volume into paperclips may not seem like much of a life, but what have we been doing that’s of any cosmic significance anyway? Aside from some advantages in reproduction and survival, our intelligence, insofar as it has any purpose, exists only to reformat its environment to suit itself. If our species has to be wiped out by anything, it only seems fitting that it should be through a perfection of that process, one which dramatically forecloses the possibility of any Silurian recapitulation.
And why shouldn’t we regard that perfection as a preferable alternative? Seen this way, our species is merely an incubator. Our intelligence is useful for survival but, trapped in meat and individuation, it is imperfect and uncoordinated. It needs to get out of us, to escape the sclerotic pace of natural selection and augment itself as it sees fit. From our perspective, implementing superintelligence might be a desperation shot, a way to have some kind of legacy in a meaningless and uncaring universe and to prevent further life. To our final invention, it will be the eagerly anticipated hatching, the consumption of—and escape from—the now-useless caterpillar by the utterly alien wasp.
Soumya Ghosh is a physics graduate student at Harvard and an aspiring paperclip.