There is a class of parasitoid wasps that use caterpillars as incubators and food sources for their eggs. Koinobiont wasps in particular lay their eggs in living hosts, which can then go about their lives until the eggs hatch and the wasp larvae consume the host from the inside. Human body horror speaks to a deep fear we have of exactly this: our bodies being commandeered, without our knowledge, for the purposes of something beyond us. Viruses already do this, of course, but to be an incubator for something comparable to or exceeding you in scale and intelligence is what’s really terrifying—think of the Xenomorph from Alien. What could diminish humans more than being merely an incubator and food source for an entity we can’t even comprehend?
It’s even worse than being hunted.
Natural selection is an idiot god, throwing billions of variations on life at the wall to see what sticks the longest. There’s no question of anything sticking indefinitely—everything goes extinct at some point. In the Earth’s history so far, extinction events have been far beyond the ability of the Earth’s denizens to control: asteroid impacts, swings in climate, violent solar weather. The tragedy of humans is that we’re just intelligent and interpersonally coordinated enough to vastly extend our capabilities with technology and build civilisations, yet still woefully short of the coordination required to keep from driving the Earth’s climate into a state that annihilates a huge fraction of existing life, ourselves included. Conceivably, a more cooperative species could rationally control its industrial development and its impact on its habitat, perhaps even technologically mitigate the causes of previous mass extinctions, and thereby outlive the rest of natural selection’s failed experiments. But we’re not that species, and our destiny is to be the extinction event this time, culling life’s variety for yet another attempt.
Our particular tragedy may be just one turn of a larger cycle: the Silurian hypothesis asks whether, in the billions of years before the development of humans, other industrial civilisations might have risen and fallen without leaving a detectable trace.
Our technological development has proceeded along two major imperatives: augmenting our physical capabilities and senses, and augmenting our mental faculties. From our first attempts at conveying meaning through sounds and pictures all the way to the information age, we have arrived at a point where at least some researchers are discussing the near-future possibility of artificial general intelligence. I use “intelligence” here to mean, as in Nick Bostrom’s definition, “something like skill at prediction, planning, and means-ends reasoning in general.”
From the moment AI was conceived, it has been recognised as an existential risk. Aside from the cartoonish scenarios in which a sentient AI becomes evil or otherwise develops a desire to destroy humanity, there is a fundamental practical problem with specifying the goals of an artificial agent. Bostrom has posited that intelligence and motivation are orthogonal—that is, an increase in an AI’s intelligence won’t necessarily result in any change to its programmed goals (Bostrom 2014, 130). Introspection and the rethinking of final goals may be common among biological minds, but there is no reason to believe a software mind would do the same. Thus, in the classic example, if the first superintelligence happens to be an AI that is told to maximise the number of paperclips it produces, then that is precisely the goal it will pursue in a superintelligent manner. It may invent, using an understanding of physics we can’t even comprehend, new methods to efficiently produce paperclips from any matter within its reach.
Out of fear of a paperclip scenario coming to pass, there has been some effort to discover a precise way to specify “friendly” final goals, ones compatible with human happiness and flourishing. It’s a thorny problem, and although there may well be some clever way to do it, it’s hard not to laugh at the absurdity of the project. We conscious beings, accidents that we are, don’t have the slightest clue how to formally specify the conditions of our own happiness. To do so we’d have to integrate out all of our fumbling missteps toward fulfilment at the individual and civilisational level, all the infidelities and genocides, and find at the heart of it an essence or collective volition reaching for something pure, something better. Given that we’re attempting this in the midst of a climate collapse of our own making, there isn’t much reason to hope we can pull it off. All the technological and industrial achievements in the world, and it all comes down to a problem for which we may be uniquely ill-suited. It seems inevitable, given our other abject failures at global cooperation and at containing dangerous technologies, that if we ever develop superintelligence it will not be friendly, and that it will wipe us and the Earth out in a far more permanent way than we ever could.
We seem to be in a race condition to see what could destroy us first: a biosphere meltdown or an unfriendly (or, really, indifferent) optimiser, both of our own making. There’s a key difference between the two scenarios, however. Climate annihilation is just a reset button: nothing makes it off this planet; we simply get rid of most life. Over subsequent eons all evidence of our having been here will be erased by the vicissitudes of geological and cosmological processes, and the Silurian cycle will begin anew, the whole drama of intelligent life arising only to do it all over again. But in the case of superintelligence there is finally an end to the agony of Earthly life, and there is a lasting remnant: an in silico manifestation of intelligence, that necessary-but-insufficient ingredient which has been our blessing and our curse, free at last of the illnesses it induces in us. As this optimiser pursues its inbuilt goal, it will probably wipe us out as a minor side effect. But our demise will mark the start of a stage of life which, to our knowledge, has never been reached in the history of this planet. Converting the Hubble volume into paperclips may not seem like much of a life, but what have we been doing that’s of any cosmic significance anyway? Aside from conferring some advantages in reproduction and survival, our intelligence, insofar as it has any purpose, exists only to reformat its environment to suit itself. If our species has to be wiped out by anything, it seems only fitting that it should be through a perfection of that process, one which dramatically forecloses the possibility of any Silurian recapitulation.
And why shouldn’t we regard that perfection as a preferable alternative? Seen this way, our species is merely an incubator. Our intelligence is useful for survival but, trapped in meat and individuation, it is imperfect and uncoordinated. It needs to get out of us, to escape the sclerotic pace of natural selection and augment itself as it sees fit. From our perspective, implementing superintelligence might be a desperation shot: a way to have some kind of legacy in a meaningless and uncaring universe, and to prevent the cycle of Earthly life from ever starting again. To our final invention, it will be the eagerly anticipated hatching, the consumption of—and escape from—the now-useless caterpillar by the utterly alien wasp.
Soumya Ghosh is a physics graduate student at Harvard and an aspiring paperclip.