The day I finished reading Homo Deus: A Brief History of Tomorrow, by the Israeli historian Yuval Noah Harari, also happened to be the day that Uber finally began its long-awaited phasing out of human employees. The company had just launched a new pilot scheme, in the city of Pittsburgh, in which its customers could be conveyed from A to B by driverless vehicles. Harari's book, for reasons I intend to go into, had me feeling pretty bleak about the future in the first place; but one of the few nice things about the future is that it hasn't happened yet, and so it's easy to reassure yourself that the more desolate visions of it might never actually come to pass.
The Uber news, though, seemed like the looming on the horizon of something vast, and inevitable, and unambiguously grim. I am talking here about a closing of the loop of technocapitalism: the moment when artificially intelligent algorithms finally do away with the need to employ actual people, with their rights, and their legal protections, and their demands for such things as toilet breaks and pay cheques.
This is a foretokening of the kind of future Harari points towards: a future in which a majority of humans are in real danger of becoming obsolete, consigned to the economic and political margins by the advancement of machine intelligence. For all its horrors and iniquities, the 20th century was, he argues, "the age of the masses". Economies needed vast numbers of relatively content and healthy workers, and nation states needed a strong and vigorous population from which to draw their armies; and so it was in the interest of powerful elites to ensure a certain baseline standard of living for the majority of people. Everyone mattered, to some degree or other. You voted, you purchased, you had opinions: and in this way, you counted. This, at least, was the sales pitch.
This liberal democratic dispensation is, however, fragile, and historically contingent to a degree we don't tend to acknowledge, born as it is out of the specific conditions of modernity – the science and philosophy of the Enlightenment, and the tectonic upheavals of the industrial revolution. As Harari writes, in his sober and chilling manner, "the age of the masses may be over. As human soldiers and workers give way to algorithms, at least some elites may conclude that there is no point in providing improved or even standard conditions of health for masses of useless poor people, and it is far more sensible to focus on upgrading a handful of superhumans beyond the norm."
That suggestion comes toward the end of the book; as speculatively dystopian as it sounds, Harari's analysis – which proceeds, largely, by means of one fascinating and expertly placed historical example after another – has been building toward this all along, and when it comes it is horribly convincing. Like its predecessor, the breakout bestseller Sapiens, Homo Deus offers a guided tour of the development of our species, conducted somehow with both urgent speed and stately composure; but a singular logic has been building all along, and in the final stretch of the book it begins to close like a dialectical vice. The argument, bluntly stated, is that we humans are, like all other organisms, essentially biological algorithms, and that to imagine we will always have an advantage over non-conscious (ie machine) algorithms is likely a delusion.
"Every animal – including Homo Sapiens – is an assemblage of organic algorithms shaped by natural selection over millions of years of evolution," he writes. "Algorithmic calculations are not effected by the materials from which you build the calculator. Whether you build an abacus from wood, iron or plastic, two beads plus two beads equals four beads. Hence there is no reason to think that organic algorithms can do things that non-organic algorithms will never be able to replicate or surpass. As long as the calculations remain valid, what does it matter whether the algorithms are manifested in carbon or silicon?"
Harari himself seems agnostic on whether this grimly instrumentalist view of human life is an accurate one, but his basic point is that it's more or less irrelevant whether you or I subscribe to it anyway, because it is increasingly the accepted view of science, and it is "changing our world beyond recognition". It's a view I myself encountered fairly frequently, in the time I spent reporting for a book about transhumanism. This is a movement which advocates for exactly the kind of superhuman upgrades Harari refers to in that chilling quote above, and most of whose adherents view life in precisely those kinds of algorithmic terms: the human being as an information-processing mechanism, the body as an operating system or platform in need of updating.
Harari's statement of this question, of whether it matters if the algorithms are manifested in carbon or silicon, reminded me of a conversation I had with a transhumanist in Berkeley, a former Google engineer who'd recently left his job to work full-time on forestalling the annihilation of humanity by runaway artificial superintelligence. Just as trees were "nanotech machines that turn dirt and sunlight into more trees", he told me, we ourselves were machines, and it didn't matter what material we happened to be made from. "There is nothing special," he said, "about carbon."
Marvin Minsky, the MIT computer scientist who was one of the founders of the field of AI, put it more bluntly still, in his famous insistence that the human brain was "just a computer that happens to be made out of meat".
Although Harari refers to prominent figures within and around the movement – the Oxford philosopher Nick Bostrom, for instance, and Google's Director of Engineering (and Singularity evangelist) Ray Kurzweil – he never uses the term 'transhumanist' in his book; he employs instead the phrase 'techno-humanism'. Intentionally or otherwise, this evasion of the more usual terminology underlines the extent to which these ideas are more prevalent, and more powerful, than might be suggested by their association with a smallish group of influential but eccentric thinkers.
Harari takes these visions of our species' future – indefinite life spans, augmented intelligence, bioengineering and so on – far more seriously than most other mainstream commentators. (The British philosopher John Gray, a writer with whom he has a certain mercilessness in common, has written fairly extensively on Kurzweil and other transhumanists, but mostly in order to portray them as contemporary versions of the gnostic heresiarchs of the early Christian era – as, essentially, futurist throwbacks.) And as a scholar of our species' past, he can't avoid the conclusion that, if and when transhumanist technologies do become available – whether in the form of superhuman artificial intelligence, or technologically upgraded Silicon Valley billionaires – we garden-variety sapiens will fare no better under those superior beings than the animals we deem our inferiors fare at present under us.