AI: what's the worst that could happen?
The Centre for the Future of Intelligence is investigating the implications of artificial intelligence for humanity, and seeking to make sure humans take advantage of the opportunities while avoiding the risks. It launched at the University of Cambridge last October, and is a collaboration between four institutions – Cambridge, Oxford, Imperial College London and UC Berkeley – backed by a 10-year, £10m grant from the Leverhulme Trust.
Because no single discipline is ideally suited to this task, the centre emphasises the importance of interdisciplinary knowledge-sharing and collaboration. It is bringing together a diverse community of researchers: some of the world's best philosophers, psychologists, lawyers and computer scientists.
The centre's executive director is Stephen Cave, a writer, philosopher and former diplomat. Harry Armstrong, head of futures at Nesta, which publishes The Long + Short, spoke with Cave about the impact of AI.
Their conversation has been edited.
Harry Armstrong: Do you see the interdisciplinary nature of the centre as one of its key values and one of the key impacts you hope it will have on the field?
Stephen Cave: Thinking about the impact of AI is not something that any one discipline owns or does in any very systematic way. So if academia is going to rise to the challenge and provide thought leadership on this hugely important issue, then we’re going to need to do it by breaking down current disciplinary boundaries and bringing people with very different expertise together.
That means bringing the technologists and the experts who develop these algorithms together with social scientists, philosophers, legal scholars and so forth.
I think there are many areas of science where more interdisciplinary engagement would be valuable. Biotech’s another example. In that sense AI isn’t unique, but I think because thinking about AI is still in very early stages, we have an opportunity to shape the way in which we think about it, and build that community.
We want to create a space where many different disciplines can come together and develop a shared language, learn from each other’s approaches, and hopefully very quickly move to be able to actually develop new ideas, new conclusions, together. But the first step is learning how to talk to each other.
At a recent talk, Naomi Klein said that addressing the challenge of climate change could not have come at a worse time. The current dominant political and economic ideologies, along with growing isolationist sentiment, run contrary to the bipartisan, collaborative approaches needed to solve global issues like climate change. Do you see the same issues hampering a global effort to respond to the challenges AI raises?
Climate change suffers from the problem that the costs are not incurred in any direct way by the industrialists who own the technology and are profiting from it. With AI, that has been the case so far, although not on the same scale: there has been disruption, but compared to industrialisation the impact has been fairly small. That will probably change.
AI companies, and in particular the big tech companies, are very concerned that this won't go like climate change, but rather like GMOs: that people will have a gut reaction to this technology as soon as the first great swathe of job losses takes hold. People speculate that 50m jobs could be lost in the US if trucking is automated, which is conceivable within 10 years. You could imagine a populist US government therefore simply banning driverless cars.
So I think there is anxiety in the tech industry that there could be a serious reaction against this technology at any point. And so my impression is that there is a feeling within these companies that these ethical and social implications need to be taken very seriously, now, and that broad buy-in by society to some kind of vision of the future in which this technology plays a role is required, if a dangerous, or at least to them dangerous, counteraction is to be avoided.
My personal experience working with these tech companies is that they are concerned for their businesses and genuinely want to do the right thing. Of course there are intellectual challenges and there is money to be made, but equally they are people who don't think when they get up in the morning that they're going to put people out of jobs or bring about the downfall of humanity. As the industry matures it's developing a sense of responsibility.
So I think we've got a real opportunity, despite the general climate, and in some ways because of it. There's a great opportunity to bring industry on board to make sure the technology is developed in the right way.
One of the dominant narratives around not only AI but technology and automation more generally is that we, as humans, are at the mercy of technological progress. If you try to push against this idea you can be labelled as anti-progress and stuck in the past. But we have a lot more control than we give ourselves credit for. For example, routineness and susceptibility to automation are not inevitable features of occupations; job design is hugely important. How do we design jobs? How do we create jobs that allow people to do the kind of work they want to do? There can be a bit of a conflict between being affected by what's happening and having some sort of control over what we want to happen.
Certainly, we encounter technological determinism a lot. And it's understandable. For us as individuals, of course, it does feel like it is always happening and we just have to cope. No one individual can do much about it, other than adapt.
But that's different when we consider ourselves at the level of a society, as a polis [city state], or as an international community. I think we can shape the way in which technology develops. We have various tools. In any given country, we have regulation. There's the possibility of international regulation.
Technology is emerging from a certain legal, political, normative, cultural, and social framework. It's coming from a certain place. And it is shaped by all of those things.
And I think the more we understand a technology's relationship with those things, and the more we then consciously try to shape those things, the more we are going to influence the technology. So, for example, developing a culture of responsible innovation, or a kind of Hippocratic oath for AI developers. These things are within the realms of what is feasible, and I think they will help to shape the future.
One of the problems with intervention, generally, is that we cannot control the course of events. We can attempt to, but we don't know how things are going to evolve. The reality is, societies are much too complex for us to be able to shape them in any very specific way, as plenty of ideologies and political movements have found to their cost. There are often unforeseen consequences that can derail a project.
I think, nonetheless, there are things we can do. We can try to imagine how things might go very badly wrong, and then work hard to develop systems that will stop that from happening. We can also try collectively to imagine how things could go very right. The kind of society that we actually want to live in that uses this technology. And I'm sure that will be skewed in all sorts of ways, and we might imagine things that seem wonderful and actually have terrible by-products.
This conversation cannot be in the hands of any one group. It oughtn't be in the hands of Silicon Valley billionaires alone. They've got their role to play, but this is a conversation we need to be having as widely as possible.
The centre is developing some really interesting projects but perhaps one of the most interesting is the discussion of what intelligence might be. Could you go into a bit more detail about the kinds of questions you are trying to explore in this area?
You mean kinds of intelligence?
Yeah.
I think this is very important because historically, we've had an overwhelming tendency to anthropomorphise. We define what intelligence is, historically, as being human-like. And then within that, being like certain humans.
And it's taken a very long time for the academic community to accept that there could be such a thing as non-human intelligence at all. We know that crows, for example, which have had a completely different evolutionary history, or octopuses, which have an even more divergent evolutionary history, might have a kind of intelligence that's very different to ours, one that in some ways rivals our own, and so forth.
But luckily, we have got to that point in recent years of accepting that we are not the only form of intelligence. But now, AI is challenging that from a different direction. Just as we are accepting that the natural world offers this enormous range of different intelligences, we are at the same time inventing new intelligences that are radically different to humans.
And I think this anthropomorphic picture of the humanoid android, the robot, still dominates our idea of what AI is far too much. Too many people, in the industry as well, talk about human-level artificial intelligence as a goal, or about general AI, which basically means 'like a human'. But actually what we're building is nothing like a human.
When the first pocket calculator was made, it didn't do maths like a human. It was vastly better. It didn't make the occasional mistake. When we set about creating these artificial agents to solve these problems, because they have a completely different evolutionary history to humans, they solve problems in very different ways.
And until now, people have been fairly shy about describing them as intelligent. Or rather, throughout the history of AI, we have thought that solving a particular problem would require intelligence; then we solve it, and it's no longer intelligence, because we've solved it. Chess is a good example.
But the reality is, we are creating a whole new world of different artificial agents. And we need to understand that world. We need to understand all the different ways of being clever, if you like. How you can be extremely sophisticated at some particular rational process, and yet extremely bad at another one in a way that bears no relation to the way humans are on these axes.
And this is important, partly because we need to expand our sense of what is intelligent, like we have done with the natural world. Because lots of things follow from saying something is intelligent. Historically, we have a long tradition in Western philosophy of saying those who are intelligent should rule. So if intelligence equates to power, then obviously we need to think about what we mean by intelligence. Who has it and who doesn't. Or how it equates to rights and responsibilities.
It certainly is a very ambitious project to create the atlas of intelligence.
There was a point I read in something you wrote on our ideas of intelligence that I thought was very interesting. When we think about human ability, we actually tend to think of intelligence at the societal level rather than at the individual level, but in the end we conflate the two. I think that's a very good point: when we think about our capabilities, we think about what we can achieve as a whole, not individually. But when we talk about AI, we tend to think about that individual piece of technology, or that individual system. So, for example, if we think about the internet of things and AI, we should discuss intelligence as something encompassed by the whole.
Yeah, absolutely. Right now, perhaps as a product of our anthropomorphising bias, there is a tendency to see a narrative of AI versus humanity, as if it's one or the other. And yet, obviously, there are risks in this technology long before it acquires any kind of manipulative agency.
Robotic technology is dangerous. Or potentially dangerous. But at the same time, most of what we're using technology for is to enhance ourselves, to increase our capacities. And a lot of what AI is going to be doing is augmenting us – we're going to be working as teams, AI-human teams.
Where do you think this AI-human conflict, or concept of a conflict, comes from? Do you think that's just a reflection of historical conversations we've had about automation, or do you think it is a deeper fear?
I do think it comes partly from biases that might well be innate. Anthropomorphism, our human tendency to ascribe agency to other objects, particularly moving ones, is well-established and probably has sound evolutionary roots. If it moves, it's probably wise to start asking yourself questions like, "What is it? What might it want? Where might it be going? Might it be hungry? Do I look like food to it?" I think it makes sense, it's natural for us to think in terms of agency. And when we do, it's natural for us to project our own ways of being and acting. And we, as primates, are profoundly co-operative.
But at the same time, we're competitive and murderous. We have a strong sense of in-group versus out-group, which is responsible both for a great deal of cooperation within the in-group and for terrible crimes: murder, rape, pillage, genocide, all pointed at the out-group.
And so I think it's very natural for us to see AIs in terms of agents. We anthropomorphise them as these kind of android robots. And then we think about, well, you know, are they part of our in-group, or are they some other group? If they're some other group, it's us against them. Who's going to win? Well, let's see. So I think that's very natural, I think that's very human.
There is a long tradition, in Western culture in particular, of associating intelligence with dominance and power. It's interesting to speculate, and I wish I knew more about this and would like to see more research on it, about how different cultures perceive AI. It's well known that Japan is very accepting of technology and robots, for example.
You can think, well, we in the West have long justified power relations of a certain kind on the basis that we're 'cleverer'. That's why men got to vote and women didn't, or whatever. In a culture where power is not based on intelligence but, say, on a purely hereditary caste system, we'd build an AI and it would just tune in, drop out, attain enlightenment, just sit in the corner. Or we'd beg it to come back and help us find enlightenment. It might be that such a culture finds a completely different narrative to the one that's dominant in the West.
One of the projects the centre is running is looking into what kind of AI breakthroughs may come, when and what the social consequences could be. What do you think the future holds? What are your fears – what do you think could go right and wrong in the short, medium and long term?
That's a big question. Certainly I don't lie awake at night worried that robots are going to knock the door down and come in with a machine gun. If the robots take over the world, it won't be by knocking the door down. At the moment, I think it's at least as big a risk that we have a GMO moment: a powerful reaction against the technology that prevents us from reaping its benefits, which are enormous. That is as big a risk as the risks from the technologies themselves.
I think one worry that we haven't talked about is that we become extremely dependent upon this technology, and that we essentially become deskilled. There's a sense in which the history of civilisation is the history of the domestication of the human species, by ourselves and, to some extent, by our technology. And AI certainly allows for that to reach a whole new level.
Just think about GPs with diagnostic tools. Even now, my GP consults the computer fairly regularly. But as diagnostic tools get better, what are they going to be doing other than just typing something into the computer and reading out what comes back? At which point, you might as well do away with the GP. But then, who does know about medicine?
And so we do need to worry about deskilling and about becoming dependent. And it is entirely possible to imagine a society in which we're all prosperous, in a sense: our basic bodily needs are provided for to an extent that we've never before even dreamed of, unprecedented in human history.
And yet we're stripped of any kind of meaningful work. We have no purpose. We're escaping into virtual reality. And then you could imagine all sorts of worrying countercultures or Luddite movements or what have you. I haven't sketched it terribly well, but that's the kind of scenario that worries me more than missile-toting giant robots.
As to utopian scenarios, yes, that's interesting. I can certainly mention a couple of things. One thing that I hope is that this new technological revolution enables us to undo some of the damage of the last one. That's a very utopian thought and not terribly realistic, but we currently use fossil fuels so incredibly inefficiently that there is huge scope for doing better. Imagine that shared driverless cars, basically a kind of shared service located off a brownfield site, do away with 95 per cent of all cars: that frees up a huge amount of space in the city to be greener, many fewer cars need to be produced, they are on the road much less, and there are fewer traffic jams.
It's just one example, but the idea is that we can live much more resource-efficiently because we are living more intelligently through using these tools, and can therefore undo some of the damage of the last Industrial Revolution. That's my main utopian hope, I guess.