'Every aspect of our lives will be transformed' – exploring the future of AI


A new centre has opened to study the positive and negative implications of AI and the ethical quandaries it poses

20th October 2016

"The rise of powerful AI will be either the best or the worst thing ever to happen to humanity," Professor Stephen Hawking said last night in Cambridge, at the launch of the Centre for the Future of Intelligence (CFI).

The CFI is seeking to investigate the implications of AI for humanity, bringing together an interdisciplinary community of researchers, philosophers, psychologists, lawyers and computer scientists. But, with strong links to technologists and policymakers, it has clear practical goals. Its stated aim is "to work together to ensure that we humans make the best of the opportunities of artificial intelligence as it develops over coming decades." The £10m project is a collaboration between four universities and colleges – Cambridge, Oxford, Imperial and Berkeley – and is backed by the Leverhulme Trust.

There's an impressive array of academics and researchers taking part, led by academic director Huw Price (Bertrand Russell Professor of Philosophy at Trinity College Cambridge) and executive director Stephen Cave, a writer, philosopher and former diplomat.



Also speaking last night was Professor Maggie Boden of the University of Sussex, who has led the way in thinking about AI and sits on the centre's advisory panel. "AI is hugely exciting," she said. "Its practical applications can help us to tackle important social problems, as well as easing many tasks in everyday life. And it has advanced the sciences of mind and life in fundamental ways. But it has limitations, which present grave dangers given uncritical use. CFI aims to pre-empt these dangers, by guiding AI development in human-friendly ways."

Martin Rees is also among the members of the international advisory board.

Initially the CFI will focus on seven projects over three years. Among the research topics are: Science, Value and the Future of Intelligence; Policy and Responsible Innovation; Autonomous Weapons – Prospects for Regulation; and Trust and Transparency.

It is in its very early stages but, as Professor Hawking said last night, "Success in creating AI could be the biggest event in the history of civilisation… it could also be the last, unless we learn how to avoid the risks."

Read Hawking's speech in full:


It is a great pleasure to be here today to open this new Centre. We spend a great deal of time studying history, which, let’s face it, is mostly the history of stupidity. So it is a welcome change that people are studying instead the future of intelligence.

Intelligence is central to what it means to be human. Everything that our civilisation has achieved is a product of human intelligence, from learning to master fire, to learning to grow food, to understanding the cosmos.

I believe there is no deep difference between what can be achieved by a biological brain and what can be achieved by a computer. It therefore follows that computers can, in theory, emulate human intelligence — and exceed it.

Artificial intelligence research is now progressing rapidly. Recent landmarks such as self-driving cars, or a computer winning at the game of Go, are signs of what is to come. Enormous levels of investment are pouring into this technology. The achievements we have seen so far will surely pale against what the coming decades will bring.

The potential benefits of creating intelligence are huge. We cannot predict what we might achieve, when our own minds are amplified by AI. Perhaps with the tools of this new technological revolution, we will be able to undo some of the damage done to the natural world by the last one – industrialisation. And surely we will aim to finally eradicate disease and poverty. Every aspect of our lives will be transformed. In short, success in creating AI could be the biggest event in the history of our civilisation.

But it could also be the last, unless we learn how to avoid the risks. Alongside the benefits, AI will also bring dangers, like powerful autonomous weapons, or new ways for the few to oppress the many. It will bring great disruption to our economy. And in the future, AI could develop a will of its own – a will that is in conflict with ours.

In short, the rise of powerful AI will be either the best, or the worst, thing ever to happen to humanity. We do not yet know which. That is why in 2014, I and a few others called for more research to be done in this area. I am very glad that someone was listening to me!

The research done by this centre is crucial to the future of our civilisation and of our species. I wish you the best of luck!



Republish

We want our stories to go far and wide; to be seen by as many people as possible, in as many outlets as possible.

Therefore, unless it says otherwise, copyright in the stories on The Long + Short belongs to Nesta and they are published under a Creative Commons Attribution 4.0 International License (CC BY 4.0).

This allows you to copy and redistribute the material in any medium or format. This can be done for any purpose, including commercial use. You must, however, attribute the work to the original author and to The Long + Short, and include a link. You can also remix, transform and build upon the material as long as you indicate where changes have been made.

See more about the Creative Commons licence.

Images

Most of the images used on The Long + Short are copyright of the photographer or illustrator who made them so they are not available under Creative Commons, unless it says otherwise. You cannot use these images without the permission of the creator.

Contact

For more information about using our content, email us: [email protected]
