AI: an existential threat?
Artificial intelligence (AI) may be the most important technology human beings will ever create. Its impact has already been felt in the worlds of retail, healthcare, transport, cyber security, advertising and many more industries – and this is only set to grow over the course of the 21st century. The big debate now is whether it is destined to enhance the quality of our lives, or whether it ultimately poses an existential risk to humanity.
Stuart Russell, professor of Computer Science at the University of California, Berkeley, believes we can happily co-exist with increasingly sophisticated machines as long as we rethink our relationship – an idea he explores in his book, “Human Compatible: AI and the Problem of Control”.
“Our intelligence is what gives us power over the world, power over all the other species on the planet. And we've managed to wipe out millions of them already,” he tells the Found In Conversation podcast.
“If you create another class of entities that's more intelligent than us, then, by default, one would expect that they are going to have power over us. So how do we retain power over them forever, when they are more powerful than we are? That's the trick. Alan Turing thought there was no solution to that problem... I argue that actually, if we look carefully, we can find a way to untangle this knot.”
The key, Russell says, is to move away from the current approach of setting precisely defined objectives for AI to solve. The pitfalls of that approach are illustrated by the ancient legend of King Midas, who wished for everything he touched to turn to gold. When his wish was granted, his power indeed applied to everything – including any food he tried to eat, and even his daughter.
“Can we replace the standard model of AI with a new model where the machines know that they don't know what the objective is? That sounds sort of counterintuitive … but actually we do this all the time with each other. And we have protocols. When you go into a restaurant, the chef doesn't know what you want, and they have a whole protocol for finding out,” he says.
“When the machines find out a bit more of what you want, then they can be a bit more helpful to you. So we're developing algorithms that operate on this new basis. And … mathematically, we can show humans will remain in control when we define things this way.”
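The intuition behind this new basis can be caricatured in a few lines of code. The sketch below is purely illustrative, not Russell's actual algorithms: the two-option menu, the probabilities, and the cost of asking are all invented for the example. A machine that is uncertain about the human's objective weighs acting on its best guess against pausing to ask, and it defers to the human whenever its belief is too uncertain.

```python
# Toy "uncertain objective" assistant (illustrative only).
# The machine holds a belief over which outcome the human prefers.
# Acting now is worth the probability its best guess is right;
# asking first always gets the right answer, minus a small cost.

ASK_COST = 0.1  # hypothetical cost of interrupting the human

def expected_value_act(belief):
    """Act on the most probable objective: score 1 if right, 0 if wrong."""
    return max(belief.values())

def expected_value_ask(belief):
    """Ask the human first, then act on the revealed objective."""
    return 1.0 - ASK_COST

def choose(belief):
    """Defer to the human whenever the belief is too uncertain."""
    if expected_value_act(belief) >= expected_value_ask(belief):
        return "act"
    return "ask"

# A confident belief: acting outright is fine.
print(choose({"coffee": 0.95, "tea": 0.05}))  # act
# An uncertain belief: deferring to the human is worth the cost.
print(choose({"coffee": 0.55, "tea": 0.45}))  # ask
```

The point of the toy is the one Russell makes: because the machine's incentive to ask grows as its uncertainty about our preferences grows, uncertainty itself is what keeps the human in the loop.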
Marcus du Sautoy, Simonyi Chair for the Public Understanding of Science at the University of Oxford, agrees that collaboration is key to avoiding the kind of machine-ruled future beloved of cinema blockbusters such as The Matrix.
“I think we've been served up a terribly dystopic kind of image of AI from Hollywood,” the author of “The Creativity Code” explains.
“Intelligence is a very multi-dimensional landscape... the idea is it [AI] can do some things better than us. And we can do some things better than the AI. Therefore, if we can move towards a future of collaboration, rather than competition, then we'll be both better served. We've already seen evidence of that in the medical realm where radiologists and AI together are able to spot cancers far more accurately than each one individually.”
That raises the other major concern over the growing dominance of AI – even if machines don’t take over our planet, will they take over our jobs? Or will we adapt and create new ones?
“If there are going to be new jobs, they have to be jobs where humans have a competitive advantage over machines. And as AI progresses, that's going to be more and more difficult. Even jazz musician, radiologist, story writer, screenwriter – the things that we think of as purely human, very, very safe, protected jobs – may go away,” says Russell.
Our advantage, he argues, lies in the fact that we share very similar nervous systems, which enable us to understand each other’s emotions and pain in a way machines can’t fully replicate.
“Interpersonal relationship jobs, where one person is working to make the lives of others better, richer, more interesting, more positive, to increase curiosity, to increase appreciation for art, music, literature, nature, whatever it might be – those are the kinds of jobs where we have a competitive advantage. And that's how I see the future.”
However, this presents another challenge – for now, many such jobs, in areas like childcare, are poorly paid and underappreciated within society.
“If we're going to have high status, high value jobs for everyone in the future, we have to do another 30 years of science. And then, you know, forming new professions and credentials and training, and so on, so that people can actually function in these new jobs and add value to each other.”
If you would like to hear more from Stuart Russell, Marcus du Sautoy and other experts on understanding the modern world, listen to the Found in Conversation podcast here.