While the White House last week predicted that artificial intelligence (AI) had the capacity to take Americans' jobs in the not-so-distant future, English theoretical physicist Stephen Hawking is expecting something a little more extreme.
“In short, the rise of powerful AI will be either the best or the worst thing ever to happen to humanity,” he told an audience at the launch of Cambridge University’s new Leverhulme Center for the Future of Intelligence (CFI) on Wednesday night in the U.K. “We do not know which.”
Over the past few years, Hawking, who once warned that AI could spell the end of mankind, has been joined by Tesla Motors CEO Elon Musk and Microsoft co-founder Bill Gates in sounding the alarm on the technology’s dangers.
“I am very glad someone was listening to me,” Hawking, a former professor at the university, said, in reference to his and others’ warnings.
But Hawking’s predictions for the future of AI — and humans — were far more optimistic than his previous doomsday comments. He praised the speed with which research on the subject has been carried out and the amount of funding the technology has received, and he expressed optimism about the global problems AI could help solve.
“Perhaps with the tools of this new technological revolution, we will be able to undo some of the damage to the natural world done by the last one, industrialization,” he said.
Funding for AI technology has boomed in the past five years, with investments rising to $2.39 billion in 2015 from $282 million in 2011, according to CB Insights. Top investors include such household names as Intel Capital, Google Ventures, GE Ventures, Samsung Ventures and Bloomberg Beta, the research firm found.
“We saw a slow trickle in investments in robotics, and suddenly, boom — there seem to be a dozen companies securing large investment rounds focusing on specific robotic niches,” San Francisco-based Bossa Nova chief executive Martin Hitch told the New York Times.
But while Hawking added that this innovation “could be the greatest event in the history of our civilization,” he also suggested it could be “the last,” and pointed to risks like “powerful autonomous weapons” and “new ways for the few to oppress the many.”
Though a recent White House report on the future of AI projected autonomous robots as “helpers, assistants, trainers and teammates of humans,” Hawking stressed the dangers of superintelligence, in which machines exceed the cognitive abilities of humans.
“I believe there is no deep difference between what can be achieved by a biological brain and what can be achieved by a computer,” he said. “In the future, AI could develop a will of its own — a will that is in conflict with ours.”