Elon Musk, the founder of SpaceX and CEO of Tesla Motors, who counts establishing a human colony on Mars among his many goals, has been quite vocal about the dangers of Artificial Intelligence (AI). Now, putting his money where his mouth is, the American billionaire has shelled out $7 million in grants to 37 projects aimed at keeping AI “robust and beneficial.”
The money was awarded through the Boston-based Future of Life Institute, which will use it to fund research into a host of questions in computer science, law, policy and economics, among other fields, relevant to expected advances in AI, according to a statement published on the institute’s website Thursday.
Among the projects being funded is one studying how to keep AI-driven weapons under “meaningful human control” and another that seeks to ensure the interests of “superintelligent systems” remain aligned with those of humans.
Many of the listed projects focus on understanding the decision-making processes within AI systems, and have titles such as “understanding when a deep network is going to be wrong” and “teaching AI systems human values through human-like concept learning.”
The money for the latest projects comes from a $10 million donation Musk made to the institute in January.
“Here are all these leading AI researchers saying that AI safety is important. I agree with them, so I'm today committing $10 million to support research aimed at keeping AI beneficial for humanity,” Musk, who has previously called AI humanity’s “biggest existential threat,” reportedly said at the time.
Musk is not the only prominent figure cautioning humanity against the unchecked development of artificially intelligent machines. Many others, including Stephen Hawking, Bill Gates and Apple co-founder Steve Wozniak, have expressed serious concerns over the rise of AI.
“Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all,” Hawking said in May last year.