Elon Musk has been quite vocal with his dire predictions about Artificial Intelligence (AI). Taking another step toward his goal of “keeping an eye on what’s going on” in the field, the Tesla CEO and SpaceX founder announced over the weekend that he had become one of several big-ticket backers of the newly-minted nonprofit OpenAI.
In a statement, the OpenAI team said that their backers -- a group that includes Musk, PayPal co-founder Peter Thiel, Y Combinator’s Sam Altman and Jessica Livingston, and LinkedIn co-founder Reid Hoffman -- had committed $1 billion to the project.
“Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return,” the OpenAI team said in the statement. “Since our research is free from financial obligations, we can better focus on a positive human impact. … The outcome of this venture is uncertain and the work is difficult, but we believe the goal and the structure are right.”
The creation of AI machines imbued with human-level intelligence has always been controversial -- conjuring up images of Terminator-esque machines hell-bent on wiping out humanity. However, amid gloomy predictions about AI’s impact on humans by the likes of Musk, Stephen Hawking and Bill Gates, AI research has been speeding up, leading to unprecedented advancements in the field of machine learning.
As a result, several global tech giants, including Facebook, Google, Microsoft and Apple, are now heavily invested in developing their own AI machines.
“As a nonprofit, our aim is to build value for everyone rather than shareholders. Researchers will be strongly encouraged to publish their work, whether as papers, blog posts, or code, and our patents (if any) will be shared with the world. We'll freely collaborate with others across many institutions and expect to work with companies to research and deploy new technologies,” the OpenAI team added. “It's hard to fathom how much human-level AI could benefit society, and it's equally hard to imagine how much it could damage society if built or used incorrectly.”