Alexander Dietrich of the German Aerospace Center, Deutsches Zentrum für Luft- und Raumfahrt (DLR), works on the humanoid two-arm robot Justin during a presentation in Oberpfaffenhofen near Munich, June 1, 2011. The mobile robotic system, with its compliance-controlled lightweight arms and two four-finger hands, is built for long-range autonomous operation. Sensors and cameras give Justin a 3D reconstruction of its environment, enabling it to perform tasks such as catching balls or serving coffee autonomously. Reuters

Even as excitement grows over the prospect of driverless cars and machines that are smarter than ever, technology entrepreneur Elon Musk warns that artificial intelligence could in fact present a greater threat to humanity than nuclear weapons. Musk, the CEO of the space transport company SpaceX, warned in a tweet over the weekend that humans, by relying so heavily on technology, are turning their brains to mush.

Referencing the book “Superintelligence,” which ponders what will happen when machines surpass human intelligence, Musk said the potential implications of artificial intelligence (AI) could be catastrophic.

Musk is in a better position than most to suggest autonomous technology might have negative outcomes. He’s currently the head of SpaceX, one of the leaders of the commercial space flight industry, and Tesla Motors (NASDAQ:TSLA), the electric car manufacturer. Musk has also invested in AI, telling CNBC earlier this year he put money into companies like DeepMind and Vicarious “not from the standpoint of actually trying to make any investment return,” but “just to keep an eye on what’s going on with artificial intelligence. I think there is a potentially dangerous outcome here.”

Some researchers have suggested AI will be a dominant theme in the lives of humans within just a few decades. The U.S. military is already at work on futuristic weapons, delivery companies are pressing the government to allow drone research, and even “Jeopardy!” has gotten in on the action with IBM’s Watson supercomputer.

Gary Marcus, a professor of cognitive science, explained in The New Yorker, though, that such developments are no longer the stuff of “Star Trek” episodes or Philip K. Dick novels.

“It’s likely that machines will be smarter than us before the end of the century -- not just at chess or trivia questions but at just about everything, from mathematics and engineering to science and medicine,” he wrote. “There might be a few jobs left for entertainers, writers and other creative types, but computers will eventually be able to program themselves, absorb vast quantities of new information and reason in ways that we carbon-based units can only dimly imagine. And they will be able to do it every second of every day, without sleep or coffee breaks.”