For a visionary who talks about colonizing Mars in our lifetime and 600-mile-per-hour “hyperloop” passenger transport on Earth, billionaire inventor Elon Musk fears at least one thing about the future: machines that can think for themselves. Forget ISIS and Ebola. The founder of Space Exploration Technologies and architect of the Tesla Model S electric luxury car said he thinks the development of artificial intelligence (AI) is “our biggest existential threat.”

Speaking for over an hour at a Massachusetts Institute of Technology aerospace symposium on Friday, Musk called on governments to begin regulating the development of computing that could lead machines to autonomous cognition and decision making. This development could be like “summoning a demon,” he said.

HAL 9000 would be “like a puppy dog” by comparison, Musk said during the 80-minute one-on-one with MIT President Rafael Reif. HAL 9000 was the fictional spacecraft computer in Arthur C. Clarke's “2001: A Space Odyssey,” which concludes that killing astronauts is better than lying to them.

This isn’t the first time Musk, who has invested in San Francisco-based AI firm Vicarious, has warned about the potential dangers of giving sentience to machines. "There's some scary outcomes, and we should try to make sure the outcomes are good, not bad,” he said on CNBC in June.

His comments came a week after Stephen Hawking, the Oxford-born physicist renowned for his work extending Albert Einstein’s theory of general relativity, went on HBO’s “Last Week Tonight” to warn that machines could someday outsmart us by improving their own design.

"Artificial intelligence could be a real danger in the not-too-distant future," Hawking told “Last Week Tonight” host John Oliver.
