Don’t worry about artificial intelligence taking over the world and endangering humans — at least not to start with, said Demis Hassabis, the head of Google’s AI project.

Hassabis, CEO of DeepMind, a British business Google bought in 2014, said he has rebuked the likes of Microsoft co-founder Bill Gates and SpaceX and Tesla founder Elon Musk for their comments on AI, the Times of London reported Monday. AI is the theory and development of computer systems that can perform tasks that normally require human intelligence.


“I don’t think it’s very helpful for other people who are incredible in their domains commenting on something they actually know very little about, but because they are quite big celebrities now, more than just scientists or businessmen, it gets picked up a lot,” he said at a Cambridge Society for the Application of Research event.

He said the “general meme of fearfulness doesn’t help reasoned debate. It actually drives that debate away.”

DeepMind is currently analyzing patient data for the Royal Free Hospital in London and working on an agreement to optimize the National Grid, with an eye toward saving 10 percent of the U.K.’s energy.


Hassabis, who as a game designer came up with “Evil Genius,” said the company has sought input from philosophers and mathematicians, as well as from digital engineers, on how to build an AI machine that won’t run amok (think “Battlestar Galactica”). He said the real danger would come from a self-improving “seed” that would allow an AI to rewrite its own source code without human oversight.

“I think human extinction will probably occur, and technology will likely play a part in this,” DeepMind co-founder Shane Legg said in a 2011 interview with Less Wrong.

Facebook already uses AI for targeted advertising, photo tagging and curated news feeds. Microsoft and Apple use it to power Cortana and Siri. Google has long used it in its search engine.

Musk has been warning about the dangers of AI for three years and invested in DeepMind to keep an eye on the technology. At the World Government Summit in Dubai last month, he suggested the way to escape the dangers of AI would be “some sort of merger of biological intelligence and machine intelligence,” proposing a neural lace that could be injected directly into the brain to enable direct communication with computers. He told the April issue of Vanity Fair that such a merger might be possible within five years.

Physicist Stephen Hawking has warned that technology needs to be controlled and that ways of quickly identifying threats must be put in place. He told the Times that “some form of world government” needs to be established to keep humans from destroying themselves and to keep AI from growing so powerful that it could kill us off unintentionally.

“The real risk with AI isn’t malice but competence,” Hawking said. “A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble.”

Bring on the Three Laws of Robotics!