The world's leading Artificial Intelligence (AI) and robotics experts, including Tesla's Elon Musk and Google DeepMind's Mustafa Suleyman, have urged the United Nations to take action to prevent the development of killer robots before it is too late.

The letter signed by 116 experts from 26 countries opens with the words, "As companies building the technologies in Artificial Intelligence and Robotics that may be repurposed to develop autonomous weapons, we feel especially responsible in raising this alarm."

Though none has been built yet, a killer robot is conceptually a fully autonomous weapon that can target, engage and kill humans without any human intervention.

Unlike a cruise missile or a remotely piloted drone, where humans make all the targeting decisions, a quadcopter with AI, for example, could on its own search for and destroy people who meet pre-defined criteria.

“Retaining human control over use of force is a moral imperative and essential to promote compliance with international law, and ensure accountability,” Mary Wareham, advocacy director, Arms Division, Human Rights Watch, wrote in January.

The Campaign to Stop Killer Robots, a coordinated international coalition of non-governmental organizations dedicated to bringing about a preemptive ban on fully autonomous weapons, was launched in April 2013.

In 2015, more than 1,000 experts, researchers and scientists, including Stephen Hawking, Apple co-founder Steve Wozniak and Musk, wrote an open letter warning about the dangers of autonomous weapons, the BBC reported. A possible ban was discussed then by U.N. committees.

A breakthrough was reached in 2016 when the fifth review conference of the United Nations Convention on Conventional Weapons (CCW) saw countries hold formal talks to expand their deliberations on fully autonomous weapons.

The conference also saw the establishment of a Group of Governmental Experts (GGE) chaired by India's ambassador to the U.N., Amandeep Gill.

Russia at first vehemently objected to establishing the group on the grounds that it was "premature," since countries had not yet agreed on a working definition of lethal autonomous weapons systems. At the last minute, however, Moscow confirmed it would not block consensus on formalizing the process.

Another breakthrough came last year when China became the first permanent member of the UN Security Council to say that new international law was needed on fully autonomous weapons. It cited the precedent of the 1995 CCW protocol that pre-emptively banned blinding lasers.

France, the United Kingdom and the U.S. have not called for a new international law, but rather for greater transparency in the development of these weapon systems and the sharing of best practices.

Human Rights Watch, which emphasizes the importance of a preemptive ban, believes this policy is not strong enough to stop the development of these weapon systems before it is too late.

As of now, the tools are in place for a complete ban, if countries can muster the will, but time is running out. According to Human Rights Watch, more than a dozen countries are developing autonomous weapon systems; the United States, China, Israel, South Korea, Russia and the United Kingdom were specifically mentioned.

In the open letter to the United Nations CCW, the experts welcomed the decision to create the GGE.

“We commend the appointment of Ambassador Amandeep Singh Gill of India as chair of the GGE. We entreat the High Contracting Parties participating in the GGE to work hard at finding means to prevent an arms race in these weapons, to protect civilians from their misuse, and to avoid the destabilizing effects of these technologies,” the letter read.

Referring to it as a “Pandora’s box” that would be hard to close once opened, the experts called for a ban on using AI in weaponry, the BBC reported.

Following the revolutions of gunpowder and nuclear weapons, this third revolution would "permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act," the letter implored.

The letter was released at the opening of the International Joint Conference on Artificial Intelligence (IJCAI) in Melbourne on Monday, the Guardian reported.

"Unlike other potential manifestations of AI, which still remain in the realm of science fiction, autonomous weapons systems are on the cusp of development right now and have a very real potential to cause significant harm to innocent people along with global instability," Ryan Gariepy, the founder of Clearpath Robotics, who signed the letter said, according to CNN.

Proponents of killer robots suggest the current laws of warfare might be sufficient to deal with any problems that arise if such weapons are deployed.