A short film made by campaigners and scientists shows tiny drones hunting and killing with ruthless precision and without human guidance. The movie, released by the campaign group Stop Autonomous Weapons, highlights the perils of autonomous weapons falling into the wrong hands. It depicts students in a school classroom being attacked by drones armed with explosives; the drones identify and neutralize their targets without needing any instructions during the mission.

This gruesome reminder of the destructive potential of weapons integrated with Artificial Intelligence (AI) depicts autonomous drones that can find, follow and fire at targets independently. To drive the point home, the film was screened on Monday during a meeting on lethal autonomous weapons systems at the United Nations (UN) Palais des Nations.

Stuart Russell, a leading AI scientist at the University of California, Berkeley, presented the video during the UN Convention on Conventional Weapons meeting in Geneva.

According to him, autonomous weapons are not limited to ruthless swarms of killer drones that carry explosives and use facial recognition, GPS, voting records and social media data to identify and pursue targets; they also include automatic firearms and explosives.

“The technology illustrated in the film is simply an integration of existing capabilities. It is not science fiction. In fact, it is easier to achieve than self-driving cars, which require far higher standards of performance,” the Guardian reported Russell as saying.

Campaigners against the development of this technology argue that it will soon be able to make its own decisions about killing, putting people at risk of sudden attacks that cannot be traced.

"Pursuing the development of lethal autonomous weapons would drastically reduce international, national, local and personal security," Russell said in the report.

Militaries are already heavily invested in developing robot warfare, aiming to reduce human involvement in battle and thereby decrease loss of life. One example of military AI already in service is the sentry tower guarding South Korea’s border with the North, which can track targets up to 4 km away.

Opponents of the technology believe that making robots more intelligent could prove disastrous, because giving machines power over who lives and dies crosses a moral line.

Because AI is cheap to create, scientists believe autonomous weapons could be mass-produced in the near future. And since the technology is not difficult to duplicate, the possibility that people intent on causing harm will use it cannot be ruled out.

If building these lethal drones is easier than creating a self-driving car, many actors could in theory develop them, provided they have the materials. This frightening scenario is what the movie tries to highlight.

“Professional codes of ethics should also disallow the development of machines that can decide to kill a human,” Russell added.

In August, the world’s leading robotics and AI pioneers called on the UN to ban the development and use of killer robots. The open letter, signed by Tesla CEO Elon Musk and Mustafa Suleyman, co-founder of Alphabet’s DeepMind AI unit, warned that an urgent ban was needed to prevent a ‘third revolution in warfare’ after gunpowder and nuclear arms. So far, 19 countries have called for a ban, including Argentina, Egypt and Pakistan.

Even Stephen Hawking has warned the public about the future of AI.

"I fear that AI may replace humans altogether. If people design computer viruses, someone will design AI that improves and replicates itself. This will be a new form of life that outperforms humans," he was quoted as saying recently in a report by the International Business Times.