Artificial Intelligence (AI) is playing a rapidly growing role in warfare. Several weapon systems now incorporate AI software, gradually reducing the number of soldiers placed in direct mortal peril. Some of these systems can select and attack targets without human intervention, and that capability is drawing scrutiny. Several prominent scientists have questioned the future of AI-driven weaponry precisely because of its unpredictability: these weapons are capable of enormous damage and need to be handled carefully.

In August, 116 AI experts, including Tesla's Elon Musk, signed an open letter urging the UN to take action against the growing threat posed by AI warfare. The letter warned that AI-integrated weapons represent "a third revolution in warfare," after gunpowder and nuclear weapons, and called "lethal autonomous" technology a "Pandora's box."

Against this backdrop, the United Nations (UN) convened a meeting on Monday, where a group of experts gathered to discuss lethal autonomous weapons systems at the Palais des Nations in Geneva.

Representatives from more than 70 UN member states were scheduled to attend. According to a UN press announcement, this is the first formal intergovernmental discussion of what machine autonomy means for the law of armed conflict and the future of international security, and it marks the first meeting of the Convention on Conventional Weapons (CCW) Group of Governmental Experts on lethal autonomous weapons systems.

The CCW is the treaty framework that bans or restricts the use of inhumane weapons, including weapons that injure through fragments, incendiary weapons, mines, and booby-traps.

Though the agreement was established in 1980, no provisions have been added to address the rapid development and integration of drones or the growing reliance on AI in warfare generally. The most recent addition to the CCW was a 1995 protocol that preemptively banned blinding lasers.

Now, the CCW group has gathered to review the rules and guidelines surrounding AI-driven weapons that, some argue, are capable of wide-scale harm to innocent people.

"It's time for countries to move from talking about the ethical and other challenges raised by lethal autonomous weapons concerns to taking preventative action," Mary Wareham, coordinator of the Campaign to Stop Killer Robots, said in a statement. "Endorse the call of the scientific community and non-governmental organizations to preemptively ban weapons that would select and fire on targets without meaningful human control."

Another campaign launched a hard-hitting video depicting killer drones that use facial recognition to zero in on particular targets and carry out a coordinated mass killing in a classroom. The video argues that all the technology needed to build such a device already exists, showing it maneuvering past heavy gunfire and trained sniper shots to reach its target; the video describes it as "unstoppable."

IBT reported in August that robots have the potential to change warfare more than the invention of gunpowder and nuclear weapons.


This week's meeting isn't focused on a ban on fully autonomous weapons, but it is a step in that direction. With so many nations involved, those formulating the rules feel that regulating the technology's use is a more workable option than a complete ban.

According to the UN release, the meeting aims to open a conversation about the legal and ethical challenges posed by forthcoming military technology. The next step will likely be a mandate to continue work on autonomous weapons within the framework of the CCW.