Nations dawdle on agreeing rules to control ‘killer robots’ in future wars

Nations are investing in developing lethal autonomous weapons systems which can identify, target, and kill a person all on their own – but there are no international laws governing their use

By Nita Bhalla

NAIROBI, Jan 17 (Thomson Reuters Foundation) – Countries are rapidly developing “killer robots” – machines with artificial intelligence (AI) that independently kill – but are moving at a snail’s pace on agreeing global rules over their use in future wars, warn technology and human rights experts.

From drones and missiles to tanks and submarines, semi-autonomous weapons systems have been used for decades to eliminate targets in modern-day warfare – but they all operate under human supervision.

Nations such as the United States, Russia and Israel are now investing in developing lethal autonomous weapons systems (LAWS) which can identify, target, and kill a person all on their own – but to date there are no international laws governing their use.

“Some kind of human control is necessary … Only humans can make context-specific judgements of distinction, proportionality and precautions in combat,” said Peter Maurer, President of the International Committee of the Red Cross (ICRC).

“(Building consensus) is the big issue we are dealing with and unsurprisingly, those who have today invested a lot of capacities and do have certain skill which promise advantages to them, are more reluctant than those who don’t.”

The ICRC oversaw the adoption of the 1949 Geneva Conventions, which define the laws of war and the rights of civilians to protection and assistance during conflicts, and it engages with governments to adapt these rules to modern warfare.

AI researchers, defence analysts and roboticists say LAWS such as military robots are no longer confined to the realm of science fiction or video games, but are fast progressing from design boards to defence engineering laboratories.

Within a few years, they could be deployed by state militaries to the battlefield, they add, painting dystopian scenarios of swarms of drones moving through a town or city, scanning and selectively killing their targets within seconds.

This has raised ethical concerns among human rights groups and some tech experts, who say giving machines the power of life and death violates the principles of human dignity.

Not only are LAWS vulnerable to interference and hacking, which could result in increased civilian deaths, they add, but their deployment would also raise questions over who would be held accountable in the event of misuse.

“Don’t be mistaken by the nonsense of how intelligent these weapons will be,” said Noel Sharkey, chairman of the International Committee for Robot Arms Control.

“You simply can’t trust an algorithm – no matter how smart – to seek out, identify and kill the correct target, especially in the complexity of war,” said Sharkey, who is also an AI and robotics expert at Britain’s University of Sheffield.

Experts in defence-based AI systems argue such weapons, if developed well, can make war more humane.

Such weapons would be more precise and efficient, would not fall prey to human emotions such as fear or vengeance, and would minimise deaths of both civilians and soldiers, they add.

“From a military’s perspective, the primary concern is to protect the security of the country with the least amount of lives lost – and that means its soldiers,” said Anuj Sharma, chairman of India Research Centre, which works on AI warfare.

“So if you can remove the human out of the equation as much as possible, it’s a win because it means fewer body bags going back home – and that’s what everyone wants.”
