Reports that South Korea's state-run research university, the Korea Advanced Institute of Science and Technology (KAIST), has been working on military robot research with defence company Hanwha have resulted in threats of a boycott by more than 50 AI researchers from 30 countries.

Killer Robots?

Although the threatened boycott of KAIST appears to have been effective, both in exposing the work and in causing KAIST to agree to stop anything related to the development of lethal autonomous weapons (killer robots), the story has raised questions about ethical red lines and the regulation of technology in this area.

KAIST opened its Research Centre for the Convergence of National Defence and Artificial Intelligence on 20 February, with the reported intention of providing a foundation for developing national defence technology. A now-deleted announcement about the centre's work is reported to have highlighted a focus on areas such as AI-based command and decision systems, navigation algorithms for large-scale unmanned undersea vehicles, AI-based smart aircraft training systems, and smart object tracking and recognition technology.

Fast Exchange of Letters

It has been reported that, almost immediately after receiving a letter signed by more than 50 AI researchers expressing concern about its alleged plans to develop artificial intelligence for weapons, KAIST sent its own letter back saying that it would not be developing any lethal autonomous weapons.

The university's president, Shin Sung-chul, went on to say that no research activities counter to human dignity, including autonomous weapons lacking meaningful human control, had been conducted. He is also reported as saying that KAIST had actually been trying to develop algorithms for “efficient logistical systems, unmanned navigation and aviation training systems”, and that KAIST is significantly aware of ethical concerns in the application of all technologies, including AI.

Who / What Is Hanwha Systems?

Hanwha Systems, the named defence-industry partner in the project, is part of Hanwha, one of South Korea's largest weapons manufacturers. Hanwha is known for making cluster munitions, which are banned in 120 countries under an international treaty.

Outright Ban Expected

To accompany the welcome reassurances from KAIST that it will not be researching so-called “killer robots”, the next meeting of UN member states in Geneva, Switzerland, held under the Convention on Certain Conventional Weapons, is widely expected to see calls for an outright ban on AI weapons research and killer robots.

Already Exists

‘Robots’ with military applications already exist. As well as the Taranis military drone, built by the UK’s BAE Systems, which can technically operate autonomously, South Korea’s Dodaam Systems manufactures a fully autonomous “combat robot”: in reality, a stationary turret that can detect targets up to 3km away. This ‘robot’ is reported to have already been tested on the militarised border with North Korea, and to have been bought by the United Arab Emirates and Qatar.

What Does This Mean For Your Business?

Many of the key fears about AI and machine learning centre on machines learning to make autonomous decisions that result in humans being injured or attacked. It is no surprise, therefore, that reports of possible research into the development of militarised, armed AI robots play on fears such as those expressed by Tesla and SpaceX CEO Elon Musk, who famously described AI as a “fundamental risk to the existence of civilisation.”

Even with the existing autonomous combat turret in South Korea, there are reported “self-imposed restrictions” in place that require a human to deliver a lethal attack, i.e. to make the actual attack decision. Many fear that the development of any robots of this kind opens a Pandora’s box, and that tight regulation and built-in safeguards are necessary to prevent ‘robots’ from making potentially disastrous decisions on their own.

It should be remembered that AI presents many potentially beneficial opportunities for humanity when it is used ethically and productively. Even in a military setting, for example, an AI robot that could effectively clear mines, instead of endangering human lives, has to be a good idea.

The fact is that AI currently has far more value-adding, positive, and useful applications for businesses in terms of cutting costs, saving time, and enabling scaling-up with built-in economies of scale.