The Turkish-made weapon, an STM Kargu-2 drone, uses artificial intelligence to identify and engage targets. One of these devices is said to have autonomously decided to pursue and attack Haftar troops who were withdrawing from disputed areas. It opens a new chapter in the history of autonomous weapons, in which they demonstrate the ability to hunt and attack human beings without first receiving specific orders to do so.

References to the attack took up only a few paragraphs in the 500-page report by United Nations experts on Libya, and went unnoticed until the magazine Bulletin of the Atomic Scientists highlighted them on May 20, in an article analyzing the dangers and advantages of AI-based weapons that learn and operate autonomously.

The Kargu is a four-rotor loitering kamikaze drone, capable of selecting and attacking targets based on automatic object classification using AI algorithms. Upon reaching the area identified for the attack, the quadcopter detonates the explosive charge it carries, spreading shrapnel over a wide area.

The Turkish-made drone can operate in swarm formation with up to 20 similar devices. Its programming also includes a Fire & Forget mode, in which it needs only target coordinates, as well as safety systems for aborting the mission and returning to base at any point during the operation, including mid-attack.

Videos published on YouTube by the manufacturer, STM, exemplify the capabilities and effects of these drone attacks.

Attack context

The UN report classifies the Kargu-2 as a “lethal autonomous weapon” and states that one of these devices, operated by the forces of Libya's Government of National Accord (GNA), “hunted and attacked remotely” fighters identified as Haftar Affiliated Forces (HAF). The episode took place during Operation Peace Storm, the code name for the military campaign launched by the GNA army at the end of March 2020 to dislodge HAF units entrenched in strategic positions along the coast.

The Kargu-2s were launched to destroy the HAF's air defenses.

“Lethal autonomous weapons systems were programmed to attack targets without requiring a data connection between the operator and the munition: in effect, a true ‘fire, forget and find’ capability,” says the UN report.

GNA’s superior airpower forced General Haftar’s combat units to withdraw. “During the retreat, they were subject to constant harassment by unmanned aerial vehicles and lethal autonomous weapon systems,” the rapporteurs indicate.

One of the main targets was the Pantsir S-1, a crewed mobile surface-to-air missile system developed in a Russian-Emirati partnership. Without being explicit about fatalities from the autonomous drone attacks, the report mentions “heavy losses” among these combat units.

“These [systems] suffered heavy losses, even when used in a passive electronic mode to avoid GNA interference,” the report reveals. “With the Pantsir S-1 threat neutralized, Haftar units were left in practice without any protection against remote air strikes.”
Another incident
If there were human casualties, this would be the first confirmed case worldwide of human beings killed by AI-based autonomous weapons. The claim that the drone “remotely hunted and attacked” the HAF soldiers is attributed in the report, in a footnote, to “a single confidential source,” and is impossible to verify.

Almost two years ago there was talk of another suspected incident: a group of farmers targeted by autonomous drone attacks in the Wazir Tangi area of Nangarhar province, Afghanistan. The accusation came from local tribal authorities and referred to at least 30 rural workers killed and another 40 injured.

The Afghan government acknowledged only one military operation in the area. The weapons’ mission was reportedly to destroy a hideout of Islamist militants, but target identification failed and the attack struck farmers who had gathered around a campfire.

The debate over whether restrictions on the use of autonomous weapons are needed has only just begun.

Calls for a total ban have already involved figures such as the late Stephen Hawking and Elon Musk, who point to AI’s inability to distinguish between soldiers and civilians, or to establish the legitimacy of targets.

Their defenders argue that, on the contrary, autonomous weapons will be crucial in countering fast threats such as drone swarms, and are likely to reduce the danger to the civilian population, since they make fewer mistakes than human-operated systems.
Risk analysis
The Bulletin raises nine important questions for understanding how autonomous devices, whether weapons or not, operate, along with their limits and the associated risks.

The first concerns decision-making. Robots’ AIs operate on algorithms that are “taught,” through computer-vision programs and intensive training on data, to classify objects reliably, for example distinguishing school buses from armored vehicles. The training database may however be insufficient, and the AI may learn the “lesson” poorly. The magazine cites the case of a company that considered it a good idea to use AI to make decisions about new employees, until management realized that the machine placed at the top of its priority ranking anyone named Jared who had played lacrosse in high school.
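
As a rough illustration (a hypothetical Python sketch on synthetic data, not the software of any real weapon), this kind of supervised classification comes down to fitting a model to labeled examples, and the model is only as good as those examples:

```python
# Illustrative sketch only (hypothetical, not STM's code): a supervised
# classifier "learns" to separate two object classes from labeled examples.
# Synthetic 16-number feature vectors stand in for real image features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
buses = rng.normal(loc=0.8, scale=0.3, size=(n, 16))      # "school bus" examples
vehicles = rng.normal(loc=-0.8, scale=0.3, size=(n, 16))  # "armored vehicle" examples
X = np.vstack([buses, vehicles])
y = np.array([0] * n + [1] * n)  # 0 = bus, 1 = armored vehicle

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# The learned boundary is only as good as the training data: situations
# missing from the dataset will be misjudged in the field.
print(f"held-out accuracy: {model.score(X_test, y_test):.3f}")
```

An AI that ranks “Jareds who played lacrosse” highest is doing exactly this: faithfully reproducing whatever correlations its training data happened to contain.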

In an autonomous weapon, errors accumulated during learning can lead to tragic consequences, which programmers and manufacturers are required to anticipate.

The author of the article argues that continuous testing, across multiple scenarios, must be planned and carried out.

Target identification, too, can be complicated. A camouflaged armored vehicle can fool the system or be impossible to detect among trees. Distinguishing between a soldier and a farmer carrying a stick or a shotgun to defend his fields, especially if their clothing is similar, is almost impossible for a machine, at least for now.
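
A toy continuation of the same idea (again hypothetical, using synthetic features rather than imagery) shows why such ambiguous cases defeat the machine: when two classes overlap in feature space, the classifier's output near the overlap is close to a coin flip:

```python
# Toy illustration (not any real targeting model): "farmer" and "soldier"
# examples drawn from overlapping feature distributions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
farmers = rng.normal(0.0, 1.0, size=(1000, 4))
soldiers = rng.normal(1.0, 1.0, size=(1000, 4))
X = np.vstack([farmers, soldiers])
y = np.array([0] * 1000 + [1] * 1000)  # 0 = farmer, 1 = soldier

clf = LogisticRegression().fit(X, y)

# A case halfway between the two distributions (similar clothing, similar
# silhouette) yields class probabilities near 50/50: no reliable decision.
ambiguous = np.full((1, 4), 0.5)
print(clf.predict_proba(ambiguous))
```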

The role of human beings in the control of these weapons is also a matter for reflection, with the author of the article suggesting that autonomy be optional and reserved for very short periods and specific occasions. The author also recommends that the explosive charge carried by drones be minimal and never nuclear, to limit the effects if something goes wrong.

Swarm operations of autonomous weapons also multiply the risks, since each drone's analysis is communicated and shared. The drones coordinate their actions, which can lead to a cascade of errors, especially in collective decision-making, as the sketch below illustrates.
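
A minimal simulation (an assumption-laden sketch, not a real swarm protocol) illustrates the cascade: if the drones' observations are independent, a majority vote suppresses individual mistakes, but if they all share one flawed analysis, the whole swarm inherits it:

```python
# Hypothetical sketch of collective decision-making in a 20-drone swarm.
import random

random.seed(1)

def swarm_is_wrong(n_drones: int, p_error: float, shared: bool) -> bool:
    """Return True if the swarm's majority vote misidentifies the target."""
    if shared:
        # One shared analysis: every drone inherits the same error.
        votes = [random.random() < p_error] * n_drones
    else:
        # Independent observations: individual errors tend to cancel out.
        votes = [random.random() < p_error for _ in range(n_drones)]
    return sum(votes) > n_drones / 2

trials = 10_000
for shared in (False, True):
    wrong = sum(swarm_is_wrong(20, 0.2, shared) for _ in range(trials))
    label = "shared analysis" if shared else "independent views"
    print(f"{label}: swarm wrong in {wrong / trials:.1%} of trials")
```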

The technology of autonomous weapons is recent, but Libya has been the ideal theater of war for testing and perfecting it, and its presence in conflict zones is becoming more and more common. It is to be expected that extremist armed groups will adopt these devices ever more frequently, which calls for measures to control their sale, as well as the development of tactics to neutralize them.
Around the corner

It is worth underlining that civil society already benefits from some of these advances in AI, notably in surveillance drones, robot vacuum cleaners, mobile phones and televisions, not to mention vehicles. It should be stressed, though, that confidence in a car's autonomous driving can make a driver more easily distracted, with an increased risk of accidents.

In cities, for example, AI's capacity for error rises exponentially due to the multiplicity of similar objects, often obscured by vegetation.

Weather conditions are also a factor to consider, as rain or fog can interfere with a drone's ability to detect obstacles on a road, for example, and issue an alert.

Border and climate surveillance operations are other functions in which the autonomy of these devices can be an advantage. The scope of their role will, however, require legal reflection.
