The revelation raises concerns over “Terminator-style” AI weapons that could kill people in combat without any human control. The drone was deployed in March last year during fighting between Libyan government forces and a breakaway military faction led by Khalifa Haftar, commander of the Libyan National Army.
The report on the incident, from the UN Security Council’s Panel of Experts on Libya, was obtained by New Scientist magazine.
The drone was a Kargu-2 quadcopter built by Turkish military technology firm STM.
The weapon carries an explosive charge and can be directed at a target, detonating on impact.
The report, published earlier this year, described how Haftar’s forces were “hunted down and remotely engaged” by the drones, which were operating in a “highly effective” autonomous mode that required no human controller.
Writing in The Bulletin of the Atomic Scientists, Kallenborn said: “Current machine learning-based systems cannot effectively distinguish a farmer from a soldier.
“Farmers might hold a rifle to defend their land, while soldiers might use a rake to knock over a gun turret. … Even adequate classification of a vehicle is difficult.”
Mr Kallenborn explained that without a human to make a judgement call, the risks are too high.
He added: “Any given autonomous weapon has some chance of messing up, but those mistakes could have a wide range of consequences.
“The highest risk autonomous weapons are those that have a high probability of error and kill a lot of people when they do.
“Misfiring a .357 magnum is one thing; accidentally detonating a W88 nuclear warhead is something else.”