{"id":55015,"date":"2023-10-20T16:29:41","date_gmt":"2023-10-20T16:29:41","guid":{"rendered":"http:\/\/startupsmart.test\/2023\/10\/20\/why-we-should-welcome-killer-robots-not-ban-them-startupsmart\/"},"modified":"2023-10-20T16:29:41","modified_gmt":"2023-10-20T16:29:41","slug":"why-we-should-welcome-killer-robots-not-ban-them-startupsmart","status":"publish","type":"post","link":"https:\/\/www.startupsmart.com.au\/uncategorized\/why-we-should-welcome-killer-robots-not-ban-them-startupsmart\/","title":{"rendered":"Why we should welcome ‘killer robots’, not ban them – StartupSmart"},"content":{"rendered":"
\"\"<\/div>\n

The open letter signed by more than 12,000 prominent people calling for a ban on artificially intelligent killer robots, connected to arguments for a UN ban on the same, is misguided and perhaps even reckless.


Wait, misguided? Reckless? Let me offer some context. I am a robotics researcher and have spent much of my career reading and writing about military robots, fuelling the very scare campaign that I now vehemently oppose.


I was even one of the hundreds of people who, in the early days of the debate, gave their support to the International Committee for Robot Arms Control (ICRAC) and the Campaign to Stop Killer Robots.


But I’ve changed my mind.


Why the radical change in opinion? In short, I came to realise the following.


The human connection

The signatories are just scaremongers who are trying to ban autonomous weapons that “select and engage targets without human intervention”, which they say will be coming to a battlefield near you within “years, not decades”.


But, when you think about it critically, no robot can really kill without human intervention. Yes, robots are probably already capable of killing people using sophisticated mechanisms that resemble those used by humans, meaning that humans don’t necessarily need to oversee a lethal system while it is in use. But that doesn’t mean that there is no human in the loop.


We can model the brain, human learning and decision making to the point that these systems seem capable of generating creative solutions to killing people, but humans are very much involved in this process.


Indeed, it would be preposterous to overlook the role of programmers, cognitive scientists, engineers and others involved in building these autonomous systems. And even if we did, what of the commander, military force and government that made the decision to use the system? Should we overlook them, too?


We already have automatic killing machines

We already have weapons of the kind for which a ban is sought.


The Australian Navy, for instance, has successfully deployed highly automated weapons in the form of close-in weapons systems (CIWS) for many years. These systems are essentially guns that can fire thousands of rounds of ammunition per minute, either autonomously via a computer-controlled system or under manual control, and are designed to provide surface vessels with a last defence against anti-ship missiles.
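To make the autonomous-versus-manual distinction concrete, here is a minimal, purely illustrative sketch in Python. Every name, field and threshold in it is hypothetical; real CIWS fire-control logic is classified and far more complex. The point it illustrates is the article's: even in autonomous mode, a human chose the mode, the threat criteria and when to switch the system on.

```python
from dataclasses import dataclass
from enum import Enum, auto


class FireMode(Enum):
    AUTONOMOUS = auto()  # computer-controlled engagement
    MANUAL = auto()      # an operator must confirm each engagement


@dataclass
class Track:
    """A radar track of a possible inbound contact (hypothetical fields)."""
    track_id: int
    is_inbound: bool
    speed_mps: float


def should_engage(track: Track, mode: FireMode, operator_consent: bool) -> bool:
    """Decide whether to open fire on a track.

    Even when the answer is computed without a person watching, humans
    set the mode, wrote the threat criterion and armed the system --
    the 'human in the loop' the article describes.
    """
    # Placeholder threat criterion: a fast, inbound contact.
    is_threat = track.is_inbound and track.speed_mps > 300.0
    if not is_threat:
        return False
    if mode is FireMode.AUTONOMOUS:
        return True                # last-ditch defence: no time to ask
    return operator_consent        # MANUAL mode: a person pulls the trigger


# Example: the same inbound missile is engaged automatically only because
# a human earlier selected AUTONOMOUS mode.
missile = Track(track_id=7, is_inbound=True, speed_mps=680.0)
print(should_engage(missile, FireMode.AUTONOMOUS, operator_consent=False))  # True
print(should_engage(missile, FireMode.MANUAL, operator_consent=False))      # False
```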