{"id":33824,"date":"2023-10-20T14:31:56","date_gmt":"2023-10-20T14:31:56","guid":{"rendered":"http:\/\/startupsmart.test\/2023\/10\/20\/of-cats-and-cliffs-the-ethical-dilemmas-of-the-driverless-car-startupsmart\/"},"modified":"2023-10-20T14:31:56","modified_gmt":"2023-10-20T14:31:56","slug":"of-cats-and-cliffs-the-ethical-dilemmas-of-the-driverless-car-startupsmart","status":"publish","type":"post","link":"https:\/\/www.startupsmart.com.au\/uncategorized\/of-cats-and-cliffs-the-ethical-dilemmas-of-the-driverless-car-startupsmart\/","title":{"rendered":"Of cats and cliffs: the ethical dilemmas of the driverless car – StartupSmart"},"content":{"rendered":"
We make decisions every day based on risk \u2013 perhaps running across a road to catch a bus if the road is quiet, but not if it\u2019s busy. Sometimes these decisions must be made in an instant, in the face of dire circumstances: a child runs out in front of your car, but there are other dangers to either side, say a cat and a cliff. How do you decide? Do you risk your own safety to protect that of others?<\/p>\n
\u00a0<\/p>\n
Now that self-driving cars are here<\/a>, with no quick or reliable way for a human to override the controls \u2013 or perhaps no way at all \u2013 car manufacturers face an algorithmic ethical dilemma. On-board computers already park our cars and drive on cruise control, and they could take over in safety-critical situations. But that means they will face the difficult choices that sometimes confront humans.<\/p>\n \u00a0<\/p>\n How to programme a computer\u2019s ethical calculus?<\/p>\n \u00a0<\/p>\n What if the car also included its driver and passengers in its assessment of whom to protect, with the implication that sometimes those outside the car would score more highly than those within it? Who would willingly climb aboard a car programmed to sacrifice them if needs be?<\/p>\n \u00a0<\/p>\n A recent study<\/a> by Jean-Francois Bonnefon of the Toulouse School of Economics in France suggested that there\u2019s no right or wrong answer to these questions. The research used several hundred workers recruited through Amazon\u2019s Mechanical Turk<\/a> to gauge views on scenarios in which a car swerves into a barrier, killing its driver but saving one or more pedestrians; the researchers then varied the number of pedestrians who could be saved.<\/p>\n \u00a0<\/p>\n Bonnefon found that most people agreed with the principle of programming cars to minimise the death toll, but when it came to the exact details of the scenarios they were less certain. They were keen for others to use self-driving cars, but less keen themselves. 
So people often feel a utilitarian instinct to save the lives of others and sacrifice the car\u2019s occupant \u2013 except when that occupant is them.<\/p>\n \u00a0<\/p>\n Intelligent machines<\/b><\/p>\n Science fiction writers have had plenty of scope to write about robots taking over the world (Terminator<\/a> and many others), or worlds in which everything that\u2019s said is recorded and analysed (as in Orwell\u2019s 1984<\/a>). It has taken a while to reach this point, but many staples of science fiction are now becoming mainstream science and technology. The internet and cloud computing have provided the platform for rapid progress, pitting artificial intelligence against human judgement.<\/p>\n \u00a0<\/p>\n In Stanley Kubrick\u2019s seminal film 2001: A Space Odyssey<\/a>, we see hints of a future in which computers make decisions about the priorities of their mission, with the ship\u2019s computer HAL saying: \u201cThis mission is too important for me to allow you to jeopardise it\u201d.<\/p>\n \u00a0<\/p>\n \n