My first post touched on the developing idea and implementation of autonomous vehicles (AVs). When I first decided to look into self-driving cars, I was simply curious: it seemed like an interesting direction for our technology, and I wanted to see how it all worked.
Autonomous vehicles have been in development for a while now, but they are still very new, both to the roads and to the public, and they are very complicated machines. In my further exploration, the words I have come across most often are algorithms, systems, data, programs and sensors. What I set out to explore was the ethics of these cars. My initial questions were along the lines of: who will be responsible if a crash occurs? What are the consequences? And can laws keep up? I have learnt that there is much more to this idea, and to these questions, than I first thought.
Starting with the simpler argument: AVs can be good because they eliminate human error. People get distracted, and distraction causes fatalities, whereas a programmed system designed solely to drive and to follow signals, sensors and signs won't get distracted. What self-driving cars lack is human instinct, but would their "instinct" be better and more accurate than ours?
So here is where it gets complicated. In theory that sounds good, but it is only theory, and a great deal goes into the actual technology. The Never Ending Self-Driving Car Project indicates that an AV needs many systems working together: localisation systems and maps so the vehicle understands where it is, perception systems so it knows what is going on around it, and planning systems so it can get from A to B. On top of that sits the software that actually steers without hands and works the pedals without feet. Then you add in weather, different cities and their driving cultures, and different terrain. 'More than half a million lines of code will power the various systems and algorithms which are constantly updated', just like phones and apps.
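To picture how those localisation, perception, planning and control systems hand off to one another, here is a toy sketch in Python. Every class name, method and threshold below is invented for illustration only; a real AV stack is vastly more complex than this:

```python
# Purely illustrative sketch of an AV software pipeline.
# All names and numbers here are made up for explanation.

class SelfDrivingPipeline:
    def localise(self, gps, map_data):
        """Work out where the car is: match its position to the map."""
        return {"position": gps, "road": map_data.get(gps, "unknown")}

    def perceive(self, sensor_readings):
        """Interpret raw sensor data into a list of nearby objects."""
        return [r for r in sensor_readings if r["distance_m"] < 50]

    def plan(self, location, obstacles, destination):
        """Decide how to get from A to B given what perception found."""
        return "slow" if obstacles else "proceed"

    def control(self, decision):
        """Translate the plan into steering and pedal commands."""
        return {"throttle": 0.0 if decision == "slow" else 0.3,
                "brake": decision == "slow"}


pipeline = SelfDrivingPipeline()
where = pipeline.localise((151.2, -33.9), {(151.2, -33.9): "Main St"})
seen = pipeline.perceive([{"distance_m": 12}, {"distance_m": 90}])
action = pipeline.plan(where, seen, "B")
print(pipeline.control(action))  # the car brakes: an object is within 50 m
```

Even this toy version shows why the half-million-lines figure is believable: each of those four steps is its own research field, and they all have to keep working in rain, at night, and in cities with different driving cultures.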
Looking more closely at the ethics, I've come across the 'Trolley Problem'. A trolley (tram) is on a collision course with five unsuspecting people, and you have the option to switch it to another track where it will hit only one person: what would you do? People use this example when discussing autonomous cars facing an unavoidable accident. Code and systems can be put in place for these vehicles, however a sense of ethics is very difficult to express as an algorithm for a computer to follow. How should a car be programmed if it encounters an unavoidable accident? A human's reaction would be just that: a reaction, a panicked instinct with no anticipation. A car's reaction would be an actual decision. What happens when harm can't be entirely avoided? That is the question.
One article looked at the idea of utilitarian AVs, programmed to minimise the death toll of an accident. But would people be willing to get into a car that might be programmed to sacrifice its own occupants to minimise overall deaths? The researchers' results were that people were comfortable with utilitarian AVs if they do minimise the death toll, and especially in situations that don't involve sacrificing the owner. This is an interesting way of looking at the ethics, but real ethics is rarely as clear as 'would you kill five people or one'. Not every ethical question is that dramatic, or has one right answer. Once an autonomous car is in a real driving situation, there are many subtler dilemmas it could face, with uncertain outcomes.
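The utilitarian idea in that study boils down to a very small calculation: take whichever action is expected to harm the fewest people. This sketch is my own simplification, not code from the paper, and it makes the gap obvious; the maths is trivial, and all the difficulty hides in the numbers you feed it:

```python
def utilitarian_choice(options):
    """Pick the action with the fewest expected casualties.

    `options` maps an action name to the number of people expected
    to be harmed by that action. Ties are broken arbitrarily,
    which is itself an ethical gap.
    """
    return min(options, key=options.get)


# The classic trolley-style framing is easy:
print(utilitarian_choice({"stay_course": 5, "swerve": 1}))  # swerve
```

The subtler dilemmas are the hard part: what number do you assign to "probably injures a cyclist" versus "possibly injures two pedestrians"? Assigning those expected values is exactly where an algorithm stops being neutral.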
The Conversation’s article makes a clear point in understanding the way it works: ‘Self-driving vehicles currently work by collecting data from an array of sensors, which is then interpreted by various algorithms. These algorithms tell the vehicle where to drive, at what speed and when to stop.’ But this data is limited. If we are to completely rely on these vehicles to drive us around and for it to use this data, it needs to be able to access however much it needs to work. So we would have to improve our mobile networks. Something that also needs to change is the way we use and keep up with technology. With this technology continually developing with many variations, people may not know how to use it. An example of this is the Tesla Autopilot feature which can ‘keep speed, change lanes and self park but also requires the drivers to keep their eyes on the road and hands on the wheel in order to avoid accidents and take control.’ The driver according to the investigation had received many signals of warning however the drivers hands were not detected on the wheel for 6 seconds before the collision into the barricade.
In my presentation and Digital Artefact, which will still be a podcast, I don't plan on arguing for any one side. I am going to present the facts I have read, along with many opinions and perspectives. This will let me present my research in an engaging way, get people really thinking about this future technology, and have them consider how it could be involved in their own lives.
Bonnefon, J., Shariff, A. & Rahwan, I., 'Autonomous Vehicles Need Experimental Ethics: Are We Ready for Utilitarian Cars?', October 2015. http://pdfs.semanticscholar.org/13d4/56d4c53d7b03b90ba59845a8f61b23b9f6e8.pdf
Lin, P., 'Why Ethics Matters for Autonomous Cars', May 2016. https://link.springer.com/chapter/10.1007/978-3-662-48847-8_4
Goodall, N., 'From Trolleys to Risk: Models for Ethical Autonomous Driving', American Journal of Public Health, January 2017. https://ajph.aphapublications.org/doi/full/10.2105/AJPH.2017.303672
'The Trolley Dilemma: Would You Kill One Person to Save Five?', The Conversation, June 2016. https://theconversation.com/the-trolley-dilemma-would-you-kill-one-person-to-save-five-57111
Marshall, A., 'When Will Self-Driving Cars Be Ready?', Wired, April 2018. https://www.wired.com/story/when-will-self-driving-cars-ready/
'Tesla Car That Crashed and Killed Driver Was Running on Autopilot, Firm Says', The Guardian, April 2018. https://www.theguardian.com/technology/2018/mar/31/tesla-car-crash-autopilot-mountain-view
Fallah, S., 'Driverless Cars Are Forcing Cities to Become Smart', The Conversation, April 2018. https://theconversation.com/driverless-cars-are-forcing-cities-to-become-smart-94707