The driverless car ethical dilemma: Can we stop anthropomorphizing machines?

I have been following developments in driverless cars and was drawn to a discussion with Professor Richard Feenstra, who specialises in judgement and decision-making. He used the “trolley problem” (a classic thought experiment about an impending accident in which someone has to get hurt) to pose a question about driverless cars: “If things do go wrong, who is making the ethical decision behind the behaviour of the car?” He went further, asking, “Would it be ethical to program the car differently for different people – based, for instance, on an evaluation of their utility to society?”

“The more technology seems to have humanlike mental capacities, the more people trust it to perform its intended function competently.”

The psychology of driverless cars has also been investigated, and one study showed that people were more comfortable giving up control if the driverless car was somewhat anthropomorphic. Yet from our earliest years we see human aspects, particularly faces, in the most unlikely things. We play with toy vehicles that have faces, display emotions, and make decisions. Cars, with their conveniently placed headlights and grilles resembling eyes and mouths, even look a little human. Is this not also playing a role in our reaction to driverless cars? Can we expect a machine to be involved in an ethical decision without making it human?

“There is an universal tendency among mankind to conceive all beings like themselves, and to transfer to every object, those qualities, with which they are familiarly acquainted, and of which they are intimately conscious. We find human faces in the moon, armies in the clouds; and by a natural propensity, if not corrected by experience and reflection, ascribe malice or good-will to every thing, that hurts or pleases us.”

David Hume

Machine safety

With most machines, bringing them to a stop is the “safest” option. So many mobile machines have already been built with a defined safety regime that the idea that you personally must make any decision or take any responsibility simply does not apply – as a person, as an owner, or as a designer. This is true of any ground transport in which you are a passenger. For example, many trains run so fast that the driver cannot reliably see the signals, or brake in time to avoid hitting anything they can already see, so the safety systems are effectively in charge. These safety systems are multiply redundant, and it must be systematically proven that in any given hour they have less than about a one-in-ten-million chance of making a safety-related error that goes uncaught (about the same risk as being hit by an asteroid this year). This is called “safety integrity.”
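To put that figure in perspective, here is a minimal sketch – my own illustration, with an assumed tolerable rate of one undetected failure per ten million operating hours, not a figure taken from any particular standard – of what it implies over a year of continuous operation:

```python
import math

# Illustrative numbers only (my assumption, roughly the most demanding
# integrity levels used in rail signalling): a tolerable rate of 1e-7
# undetected safety-related failures per operating hour.
failure_rate_per_hour = 1e-7
hours_per_year = 24 * 365            # 8760 hours of non-stop operation

# With a constant failure rate, survival over t hours is exp(-rate * t),
# so the probability of at least one undetected failure within a year is:
p_failure_per_year = 1 - math.exp(-failure_rate_per_hour * hours_per_year)

print(f"P(undetected failure in a year) ≈ {p_failure_per_year:.1e}")
# -> about 8.8e-04, i.e. roughly one chance in a thousand
```

Even running non-stop, that works out to roughly one chance in a thousand per year – and a real vehicle operates far fewer hours than that.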

Cars do not depend on automatic safety systems; all responsibility rests with the driver, as it did with trains before automatic safety systems came along. This has been borne out in law and applied by the insurance industry. Risk is reduced and controlled through assumptions: that everyone follows the highway code, holds the appropriate licence, and adjusts speed to conditions, and that motorways are generally protected from pedestrians wandering across them. To balance this, young or risky drivers pay higher insurance premiums. None of this stops accidents occurring, though it does limit their likelihood and consequences.

The ethical question might therefore be put as: “What level of added risk am I willing to take in the case of driverless cars?” The answer to that, I think, is simple and universal: “None.” To the second part of the question, “Who takes responsibility?”, the answer will almost certainly be “The owner – unless there is a systematic fault with the driverless car, in which case the manufacturer.” The car will not be considered a legal entity and will therefore have no ethical dilemma to solve. As to the final question, “Which person dies?”, there is only one answer: in that moment there is no way of knowing the future or reliably judging the consequences, so the question will not arise. The only thing your car might reliably know is how many social media connections you have, which does not comfort me. If we consider the driverless car capable of omniscience and perfect judgement – why, we have found our God.

Since the driverless safety system will never need, or be able, to prove more than its general safety integrity level (like the trains), nor that the consequences of an accident were no worse than what an average driver might have caused, the first legal reactions are likely to limit its scope (speed limits, only certain lanes, …) until there is an agreed international standard on driverless-car performance, like the EN 5012x standards in rail. The fact that these standards exist (and others like them in other fields) shows that we are not incapable of adapting them to the case in hand.

“Out of this nettle – danger – we pluck this flower – safety.”

William Shakespeare

These standards will cover performance, reliability, interoperability, resistance to hacking, and availability, as well as defining breakdown response criteria. If all the train safety systems fail, it is usually possible for the driver to “run on sight” – that is, at a speed at which the driver can be sure of stopping before reaching any obstacle they can detect (sketched below). The level of risk may have to be rethought for driverless cars, as a train is restricted in its movements to the track and the area around it. If an automatic car control system on the motorway decided it could not follow the road safely, it might have to head for the hard shoulder rather than stop where it was. Handing control to a designated human on board in an emergency might be possible, but it would effectively demand driver-like vigilance and capability from that person and undo many of the benefits of the driverless car.
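As a rough illustration of the “run on sight” rule – the helper below and its numbers are my own assumptions, not figures from any standard – the highest safe speed is the one at which the braking distance just equals the distance at which an obstacle can first be seen, i.e. v = √(2ad) for deceleration a and sighting distance d:

```python
import math

def max_run_on_sight_speed(sighting_distance_m: float,
                           deceleration_ms2: float) -> float:
    """Highest speed (m/s) from which a vehicle braking at a constant
    deceleration can stop within its sighting distance: v = sqrt(2*a*d)."""
    return math.sqrt(2 * deceleration_ms2 * sighting_distance_m)

# Assumed example values: 200 m visibility and 1.0 m/s^2 of service
# braking (plausible for a train; a road car brakes much harder but
# typically sees much less far ahead).
v = max_run_on_sight_speed(sighting_distance_m=200.0, deceleration_ms2=1.0)
print(f"{v:.1f} m/s ≈ {v * 3.6:.0f} km/h")   # 20.0 m/s ≈ 72 km/h
```

With those assumed values, 200 m of visibility and gentle braking cap the speed at about 72 km/h; a car can brake far harder, but in fog or darkness it also sees far less.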

Conclusion

The fallacy in this debate, for me, is that the car is anthropomorphized at all. Today it is entirely under the control and responsibility of the driver, and we react to it as an extension of that person’s body, or as possessing a personality in its own right. The wish to treat it as something other than a machine will have to be conquered, even though this will likely eliminate much of the pleasure of owning a car.

In the end I would look to the insurance industry for guidance. If they think the risk of unprovoked accidents is high, the premiums will be correspondingly huge!

References

Bonnefon, J.-F., Shariff, A., & Rahwan, I. (2016). The social dilemma of autonomous vehicles. Science, 352(6293), 1573–1576. doi:10.1126/science.aaf2654

Waytz, A., Heafner, J., & Epley, N. (2014). The mind in the machine: Anthropomorphism increases trust in an autonomous vehicle. Journal of Experimental Social Psychology, 52, 113–117. doi:10.1016/j.jesp.2014.01.005 (quoted in Association for Psychological Science, November 19, 2014)


Jeremy Williams

I am a senior manager of business units, projects, and products in technical industries. My focus is on working with my team to solve challenges that help people live better lives, while respecting the work/life balance of my team and myself.

