Driverless Car Is About To Crash. Who Decides Who Gets To Live?

A self-driving car is approaching a pedestrian crossing on a two-lane road.

Suddenly, the brakes fail. A crash is unavoidable, and the car cannot find a trajectory that saves everyone.

It can continue ahead and drive through the crossing, killing a teenager, or it can swerve into the other lane, crashing into a concrete barrier and killing its three elderly passengers.

Who should it spare, and who should it kill? What if the pedestrian were pregnant, appeared fit or even homeless? What if they were breaking the law?

And a morbid question for your Friday: Who lives or dies?

These hypothetical scenarios could soon be a reality.

Car manufacturers and government agencies are developing frameworks and conducting testing for a future in which machines control the roads.

Amid promises of reduced congestion and fewer road fatalities lies the reality that an autonomous car could also determine who lives and who dies.

At the crossroads of the "well-being they create and the harm they cannot eliminate" lies a series of moral dilemmas that researchers believe we need to agree on how to resolve.

"Decisions about ethical principles that will guide autonomous vehicles cannot be left solely to either the engineers or the ethicists," said researchers from the Massachusetts Institute of Technology in a study published in the journal Nature.

"For consumers to switch from traditional human-driven cars to autonomous vehicles, and for the wider public to accept the proliferation of artificial intelligence-driven vehicles on their roads, both groups will need to understand the origins of the ethical principles that are programmed into these vehicle." 

The researchers created the Moral Machine, an online platform, to test how millions of people from 233 countries and territories would respond to these hypothetical dilemmas.

It presented participants with a series of scenarios involving driverless cars on a two-lane road.

Each one involved different combinations of pedestrians and passengers; the car could remain on its original course or swerve into the other lane, altering the outcome and the human toll.

The strongest preferences across the 39.61 million recorded decisions included sparing humans over animals, sparing more lives over fewer, and sparing the young over the old.
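To make the structure of these dilemmas concrete, here is a minimal Python sketch of how such a scenario and a preference tally could be represented. It is purely illustrative: the class and function names are hypothetical and not the researchers' actual code.

```python
# Hypothetical sketch of a Moral Machine-style dilemma and preference tally.
# Names and structures are illustrative assumptions, not the study's code.
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class Option:
    """One of the two trajectories: who dies if the car takes it."""
    label: str     # "stay" or "swerve"
    victims: tuple # e.g. ("young pedestrian",) or ("elderly passenger",) * 3

@dataclass(frozen=True)
class Dilemma:
    stay: Option
    swerve: Option

def tally_spared(decisions):
    """Count how often each group is spared across (dilemma, choice) pairs."""
    spared = Counter()
    for dilemma, choice in decisions:
        # Whoever would have died on the rejected trajectory survives.
        survivors = dilemma.swerve if choice == "stay" else dilemma.stay
        spared.update(survivors.victims)
    return spared

# Example: one respondent chooses to swerve, sparing the pedestrian.
d = Dilemma(
    stay=Option("stay", ("young pedestrian",)),
    swerve=Option("swerve", ("elderly passenger",) * 3),
)
print(tally_spared([(d, "swerve")]))  # Counter({'young pedestrian': 1})
```

Aggregating tallies like this across millions of responses is one simple way such group-level preferences could be surfaced.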

These results are "essential building blocks" for machine ethics, according to the researchers.


Interestingly, there were some notable clashes with the ethical rules proposed in 2017 by the German Ethics Commission on Automated and Connected Driving, the first known attempt to provide official guidelines for the ethical choices of autonomous vehicles.

One rule states that in dilemma situations, the protection of human life should take top priority over animal life. But another leaves open whether vehicles should be programmed to sacrifice the few to spare the many.

"The same German rule states that any distinction based on personal features, such as age, should be prohibited," said Hussein Dia, an Associate Professor in Transport Engineering at the Swinburne University of Technology.

"This clearly clashes with the strong preference for sparing the young that was assessed through Moral Machine."


The findings showed broad cultural differences around the world, with the preference for sparing the young over the elderly being much stronger in the global south than in the Far East and the Islamic world.

Respondents in Central and South America, along with France, showed a stronger preference for sparing women and fit people, while those from countries with greater income inequality were more likely to factor in social status.

Dia said the findings demonstrate the challenges of developing "uniform" ethical principles.

"Regardless of how rare these unavoidable accidents will be, these principles should not be dictated to us based on commercial interests.

"We (the public) need to agree beforehand how they should be addressed and convey our preferences to the companies that will design moral algorithms, and to the policymakers who will regulate them," Dia said.


The researchers were the first to highlight the many nuances their study couldn't consider, such as the uncertainty involved in predicting how any accident would unfold.

This was picked up by Professor Toby Walsh from the Optimisation Research Group at Data61, who said the conclusions should be treated with "immense caution".

"How people say they will behave is not necessarily how they will actually do so in the heat of the moment," Walsh said.

He said studies about people's attitudes don't tell us how autonomous cars should drive.

Ultimately, Dia said the findings should kickstart a "global conversation".

"The work not only provides fascinating insights into the moral preferences and societal expectations that should guide autonomous vehicle behaviour.

"It also sets to establish how these preferences can contribute to developing global, socially acceptable principles for machine ethics."

Contact the author: ebrancatisano@networkten.com.au