Saturday, October 1, 2016

Update on the debate regarding possible liability for injuries caused by autonomous cars

I have been following the debate over the development of so-called "self-driving cars" or "autonomous cars" and the legal issues that will arise regarding liability for injuries caused by them.  My previous posts (with lots of links to more information) are here, here, here and here.

One of the more interesting questions being debated is whether a car should be programmed to sacrifice its occupants if doing so would save the lives of other people, that is, whether government regulations should adopt a utilitarian model in which the vehicle is programmed to prioritize the good of the overall public above that of the individual.

This philosophical question, usually referred to as the trolley problem, has been the subject of discussion in philosophy classes and books for a long time.  (You can watch such a class at Harvard here.)

Interestingly, however, Techdirt is reporting that, according to some engineers, the trolley problem should not be an issue when it comes to autonomous cars, or at least not yet.  For now, engineers are concerned with more basic problems.  As the article concludes:
[The trolley question is] still a question that needs asking, but with no obvious solution on the horizon, engineers appear to be focused on notably more mundane problems. For example one study suggests that while self-driving cars do get into twice the number of accidents of manually controlled vehicles, those accidents usually occur because the automated car was too careful -- and didn't bend the rules a little like a normal driver would (rear ended for being too cautious at a right on red, for example). As such, the current problem du jour isn't some fantastical scenario involving an on-board AI killing you to save a busload of crying toddlers, but how to get self-driving cars to drive more like the inconsistent, sometimes downright goofy, and error-prone human beings they hope to someday replace.
You can read the article (and the comments posted below it) here.
