One of the more interesting questions being debated is whether a car should be programmed to sacrifice its occupants if doing so would save the lives of other people, and whether government regulations should mandate a utilitarian model in which the vehicle is programmed to prioritize the good of the overall public above the individual.
This philosophical question, usually referred to as the trolley problem, has long been a subject of discussion in philosophy classes and books. (You can watch such a class at Harvard here.)
Interestingly, however, Techdirt reports that, according to some engineers, the trolley problem should not be an issue when it comes to autonomous cars. Or, at least, not yet. For now, engineers are concerned with more basic problems. As the article concludes:
[The trolley question is] still a question that needs asking, but with no obvious solution on the horizon, engineers appear to be focused on notably more mundane problems. For example one study suggests that while self-driving cars do get into twice the number of accidents of manually controlled vehicles, those accidents usually occur because the automated car was too careful -- and didn't bend the rules a little like a normal driver would (rear ended for being too cautious at a right on red, for example). As such, the current problem du jour isn't some fantastical scenario involving an on-board AI killing you to save a busload of crying toddlers, but how to get self-driving cars to drive more like the inconsistent, sometimes downright goofy, and error-prone human beings they hope to someday replace.

You can read the article (and the comments posted below it) here.