Wednesday, May 11, 2016

Thoughts on tort liability and autonomous vehicles

There is a growing body of literature on possible issues related to tort liability and autonomous vehicles, aka self-driving cars.  If you search using those phrases in SSRN, for example, you will find 10 to 20 articles.

I have to confess I have not been keeping up with the literature, but today I was reading an article in Smithsonian magazine (my favorite magazine, by the way) and found this quote by Chris Gerdes, who is described as “one of the leading engineers identifying novel problems facing autonomous driving and writing the code to solve them”:  “Autonomous vehicles don’t eliminate human error.  They shift it from the driver to the programmer.”

Obviously, this notion might prove to be extremely important in the future when someone has to decide whether to impose liability for injuries caused by an autonomous vehicle.

I also found interesting the description of how the programmer is working to identify and help solve these future problems: “Part of what Gerdes does is huddle with a team that includes not just engineers and programmers but also moral philosophers, and what has emerged is an ethical framework, or set of decision trees.”

The mention of moral philosophers, of course, made me think of the “trolley problem,” which makes sense because this is the type of decision a programmer may have to resolve in advance in order for the autonomous car to “act.”

If you are not familiar with the “trolley problem,” take a look at the first 13 minutes of this video.  Essentially, the question is whether you would act to switch a runaway trolley from a track where it is headed to kill five people onto a track where it will kill one other person.  I would not want to be the computer programmer in charge of deciding this type of question in order to tell a car what to do.

And if you think the trolley problem is too far-fetched, think of a more common scenario.  How should the autonomous car react when a child darts in front of it?  Should it simply brake hard even though it senses there is a car behind that might hit it and hurt its passengers?  Should it swerve to avoid the child even if that means colliding with another car?  And so on.  The possibilities are endless.
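To make the point concrete, here is a minimal, purely hypothetical sketch of what a “set of decision trees” for that child-darting scenario could look like in code.  Nothing here comes from Gerdes’s team or any real vehicle: the situation fields, the ranking of outcomes, and the action labels are all invented for illustration, and a real system would be vastly more complex.

```python
# Purely illustrative sketch of the kind of "decision tree" the article
# describes. The situation fields, thresholds, and action names are all
# invented for this example; real systems are far more complex.

from dataclasses import dataclass

@dataclass
class Situation:
    child_in_path: bool           # a pedestrian has darted into the lane
    car_close_behind: bool        # braking hard risks a rear-end collision
    adjacent_lane_occupied: bool  # swerving risks hitting another vehicle

def choose_action(s: Situation) -> str:
    """Walk a hand-written decision tree and return a maneuver label."""
    if not s.child_in_path:
        return "continue"
    if not s.adjacent_lane_occupied:
        return "swerve"       # avoid the child; the adjacent lane is clear
    if not s.car_close_behind:
        return "brake_hard"   # stop; no one behind to rear-end us
    # Every remaining branch carries some risk of injury;
    # someone had to decide, in advance, which risk the car accepts.
    return "brake_hard"

print(choose_action(Situation(True, True, True)))  # -> brake_hard
```

Trivial as this sketch is, it shows where the value judgment lives: the ranking of bad outcomes is written into the code long before any accident happens, which is exactly what makes the liability question that follows interesting.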

Currently, we - humans - make those decisions based on reaction time and instinct, and when injuries are caused, other humans pass value judgment on the conduct based on legal standards that depend on the circumstances.

How would - or should - all of this change in cases of injuries caused by autonomous vehicles, given that the responsibility for making decisions is transferred to a computer programmer?  Should the standard of care shift to take into account the work of the programmer rather than the circumstances of the accident?
