Monday, July 11, 2016

More thoughts on tort liability and autonomous vehicles -- UPDATED

A few days ago I posted a comment on issues related to the possibility of liability for accidents involving autonomous (aka "self-driving") cars.  See here.  In it, I noted that because the possible liability would shift from the driver to the programmer, we would have to consider the decision-making process programmers would use to determine what a car should do when facing the possibility of an accident, particularly if it involves choosing among options that would cause different types of injuries to others.

I am revisiting the question today because of a new article in Slate precisely on that issue.  It discusses how programmers are studying "the ethics of so-called crash-optimization algorithms," which seek to enable a self-driving car to “choose” the course of action that would cause the least amount of harm or damage. However, as the article goes on to discuss, what happens when all the choices would result in damage? What happens when one option would cause little damage to the occupant of the car but would likely cause catastrophic damage to another? How should the car be programmed to react?  What is the reasonably prudent thing to do?  Is it to always protect the occupant, who, after all, expects the car to offer safety?  Or should the car avoid the worst type of possible injury, even if it means causing injury to the occupant?  The possibilities are almost endless.
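For readers curious what a "crash-optimization algorithm" even looks like in the abstract, here is a minimal sketch in Python. Everything in it is invented for illustration: the maneuvers, the harm scores, and the idea that harm can be reduced to a single number at all (which is, of course, exactly the ethical problem the article raises).

```python
# Hypothetical sketch of a "crash-optimization" choice: each possible
# maneuver is scored by the expected harm it would cause, and the car
# simply picks the maneuver with the lowest score.  The options and
# numbers below are made up for illustration only.

def choose_maneuver(options):
    """Return the maneuver with the lowest expected harm score."""
    return min(options, key=lambda o: o["expected_harm"])

options = [
    {"name": "brake straight", "expected_harm": 8.0},   # likely injures a pedestrian
    {"name": "swerve left",    "expected_harm": 3.0},   # minor injury to the occupant
    {"name": "swerve right",   "expected_harm": 9.5},   # catastrophic harm to a cyclist
]

print(choose_maneuver(options)["name"])  # prints "swerve left"
```

Note that the hard question is hidden entirely inside the harm scores: whoever assigns the number for "minor injury to the occupant" versus "catastrophic harm to a cyclist" has already made the ethical choice before the code runs.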

You can read the full article here.

Meanwhile, another article, also published in Slate (and available here), argues that "Congress may need to provide a certain amount of legal immunity for creators of driverless car technologies, or at least create an alternative legal compensation system for when things go wrong."

The article acknowledges that one possible approach to the issues raised by liability for injuries caused by autonomous vehicles is to allow courts to apply tort law rules, or to develop new ones, just as we have always done.  That way the law would develop to provide the necessary balance in the societal cost and benefit analysis.

Yet the article rejects this approach and instead proposes federal intervention and regulation, using the regulation of vaccines as an analogy.  I think this reasoning is flawed.

First of all, what's wrong with allowing the law to develop as it always has, through the common law process, by applying or modifying principles of tort law?  Courts have long considered the consequences of imposing liability and have either expanded or limited its possible reach based on many factors.  As the article states, "So, if the autonomous car maker of the future ends up putting a fleet of defective robot cars on the road that they knew had serious programming issues, courts would force them to pay for any resulting damages. As a result, those driverless car makers will need to invest in better insurance policies to protect against that risk."

Someone explain to me why that would be a bad thing.

The article then takes on the issue of whether liability should be imposed on companies that provide the cars as a "service."  The product liability approach would not apply in such cases because those possible defendants would not be in the business of selling products.  The article argues:
"the car of the future is more likely to be . . . a fleet of robot cars that are just sitting out there waiting for us to hail them for a ride. As cars become more of a service than a final good, liability will rapidly shift to the owner of the fleet of cars and away from end users. But if all the liability falls on the manufacturer or fleet owners of driverless cars, there’s one big pitfall with this approach. America’s legal system lacks a “loser-pays” rule—i.e., the party who loses the case covers the other party’s legal fees—which means a perverse incentive exists to file potentially frivolous lawsuits at the first sign of any trouble. If enough lawsuits start flying, it could seriously undermine this potentially unprecedented public health success story. That’s why it may be necessary to limit liability in some fashion to avoid the chilling effect that excessive litigation can have on life-enriching innovation"
There are many things wrong with this simplistic analysis.  Let's start with the claim that liability will "shift" to the owner of the fleet of cars and away from the end users.  First, this implies that liability can be imposed on the owner of the fleet simply because it owns the fleet.  This is wrong.  Since the owner of the fleet is providing a service, its liability would not be strict.  It could be vicarious liability based on the negligence of one of its employees, or it could be direct liability based on its own negligence.  But in either case, liability would be based on negligence, which would require the plaintiff to prove the conduct and that the conduct should be considered negligent to begin with.  Providing a car, by itself, is not negligent.  The plaintiff would have to argue that there is something in the process of providing the car, or in the type of car itself, that makes it negligent to provide it to the public.  And if that is the case, again, someone explain to me why it would be a bad thing to allow the court system to operate as a way to help make the products and the process safer.  This is how tort law has worked to make cars, and transportation in general, safer over the years.

Second, the article's assertion implies that liability is assigned either to the defendant or to the plaintiff.  In fact, in all but four or five jurisdictions in the United States, liability can be, and often is, shared by the parties.  In most of those jurisdictions, the plaintiff can actually lose the right to recover if their share of the blame is high enough.  This, of course, is what we know as comparative negligence (and, in those four or five retrograde jurisdictions, as contributory negligence).  Changing the analysis of who can be held liable has no effect on who actually would be liable, much less on how the possible liability would be allocated.
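For readers unfamiliar with how comparative negligence allocates recovery, the arithmetic can be sketched as follows. The numbers, and the 50% bar, are purely illustrative; the threshold (and whether there is one at all) varies by jurisdiction.

```python
# Illustrative arithmetic for a "modified" comparative negligence rule:
# a plaintiff whose share of fault reaches the bar recovers nothing;
# otherwise recovery is reduced in proportion to that share.
# All figures are hypothetical.

def recovery(damages, plaintiff_fault, bar=0.50):
    """Damages recoverable by the plaintiff, given the plaintiff's
    share of fault (0.0 to 1.0) and the jurisdiction's fault bar."""
    if plaintiff_fault >= bar:
        return 0.0  # recovery barred
    return damages * (1 - plaintiff_fault)

print(recovery(100_000, 0.20))  # prints 80000.0: 20% at fault, recovers 80%
print(recovery(100_000, 0.60))  # prints 0.0: recovery barred at 60% fault
```

Under a pure contributory negligence rule, by contrast, any plaintiff fault at all (a bar of effectively zero) defeats recovery entirely.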

Having said that, though, since the consumer of transportation in the article's car-of-the-future scenario does nothing other than get in the car, it might be difficult to argue that their conduct was somehow negligent and that it contributed to the injury.  For this reason, the "shift" in possible liability is caused not by the legal analysis but by the technology itself, which takes human error out of the equation.  If the person formerly known as the driver has no control over the car, it can hardly be said they acted in a way that creates an unreasonable risk of injury to others, unless you argue that getting into an autonomous vehicle is, by itself, negligent. And who wants to argue that?

Third, the article's assertion seems to be based on the notion that all of a sudden there will be a massive increase in lawsuits, and frivolous lawsuits at that, which will lead to dogs and cats living together and the end of the world as we know it.  Give me a break.  Anyone who knows anything about tort law knows that tort claims are a small percentage of civil litigation.  New technology does not necessarily lead to more litigation.  And even if it does, if more litigation leads to better safety, then more litigation is a good thing.

The article goes on to suggest that one potential model to solve the problem can be found in the National Childhood Vaccine Injury Act of 1986.   This is certainly a possible approach, but it must not be forgotten that vaccines fall within a very distinct category of products: those that are unavoidably dangerous.  These are products that cannot be made safer but whose social benefits outweigh the risks they create.  Should we be eager to pronounce that autonomous cars belong in this same category of products?  I am not.  Not yet, at least.  We haven't seen an autonomous car on the market yet, so why would we be so eager to say there is no way they can be made safer?  And if there truly is no way to avoid the dangers they create, I suggest what we should be doing is asking whether we are willing to tolerate those risks, rather than saying the cars should be rejected precisely because they are unavoidably unsafe.

The article concludes:  "Initially, the tort system should be allowed to run its course because it may be the case that the gains are so enormous that frivolous lawsuits are not even a cost factor. But if excessive litigation ensues over just a handful of incidents and begins discouraging more widespread adoption, Congress might need to consider an indemnification regime that ensures the technology is not discouraged but which also compensates the victims. Creating this system will have challenges of its own, but the life-saving benefits of driverless cars are well worth overcoming a few roadblocks"

I agree with the first part.  There are many issues to deal with as the industry continues to move forward with autonomous cars, and we should let the tort system continue to develop.

UPDATE (7-11-16):  TechDirt has a short post on the ethical dilemmas that smart car programming presents.   It starts by framing the question this way:   "Should your car be programmed to kill you if it means saving the lives of dozens of other people? For example, should your automated vehicle be programmed to take your life in instances where on board computers realize the alternative is the death of dozens of bus-riding school children?"  Interestingly, it points out that "people often support the utilitarian "greater good" model -- unless it's their life that's at stake. A new joint study by the Toulouse School of Economics, the University of Oregon and MIT has found that while people generally praise the utilitarian model when asked, they'd be less likely to buy such an automated vehicle or support regulations mandating that automated vehicles (AVs) be programmed in such a fashion . . . To further clarify, the surveys found that if both types of vehicles were on the market, most people surveyed would prefer you drive the utilitarian vehicle, while they continue driving self-protective models. . ."

1 comment:

Unknown said...

Hello Professor Bernabe! I found this article very interesting, and I would like to add some more information that you may find relevant. First, your post assumes, at least minimally, that these vehicles will be completely automated. As the technology develops, there is likely to be a fail-safe mechanism by which a driver may take control of certain aspects of the experience. This human-to-system interaction poses additional obstacles in terms of liability, especially as programmers will be somewhat obligated to take human nature into consideration in developing these technologies. For example, can manufacturers, engineers, and lawmakers expect people to constantly pay attention, sit in the driver's seat, or maintain driving education and ability? This seems highly unlikely. Could the developers be liable for their inability to foresee the likely human responses? (Throwback to the Palsgraf man on the door of our classroom 1L year.) Would the "driver" be liable for failing to respond, even though automated vehicles are supposed to be "automated"? How would warranties play into the mix?

On the other end of the spectrum, you have manufacturers exercising increasing control over the vehicles after the vehicles leave their hands. This type of technology, even if fully automated, would require regular software updates to ensure safety and proper function. However, if a manufacturer were to update the software automatically, it might face legal hurdles due to privacy laws. On the other hand, if it merely notified the buyer of the updates, human nature once again becomes a factor. I wonder, how often do people actually update their phones right away when prompted? If a driver failed to accept a critical software update for any reason, whether by choice or ignorance, then lives could be in danger.

There are other added risks, such as the reliability of mapping software and connectivity issues. These vehicles are, or will be, programmed with maps designed to detect traffic signs and the like, updated over time to include new signs that appear on the cameras. However, the detection and addition of these signs would take time to propagate into the mapping software, and as such, it would be very difficult to provide the real-time updates that automated vehicles require to operate in the safest manner. If the car didn't stop, who would be at fault? The engineers producing or updating the maps? The manufacturer? The driver? Anyone? Additionally, connectivity is critical to the operation of these vehicles, and in some locations and situations, the connection may be lost or interfered with.

When considering all of the factors, including the ethical considerations you mentioned in your last post, I do agree that common law development is the only way to go until we are fully aware of the risks and parameters of this technology.