Saturday, June 11, 2016

More thoughts on tort liability and autonomous vehicles

A few days ago I posted a comment on issues related to possible liability for accidents involving autonomous (aka "self-driving") cars.  See here.  In it, I noted that because possible liability would shift from the driver to the programmer, we would have to consider the decision-making process programmers would use to determine what a car should do when facing the possibility of an accident, particularly if it involves choosing among options that would cause different types of injuries to others.

I am revisiting the question today because of a new article in Slate on precisely that issue.  It discusses how programmers are studying "the ethics of so-called crash-optimization algorithms," which seek to enable a self-driving car to “choose” the course of action that would cause the least amount of harm or damage. However, as the article goes on to discuss, what happens when all the choices would result in damage? What happens when one choice would cause little damage to the occupant of the car but would likely cause catastrophic damage to another? How should the car be programmed to react?  What is the reasonably prudent thing to do?  Is it to always protect the occupant, who, after all, expects the car to offer safety?  Or should the car avoid the worst type of possible injury, even if it means causing injury to the occupant?  The possibilities are almost endless.

You can read the full article here.
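Just to illustrate the kind of tradeoff the programmers face, here is a purely hypothetical sketch (in Python) of a "least harm" decision rule.  Every maneuver, affected party, and harm score below is invented for illustration; real crash-optimization systems are vastly more complex.  The point of the sketch is that the code itself is trivial.  The hard, and legally interesting, part is choosing the harm scores, which is exactly where the "protect the occupant or protect the bystander" question hides.

```python
# A minimal, purely illustrative sketch of a "crash-optimization" rule:
# among the maneuvers available in an emergency, pick the one whose
# estimated total harm is lowest.  All maneuvers and harm scores here
# are hypothetical, made up for illustration only.

def least_harm_maneuver(options):
    """Return the maneuver with the lowest estimated total harm.

    `options` maps a maneuver name to a dict of estimated harm
    scores (0 = no harm, 10 = catastrophic) for each affected party.
    """
    return min(options, key=lambda m: sum(options[m].values()))

# Hypothetical emergency: a child darts in front of the car.
scenario = {
    "brake_hard":   {"occupant": 2, "child": 6, "car_behind": 3},   # total 11
    "swerve_left":  {"occupant": 4, "child": 0, "oncoming_car": 7}, # total 11
    "swerve_right": {"occupant": 5, "child": 0, "pedestrians": 1},  # total 6
}

print(least_harm_maneuver(scenario))  # prints "swerve_right"
```

Note that this rule treats one unit of harm to the occupant exactly like one unit of harm to a bystander.  Whether that is the "reasonably prudent" weighting, or whether the occupant's harm should count more (or less), is precisely the question the article raises, and no line of code answers it.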

Meanwhile, another article, also published in Slate (and available here), argues that "Congress may need to provide a certain amount of legal immunity for creators of driverless car technologies, or at least create an alternative legal compensation system for when things go wrong."

The article acknowledges that one possible approach to the issues raised by liability for injuries caused by autonomous vehicles is to allow courts to apply tort law rules, or to develop new ones, just as we have always done.  That way, the law would develop to provide the necessary balance in the societal cost-benefit analysis.

Yet the article rejects this approach and instead proposes federal government intervention and regulation, using the regulation of vaccines as an analogy.  I think this reasoning is flawed.

First of all, what's wrong with allowing the law to develop as it always has through the common law process by applying, or modifying, principles of tort law?  Courts have long considered the consequences of imposing liability and have either expanded or limited its reach based on many factors.  As the article states, "So, if the autonomous car maker of the future ends up putting a fleet of defective robot cars on the road that they knew had serious programming issues, courts would force them to pay for any resulting damages. As a result, those driverless car makers will need to invest in better insurance policies to protect against that risk."

Someone explain to me why that would be a bad thing.

The article then takes on the issue of whether liability should be imposed on companies that provide the cars as a "service."  The products liability approach would not apply in such cases because those possible defendants would not be in the business of selling products.  The article argues:
"the car of the future is more likely to be . . . a fleet of robot cars that are just sitting out there waiting for us to hail them for a ride. As cars become more of a service than a final good, liability will rapidly shift to the owner of the fleet of cars and away from end users. But if all the liability falls on the manufacturer or fleet owners of driverless cars, there’s one big pitfall with this approach. America’s legal system lacks a “loser-pays” rule—i.e., the party who loses the case covers the other party’s legal fees—which means a perverse incentive exists to file potentially frivolous lawsuits at the first sign of any trouble. If enough lawsuits start flying, it could seriously undermine this potentially unprecedented public health success story. That’s why it may be necessary to limit liability in some fashion to avoid the chilling effect that excessive litigation can have on life-enriching innovation"
There are many things wrong with this simplistic analysis.  Let's start with the claim that liability will "shift" to the owner of the fleet of cars and away from the end users.  First, this implies that liability can be imposed on the owner of the fleet just because it is the owner of the fleet.  This is wrong.  Since the owner of the fleet is providing a service, its liability would not be strict.  It could be vicarious liability based on the negligence of one of its employees, or it could be direct liability based on its own negligence.  But in either case, the liability would be based on negligence, which would require the plaintiff to prove the conduct and to show that the conduct should be considered negligent to begin with.  Providing a car, by itself, is not negligent.  The plaintiff would have to argue that something in the process of providing the car, or in the type of car provided, makes it negligent to offer it to the public.  And if that is the case, again, someone explain to me why it would be a bad thing to allow the court system to operate as a way to help make the products and the process safer.  That is how tort law has worked to make cars, and transportation in general, safer over the years.

Second, the article's assertion implies that liability is assigned either to the defendant or to the plaintiff.  In fact, in all but four or five jurisdictions in the United States, liability can be, and often is, shared by the parties.  In most of those jurisdictions, the plaintiff can actually lose the right to recover if their portion of the blame is high enough.  This, of course, is what we know as comparative negligence (and, in those four or five retrograde jurisdictions, contributory negligence).  Changing the analysis as to who can be held liable does not change the fact that liability can be shared, much less the consequences of how the possible liability is allocated.

Having said that, though, since the consumer of transportation in the article's car-of-the-future scenario does nothing other than get in the car, it might be difficult to argue that their conduct was somehow negligent and that it contributed to the injury.  For this reason, the "shift" in possible liability is not caused by the legal analysis but by the technology itself, which takes human error out of the equation.  If the person formerly known as the driver has no control over the car, it can hardly be said that they acted in a way that creates an unreasonable risk of injury to others, unless you argue that getting into an autonomous vehicle is, by itself, negligent. And who wants to argue that?

Third, the article's assertion seems to be based on the notion that all of a sudden there will be a massive increase in lawsuits, and frivolous lawsuits at that, which will lead to dogs and cats living together and the end of the world as we know it.  Give me a break.  Anyone who knows anything about tort law knows that tort claims are a small percentage of civil litigation.  New technology does not necessarily lead to more litigation.  And even if it does, if more litigation leads to better safety, then more litigation is a good thing.

The article goes on to suggest that one potential model to solve the problem can be found in the National Childhood Vaccine Injury Act of 1986.  This is certainly a possible approach, but it must not be forgotten that vaccines fall within a very distinct category of products: those that are unavoidably unsafe.  These are products that cannot be made safer but whose social benefits outweigh the risks they create.  Should we be eager to pronounce that autonomous cars belong in this same category of products?  I am not.  Not yet, at least.  We haven't seen an autonomous car on the market yet, so why would we be so eager to say there is no way they can be made safer?  And if there really is no way to avoid the dangers they create, I suggest we should be asking whether we are willing to tolerate those risks, rather than declaring that claims should be rejected precisely because the product is unavoidably unsafe.

The article concludes:  "Initially, the tort system should be allowed to run its course because it may be the case that the gains are so enormous that frivolous lawsuits are not even a cost factor. But if excessive litigation ensues over just a handful of incidents and begins discouraging more widespread adoption, Congress might need to consider an indemnification regime that ensures the technology is not discouraged but which also compensates the victims. Creating this system will have challenges of its own, but the life-saving benefits of driverless cars are well worth overcoming a few roadblocks"

I agree with the first part.  There are many issues to deal with as the industry continues to move toward autonomous cars, and we should let the tort system continue to develop.

Monday, May 30, 2016

Movie mini review: Attacking the Devil: Harold Evans and the Last Nazi War Crime

The 2014 documentary "Attacking the Devil: Harold Evans and the Last Nazi War Crime" is now available for streaming on Netflix.  If you have access to it, I recommend it.

Before there was a "Spotlight" team at the Boston Globe, there was the "Insight" team at London's The Sunday Times.  This documentary tells the story of how Harold Evans, the editor of The Sunday Times, pursued the story behind the injuries caused by the drug thalidomide, which was prescribed to control morning sickness but caused tens of thousands of children to be born with serious birth defects.

Under Evans' leadership, the Times pursued the story and fought restrictive laws that limited the publication of the facts behind the story.  The story is not only about products liability and the pursuit of justice for the injured victims but also about restrictions on freedom of the press. 

The Sunday Times' fight to win compensation for the victims took more than a decade, as Evans tenaciously pursued the drug companies through the English courts and beyond.

Saturday, May 28, 2016

Judge awards $7 million in case involving a wife getting exposed to asbestos in her husband's work clothes

As you probably know, mesothelioma is a rare form of cancer caused by exposure to, and breathing of, asbestos fibers. Most asbestos cases involve plaintiffs who were exposed to asbestos directly (usually at work), but there are a few cases that have recognized liability for "second hand exposure."  These usually involve plaintiffs who were exposed to asbestos when the person who had been exposed directly brought the fibers home embedded in their clothes. 

For example, back in 2013, I reported (here) that a jury in California awarded $27.3 million in damages to a woman who developed mesothelioma as a result of her contact with her husband's clothes.  Just about a year ago, I reported (here) that a Washington state jury awarded $3.5 million to the estate of a woman who died of mesothelioma for the same reason.

Now comes news (also here) that a trial judge in Louisiana has awarded $7 million to the surviving family members of a woman who died of cancer after years of washing her husband's asbestos-tainted clothes. 

Plaintiffs in these cases argue that the defendant should be liable because the injury is a foreseeable consequence of the risk created by the negligent conduct or defective product (depending on whether the claim is for negligence or strict liability).  This, obviously, is just an argument in support of the element of proximate cause, and defendants therefore reply that, viewed this way, liability could extend too far, defeating the reason for proximate cause in the first place.  Since the notion of proximate cause is used to limit the reach of possible liability, they argue liability should be limited to the injury to the person who was exposed directly.  Otherwise, any bystander who came in contact with this person could sue.  What if the worker, instead of going straight home after work, went to a bar every day for a beer or two with friends and co-workers?  Any "regulars" at the bar could have a cause of action.

It is not a frivolous argument, and I am guessing some courts have accepted it (or will).  But, as we have seen, a few have either rejected it or at least held that even if the possible liability should be limited to exclude those bystanders, it should not exclude immediate family members.

Friday, May 27, 2016

More thoughts on possible liability for the consequences of the conduct of app users: Is Uber dangerous?

Over at the New York Personal Injury Lawyer Blog, Eric Turkewitz has posted a very interesting short comment called Uber Cars are Uber Dangerous.  You should read the full comment but the bottom line is that it is possible to argue that the way Uber works, it encourages its drivers to check and reply to text messages while driving.  If this is the case, it can then be argued that Uber is acting in a way that creates unreasonable risks of harm to others, and therefore, that it should be liable for injuries that result.  This would be particularly important in jurisdictions that have adopted the view that someone who sends a text may be liable if he or she knows that the recipient is driving at the time and is likely to be distracted by the text.  There are reported cases on this in at least two states.

It should also be noted that this theory of liability does not depend on a finding that Uber drivers are employees, as it would if the argument were based on vicarious liability (an issue currently being litigated in several cases in different jurisdictions).

Wednesday, May 11, 2016

Thoughts on tort liability and autonomous vehicles

There is a growing amount of literature on possible issues related to tort liability and autonomous vehicles, aka self-driving cars.  If you search for those phrases in SSRN, for example, you will find ten to twenty articles.

I have to confess I have not been keeping up with the literature but today I was reading an article in Smithsonian magazine (my favorite magazine, by the way) and found this quote by Chris Gerdes, who is described as “one of the leading engineers identifying novel problems facing autonomous driving and writing the code to solve them”:   “Autonomous vehicles don’t eliminate human error.  They shift it from the driver to the programmer.”

Obviously, this notion might prove to be extremely important in the future when someone has to decide whether to impose liability for injuries caused by an autonomous vehicle.

I also found interesting the description of how the programmer is working to identify and help solve the future problems: “Part of what Gerdes does is huddle with a team that includes not just engineers and programmers but also moral philosophers, and what has emerged is an ethical framework, or set of decision trees.”

The mention of moral philosophers, of course, made me think of the “trolley problem” (or here), which makes sense because this is precisely the type of decision a programmer may have to resolve in order for the autonomous car to “act.”

If you are not familiar with the “trolley problem” take a look at the first 13 minutes of this video.  Essentially, the question is whether you would act to switch a trolley from one track where it is headed to kill five people onto a track where it will kill one other person.  I would not want to be the computer programmer in charge of deciding this type of thing in order to tell a car what to do. 

And if you think the trolley problem is too far-fetched, think of a more common problem.  How should the autonomous car react when a child darts in front of it?  Should it simply stop, even though it senses there is a car behind that might hit it and hurt its passengers?  Should it veer to avoid the child, even if that means colliding with another car? And so on.  The possibilities are endless.

Currently, we - humans - make those decisions based on reaction time and instinct and when injuries are caused, other humans pass value judgment on the conduct based on legal standards which depend on the circumstances. 

How would - or should - all of this change in cases of injuries caused by autonomous vehicles given that the responsibility for making decisions is transferred to a computer programmer?   Should the standard of care change to take into account the work of the programmer rather than the circumstances of the accident? 

Tuesday, May 10, 2016

Child has a claim for wrongful death even though father died before child was born

The ABA Journal is reporting (with links to more information) that the Iowa Supreme Court has decided that a child whose father died before she was born can sue for loss of companionship in a wrongful death case.

This is an interesting question that I know a little bit about.  One of the very first cases I ever handled as a young lawyer in private practice involved the same issue.  In my case, a couple was expecting a child when the father was killed by a negligent driver in an accident.  The child was later born without incident, and the mother hired our firm to represent her in a claim against the driver.  We included a claim for the child based on wrongful death.  As in the Iowa case, the defendant argued that the child could not claim to have suffered an injury since she never met her father (along the lines of "you can't miss what you never had").  I prepared a brief in reply to the motion to dismiss and won, and the case settled soon after.

I had not looked into the issue since then, so now I am interested in catching up to see what has been written about it.  Apparently, in addition to the case in Iowa, there are other reported decisions in Massachusetts and Wisconsin. I will add them to my summer reading list.

Should a social media app company be liable for the damages caused by a driver distracted by the app while driving? -- UPDATED

Earlier today I posted a note about a recent case on whether a person who sends a text to someone who is driving should be liable for injuries caused by the driver if the driver is distracted by the text.  Now, here is a story with a similar theme.

Abnormal Use is reporting on an interesting case from Georgia in which social media giant Snapchat has been sued for allegedly causing a motor vehicle accident in which the at-fault motorist was distracted while using the application.  You should go to AU and read the full report to get the whole story, which I will just summarize here.

The plaintiff in the case claims that the defendant driver struck her car while going more than 100 miles per hour while using Snapchat.  According to the report, Snapchat has a feature that will show the speed you are going when taking a photo or posting a video.  Apparently, according to the plaintiff's allegations, the driver wanted to show the world she was going more than 100 mph (with passengers in her car, one of whom was pregnant, by the way).  She failed in her attempt because she rammed into the car in front of her just before posting, causing the plaintiff brain damage.  But she was definitely able to show how stupid and reckless she is, and then some: after the accident, she had the bright idea to snap and post a photo of herself on a stretcher.

But now back to whether the plaintiff should have a cause of action against Snapchat.  One could argue that Snapchat should not be liable for the irresponsible conduct of the driver in this case.  After all, the driver was the one that acted and caused the accident, right?  Why should the social media company be responsible for how the user uses the app?

The problem is that the issue is not that simple.  The fact that one actor is more culpable than another does not mean the other is not culpable at all.  The relevant question is whether Snapchat should be considered culpable at all, and to answer that question we first have to ask whether the plaintiff can make an argument in support of a claim against Snapchat.

I think she can. As Abnormal Use points out, "While a speed filter may be an interesting piece of technology, we assume in order for it to be useful there would need to be “speed” involved. The filter probably lacks the appeal of users taking a leisurely stroll through Central Park. We can appreciate the plaintiffs’ argument that the filter incentivizes users to go fast and, unfortunately, the most available means of doing so is by car."

In other words, Snapchat created a feature that it knew or should have known would induce users to speed thus creating risks of harm.  Thus, it seems to me that you can argue that Snapchat created an unreasonable risk of a foreseeable type of harm, which gives the plaintiff support for a prima facie case.

Whether Snapchat should be as liable as the driver... that's a different question.

But whether you can make an argument that Snapchat should be liable at all...  Yes, I think you can...

UPDATE 5/10/16, 8:40 pm:  Abnormal Use has published a longer post on the Snapchat lawsuit after downloading and testing the app.

New case on whether a person who sends a text to someone who is driving can be liable if the driver gets distracted by the text

Back in 2013 I reported that the New Jersey State Appeals Court had ruled that texting someone while that person is driving may cause the sender to be liable if an accident occurs. See here, here and here.  To my knowledge, that was the only case that had been decided that way.  Until now.

Eric Turkewitz of the NY Personal Injury Law Blog is reporting that a lower court in Pennsylvania has reached a similar result in a case called Gallatin-v-Gargiulo. You can read Eric's comment here.

Monday, May 9, 2016

Case against Uber based on alleged sexual assault by driver can go forward; Court rejects motion to dismiss based on the allegation that the drivers were independent contractors

The ABA Journal is reporting today that a San Francisco federal judge has denied Uber’s motion to dismiss cases based on sexual assaults committed by Uber drivers.  Uber's argument was based on the allegation that the drivers in question were independent contractors.

Obviously, if it is true that the drivers are independent contractors, the claims based on vicarious liability will eventually be dismissed.  But the judge concluded it is too early to tell.  The question of the relationship between Uber and its drivers is being litigated elsewhere: there are at least two pending class actions against Uber over whether drivers should be classified as employees or independent contractors.

So the claims based on vicarious liability are still in play.  In addition, the plaintiffs have alleged claims based on direct liability based on negligence in hiring.  Those claims would not be affected if it is ultimately decided that the drivers are independent contractors.