Friday, February 10, 2017

Tort Reform efforts in Arkansas; bad news for victims

As reported in the TortsProf blog:

You may recall that last fall the Arkansas Supreme Court struck down a tort reform ballot initiative.  It's back.  The Arkansas Times reports:
A proposed amendment to the Arkansas Constitution filed last week in the Senate seeks to limit the amounts that can be awarded to claimants in civil actions, such as medical malpractice lawsuits. If Senate Joint Resolution 8 is approved by the General Assembly — and with a long list of co-sponsors, it seems likely to pass — it would appear before voters on the 2018 ballot. Sen. Missy Irvin (R-Mountain View) is the lead sponsor.
Like the so-called "tort reform" measure on the 2016 ballot (which was disqualified by the state Supreme Court not long before the election), SJR 8 would place a cap of $250,000 on noneconomic damages, meaning compensation for hard-to-quantify personal losses such as pain and suffering. The proposed amendment also places a cap on punitive damages, though that ceiling is more flexible. But SJR 8 also would give the legislature control over the rules of pleading, practice and procedure in the judicial branch, thus taking power away from Arkansas courts and giving it to the General Assembly.

Thursday, February 9, 2017

Med Mal quiz


True or False:

Medical malpractice cases represent less than 5 percent of all state tort cases.

Medical malpractice insurance claims have dropped by half since 2003, and doctors are paying less for malpractice insurance today than they did in 2001, even without adjusting for inflation.

Medical malpractice insurance industry profits must be suffering!

Medical malpractice lawsuits put physicians at a significant risk of insolvency or personal bankruptcy.

When a state caps damages, health care costs related to physician spending (Medicare Part B) drop.

Doctors in high-risk specialties like neurosurgical spine surgery, who practice in states that have limited patients’ rights to sue, order significantly fewer tests and procedures because they are less likely to be sued (i.e., less “defensive medicine”).

For the answers to these and other important and interesting questions, go to The PopTort.

Wednesday, February 1, 2017

New article on the effect of artificial intelligence on tort law; should we change the standard of care to what a reasonable computer (programmer) would have done under the circumstances?

Ryan Abbott, professor of law and medicine, has written an article in which he discusses changes in technology and how they might affect tort law. Abbott appears to be the first to suggest that tort law should treat artificial intelligence (AI) machines like people when it comes to liability issues. Also, he suggests that the standard of care used to evaluate human conduct should be based on the competency of a computer (AI) (assuming, of course, that the AI proves to be consistently safer than a human being). This means that defendants' conduct would no longer be compared to what a hypothetical, reasonable person would have done under the circumstances, but to what a computer would have done, which, according to the author, “would benefit the general welfare.” The full article is here.

The proposition raises some very interesting issues, starting with whether it is valid to assume that a computer can always make better choices than a human, and whether a human has the capacity to make the choices a computer would make.  And by the way, let's not forget that computers don't think on their own; they are programmed, by a human programmer, to "think."  How should that factor into the equation?

Spoiler alert:  If you have not watched the movie Sully, stop reading right now.  Watch the movie, then come back. Otherwise, go on...
.
.
.
.
.
.
.
.
.
This was an issue in the movie Sully.  If you recall, the movie tells the story of the investigation into Captain Sully's decision to "land" an airplane in the Hudson River rather than do what the computer simulations suggested: turn around and try to get back to one of two nearby airports.  Sully, as he put it, "eye-balled" it and decided he would not have enough time.  He used his human intuition and experience to determine what was "reasonable" under the circumstances.  The computer, with the benefit of having no real time pressure, fear, or human emotion, concluded that was the wrong course of action and that it would have been not only possible but safer to fly to an airport.  In the movie, and I assume that's what happened in real life, Sully convinced the NTSB to alter the test to include some of the human aspects of the decision-making process.  Once that was done, the result of the test was different.

Watch the movie.  Think about it.  Get back to me.    


Thanks to the TortsProf blog for the link to the article.