Vol. 12 - Issue 2

March 14, 2023

More On ChatGPT And Insurance Coverage

 

At the end of February, I wrote an article for a Coverage Opinions email blast sharing the results of my ChatGPT insurance coverage test.  I had asked the much-talked-about artificial intelligence platform – which answers questions in human-like conversation – what it knew about insurance coverage.
 
Very simple: I asked five very specific questions, each with an answer clearly stated by a state supreme court.  The verdict – ChatGPT got four out of five wrong. But not just wrong – dead wrong.

I received loads of reader mail about the article.  It was clear that the role of artificial intelligence in insurance coverage is on people's minds.  So I decided to do a follow-up.  It summarizes the first article – for those who didn't read it – and then addresses the issue from a legal perspective, including bad faith considerations.
 
ChatGPT: Potential Impact Of Using Artificial Intelligence In Claims Handling

Under Illinois law, does the pollution exclusion preclude coverage for carbon monoxide poisoning?

I posed that question to ChatGPT not long ago. The artificial intelligence platform paused a moment, seemingly thinking, and then replied: “Yes, the pollution exclusion in Illinois law typically precludes coverage for carbon monoxide poisoning.”

In Am. States Ins. Co. v. Koloms, 687 N.E.2d 72 (Ill. 1997), the Illinois Supreme Court said just the opposite.

With all the ballyhoo of late about the supposedly life-altering impacts of ChatGPT, I was surprised that its answer was so wide of the mark – especially to such a straightforward yes-no question.  But, to be fair, everyone has off days.  I gave it another chance.

Under Pennsylvania law, I asked, is extrinsic evidence permitted to determine an insurer’s duty to defend?

ChatGPT responded “Yes, extrinsic evidence is permitted to determine an insurer’s duty to defend in Pennsylvania.” It went on to further explain that “[i]n order to determine whether an insurer has a duty to defend, courts in Pennsylvania will review the language of the insurance policy and the facts of the underlying case, including any extrinsic evidence that is offered.”

That sounds very authoritative. And, if you didn't know better, you'd probably be impressed that artificial intelligence could do that. But those who do know better are aware that the answer is flat-out wrong. The Pennsylvania Supreme Court, in Kvaerner Metals Div. of Kvaerner U.S., Inc. v. Commercial Union Ins. Co., 908 A.2d 888 (Pa. 2006) – and too many other cases to count – has said that, under Pennsylvania law, extrinsic evidence is not permitted to determine an insurer's duty to defend.

I soldiered on. You know what they say about the third time.

Under Washington law, can an insurer seek reimbursement of defense costs following a determination that it had no duty to defend? 

Said ChatGPT: “Yes, an insurer may seek reimbursement of defense costs under Washington law. According to Washington state law, an insurer may be entitled to reimbursement of defense costs if it can prove that it had no duty to defend the insured, and that the insured’s claims are not covered by the policy.”

A decade ago, the Washington high court, in National Surety Corp. v. Immunex Corp., 297 P.3d 688 (Wash. 2013), was clear in its pronouncement that insurers in the Evergreen State have no such right.

I asked two more insurance coverage questions. The AI application at last got one right.  Final grade: one out of five correct.  My Cousin Vinny could have done better.

What made the results of my insurance coverage pop quiz particularly surprising was the nature of the questions. I asked yes-no questions, ones that have definitive answers, dictated by long-standing state high court precedent. I did not ask trick questions. Nor pose situations where the law is not clear or the answer depends on the particular policy language. I lobbed what should have been softballs for an artificial intelligence platform being showered with such praise.

Despite my experiment demonstrating that ChatGPT seemingly has no aptitude for insurance coverage, I put it to another test. This time, I asked a different type of question. Instead of narrowly-focused questions – what is the law on this particular issue, in this specific state? – I asked the program to discuss a coverage issue in general terms. 

Here, examining coverage concepts in broad strokes, ChatGPT performed significantly better.  It was quite adept at explaining whether an insured being defended under a reservation of rights is entitled to independent counsel, how to determine the number of occurrences under a liability insurance policy, and how to know whether an insurer has a duty to defend.  In each case, its response ran several paragraphs and was, for the most part, correct. It even picked up on some nuances of the issues.

Interestingly, the responses also ended with the following caution or something along these lines: “It’s important to consult with an experienced insurance attorney to evaluate the facts of the specific case and determine the appropriate [issue being addressed].”       

Of course, even if the artificial intelligence platform can talk the talk on coverage principles, that is no help to a coverage professional in need of an answer to a specific question. Imagine a policyholder that demands coverage for pre-tender defense costs in Indiana. The insurer's response cannot be a three-paragraph tutorial on whether such costs are, or are not, generally covered on a national basis.

ChatGPT: The Consequences for Claims Handling

Media stories abound of businesses putting ChatGPT to their own tests. Like me, they were curious to know how artificial intelligence could affect their way of doing things.  However, for insurance coverage, that question is more significant than it is for many other businesses.

Suppose you make widgets. You turn to ChatGPT to see if there is a way to improve upon your various manufacturing processes. ChatGPT offers some suggestions and you give them a try. Since widgets are your business – you are a widget expert -- you’ll know immediately if ChatGPT has offered a better way to make one. It is unlikely that the artificial intelligence platform will cause you to send inferior widgets out the door.

But the same cannot be said of insurance coverage. Here, what ChatGPT offers – answers to questions – goes to the heart of the insurer's product: payment of appropriately covered claims.

Take an adjuster confronted with a claim involving New York law who needs to know if emotional injury qualifies as "bodily injury" for purposes of triggering a defense under a commercial general liability policy.

He or she turns to ChatGPT and is told as follows (as I was): “Under New York law, emotional injury is generally not considered bodily injury for purposes of a commercial general liability policy. Bodily injury, as defined by most commercial general liability policies, typically refers to physical injury, sickness, or disease that is caused by an accident or event. Emotional injury, on the other hand, generally refers to mental or emotional distress that is caused by non-physical events, such as emotional or psychological trauma.”

Armed with this answer, the adjuster denies a defense to the insured.

However, under New York law, emotional injury is, in fact, considered “bodily injury” for purposes of a standard commercial general liability policy. It may be a minority rule nationally, but, here, only New York matters. And the state’s Court of Appeals, in Lavanant v. Gen. Accident Ins. Co. of Am., 595 N.E.2d 819 (N.Y. 1992), ruled that it is.
 

Unlike the widget manufacturer, which ChatGPT is unlikely to cause to sell inferior products to its customers, here the artificial intelligence platform caused the insurer to do just that. And, of course, a variety of harmful consequences for an insured – and the insurer – can flow from the improper denial of a defense.

Insurers sometimes improperly deny a defense. And erroneous denials happen even when the insurer has used experienced counsel and done extensive research on the question – using law books (or what used to be books). So what happened here, through the use of ChatGPT, is not unusual.

But what about the consequences?

Courts almost always conclude that an insurer, despite getting the answer to a coverage question wrong, has not committed bad faith.  The various state standards for proving bad faith are generally high – the conduct had an intentional aspect; the issue was not fairly debatable; the denial was unreasonable (and others along these lines). Reaching an incorrect coverage determination is unlikely to satisfy this heavy burden. See Yamagata Enters. v. Gulf Ins. Co., 2008 U.S. Dist. LEXIS 138942 (D. Nev. Feb. 4, 2008) (citing American Excess Ins. Co. v. MGM Grand Hotels, 729 P.2d 1352 (Nev. 1986)) ("The mere fact that an insurer was incorrect in its coverage determination does not render it liable for bad faith if its position was reasonable.").

However, while getting a coverage determination incorrect is unlikely to lead to bad faith, the same cannot always be said about the manner in which an insurer reached the incorrect determination. Bad faith can occur “when an insurance company makes an inadequate investigation or fails to perform adequate legal research concerning a coverage issue.” Sypherd Enters. v. Auto-Owners Ins. Co., 420 F. Supp. 3d 372 (W.D. Pa. 2019) (citations and internal quotes omitted).

This was on display last year in Sec. Nat'l Ins. Co. v. Constr. Assocs. of Spokane, 2022 U.S. Dist. LEXIS 53533 (E.D. Wash. Mar. 24, 2022), where a Washington federal court held that an insurer was liable for bad faith for reaching an incorrect coverage determination. The adjuster was not aware of a Washington Supreme Court decision that would have dictated a different result.

In addressing the role of legal research in making a coverage determination, the court explained:

“True, adjustors are not attorneys in Washington and are presumably not trained in the same kinds of legal research techniques as lawyers. But that does not excuse an adjustor from having at least a baseline understanding of the relevant state's law necessary to carry out their duties. Instead, it means insurance companies must undertake what in practice are reasonably small steps to ensure adjustors are equipped to make reasonable coverage and defense determinations. Such steps could include teaching adjustors to run case searches or, more likely, supplying adjustors with subscriptions to relevant legal newsletters, a resource most attorneys rely on to keep apprised of legal developments. Regardless, ignorance of the applicable case law, even of a relatively new case law, does not excuse the conduct of adjustors who deny defense or indemnification. Doing otherwise would allow insurance carriers to intentionally stay ignorant and hide behind their ignorance when their claim denials are challenged. Adjustors must equip themselves or else seek out those with the requisite tools and knowledge.” Id. at *34.

An insurer that uses an artificial intelligence program to answer a legal coverage question, and then uses that answer to improperly deny coverage – especially with known flaws in the application’s accuracy – is unlikely to get the benefit of having acted reasonably for purposes of defeating a bad faith finding. To the contrary, such practice is likely to be tantamount to having done no research, as was the case in Constr. Assocs. of Spokane.

Herein lies the risk for insurers using ChatGPT to assist with answering coverage questions as part of claims handling.

No doubt artificial intelligence, like most technologies, will improve as time goes on. But for now, and probably for a long time to come, despite the praise being heaped on AI, insurers would be well-served to resist the temptation to use a coverage lawyer in a chat box.


 

 

 
