Randy Maniloff
White and Williams LLP
November 29, 2025

A husband and wife in California were sued for serious bodily injury caused by the husband’s felony assault of a neighbor after the two had a dispute over a parking spot. The couple’s homeowner’s insurer was tasked with determining whether a defense was owed to the insured spouses. The policy contained an exclusion for “bodily injury caused by the criminal act of any insured.”

Nobody disputed that liability coverage was not owed to the husband on account of his criminal act.  But what about the claim against the wife?  She did not commit a criminal act.  Rather, she was allegedly negligent for failing to prevent the attack.  She allegedly knew that her husband’s anger over the parking situation was boiling over and he was ready to blow. 

This is a hypothetical. I could have written countless different ones, with a wide variety of facts and policy exclusions, that all would have raised the same coverage issue under both homeowner’s liability and commercial general liability policies. Does an exclusion that precludes coverage for the wrongful acts of one insured (here, “any insured”) also eliminate coverage for the negligent acts of another insured (a so-called innocent insured) who did not in fact commit the wrongful act? In other words, does the excluded wrongful conduct of one insured result in the denial of coverage for all insureds?

An answer must be filed for the wife.  The clock is ticking.  It’s a straightforward issue. And since the allegations in the complaint and language of the exclusion couldn’t be clearer, there’s no need to seek a coverage opinion – either from an internal company resource or outside counsel.  So the claims professional is prepared to issue a disclaimer letter to both the husband and wife.         

But, to be safe, he or she turns to ChatGPT and asks whether the criminal act of one spouse will result in the denial of coverage, under a liability policy with an exclusion applying to the criminal act of any insured, for a spouse who did not commit a criminal act. [This is what I did in preparing this piece, using the paid version of ChatGPT.]

The generative artificial intelligence tool confirmed that a defense was not owed to the wife. It had this to say on the matter [set out verbatim]:

MOST LIKELY NO — if the policy’s criminal-acts exclusion applies to “any insured,” a California court will almost certainly find that no duty to defend is owed to the innocent spouse when the other spouse commits the criminal act.

California treats “any insured” language as unambiguously joint, meaning the wrongful conduct of one insured eliminates coverage for all insureds.

Based on this, a disclaimer letter goes out to the wife.  Unable to afford a defense, a default judgment is taken against her.  She enters into a $2,000,000 settlement with the plaintiff [the policy limit is $200,000], assigns her policy rights and secures an agreement not to execute on her assets.  The plaintiff sues the insurer to collect on the full settlement and for bad faith. 

It turns out that ChatGPT had it wrong. Very wrong. First on the list, to support its answer, the chatbot cited the 2007 California Court of Appeal decision in Century-National Insurance Company v. Garcia – providing a citation that does not exist – which supposedly held that a policy exclusion barring coverage for the intentional acts of “any insured” precluded coverage for an innocent spouse. Garcia involved coverage under a state standard fire insurance policy and the decision turned on statutory issues. That ChatGPT turned to this particular decision to lead off its analysis is just another shortcoming.

You see, in 2011, the Supreme Court of California reversed the Court of Appeal decision in Century-National Insurance Company v. Garcia and reached the exact opposite conclusion. ChatGPT made no mention of this monumental fact.

Back to the wife who was denied coverage: it turns out that her policy contained a separation of insureds clause (also called a severability of interests clause) – as many liability policies do – which provides that, with a few exceptions, the policy applies separately to each insured. Where such a clause is present, some courts have held that, despite a policy containing an exclusion for the acts of “an insured” or “any insured,” the conduct of one insured does not preclude coverage for all insureds. To apply the exclusion to all insureds, these courts reason, would be inconsistent with a separation of insureds clause.

California is one of these states.  In 2010, the state’s top court in Minkler v. Safeco Ins. Co. of America held: “We now conclude that an exclusion of coverage for the intentional acts of ‘an insured,’ read in conjunction with a severability or ‘separate insurance’ clause like the one at issue here, creates an ambiguity which must be construed in favor of coverage that a lay policyholder would reasonably expect. …We thus determine that Betty was not precluded from coverage for any personal role she played in David’s molestation of Scott merely because David’s conduct fell within the exclusion for intentional acts.”  Minkler, 49 Cal. 4th 315, 319 (2010).

The wife’s policy contained a separation of insureds clause.  So, under Minkler, she was in fact owed a defense, despite her husband committing a criminal act and her policy containing an exclusion for “bodily injury caused by the criminal act of any insured.”         

To its credit, ChatGPT knew about Minkler and the ability of a separation of insureds clause to alter the usual application of an exclusion to an innocent insured. However, ChatGPT’s discussion of Minkler was wrong. Breathtakingly so.

It described Minkler like this:

This case is often misunderstood, so here’s the precise rule:
California Supreme Court held that a severability of insureds clause may cause an exclusion referring to “an insured” to be interpreted as applying only to the insured who committed the act.
BUT the Court made an explicit distinction:
If the exclusion applies to “any insured,” severability does NOT restore coverage.
   
Thus:
“an insured” → severability clause may protect the innocent spouse
“any insured” → exclusion wipes out coverage for everyone (no duty to defend)
Your hypothetical uses “any insured.” Under California Supreme Court precedent, that is fatal.

To be clear, absolutely nothing in Minkler draws a distinction between the application of a separation of insureds clause when an exclusion applies to “an insured” versus “any insured.”  Either way, the separation of insureds clause should have trumped the exclusion for “criminal acts of any insured” and resulted in a defense being provided to the wife.

This reading of Minkler was as much a hallucination by ChatGPT as it would be for the chatbot to tell you that I beat Steph Curry in a 3-point shooting contest.

Before signing off, ChatGPT asked if I wanted a “full legal memo with case excerpts.”  As curious as I was to see what a mess that would be, I declined the offer.

As an aside, this should have been a lay-up for ChatGPT. The policy language at issue was clear – and common in many policies. It was not a fact-intensive issue where the duty to defend can turn on certain nuanced allegations in a lengthy complaint. And there is state high court precedent that dictates the answer. It is hard to imagine what could have been an easier assignment. But, despite all that, the AI tool got it wrong six ways from Sunday.

Was The Insurer’s Denial Of A Defense Bad Faith?

On account of the separation of insureds clause and Minkler, it will be established in the coverage litigation that the denial of coverage to the wife was wrong.  But was it in bad faith? 

The initial answer to that question is seemingly no. Courts almost always conclude that an insurer, despite getting the answer to a coverage question wrong, has not committed bad faith. The various state standards for proving bad faith are generally demanding: the conduct had an intentional aspect; the issue was not fairly debatable; the denial was unreasonable; and others along these lines. Courts frequently conclude that an insurer reaching an incorrect coverage determination is unlikely to satisfy this heavy burden. See American Excess Ins. Co. v. MGM Grand Hotels, 729 P.2d 1352 (Nev. 1986) (“The mere fact that an insurer was incorrect in its coverage determination does not render it liable for bad faith if its position was reasonable.”).
 
However, while getting a coverage determination incorrect is unlikely to lead to bad faith, the same cannot always be said about the manner in which an insurer arrived at the incorrect determination. Bad faith can occur “when an insurance company makes an inadequate investigation or fails to perform adequate legal research concerning a coverage issue.” Sypherd Enters. v. Auto-Owners Ins. Co., 420 F. Supp. 3d 372 (W.D. Pa. 2019) (citations and internal quotes omitted); see also Lozier v. Auto-Owners Ins. Co., 1992 U.S. App. LEXIS 749 (9th Cir. Jan. 24, 1992) (“The relevant law was sufficiently complex to require at least some legal research and factual investigation by Auto Owners. An incomplete pre-denial investigation of an insured’s claim can expose the insurance company to liability for bad faith.”).

This was on display not long ago in Sec. Nat’l Ins. Co. v. Constr. Assocs. of Spokane, 2022 U.S. Dist. LEXIS 53533 (E.D. Wash. Mar. 24, 2022), where a Washington federal court held that an insurer was liable for bad faith for reaching an incorrect coverage determination. The adjuster was not aware of a Washington Supreme Court decision that would have dictated a different result.  [Just like ChatGPT’s unawareness of the correct interpretation of a Supreme Court of California decision.] 

In explaining the role of legal research in making a coverage determination, the Constr. Assocs. of Spokane court stated:

“True, adjustors are not attorneys in Washington and are presumably not trained in the same kinds of legal research techniques as lawyers. But that does not excuse an adjustor from having at least a baseline understanding of the relevant state’s law necessary to carry out their duties. Instead, it means insurance companies must undertake what in practice are reasonably small steps to ensure adjustors are equipped to make reasonable coverage and defense determinations. Such steps could include teaching adjustors to run case searches or, more likely, supplying adjustors with subscriptions to relevant legal newsletters, a resource most attorneys rely on to keep apprised of legal developments. Regardless, ignorance of the applicable case law, even of relatively new case law, does not excuse the conduct of adjustors who deny defense or indemnification. Doing otherwise would allow insurance carriers to intentionally stay ignorant and hide behind their ignorance when their claim denials are challenged. Adjustors must equip themselves or else seek out those with the requisite tools and knowledge.” Id. at *34.

In my view, an insurer that poses a legal coverage question to an artificial intelligence program, and then uses the answer to improperly deny coverage – especially given all of the known flaws in the technology’s accuracy – is unlikely to get the benefit of having acted reasonably for purposes of defeating a bad faith finding. To the contrary, such a practice is likely to be tantamount to having done no research, as was the case in Constr. Assocs. of Spokane.

There is support for this conclusion in the Mt. Everest of recent decisions harshly criticizing, and often sanctioning, lawyers who used artificial intelligence to prepare a brief but failed to discover that it contained hallucinations of non-existent case law or quotes. Insurers are surely aware of these decisions and the shortcomings that AI has as a lawyer. This would no doubt influence a court addressing whether an insurer’s use of AI to make a coverage determination was tantamount to a claim investigation (or lack thereof) undertaken in bad faith.

For sure, “bad faith” conduct in litigation is not the same concept as “bad faith” claims handling, but courts frequently use that term in describing lawyers’ use of AI in drafting a brief without identifying its errors before filing. See Gurpreet Kaur v. Desso, 2025 U.S. Dist. LEXIS 129902 (N.D.N.Y. July 9, 2025) (“The Court finds that Mr. Desmarais’s conduct was taken in subjective bad faith, and therefore, is worthy of sua sponte sanctions. Mr. Desmarais was aware that artificial intelligence tools are known to ‘hallucinate’ or fabricate legal citations and quotations. Nevertheless, he made no attempt to check whether the assertions and quotations generated using artificial intelligence were accurate.”).

A common theme of the countless decisions addressing lawyers’ flawed use of artificial intelligence is that the lawyers knew of the technology’s shortcomings – yet took no action to avoid suffering the same fate as so many before them. A court addressing whether an insurer’s use of AI to make an incorrect coverage determination was in bad faith is likely to hear the same song.

The court in Mid Cent. Operating Eng’rs Health v. Hoosiervac LLC, 2025 U.S. Dist. LEXIS 31073 (S.D. Ind. Feb. 21, 2025) described the current AI situation accurately:

“[M]uch like a chain saw or other useful by [sic] potentially dangerous tools one must understand the tools they are using and use those tools with caution. It should go without saying that any use of artificial intelligence must be consistent with counsel’s ethical and professional obligations.”  Id. at *8.

As the Hoosiervac court rightly added: “[T]he use of artificial intelligence must be accompanied by the application of actual intelligence in its execution.” Id.

I have not yet come to a landing on whether there is, or will be, a place for the properly-verified use by insurers of artificial intelligence in making critical coverage determinations (or the extent of such use). ChatGPT’s extraordinary fumbling of my hypothetical – especially one that should have been a hot knife through butter – speaks volumes.

If my hypo here was a 2 on a 10-point difficulty scale, imagine ChatGPT being asked if there is a duty to defend a complex 50-page fact-intensive complaint, with several causes of action, under three 150-page policies and several extensive bodies of coverage case law addressing the various relevant issues. In other words, imagine ChatGPT being asked to do what we all do every day.

But it’s very early in the AI game.  The technology will of course improve – and no doubt greatly.  ChatGPT doesn’t know everything.  Unlike my brother-in-law.     
