Should Artificial Intelligence Supply Plain Meaning? The 11th Circuit Wants to Know

Insurance coverage lawsuits often hinge on the plain and ordinary meaning of specific words or phrases. But not every word in an insurance policy can be defined, and without stable and predictable definitions, neither policyholders nor insurers can establish a clear and consistent scope of coverage. In a recent concurring opinion, Eleventh Circuit Judge Kevin Newsom suggests that artificial intelligence (AI) large language models (LLMs) could help resolve these definitional debates. His opinion in Snell v. United Specialty Insurance Company, No. 22-12581, 2024 WL 2717700 (11th Cir. May 28, 2024), highlights the pros and cons of calling upon technology to supply plain meaning.

This approach may even offer promise for a fundamental issue plaguing the insurability of AI risk, which we discussed last month: how to define AI itself to ensure a functional and predictable scope of coverage.

LLMs as a Tool in the Interpretive Toolkit

In Snell, an insured sought coverage under a Commercial General Liability policy in connection with a lawsuit brought after a child sustained injuries while using an in-ground trampoline. The insurer denied coverage and refused to defend the lawsuit, which alleged that Snell, a landscaper, negligently installed the trampoline in a client's backyard. The district court found that coverage would turn on whether installation of the trampoline amounted to "landscaping," as that term was used in the policy. But the policy did not define "landscaping." The court therefore turned to the common, everyday meaning of the term, which it found not to include trampoline installation.

The Eleventh Circuit ultimately affirmed the district court's decision on grounds specific to Alabama law, unrelated to the meaning of "landscaping." Of particular note, however, Judge Newsom suggested in a concurring opinion that LLMs like OpenAI's ChatGPT, Google's Gemini and Anthropic's Claude could help discern the ordinary meaning of undefined words in legal instruments, including insurance policies.

Judge Newsom identified several benefits to using LLMs for this purpose. LLMs train on vast amounts of ordinary-language data, far more than any dictionary draws on, making them particularly adept at determining common usage. They understand context, which helps them detect nuanced language patterns. LLMs are also increasingly accessible, making them an inexpensive research tool for judges, lawyers and ordinary citizens. In addition, using LLMs to facilitate ordinary-meaning interpretation enhances the transparency of the interpretive process, because queries and outputs can be fully disclosed to the public.

Judge Newsom also pointed out potential drawbacks. LLMs can "hallucinate," generating incorrect information. Because they train largely on online text, they might not fully capture offline speech, potentially missing how underrepresented groups use language. There is also the risk that lawyers, judges or litigants may try to manipulate LLMs to favor their preferred interpretation. And over-reliance on LLMs could lead to dystopian "robo judges."

Despite these concerns, Judge Newsom suggests that AI could be another tool in the judiciary's toolbox, alongside dictionaries and semantic canons.

Can LLMs Define AI?

As discussed previously, a key obstacle to insuring AI risks is the general inability to define AI clearly and uniformly. Following Judge Newsom's suggestion, we asked ChatGPT to provide a stable and predictable definition of AI for use in insurance policies, to answer the question posed in our last article: how should insurance policies define AI to ensure a functional scope of coverage? Here's ChatGPT's response:

Artificial Intelligence (AI) refers to the capability of a machine to perform tasks that typically require human intelligence, such as learning, reasoning, problem-solving, perception, and natural language understanding. AI systems include technologies like machine learning algorithms, neural networks, and natural language processing, which enable these machines to analyze data, recognize patterns, and make decisions or predictions based on that analysis.

ChatGPT even explained why its definition is a good one. According to ChatGPT, the definition is comprehensive, covering current and foreseeable AI technologies; clear and specific, identifying the key components of AI systems; and technologically neutral, making it adaptable to future advancements. By focusing on AI's functional aspects, it aligns with practical insurance concerns. ChatGPT further claimed that the definition provides a clear framework that reduces disputes, aligns with existing legal frameworks, and minimizes differing interpretations, leading to more straightforward claims processing and fewer litigation risks. Whether this AI-generated definition, or some iteration of it, might prove functional for insurance purposes remains to be seen.
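
For readers who want to experiment with this exercise themselves, a query like ours can be reproduced programmatically. Below is a minimal sketch, assuming the OpenAI Python SDK and an OPENAI_API_KEY environment variable; the model name and prompt wording are illustrative, not necessarily those used for this article. Because outputs vary across runs and model versions, disclosing the exact query, model and date, as Judge Newsom's transparency point contemplates, matters for reproducibility.

```python
# Minimal sketch: querying an LLM for a policy-ready definition of AI.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY
# environment variable. Model choice and prompt text are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # hypothetical choice; any chat-capable model works
    messages=[
        {
            "role": "user",
            "content": (
                "Provide a stable and predictable definition of artificial "
                "intelligence suitable for use in insurance policies."
            ),
        }
    ],
    temperature=0,  # lower temperature favors more reproducible output
)

# Print the model's answer so the full query and output can be disclosed.
print(response.choices[0].message.content)
```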

Conclusion

As policyholders and insurers work through the age-old task of supplying meaning to undefined terms, or of defining new risks like those posed by AI, they might find it useful to follow Judge Newsom's recommendation and use AI among the other tools in their toolkits to resolve definitional debates. For now, however, while landscapers and acrobats can rest assured that trampolines are not landscaping (at least in the 11th Circuit), the more vexing insurance-related AI question remains: what is AI?
