Insurance

Artificial intelligence on trial

Stephen Jones

Product Manager, Litigation Intelligence, Casualty and Financial Lines

David Loughran

Senior Director, Model and Data Product Management, Casualty and Financial Lines

In April 2025, the AI Futures Project published a provocative disaster scenario entitled AI 2027, in which the capabilities of AI agents evolve from unreliable personal assistants today to superintelligence in late 2027. To call this merely a disaster scenario is surely an understatement. By 2030, humanity as we know it, and the casualty industry along with it, is wiped out entirely. The scenario is intended as a wake-up call to policymakers who, according to the authors, can still avert disaster with strong oversight and international cooperation.

In that same timeframe, Arvind Narayanan and Sayash Kapoor, Princeton computer scientists and authors of the book AI Snake Oil, published a competing assessment of AI as 'normal technology' that poses risks, but not the risks we imagine come with unleashing superintelligence on humanity two years from today. In their view, AI could be transformative, but, like past technological revolutions, its economic and societal impacts will occur on the timescale of decades. While investment in AI is accelerating, its adoption in critical applications will be slow even if continuing to scale large language models can overcome the persistent problem of hallucination. This offers society ample opportunity to manage AI's catastrophic potential along with the mundane, but very real, risks that have already emerged: widespread addiction to social media applications, emotional dependence on misguided AI companions, entrenched bias, pollution of our physical and information ecosystems, the outsourcing of human creativity, and an erosion of social trust.

A worldview of AI as normal technology, of course, puts AI back in the domain of technologies with which casualty insurers are well acquainted: technologies that diffuse slowly through the economy, accompanied by the gradual emergence of information about their safety and the likelihood that legal systems will hold those who profit from them liable for any harm they cause. And, on that last point, while recent years have not delivered a sudden leap toward superintelligence, they have seen a number of important developments in how U.S. courts view AI liability, which is of critical importance to assessing the risk casualty insurers assume when underwriting AI applications going forward.

The first major AI products liability decision

In October 2024, Maria Garcia filed a federal lawsuit in the Middle District of Florida against Character Technologies and Google alleging their AI 'characters' (chatbots) are responsible for her son’s suicide (Garcia v. Character Technologies, Inc). More than 2,700 lawsuits have been filed alleging the design of social media and gaming platforms has harmed the mental health of individual users, but the Garcia case was the first to allege bodily injury from interaction with an AI chatbot. In a subsequent order, the court rejected the Garcia defendants’ motion to dismiss, allowing plaintiff’s allegations to proceed. Interestingly, the arguments raised by the defendants — along with the court’s analysis and resolution of these issues — bear a strong resemblance to arguments made in social media multidistrict litigation (MDL).

  • Is an AI chatbot a product or a service? The Garcia defendants argued that a products liability cause of action was inappropriate because the chatbot is a service and not a product. The court rejected this argument, citing a similar decision in the social media MDL and pointing to plaintiff’s allegations of design defects in defendants’ chatbot, such as the lack of age confirmation or reporting mechanisms. According to the court, plaintiff’s claims arose from these allegedly defective features of the chatbot product, and not from the ideas or expressions that embody the service aspect of the chatbot. 
  • Are an AI chatbot’s harmful messages protected speech? The Garcia defendants argued that the First Amendment bars plaintiff’s claims because the chatbot’s messages should be considered the protected speech of the users who generate the messages. Again, this is similar to social media litigation where defendants claim First Amendment protections for their users’ allegedly harmful content. As in those cases, the court here declined to dismiss the case on First Amendment grounds. 
  • Who in the supply chain can be held liable for harm caused by AI chatbots? The defendant Google argued that it should not be held liable because it was not the provider of the chatbot. The plaintiff alleged only that (a) the large language model (LLM) underlying the chatbot was built at Google and made publicly available, and (b) Google provided Character Technologies with access to Google Cloud's technical infrastructure to power the LLM. The court found these facts sufficient to allege that Google is liable as a "component part manufacturer" in a products liability claim, largely because the plaintiff alleged that the LLM itself, as opposed to other components, caused the app to be defective. This is one area where the Garcia court and plaintiff go further than in the social media cases filed to date. In those cases, only the social media companies themselves are being sued. In Garcia, by contrast, AI companies that develop LLMs might get roped into litigation involving downstream uses of that technology.

When does Section 230 offer immunity?

Section 230 of the Communications Decency Act immunizes computer service providers from liability as a speaker or a publisher of content provided by an information content provider. The applicability of Section 230 has been an important issue in litigation against social media companies where allegedly harmful content is posted on social media by third parties. However, AI defendants have generally not raised Section 230, likely because the AI companies themselves materially contribute to the content at issue in a way social media companies do not.

That said, as new and different kinds of AI applications face litigation, Section 230 will likely be raised at some point, so it's important that we follow how courts are applying the law. Looking at recent judicial activity that addresses Section 230, we see two important cases that help illuminate the outlines of when immunity might apply to AI.

  • When Section 230 applies. In Patterson v. Meta Platforms Inc., a New York state appeals court held that social media companies could not be held responsible for content posted on their platforms that allegedly led to a mass shooter targeting Black customers in a Buffalo grocery store in 2022. The plaintiffs pointed to alleged design defects that algorithmically recommended violent and racist content and caused the shooter to become addicted to the social media platforms. But the court was not persuaded that these were defects that would overcome Section 230 immunity, following other recent decisions holding that an interactive computer service does not lose Section 230 immunity because the company automates its editorial decision-making. According to the court, this activity — that of a traditional publisher — is precisely the kind of activity intended to be protected. 
  • When Section 230 does not apply. On the other side of Section 230, in State of North Carolina v. TikTok Inc., a North Carolina state court found that Section 230 doesn't shield TikTok from a lawsuit alleging that TikTok intentionally designed a product that addicts children, resulting in anxiety, depression, sleep deprivation, and an increased risk of self-harm. Here, the defendants' allegedly harmful conduct did not fall under the traditional role of a publisher, in contrast to the automated editorial decision-making in Patterson, and so the court refused to apply Section 230 immunity. The court found it important that the State of North Carolina was not seeking to hold the defendants liable for traditional editorial activities such as "monitoring, altering, or removing [] content, or for failing to do those things."

What does this mean for the applicability of Section 230 to AI liability? Looking to these two social media cases can be instructive. Where AI agents are delivering user-generated content without materially contributing to the content, or where the AI agent is otherwise acting in the traditional role of a publisher, courts might see Section 230 as providing some measure of immunity from liability. Where material portions of the content are being generated by the AI agent itself, Section 230 is unlikely to apply. And defendants may not even raise the issue in such situations, as has been the case in AI chatbot lawsuits thus far.


What’s next for AI liability?

The Garcia case surviving a motion to dismiss means we'll be seeing more cases that test the bounds of liability for AI-related technologies. Since the decision was issued, we've already seen a similar case against OpenAI for its ChatGPT product, along with three new chatbot bodily injury cases against Character Technologies. With the Texas Attorney General announcing an investigation into chatbots that target children and purport to provide mental health services, we're seeing attention on this issue from advocates, the press, plaintiffs' attorneys, and regulators, particularly to the extent AI services like these are targeting minors. Seen in that light, the AI liability cases filed thus far fit squarely within the wider addictive software litigation.

Amidst all this scrutiny of AI targeted to children, it may be surprising to see moves like the continuing wave of partnerships between AI developers and consumer-facing companies. One example is the partnership between OpenAI and toymaker Mattel to collaborate on a "series of products and experiences." Smaller start-ups are already hard at work on this, like Curio Interactive with its AI plushies Grok, Grem, and Gabbo.

It's early days and we don't know whether AI will be world-changing or merely 'transformative.' But if Narayanan and Kapoor are right and AI turns out to be normal technology, we can expect companies to adopt it in response to normal economic incentives. Companies will have an incentive to deploy AI cautiously in critical internal applications, with careful safeguards against security risks, but much less incentive to deploy it carefully in products and services available to the public. This is precisely the type of 'normal externality' emanating from 'normal technology' that tort law is intended to address.


LEARN MORE

Talk to us about emerging risks that mandate forward-looking analytics, like AI. Get evergreen litigation trends by using our litigation tracker. Let's connect.