Insurance

Artificial intelligence on trial

Authors: David Loughran, Senior Director - Product Management, Moody's; Stephen Jones, Associate Director - Product Management, Moody's

 

In April 2025, the AI Futures Project published a provocative disaster scenario entitled AI 2027, in which AI agents evolve from unreliable personal assistants in 2025 to superintelligence by late 2027.

To call AI 2027 a disaster scenario is surely an understatement: extrapolating further, by 2030 humanity as we know it, and the casualty insurance industry along with it, would be wiped out entirely. The AI 2027 scenario is deliberately intended as a wake-up call to policymakers who, according to the authors, can still avert disaster with strong oversight and international cooperation.

In that same timeframe, but on the other side of the AI coin, Princeton University computer scientists Arvind Narayanan and Sayash Kapoor, authors of the book AI Snake Oil, published a competing assessment of AI as 'normal technology' that poses risks but not the risks we imagine would come with unleashing superintelligence on humanity two years from now.

In their view, AI could be transformative, but, like past technological revolutions, the economic and societal impacts would occur over decades.

While investment in AI is accelerating, its adoption in critical applications will be slow, even if the continued scaling of large language models (LLMs) can overcome the persistent problem of hallucination.

With more time available, therefore, society has ample opportunity to manage AI’s catastrophic potential along with the very real risks that have already emerged: widespread addiction to social media applications, emotional dependence on misguided AI companions, entrenched bias, pollution of our physical and information ecosystem, the outsourcing of human creativity, and an erosion of social trust.

A worldview of AI as normal technology, of course, puts AI back in the domain of technologies with which casualty insurers are well acquainted: technologies that diffuse slowly through the economy, about whose safety information emerges gradually, and for which legal systems will likely hold those who profit liable for any harm they cause.

To that last point, while in recent years AI has not delivered a sudden leap toward superintelligence, it has seen several important developments that have shaped how U.S. courts view AI liability — a matter of critical importance when assessing the risk casualty insurers assume in underwriting AI applications.

 

The first major AI products liability decision

In October 2024, Megan Garcia filed a federal lawsuit in the Middle District of Florida against Character Technologies and Google, alleging their AI 'characters' (chatbots) are responsible for her son’s suicide (Garcia v. Character Technologies, Inc).

More than 2,700 lawsuits have been filed alleging that the design of social media and gaming platforms has harmed the mental health of individual users, but the Garcia case was the first to allege bodily injury from interaction with an AI chatbot.

In a subsequent order, the court rejected the Garcia defendants’ motion to dismiss, allowing the plaintiff’s allegations to proceed. Interestingly, the arguments raised by defendants — along with the court’s analysis and resolution of these issues — bear a strong resemblance to arguments made in Social Media Multidistrict Litigation (MDL).

- Is an AI chatbot a product or a service? The Garcia defendants argued that a product liability cause of action was inappropriate because the chatbot is a service and not a product.

The court rejected this argument, citing a similar decision in the Social Media MDL, pointing to the plaintiff’s allegations of design defects in the defendants’ chatbot, such as a lack of age verification or reporting mechanisms. According to the court, plaintiff’s claims arose from these allegedly defective features of the chatbot product, and not from the ideas or expressions that embody the service aspect of the chatbot.

- Are an AI chatbot’s harmful messages protected speech? The Garcia defendants argued that the U.S. Constitution's First Amendment bars the plaintiff’s claims because the chatbot’s messages should be considered the protected speech of the users who generate the messages.

Again, this is similar to social media litigation, where defendants claim First Amendment protections for their users’ allegedly harmful content. As in those cases, the court here declined to dismiss the case on First Amendment grounds.

- Who in the supply chain can be held liable for harm caused by AI chatbots? The defendant, Google, argued that it should not be held liable because it was not the chatbot provider.

The plaintiff only alleged that:

(a) The large language model (LLM) underlying the chatbot was built at Google and made publicly available, and

(b) Google provided Character Technologies with access to Google Cloud’s technical infrastructure to power the LLM.

The court found that these facts are sufficient to allege that Google is liable as a '... component part manufacturer' in a products liability claim, largely because the plaintiff alleged that the LLM itself caused the app to be defective, as opposed to other components.

This is one area where the Garcia court and plaintiff go further than in the social media cases filed to date. In those cases, only the social media companies themselves are being sued. In Garcia, by contrast, AI companies that develop LLMs may be brought into litigation involving downstream uses of that technology.

 

When does Section 230 offer immunity?

Section 230 of the U.S. Communications Decency Act immunizes computer service providers from liability as a speaker or a publisher of content provided by an information content provider.

The applicability of Section 230 has been important in litigation against social media companies, where allegedly harmful content is shared on social media networks via third parties. However, AI defendants have generally not raised Section 230, likely because AI companies themselves materially contribute to potentially contested content where social media companies do not.

That said, as new and different kinds of AI applications face litigation, Section 230 will likely be raised at some point, so we must follow how courts are applying the law. Looking at recent judicial activity addressing Section 230, we see two important cases that help illuminate when immunity might apply to AI:

When Section 230 applies: In Patterson v. Meta Platforms, Inc., a New York State appeals court held that social media companies could not be held responsible for content posted on their platforms that allegedly led to a mass shooter targeting Black customers in a Buffalo, N.Y., grocery store in 2022.

The plaintiffs pointed to alleged design defects that algorithmically recommended violent and racist content and caused the shooter to become addicted to the social media platforms. But the court was not persuaded that these were defects that would overcome Section 230 immunity, following other recent decisions holding that an interactive computer service does not lose Section 230 immunity because the company automates its editorial decision-making. According to the court, this activity — that of a traditional publisher — is precisely the kind of activity intended to be protected.

When Section 230 does not apply: On the other side of Section 230, in State of North Carolina v. TikTok Inc., a North Carolina state court found that Section 230 doesn’t shield TikTok from a lawsuit alleging that TikTok intentionally designed a product that addicts children, resulting in anxiety, depression, sleep deprivation, and an increased risk of self-harm.

Here, the defendants’ allegedly harmful conduct did not fall within the traditional publisher role, in contrast to the automated editorial decision-making in the Patterson case, so the court refused to apply Section 230 immunity. The court found it important that the State of North Carolina was not seeking to hold the defendants liable for traditional editorial activities such as "monitoring, altering, or removing [] content, or for failing to do those things."

What does this mean for the applicability of Section 230 to AI liability? Looking at these two social media cases can be instructive. Where AI agents are delivering user-generated content without materially contributing to the content, or where the AI agent is otherwise acting in the traditional role of a publisher, courts might see Section 230 as providing some measure of immunity from liability.

Where an AI agent generates material portions of content, Section 230 is unlikely to apply, and defendants may not even raise the issue in such situations, as has been the case in AI chatbot lawsuits thus far.

 

What’s next for AI liability?

With the Garcia case surviving a motion to dismiss, we’ll see more cases that test the bounds of liability for AI-related technologies. Since the decision was filed, we’ve already seen a similar case against OpenAI for its ChatGPT product, along with three new chatbot bodily injury cases against Character Technologies.

With the Texas Attorney General announcing an investigation into chatbots that target children and purport to provide mental health services, we are seeing attention on this issue from advocates, the press, plaintiffs’ attorneys, and regulators, particularly to the extent AI services like these are targeting minors. In that context, the AI liability cases we’ve seen thus far fit into the wider context of addictive software litigation.

Amid all this scrutiny of AI products targeting children, it may be surprising to see initiatives such as the continuing wave of partnerships between AI developers and consumer-facing companies.

One example is the partnership between OpenAI and toy-maker Mattel to collaborate on '... a series of products and experiences.' Smaller start-ups, such as toy-maker Curio Interactive with its AI plushies Grok, Grem, and Lingo, are already hard at work on this.

It’s early days, and we don’t know whether AI will be world-changing or merely 'transformative.' But if Narayanan and Kapoor are right and AI turns out to be normal technology, we can expect companies to adopt it in response to normal economic incentives.

Companies will have an incentive to deploy AI slowly internally, with careful safeguards to protect against security risks in critical internal applications, but much less incentive to deploy it carefully in products and services available to the public. This is precisely the type of 'normal externality' emanating from 'normal technology' that tort law is intended to address.

 

Discover more about CoMeta®, emerging risk intelligence that tracks over 300 emerging risks mapped to companies, industries, and policy portfolios, allowing for earlier underwriting adjustments, exposure monitoring, and reserving actions.

