As generative artificial intelligence (“GenAI”) continues to be rapidly adopted by individuals and organizations across a wide range of industries, courts are seeing a corresponding surge in legal disputes arising from its development and use. These cases span a wide range of issues, primarily focusing on: (i) the nature and sourcing of input data used in AI model training and whether such use constitutes fair use; and (ii) the alleged harms created by the deployment of GenAI products and services, including questions of design, built-in safeguards, and the heightened risks posed to vulnerable groups like minors. While GenAI companies most commonly find themselves as defendants in such claims, they have also assumed the role of plaintiffs by bringing constitutional challenges against newly enacted laws imposing obligations on organizations developing and deploying GenAI products and services. This client alert provides a snapshot of the arguments advanced in product liability cases against GenAI companies in the United States and separately examines the constitutional challenges brought by these companies, followed by our insights on emerging trends. The alert does not address cases involving intellectual property implications.
Product liability litigation
Wrongful death claims
The majority of product liability cases against GenAI companies (sometimes also naming their investors and employees as co-defendants) arise from wrongful death claims in which a user of a GenAI product or service has died. In these cases, plaintiffs (typically the estates of deceased individuals) allege that GenAI companies are liable on the basis that their AI models incorporate inadequate safety standards and defective designs that contributed to the individual’s death. Specific allegations include failures to warn users of known dangers and the use of design features that manipulate users, e.g., by maximizing engagement and fostering emotional dependency. Alongside strict liability claims, plaintiffs commonly assert negligence, arguing that GenAI companies failed to exercise reasonable care in designing their systems to prevent foreseeable harm and failed to provide adequate warnings to users. A notable and emerging development is the expansion of wrongful death claims to scenarios in which plaintiffs allege that a GenAI product induced a user to harm a third party—that is, an individual who was not themselves a user of the platform.
Nonconsensual Intimate Images of Individuals (“NCII”) and deepfakes
An increasing number of product liability claims involve allegations that GenAI products generate NCII materials, including depictions of minors, and harmful deepfakes. Plaintiffs in these cases allege that the design and deployment of GenAI products are defective, arguing that companies have failed to implement adequate safety guardrails (e.g., training-data filtering, red-teaming, and image classifiers) to prevent the large-scale creation and dissemination of NCII and deepfake materials. Notably, this area is also attracting significant regulatory and enforcement attention. At the federal level, the Take It Down Act prohibits the publication of NCII, whether authentic or computer-generated, and requires covered platforms to remove such content within 48 hours of receiving a valid removal request from an affected individual. At the state level, a bipartisan coalition of attorneys general has raised concerns regarding whether GenAI companies are taking adequate steps to prevent the generation of such material.
Litigation involving constitutional challenges
As state legislatures continue to enact AI laws imposing disclosure obligations on GenAI companies, an emerging body of constitutional challenges to such regulations is taking shape. Notably, California’s Generative AI Training Data Transparency Act, which requires GenAI developers to publish detailed summaries of their training datasets, has been challenged on constitutional grounds. The plaintiffs argue that the Act: (i) compels speech by requiring public disclosure from GenAI companies in violation of the First Amendment; and (ii) violates the Fifth Amendment by forcing public disclosure of training datasets that may qualify as trade secrets, amounting to an unconstitutional taking without just compensation. The federal court recently denied the plaintiffs’ motion for a preliminary injunction, upholding the Act at this stage. The court found that compelled disclosure of “purely factual and non-controversial information” is permissible where reasonably related to a substantial government interest. In doing so, the court also noted that the required disclosures likely constitute commercial speech, which is subject to intermediate scrutiny (under which a law is upheld if it directly advances a substantial governmental interest and is not more extensive than necessary) rather than the more demanding strict scrutiny standard (under which a law is upheld only if it is narrowly tailored to advance a compelling governmental interest and is the least restrictive means of serving that interest). The US District Court for the Southern District of New York took a similar intermediate scrutiny approach in dismissing a First Amendment challenge to New York’s Algorithmic Pricing Disclosure Act, which requires businesses using personalized algorithmic pricing to provide consumers with a clear and conspicuous disclosure.
The two decisions together signal an emerging judicial consensus that AI-related disclosure obligations are likely analyzed under the commercial speech standard.
Beyond disclosure-related constitutional challenges, a GenAI company is separately challenging its designation by the government as a supply chain risk to national security, asserting First Amendment violations on the basis that the designation penalizes the company for its views, as well as Fifth Amendment due process violations. This case represents a novel intersection of national security authority and constitutional protections for GenAI companies.
Emerging trends
Product liability claims
While wrongful death claims remain the primary basis for product liability litigation against GenAI companies, the categories of alleged harm are expanding, most recently to NCII and deepfake-related claims. Notably, in one of the leading wrongful death cases, the parties have notified the court that they have reached a settlement in principle. Although the terms remain undisclosed, this development reflects the parties’ pursuit of private resolution.
Legislative developments at state level
State legislatures have also been active in enacting laws targeting GenAI companies. Recent examples include California’s S.B. 243, Washington’s H.B. 2225, and Oregon’s S.B. 1546. These laws primarily require AI chatbot operators to: (i) clearly disclose to users that they are interacting with an AI rather than a human; (ii) implement safeguards to prevent outputs that could induce suicidal thoughts or feelings; and (iii) apply heightened protocols where the operator suspects the user is a minor, including restrictions on interactions that simulate emotional dependence or romantic interest. Importantly, several of these laws also create a private right of action (e.g., Washington’s H.B. 2225, Oregon’s S.B. 1546 and California’s A.B. 621, in relation to victims of deepfake pornography), providing plaintiffs with new legal grounds for claims against GenAI companies. Finally, there is also a growing legislative trend, pioneered by California’s A.B. 316, explicitly preventing companies’ reliance on an “autonomous AI” defense to liability for resulting harms.
Federal deregulation efforts
At the federal level, the regulatory landscape is moving in a markedly different direction. The Trump Administration has signaled a deliberate deregulatory posture toward AI, most prominently through its AI Action Plan and the establishment of an AI Litigation Task Force within the Department of Justice, tasked with challenging what the Administration characterizes as “onerous” state AI laws. These efforts, however, should not be read as a push for a complete regulation-free environment. As clarified most recently in the White House’s National AI Legislative Framework, the Administration’s preferred approach is for Congress to establish a “minimally burdensome national standard” that would preempt state AI laws imposing undue burdens on AI innovation, effectively favoring a unified but light-touch federal framework over a fragmented patchwork of state regulations. Notably, the Framework also encourages Congress to pursue legislation that, among other things, would: (i) strengthen parental controls over children’s privacy settings, screen time, and related safeguards; (ii) prevent the government from coercing AI providers into banning, compelling, or altering content based on partisan or ideological agendas; and (iii) explore a collective negotiation framework enabling intellectual property rights holders to seek compensation from AI providers for the commercial use of their IP-protected materials. Consistent with this direction, the bipartisan AI Foundation Model Transparency Act (H.R. 8094) was introduced on March 26, 2026. The Act aims to establish transparency requirements governing how AI foundation models are built, trained, and deployed, and would direct the FTC to set standards for the information that high-impact foundation models must disclose to the FTC and the public.
As GenAI litigation and legislation continue to evolve at a rapid pace, companies developing and deploying GenAI products and services face an increasingly complex legal landscape at both the state and federal levels. Organizations should therefore closely monitor these developments and proactively assess their litigation exposure and regulatory compliance obligations. In the absence of Congressional action, and given that existing state AI laws are already in place and largely enforceable, the most prudent approach is to continue complying with those laws until there is greater clarity.
White & Case means the international legal practice comprising White & Case LLP, a New York State registered limited liability partnership, White & Case LLP, a limited liability partnership incorporated under English law and all other affiliated partnerships, companies and entities.
This article is prepared for the general information of interested persons. It is not, and does not attempt to be, comprehensive in nature. Due to the general nature of its content, it should not be regarded as legal advice.
© 2026 White & Case LLP