The following is part of our annual publication Selected Issues for Boards of Directors in 2026. Explore all topics or download the PDF.


Overview of AI Copyright Litigation

In 2026, we can expect important developments in the legal landscape of generative AI and copyright. Dozens of copyright infringement lawsuits targeting the training and development of AI models—capable of generating text, images, video, music and more—are advancing toward dispositive rulings. The central issue remains whether training AI models using unlicensed copyrighted works is infringing or instead constitutes fair use under Section 107 of the U.S. Copyright Act. Courts consider four factors in determining whether a particular use is fair: (1) the purpose and character of the use, (2) the nature of the copyrighted work, (3) the amount and substantiality of the portion used and (4) the effect of the use upon the potential market for or value of the copyrighted work. The thrust of this inquiry is whether the use is transformative—serving a different purpose or function from the original work—or merely usurps the market for the original by reproducing its protected expression. As courts establish legal frameworks for AI training and protection of AI-generated outputs, companies and boards should closely monitor developments to fully understand the risks and opportunities of AI implementation.

AI adoption is now mainstream: 88% of businesses use AI in at least one function, with global spending expected to exceed $1.5 trillion in 2025 and approach $2 trillion in 2026. As organizations race to scale AI, many have relied upon traditional vendor risk management policies to vet third-party AI vendors and tools; however, implementation of third-party AI tools presents distinctive risks that require tailored due diligence, auditing, contracting and governance. Because businesses are accountable for outputs generated by third-party AI tools and for vendors’ processing of prompts and other business data, boards and management should ensure legal, IT and procurement teams apply a principled, risk-based approach to vendor management that addresses AI‑specific considerations.

As states continue to grapple with establishing regulatory frameworks for the most powerful artificial intelligence (“AI”) systems, New York has joined California in targeting frontier AI models with the Responsible AI Safety and Education Act (the “RAISE Act” or the “Act”).[1] Signed into law on December 19, 2025 by Governor Hochul, the Act creates a comprehensive regulatory framework for developers of the most advanced AI systems, marking New York’s entry into the vanguard of state AI safety regulation.

On December 11, 2025, President Donald Trump signed an executive order titled Establishing A National Policy Framework For Artificial Intelligence (the “Order”).[1] The Order’s policy objective is to “enhance the United States’ global AI dominance through a minimally burdensome national policy framework for AI”[2] and comes after Congress earlier this year considered, but did not advance, federal legislation that would have preempted state AI regulation. The Order justifies federal intervention on three grounds.

This article was authored by Daniel Ilan, Rahul Mukhi, Prudence Buckland, and Melissa Faragasso from Cleary Gottlieb, and Brian Lichter and Elijah Seymour from Stroz Friedberg, a LevelBlue company.

Recent disclosures by Anthropic and OpenAI highlight a pivotal shift in the cyber threat landscape: AI is no longer merely a tool that aids attackers; in some cases, it has become the attacker itself. Together, these incidents carry immediate implications for corporate governance, contracting and security programs as companies integrate AI with their business systems. Below, we explain how these attacks were orchestrated and what steps businesses should consider given the rising cyber risks associated with the adoption of AI.

On November 4, 2025, the UK High Court handed down judgment in Getty Images v. Stability AI,[1] a case closely watched for its significance to content creators and the AI industry and for “the balance to be struck between the two warring factions”.[2] Despite significant public interest in the lawsuit, the issues remaining before the court in the “diminished”[3] case were limited, Getty having abandoned its primary infringement claims during trial. The judgment dismisses Getty’s remaining claims of secondary copyright infringement. While some of Getty’s trademark infringement claims were upheld, Justice Joanna Smith DBE acknowledged that the findings were “extremely limited in scope”.[4]

On October 10, 2025, Law No. 132/2025 (the “Italian AI Law”) entered into force, making Italy the first EU Member State to introduce a dedicated and comprehensive national framework for artificial intelligence (“AI”). The law references the AI Act (Regulation (EU) 2024/1689) and grants the government broad powers to implement its principles and establish detailed operational rules. It also sets out the institutional structure responsible for overseeing AI in Italy, entrusting specific authorities with the promotion, coordination and supervision of this strategically important sector.

On September 29, 2025, Governor Gavin Newsom signed the Transparency in Frontier Artificial Intelligence Act (TFAIA, SB 53 or the Act),[1] establishing a comprehensive framework for transparency, safety and accountability in the development and deployment of the most advanced artificial intelligence models. Building upon existing California laws targeting AI, such as AB 2013,[2] the Act, which takes effect January 1, 2026 and imposes penalties of up to $1 million per violation, creates immediate compliance obligations for developers of the most powerful frontier models.