Judge rules AI company Anthropic didn’t break copyright law but must face trial over pirated books

In a test case for the artificial intelligence industry, a federal judge has ruled that AI company Anthropic didn’t break the law by training its chatbot Claude on millions of copyrighted books.

But the company is still on the hook and could now go to trial over how it acquired those books by downloading them from online “shadow libraries” of pirated copies.

U.S. District Judge William Alsup of San Francisco said in a ruling filed late Monday that the AI system’s distilling from thousands of written works to be able to produce its own passages of text qualified as “fair use” under U.S. copyright law because it was “quintessentially transformative.”

“Like any reader aspiring to be a writer, Anthropic’s (AI large language models) trained upon works not to race ahead and replicate or supplant them – but to turn a hard corner and create something different,” Alsup wrote.

But while dismissing the key copyright infringement claim made by the group of authors who sued the company last year, Alsup also said Anthropic must still go to trial over its alleged theft of their works.

“Anthropic had no entitlement to use pirated copies for its central library,” Alsup wrote.

A trio of writers – Andrea Bartz, Charles Graeber and Kirk Wallace Johnson – alleged in their lawsuit last summer that Anthropic committed “large-scale theft” by allegedly training its popular chatbot Claude on pirated copies of copyrighted books, and that the company “seeks to profit from strip-mining the human expression and ingenuity behind each one of those works.”

As the case proceeded over the past year in San Francisco’s federal court, documents disclosed in court showed Anthropic’s internal concerns about the legality of its use of online repositories of pirated works. The company later shifted its approach and attempted to purchase copies of digitized books.

“That Anthropic later bought a copy of a book it earlier stole off the internet will not absolve it of liability for the theft but it may affect the extent of statutory damages,” Alsup wrote.

The ruling could set a precedent for similar lawsuits that have piled up against Anthropic competitor OpenAI, maker of ChatGPT, as well as against Meta Platforms, the parent company of Facebook and Instagram.

Anthropic – founded by ex-OpenAI leaders in 2021 – has marketed itself as the more responsible and safety-focused developer of generative AI models that can compose emails, summarize documents and interact with people in a natural way.

But the lawsuit filed last year alleged that Anthropic’s actions “have made a mockery of its lofty goals” by tapping into repositories of pirated writings to build its AI product.

Copyright © 2025 The Washington Times, LLC.
