
Generative vs. Creative: A court verdict on AI training has exposed an Anthropic-shaped chink in US copyright law
In what is shaping up to be a long, hard fight over the use of creative works, round one has gone to the AI makers. In the first US decision of its kind, District Judge William Alsup said Anthropic's use of millions of books to train its artificial-intelligence model, without payment to the sources, was legal under copyright law because it was "transformative—spectacularly so."
The closely watched ruling is a warning of what lies ahead under existing copyright laws. Designed to protect creative freedom, the 'fair use' doctrine that Anthropic used to successfully defend its actions is now the most potent tool for undermining the creative industry's ability to support itself in the coming age of AI.
If a precedent has been set, as several observers believe, it stands to cripple one of the few viable AI monetization strategies left to rights holders: selling firms licenses to their work. Some such deals have already been struck while the 'fair use' question was in limbo, and only after the threat of legal action. This ruling may have just taken future deals off the table.
For context, it's useful to understand how Anthropic built the large language model that underpins its popular AI chatbot, Claude. First, according to court filings, it downloaded pirated copies of at least 7 million books to avoid the "slog," as its chief executive officer put it, of acquiring them through more legitimate means. Later, thinking better of the outright theft, the company decided to buy millions of used physical books (usually one copy per title), telling distributors it wanted to create a "research library." Anthropic staff then removed the spines, scanned the pages into a digital format and destroyed the originals.
This library was used to train Anthropic's LLM, giving Claude the kind of smarts it can charge money for. The chatbot offers limited use for free but a fuller experience for $20 a month, and more for businesses. As of its last funding round, Anthropic was valued at $61.5 billion. (For scale, publisher Simon & Schuster was sold in 2023 for $1.62 billion.)
Anthropic was sued by a group of three authors whose books were in the training data. In his ruling, the judge said that Anthropic's acquisition of pirated material was unlawful, with damages to be assessed at trial. That was the one piece of bad news for the company. The far bigger news was that the ruling gives the green light to Anthropic—and every other AI firm building LLMs this way—by declaring everything else it did aboveboard. Millions of books were ingested and repurposed, their knowledge sold on without a penny ever going to the originators. Judge Alsup's ruling, which follows the law tightly, serves as an important example of its now critical blind spots.
The first part of the 'fair use' test was easy to pass: the material that comes out of Claude is significantly different from what goes in. "Sensationally" different, Judge Alsup wrote, deeming it to clear the test's bar. That is undoubtedly true, because the law (quite reasonably) deals only with the precise output while ignoring the fundamental knowledge or idea that underpins it.
A trickier test is whether the existence of Claude diminishes the authors' ability to sell their books. Here, Alsup stressed that because what comes out of Claude isn't an exact replica or a substantial knock-off, the market for the books is left fully intact. This misses the point of an AI bot. Turning to one—rather than, say, a library (which pays for its books) or a newspaper (which pays its contributors)—is a shortcut that reduces the need to interact with the source material at all.
Consider Google's AI Overviews feature, which synthesizes content from news and other sources into easily digestible answers, saving the need to visit websites directly. It works great: traffic to websites has plummeted, taking with it the business model that supports their existence. Matthew Prince, CEO of online security group Cloudflare, put it in starker terms. Speaking at an event in Cannes, Prince said that for every web visit Anthropic sends a publisher's way, it crawls that site for information 60,000 times. "People aren't following the footnotes," he warned.
Given how books are acquired, there is no equivalent stat, but the logic clearly extends: AI reduces the need to go to the source and, therefore, the opportunity for publishers to sell that source and generate the income that supports the creation of more of it.
Another argument the court rejected was the concern that Claude could be used to create competing works—that the AI could generate an alternative to a book because it knows everything in it. On this, Alsup agreed that was likely, but added:
Authors' complaint is no different than it would be if they complained that training schoolchildren to write well would result in an explosion of competing works. This is not the kind of competitive or creative displacement that concerns the Copyright Act. The Act seeks to advance original works of authorship, not to protect authors against competition.
This most clearly exposes the severe limitations of copyright law, which provides no framework to account for the existence and application of an incredible writing machine that has swallowed up 7 million stolen books. Where it does account for it, it does so shortsightedly, considering the machine—as Alsup writes—to be "no different" from a child because both are being given things to read and taught how to write. An absurdity: one is a human schoolchild, the other is a machine. That changes the conversation immensely.
A child might read 10 books a year if we're lucky. The creation of the books she reads is supported by a parent or school buying them for her. If she decides to write, it's one of life's miracles—a chance for her imagination to flow onto the page. Despite being just a child, or perhaps because of it, her writing will be fresh and unique, laden, between the lines or otherwise, with lived experience. The home she grew up in, the friends she's met, the dreams she has—all will influence how she interprets the contents of the books she has read, determining how she chooses to pass on that knowledge. Her writing will contain contradictions, flaws and humanity. Most important for this debate, her "competing work" is additive. She will contribute.
The machine downloads 7 million books and learns nothing—for it cannot learn, at least not in any true sense of the word. It does not contribute; it copies. Sure, it may synthesize information in ways that surprise us, but it does so only thanks to the hard and uncompensated work of others. It can have no lived, or new, experiences.
For sure, a competent new knowledge tool may have been created, but AI doesn't so much generate new value as transfer it—from the original source to itself. That's not in and of itself a problem; many technologies do this. But the transfer should command a fee to the originator if copyright law's stated goal of advancing original works of authorship is to be met for generations to come.
AI is already a phenomenal technology that I use daily. My monthly AI bill across multiple services now exceeds what I pay for any other type of subscription. I pay those costs because I understand that running an AI platform is expensive, what with all the data centers, power plants, Nvidia chips and engineering talent that must be amassed. Alsup was right when he wrote that "the technology at issue was among the most transformative many of us will see in our lifetimes."
But that doesn't mean it shouldn't pay its way. Nobody would dare suggest Nvidia CEO Jensen Huang hand out his chips for free. No construction worker is asked to keep costs down by building data center walls for nothing. Software engineers aren't volunteering their time to Meta in awe of Mark Zuckerberg's business plan—they instead command salaries of $100 million and beyond.
Yet, as ever, those in the tech industry have decided that creative works, and those who create them, should be considered of little or no value and must step aside in service of the great calling of AI—despite being every bit as vital to the product as any other factor mentioned above. As science-fiction author Harlan Ellison said in his famous sweary rant, nobody ever wants to pay the writer if they can get away with it. When it comes to AI, paying creators of original work isn't impossible, it's just inconvenient. Legislators should leave companies no choice. ©Bloomberg
The author is Bloomberg Opinion's US technology columnist.

Culture Over Cash You Might Also Like: Billionaire Vinod Khosla predicts AI teachers will disrupt education and careers. Here's how — BigTechPod (@BigTechPod) The AI Hiring Wars: A Battle for Brains Buying Purpose? Not Quite, Says Amodei In the escalating turf war for top AI talent, Anthropic CEO Dario Amodei has delivered a pointed, and slightly humorous, critique of Meta 's aggressive recruitment tactics. Speaking on the Big Technology Podcast, Amodei painted a vivid picture: "If Mark Zuckerberg throws a dart at a dartboard and it hits your name, that doesn't mean you should be paid ten times more than the guy next to you who's just as skilled."His remarks come amid widespread reports of Meta launching an all-out offensive to poach AI engineers from rivals like OpenAI, Apple, Google, and Anthropic itself. Yet Amodei claims his startup has remained largely untouched. 'Some [employees] wouldn't even talk to Meta,' he said, asserting that their culture and mission are more attractive than any compensation package Meta can has reportedly been dangling massive offers, with some packages surpassing $200 million for a single hire, according to Business Insider and WIRED. Amodei, however, says Anthropic refuses to match such sums, insisting on fair and consistent pay across the board."I recently posted in our company Slack that we will not compromise our compensation principles or fairness if someone gets a big offer," he shared. In his view, rewarding one employee disproportionately just because they were on Meta's radar would be unjust to their equally capable this stance, Meta has managed to lure away at least one former Anthropic engineer—Joel Pobar—but Amodei suggests their broader impact has been latest AI moonshot, the Superintelligence Lab , has ignited a fierce scramble for elite minds. OpenAI's Chief Research Officer Mark Chen likened it to a break-in after losing several staffers overnight. Meanwhile, OpenAI CEO Sam Altman accused Meta of deploying 'giant offers' to lure talent, with some signing bonuses rumored to top $100 is unapologetic about the ambition. In an internal memo seen by CNBC, he claimed, 'Developing superintelligence is coming into sight,' declaring his goal to bring personal AI to every individual, not just enterprise Meta may have the resources, Amodei questions whether mission-driven AI work can be bought. 'Zuckerberg is trying to buy something that can't be bought,' he said during the podcast, underscoring Anthropic's long-term focus on safe and ethical AI sentiment resonates with other industry leaders too. OpenAI continues to frame itself as a purpose-first organization, while Meta's flashier, big-money moves risk creating tension even within its own teams. As CNBC reported, some insiders at Meta worry that a talent-heavy, cash-fueled approach could lead to ego clashes and fractured the current AI landscape, where demand far outpaces supply, the value of a skilled AI researcher is rivaling that of a professional athlete. Yet, for companies like Anthropic and OpenAI, the real challenge isn't just retaining talent—it's maintaining a sense of purpose amid the frenzy.