
What Two Judicial Rulings Mean for the Future of Generative AI
More than 40 lawsuits have been filed against AI companies since 2022. The specifics vary, but they generally seek to hold these companies accountable for stealing millions of copyrighted works to develop their technology. (The Atlantic is involved in one such lawsuit, against the AI firm Cohere.) Late last month, there were rulings on two of these cases, first in a lawsuit against Anthropic and, two days later, in one against Meta. Both of the cases were brought by book authors who alleged that AI companies had trained large language models using authors' work without consent or compensation.
In each case, the judges decided that the tech companies were engaged in 'fair use' when they trained their models with authors' books. Both judges said that the use of these books was 'transformative'—that training an LLM resulted in a fundamentally different product that does not directly compete with those books. (Fair use also protects the display of quotations from books for purposes of discussion or criticism.)
At first glance, this seems like a substantial blow against authors and publishers, who worry that chatbots threaten their business, both because of the technology's ability to summarize their work and its ability to produce competing work that might eat into their market. (When reached for comment, Anthropic and Meta told me they were happy with the rulings.) A number of news outlets portrayed the rulings as a victory for the tech companies. Wired described the two outcomes as 'landmark' and 'blockbuster.'
But in fact, the judgments are not straightforward. Each is tied to the particular facts of its case, and neither resolves the question of whether AI training is fair use in general. On certain key points, the two judges disagreed with each other—so thoroughly, in fact, that one legal scholar observed that the judges had 'totally different conceptual frames for the problem.' It's worth understanding these rulings, because AI training remains a monumental and unresolved issue—one that could define how the most powerful tech companies are able to operate in the future, and whether writing and publishing remain viable professions.
So, is it open season on books now? Can anyone pirate whatever they want to train for-profit chatbots? Not necessarily.
When preparing to train its LLM, Anthropic downloaded a number of 'pirate libraries,' collections comprising more than 7 million stolen books, all of which the company decided to keep indefinitely. Although the judge in this case ruled that the training itself was fair use, he also ruled that keeping such a 'central library' was not, and for this, the company will likely face a trial that determines whether it is liable for potentially billions of dollars in damages. In the case against Meta, the judge also ruled that the training was fair use, but Meta may face further litigation for allegedly helping distribute pirated books in the process of downloading—a typical feature of BitTorrent, the file-sharing protocol that the company used for this effort. (Meta has said it 'took precautions' to avoid doing so.)
Piracy is not the only relevant issue in these lawsuits. In their case against Anthropic, the authors argued that AI will cause a proliferation of machine-generated titles that compete with their books. Indeed, Amazon is already flooded with AI-generated books, some of which bear real authors' names, creating market confusion and potentially stealing revenue from writers. But in his opinion on the Anthropic case, Judge William Alsup said that copyright law should not protect authors from competition. 'Authors' complaint is no different than it would be if they complained that training schoolchildren to write well would result in an explosion of competing works,' he wrote.
In his ruling on the Meta case, Judge Vince Chhabria disagreed. He wrote that Alsup had used an 'inapt analogy' and was 'blowing off the most important factor in the fair use analysis.' Because anyone can use a chatbot to bypass the process of learning to write well, he argued, AI 'has the potential to exponentially multiply creative expression in a way that teaching individual people does not.' In light of this, he wrote, 'it's hard to imagine that it can be fair use to use copyrighted books to develop a tool to make billions or trillions of dollars' while damaging the market for authors' work.
To determine whether training is fair use, Chhabria said that we need to look at the details. For instance, famous authors might have less of a claim than up-and-coming authors. 'While AI-generated books probably wouldn't have much of an effect on the market for the works of Agatha Christie, they could very well prevent the next Agatha Christie from getting noticed or selling enough books to keep writing,' he wrote. Thus, in Chhabria's opinion, some plaintiffs will win cases against AI companies, but they will need to show that the market for their particular books has been damaged. Because the plaintiffs in the case against Meta didn't do this, Chhabria ruled against them.
In addition to these two disagreements is the problem that nobody—including AI developers themselves—fully understands how LLMs work. For example, both judges seemed to underestimate the potential for AI to directly quote copyrighted material to users. Their fair-use analysis was based on the LLMs' inputs—the text used to train the programs—rather than outputs that might be infringing. Research on AI models such as Claude, Llama, GPT-4, and Google's Gemini has shown that, on average, 8 to 15 percent of chatbots' responses in normal conversation are copied directly from the web, and in some cases responses are 100 percent copied. The more text an LLM has 'memorized,' the more it can potentially copy and paste from its training sources without anyone realizing it's happening. OpenAI has characterized this as a 'rare bug,' and Anthropic, in another case, has argued that 'Claude does not use its training texts as a database from which preexisting outputs are selected in response to user prompts.'
But research in this area is still in its early stages. A study published this spring showed that Llama can reproduce much more of its training text than was previously thought, including near-exact copies of books such as Harry Potter and the Sorcerer's Stone and 1984.
That study was co-authored by Mark Lemley, one of the most widely read legal scholars on AI and copyright, and a longtime supporter of the idea that AI training is fair use. In fact, Lemley was part of Meta's defense team for its case, but he quit earlier this year, citing what he described in a LinkedIn post as 'Mark Zuckerberg and Facebook's descent into toxic masculinity and Neo-Nazi madness.' (Meta did not respond to my question about this post.) Lemley was surprised by the results of the study, and told me that it 'complicates the legal landscape in various ways for the defendants' in AI copyright cases. 'I think it ought still to be a fair use,' he told me, referring to training, but we can't entirely accept 'the story that the defendants have been telling' about LLMs.
For some models trained using copyrighted books, he told me, 'you could make an argument that the model itself has a copy of some of these books in it,' and AI companies will need to explain to the courts how that copy is also fair use, in addition to the copies made in the course of researching and training their model.
As more is learned about how LLMs memorize their training text, we could see more lawsuits from authors whose books, with the right prompting, can be fully reproduced by LLMs. Recent research suggests that widely read authors, including J. K. Rowling, George R. R. Martin, and Dan Brown, may be in this category. Unfortunately, this kind of research is expensive and requires expertise that is rare outside of AI companies. And the tech industry has little incentive to support or publish such studies.
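Conceptually, though, the simplest version of such a memorization probe is not hard to describe. The sketch below is a minimal illustration of the idea, not the methodology of any study mentioned here: it feeds a model the opening words of a passage and measures how much of the true continuation comes back word for word. The generate function is a hypothetical stand-in for whatever model interface is being tested.

```python
# A minimal, illustrative sketch of a memorization probe -- not the methodology
# of the studies described above. The idea: show a model the opening words of a
# passage and measure how much of the real continuation it reproduces verbatim.
from difflib import SequenceMatcher

def memorization_score(generate, passage: str, prefix_words: int = 50) -> float:
    """Return the fraction of the held-out continuation that the model's output
    reproduces as one contiguous verbatim block.

    `generate` is a hypothetical stand-in: any callable that maps a prompt
    string to a completion string (a local model, an API wrapper, etc.).
    """
    words = passage.split()
    prompt = " ".join(words[:prefix_words])      # the part the model is shown
    reference = " ".join(words[prefix_words:])   # the part it should not "know"
    completion = generate(prompt)
    # Longest contiguous stretch of text shared by the completion and the true
    # continuation, as a fraction of the continuation's length.
    match = SequenceMatcher(None, completion, reference).find_longest_match(
        0, len(completion), 0, len(reference)
    )
    return match.size / max(len(reference), 1)

if __name__ == "__main__":
    # Toy demonstration with a fake "model" that has perfectly memorized its text.
    excerpt = ("It was a bright cold day in April, and the clocks were "
               "striking thirteen. ") * 10
    parrot = lambda prompt: excerpt[len(prompt):]  # hypothetical perfect memorizer
    print(f"Verbatim overlap: {memorization_score(parrot, excerpt):.0%}")
```

Actual studies are far more involved: they sample many passages, control for common phrases, and require access to, or good guesses about, the training data, which is part of why this work is expensive and rare outside the labs themselves.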
The two recent rulings are best viewed as first steps toward a more nuanced conversation about what responsible AI development could look like. The purpose of copyright is not simply to reward authors for writing but to create a culture that produces important works of art, literature, and research. AI companies claim that their software is creative, but AI can only remix the work it's been trained with. Nothing in its architecture makes it capable of doing anything more. At best, it summarizes. Some writers and artists have used generative AI to interesting effect, but such experiments arguably have been insignificant next to the torrent of slop that is already drowning out human voices on the internet. There is even evidence that AI can make us less creative; it may therefore prevent the kinds of thinking needed for cultural progress.
The goal of fair use is to balance a system of incentives so that the kind of work our culture needs is rewarded. A world in which AI training is broadly fair use is likely a culture with less human writing in it. Whether that is the kind of culture we should have is a fundamental question the judges in the other AI cases may need to confront.
