
Federal judge rules AI training on copyrighted books is fair use
A federal judge has sided with Anthropic in a major copyright ruling, declaring that artificial intelligence developers can train on published books without authors' consent.
The decision, filed Monday in the U.S. District Court for the Northern District of California, sets a precedent that training AI systems on copyrighted works constitutes fair use. Though it doesn't guarantee other courts will follow, Judge William Alsup's ruling is the first among dozens of ongoing copyright lawsuits to give an answer on fair use in the context of generative AI.
It's a question creatives across various industries have raised in the years since generative AI tools exploded into the mainstream, allowing users to easily produce art from models trained on copyrighted work — often without the human creator's knowledge or permission.
AI companies have been hit with a slew of copyright lawsuits from media companies, music labels and authors since 2023. Artists have signed multiple open letters urging government officials and AI developers to constrain the unauthorized use of copyrighted works. In recent years, companies have also increasingly inked licensing deals with AI developers to dictate terms of use for their artists' works.
Alsup's ruling Monday addressed a lawsuit filed last August by three authors — Andrea Bartz, Charles Graeber and Kirk Wallace Johnson — who claimed that Anthropic ignored copyright protections when it pirated millions of books and digitized purchased copies to feed into its large language models, helping train them to generate human-like text responses.
'The copies used to train specific LLMs were justified as a fair use,' Alsup wrote in the ruling. 'Every factor but the nature of the copyrighted work favors this result. The technology at issue was among the most transformative many of us will see in our lifetimes.'
His decision stated that Anthropic's use of the books to train its models, including versions of its flagship AI model Claude, was 'exceedingly transformative' enough to fall under fair use.
Fair use, as defined by the Copyright Act, takes into account four factors: the purpose of the use, what kind of copyrighted work is used (creative works get stronger protection than factual works), how much of the work was used, and whether the use hurts the market value of the original work.
'We are pleased that the Court recognized that using 'works to train LLMs was transformative — spectacularly so,'' Anthropic said in a statement, quoting the ruling. 'Consistent with copyright's purpose in enabling creativity and fostering scientific progress, 'Anthropic's LLMs trained upon works not to race ahead and replicate or supplant them — but to turn a hard corner and create something different.''
Bartz and Johnson did not immediately respond to requests for comment. Graeber declined to comment.
Alsup noted, however, that all of the authors' works contained 'expressive elements' earning them stronger copyright protection, which is a factor that points against fair use, although not enough to sway the overall ruling.
He added that while making digital copies of purchased books was fair use, downloading pirated copies for free was not.
But aside from the millions of pirated copies, Alsup wrote, copying entire works to train AI models was 'especially reasonable' because the models didn't reproduce those copies for public access, and doing so 'did not and will not displace demand' for the original books.
His ruling stated that although AI developers can legally train AI models on copyrighted works without permission, they should obtain those works through legitimate means that don't involve pirating or other forms of theft.
Despite siding with the AI company on fair use, Alsup wrote that Anthropic will still face trial over the pirated copies it used to build its massive central library of training books.
'That Anthropic later bought a copy of a book it earlier stole off the internet will not absolve it of liability for the theft,' Alsup wrote, 'but it may affect the extent of statutory damages.'

Related Articles

Finextra, 38 minutes ago
NAB hires Lloyds' Pete Steel for new digital, data and AI role
Australia's NAB has appointed Lloyds Banking Group's Pete Steel to the newly created role of group executive, digital, data and artificial intelligence.

Reporting to CEO Andrew Irvine, Steel will lead the bank's digital, data and AI teams and initiatives, and will also be accountable for design, customer onboarding and NAB's digital unit ubank. He joins in January, subject to regulatory approvals.

Steel is currently managing director, customer experience at British giant Lloyds, overseeing 16,000 people responsible for consumer sales and service, digital, artificial intelligence, personalisation, branches, call centres and advisers. Before that, he founded fintech Expertli and spent 16 years at CBA in executive roles including group chief digital officer.

Says Irvine: 'Digital, data and AI are critical enablers for the delivery of our strategic ambition of customer-centricity and now is the right time to have an executive solely accountable and focussed on accelerating our progress in these areas.

'Pete's deep experience in using digital and technology solutions to deliver for customers and driving commercial outcomes will be a valuable addition to my executive leadership team.'

Steel's appointment to the new role is another indication of the growing movement among banks to embed AI at the top of their leadership structures. His current employer Lloyds recently hired former AWS executive Dr Rohit Dhawan as director of AI and advanced analytics, and added Aritra Chakravarty as head of agentic AI and Magdalena Lis as head of 'responsible' AI.


Business News Wales, 3 hours ago
AI Breakthroughs Drive Expansion of 'Airlock' Testing Programme to Support AI-Powered Healthcare Innovation
A £1 million boost to the Medicines and Healthcare products Regulatory Agency's (MHRA) pioneering AI Airlock programme will expand access to a first-of-its-kind regulatory testing ground where companies can work directly with regulators to safely test new AI-powered medical devices and explore how to bring them to patients faster through streamlined regulations.

Applications for the second round of the programme have opened and follow a successful pilot phase that saw four breakthrough AI technologies tested in a regulatory 'sandbox' environment, including software that could help doctors create personalised cancer treatment plans and a tool to help hospitals, AI developers and regulators monitor AI performance in real time. Similar to an airlock on a spacecraft, the 'sandbox' testing space creates a boundary between experimental AI and fully approved medical technology used in the real world.

This initiative builds on commitments in the UK Government's AI Opportunities Action Plan and the government response to the Regulatory Horizons Council report on the regulation of AI as a medical device, which aim to enable safe AI innovation through strategic guidance to regulators and to enhance their AI capabilities. The programme is backed by the UK Government's new Regulatory Innovation Office (RIO), which is supporting regulators to test more agile, flexible ways of working that can keep pace with emerging technologies like AI.

Science Minister Lord Vallance said: 'Backing innovation means backing better regulation – and that's what the RIO is here to do.

'Smarter, faster approaches like the AI Airlock are helping to cut red tape, bring safe new technologies to patients quicker, and ease pressure on our NHS – fuelling the Government's Plan for Change.'

Health Minister Baroness Merron said: 'AI has huge potential to improve healthcare, and we need to use it safely and responsibly. The AI Airlock programme is a great example of how we can test new technology thoroughly while still moving quickly.

'This £1 million investment will help bring new medical tools to patients faster and strengthen the UK's position as a global leader in healthcare innovation.'

Those selected for the next round of the AI Airlock programme will be able to test their AI healthcare products under careful supervision, allowing regulatory challenges to be identified early and adjustments to be made.

James Pound, MHRA Interim Executive Director, Innovation and Compliance, said: 'Traditional regulatory pathways weren't designed with AI's unique characteristics in mind – including its capacity to analyse large quantities of data and help automate existing manual processes. The AI Airlock programme helps address this gap by creating a supervised testing ground where these novel technologies and challenge areas can be safely investigated.

'The technologies and devices which have been evaluated to date have shown the limitless potential of AI to improve patient outcomes, free up NHS resources, and enhance the accuracy and efficiency of healthcare services.

'With AI, we must balance robust oversight with flexibility that doesn't stifle innovation, and this programme achieves that balance.'

Four projects were selected for the inaugural AI Airlock cohort, each focused on addressing critical healthcare challenges using AI.
Among them was health technology multinational Philips' Radiology Auto Impression project, which tested the use of generative AI to automate the writing of radiologists' final impressions – a critical section of radiology reports that summarises key findings from imaging procedures. Working directly with MHRA experts through weekly meetings, the team gained valuable insights about the need to involve their end users – radiologists – to help define testing strategies. As Yinnon Dolev, Philips' Advanced Development NLP (Natural Language Processing) Tech Lead, noted, the collaboration with regulators was 'almost unheard of' and provided 'a catalyst for meaningful progress expediting our development activities.'

OncoFlow, another first-round project, looked at the use of AI to help healthcare professionals create personalised management plans for cancer patients, with the potential to reduce waiting times for cancer appointments, leading to earlier treatment and the possibility of significantly increasing patients' chances of survival. Co-founder Aruni Ghose said the Airlock programme gave his team the chance to validate the product in a simulated clinical setting and 'pressure-test it against real regulatory standards', which helped the company accelerate its progress 'from idea to a validated MVP (Minimum Viable Product).'

Rounding out the cohort were two projects: one by Automedica Ltd, investigating the regulatory advantages of using retrieval-augmented generation (RAG) technologies with verified knowledge bases and Large Language Models (LLMs); and the other by health tech startup Newton's Tree, testing its Federated AI Monitoring Service (FAMOS) to identify and mitigate AI risks in clinical settings, including performance drift and safety issues.

Results from all four pilot projects will be published later this year, providing valuable insights that will shape the AI Airlock programme moving forward and help inform broader regulatory approaches to the effective and safe use of AI in healthcare.

Eligible candidates for the second cohort must demonstrate that their AI-powered medical device has the potential to deliver significant benefits to patients and the NHS, presents a new treatment approach, and offers a regulatory challenge ready to be tested in the Airlock programme. Applications for cohort two will close on 14 July 2025.


Daily Mirror, 4 hours ago
Facebook groups hit with 'mass suspensions' after Meta technical error
Meta is warning users that it has suspended thousands of Facebook Groups due to a technical error. The company says it's working to fix the issue but has not shared what's causing the widespread suspensions.

Meta is facing global outrage after a spate of mass bans swept through Instagram and Facebook, now hitting Facebook Groups hard, with scores of users barred from one of the social media platform's key features. TechCrunch reports that thousands of groups around the world have been suspended, sparking outrage and coordinated efforts on other platforms like Reddit to exchange information.

Meta's spokesperson, Andy Stone, acknowledged the problem, confirming that the tech giant was on the case. "We're aware of a technical error that impacted some Facebook Groups. We're fixing things now," he stated in an email.

The cause behind the widespread bans remains a mystery, but speculation points towards a glitch in AI moderation systems. Affected users have shared that many of the banned groups were unlikely targets for moderation, focusing on harmless topics such as money-saving tips, parenting advice, pet ownership, gaming, Pokémon, and mechanical keyboards.

Admins of the Facebook Groups have been left baffled by ambiguous warnings citing violations for "terrorism-related" content or nudity, which they vehemently deny ever posting, reports the Express. The scale of the issue is significant, with both small and large groups affected, some with memberships ranging from tens to hundreds of thousands, and even reaching into the millions.

Users caught up in the recent Facebook group ban wave are being advised by their peers to hold off on appealing the suspension, in the hope that it will be lifted automatically once the bug is fixed. Reddit's Facebook community (r/facebook) is currently awash with posts from frustrated group admins and members upset over the sudden removal of their groups. Reports are flooding in about entire groups being taken down in one fell swoop, with some users expressing disbelief at the reasons given for the bans, such as a bird photography group with nearly a million followers being flagged for nudity.

Some users insist their groups were diligently moderated against spam, citing examples like a family-friendly Pokémon group with close to 200,000 members that was accused of referencing "dangerous organisations," or an interior design group with millions of members receiving the same charge.

A few Facebook Group admins who have invested in Meta's Verified subscription, which promises priority customer support, have managed to receive assistance. However, others have shared that their groups faced suspension or complete deletion without resolution.

The connection between this issue and the broader pattern of bans affecting individual Meta users remains uncertain, but it appears to be part of a larger problem plaguing social networks. Alongside Facebook and Instagram, platforms such as Pinterest and Tumblr have also been hit with complaints about mass suspensions in recent weeks, leading users to suspect that AI-automated moderation efforts are the culprits.

Pinterest at least owned up to its blunder, stating that the mass bans were due to an internal error, but it denied that AI was the problem. Tumblr stated its issues were linked to tests of a new content-filtering system but did not specify whether that system involved AI. When questioned last week about the Instagram bans, Meta declined to comment.
Users are now rallying behind a petition that has already collected more than 12,380 signatures, urging Meta to tackle the issue. Others, including those whose businesses were impacted, are seeking legal recourse. Meta has yet to reveal what's causing the issue with either individual accounts or groups.