
US judge backs using copyrighted books to train AI
A US federal judge has sided with Anthropic regarding training its artificial intelligence models on copyrighted books without authors' permission, a decision with the potential to set a major legal precedent in AI deployment.

District Court Judge William Alsup ruled on Monday that the company's training of its Claude AI models with books bought or pirated was allowed under the "fair use" doctrine in the US Copyright Act.

"Use of the books at issue to train Claude and its precursors was exceedingly transformative and was a fair use," Alsup wrote in his decision.

"The technology at issue was among the most transformative many of us will see in our lifetimes," Alsup added in his 32-page decision, comparing AI training to how humans learn by reading books.

Tremendous amounts of data are needed to train the large language models powering generative AI. Musicians, book authors, visual artists and news publications have sued various AI companies that used their data without permission or payment. AI companies generally defend their practices by claiming fair use, arguing that training AI on large datasets fundamentally transforms the original content and is necessary for innovation.

"We are pleased that the court recognized that using 'works to train LLMs was transformative,'" an Anthropic spokesperson said in response to an AFP query. The judge's decision is "consistent with copyright's purpose in enabling creativity and fostering scientific progress," the spokesperson added.

- Blanket protection rejected -

The ruling stems from a class-action lawsuit filed by authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson, who accused Anthropic of illegally copying their books to train Claude, the company's AI chatbot that rivals ChatGPT.

However, Alsup rejected Anthropic's bid for blanket protection, ruling that the company's practice of downloading millions of pirated books to build a permanent digital library was not justified by fair use protections.

Along with downloading books from websites offering pirated works, Anthropic bought copyrighted books, scanned the pages and stored them in digital formats, according to court documents. Anthropic's aim was to amass a library of "all the books in the world" for training AI models on content as it deemed fit, the judge said in his ruling.

While training AI models on the pirated content posed no legal violation, downloading pirated copies to build a general-purpose library constituted copyright infringement, the judge ruled, regardless of eventual training use. The case will now proceed to trial on damages related to the pirated library copies. Anthropic said it disagreed with going to trial on this part of the decision and was evaluating its legal options.

"Judge Alsup's decision is a mixed bag," said Keith Kupferschmid, chief executive of US nonprofit Copyright Alliance. "In some instances AI companies should be happy with the decision and in other instances copyright owners should be happy."

Valued at $61.5 billion and heavily backed by Amazon, Anthropic was founded in 2021 by former OpenAI executives. The company, known for its Claude chatbot and AI models, positions itself as focused on AI safety and responsible development.

Related Articles


Time of India
AI as the new OS: Can enterprises trust a thinking machine?
Artificial Intelligence has long been touted as a transformative force, but the discussion at the ETCIO Annual Conclave 2025 pushed that narrative into more provocative territory: Is AI becoming the cognitive operating system of the enterprise, and can we trust it to think?

Moderated by Shipra Malhotra, the session brought together Navendu Agarwal, Group CIO, Ola; Shuchi Mahajan, SVP & Head - Fraud Prevention, Analytics, Digital & Customer Awareness, HDFC Bank; and Binit Jha, CDO, IDBI Bank, for a candid conversation on what happens when AI becomes not just an assistant, but the actual decision-making engine of the enterprise. From explainability and regulatory friction to cultural shifts and cognitive accountability, the panel explored the real-world implications of enterprise-wide AI adoption.

From dashboards to decisions: A paradigm shift

Navendu Agarwal offered a compelling metaphor to frame the shift: 'AI is no longer a tool—it's the second brain of the enterprise.' Previously, dashboards and BI tools supported decision-making. Today, AI does the heavy lifting of assimilating data across systems and summarizing it contextually, empowering decision-makers directly. This shift redefines agility: enterprise leaders no longer have to depend on layers of interpretation; they can interface with AI directly to derive business value.

Agarwal argued that generative models have fundamentally altered the nature of enterprise intelligence. LLMs can now analyze and interpret massive data volumes without needing task-specific training. 'It takes the power away from technologists and gives it back to decision-makers,' he said.

Regulation vs. innovation: BFSI's balancing act

In the highly regulated world of banking, Binit Jha described AI's rapid evolution as both a boon and a bottleneck. 'AI moves every 24 hours. Banks and regulators don't,' he quipped.
For institutions like IDBI Bank, both regulated and a public-sector entity, the challenge lies in matching AI's pace within the constraints of compliance. Yet Jha emphasized that AI isn't optional. The bank is categorizing use cases based on risk and regulatory clarity to accelerate safe implementation. 'We have to implement it anyway,' he said. The key is to identify areas where AI can operate within existing guardrails, like analytics, scoring, and workflow automation, while human oversight governs final decisions.

Trust is the new OS: Why AI must be explainable

A 2024 PwC India study cited during the session revealed a trust gap: 73% of Indian enterprises trust AI for operational decisions, but only 28% trust it for strategic or financial ones. The difference? Explainability.

Shuchi Mahajan emphasized that while AI augments speed and efficiency, especially in fraud analytics and customer scoring, it cannot replace human judgment where empathy, ethics, and reputational risk are involved. 'Cognitive accountability will become the soul of this new OS,' she said. The future belongs to organizations that embed trust as a cultural cornerstone.

She advocated for a 360-degree trust model that begins with employees. As AI introduces uncertainty about job security, Mahajan stressed the need for leaders to upskill teams not in coding, but in applied AI fluency. 'People must know how to prompt and use AI to solve problems—not build it,' she noted. That shift requires cultural transformation as much as technical modernization.

Defining the red lines: Where AI must not decide

Both Mahajan and Jha outlined clear boundaries where AI should never operate autonomously:
- Safety: Anything involving physical or human safety must have human oversight
- AI decisions impacting organizational credibility need a conscience
- Legacy data and opaque models can entrench hidden biases

Navendu Agarwal added the nuance: 'AI should never replace cognitive work—it should accelerate it.'
He underscored that decision-making must stay human unless it's entirely predictable and rule-based. 'AI is your intern—it delivers. But interpretation is your job.'

Redefining leadership for the AI-driven enterprise

Looking five years ahead, the panel explored what strategic shifts CXOs must undergo. Mahajan asserted that tech-first leadership must evolve into trust-building leadership, across employees, partners, regulators, and customers.

Jha suggested a three-stage roadmap:
- Self-assessment: Understand your current capabilities, infrastructure, and team
- Rollouts: Focus on safe, efficiency-enhancing use cases with high trust
- Vision: Build a 10-year view of AI-driven transformation, emphasizing ROI, flexibility, and speed

Navendu Agarwal agreed but reframed AI as an accelerator, not a replacement. 'Technology becomes more accessible, less intimidating. You don't need a tech PhD to lead transformation—you need curiosity and clarity,' he said. This will democratize innovation, enable faster decision-making, and redefine the CXO role as strategic orchestrator rather than operational executor.

The session concluded with a powerful truth: AI may think, but humans remain responsible. As Mahajan put it, 'If AI is the OS, accountability is the conscience.' Success will belong to enterprises that view AI not as a plug-and-play solution, but as a deeply integrated partner, governed by values, ethics, and strategic clarity.

AI is no longer just a tool; it's becoming the enterprise's cognitive core. But trust, governance, and strategic leadership are non-negotiable. Enterprises must move from AI adoption to AI accountability, balancing speed with ethics, and intelligence with empathy.


Time of India
GCCs as the testbed for agentic AI transformation
India and Southeast Asia have always been at the epicenter of tech waves. Be it dotcom, telecom, 5G, AI, GenAI and now agentic AI, the region's Global Capability Centers (GCCs) have evolved into testbeds for 'agentic AI' – AI agents that autonomously perform tasks with minimal human oversight. They are no longer mere back-office cost centers. These GCCs now drive innovation across retail, healthcare, BFSI, and travel, piloting AI solutions in real business environments. Agentic AI usage is rising: a Deloitte survey found that 25% of organizations using GenAI are expected to deploy it by 2025, increasing to 50% by 2027.

GCCs have taken on more specialized and critical functions such as:
- Software product engineering
- Data and analytics (for example, fraud prevention, product development)
- Environmental, social and governance (monitoring sustainability metrics, compliance etc.)
- Enterprise functions (centralizing HR, legal, risk, reporting)

AI talent in GCCs driving innovation

A GCC isn't a back office implementing innovation born elsewhere. GCCs leverage deep pools of developer and data science talent to implement hands-on AI interventions. Skilled engineers in India-based GCCs are developing AI-powered cybersecurity solutions, optimizing supply chains with machine learning, and automating cloud operations – directly contributing to global innovation.

Industry use cases and case studies

Global Capability Centers are at the forefront of agentic AI adoption, driving transformative change across industries such as banking, healthcare, life sciences, high-tech, and retail. In banking, a leading GCC is reimagining contact center operations using agentic AI to elevate customer experiences while significantly reducing operational costs. In healthcare, one GCC is accelerating legacy modernization efforts within payer-side systems – projects that would traditionally take years are now being completed in a fraction of the time.
Meanwhile, a top high-tech GCC is advancing AI model lifecycle management, helping the organization maintain its leadership position at the cutting edge of innovation. This shift is further fueled by enterprise-grade platforms that enable agentic workflows at scale. For instance, several GCCs are now deploying AI copilots to streamline operations across finance, HR, and compliance. These agents can interpret documents, interact with internal systems, and make decisions within defined guardrails, improving both speed and accuracy. Major AI platforms help GCCs accelerate story generation by up to 20% and improve automation coverage by as much as 70%, enabling faster, more scalable AI-driven product development. Such innovations underscore the pivotal role GCCs play not just in executing AI vision, but in shaping it.

Domain-specific AI agent training

A key advantage of GCCs is the ability to train AI agents on domain-specific data and context. Enterprises are rapidly ramping up such projects – Deloitte predicts that 25% of GenAI-using companies will launch pilot 'agentic AI' systems in 2025, rising to 50% by 2027. Much of this experimentation is happening in GCCs, where industry experts and data reside. GCC teams fine-tune large language models and AI tools on proprietary datasets (e.g. medical records, financial transactions), creating specialized AI agents for their sector. As an illustration, a healthcare GCC might train an AI for diagnostics using hospital data, while a retail GCC trains recommendation engines on local consumer behavior.

Leveraging domain-specific agent training for scalable AI

To scale AI performance, many GCCs are integrating localized knowledge graphs into their AI stack. A knowledge graph organizes an organization's domain knowledge – products, services, regulations, local languages – into a structured, interconnected format that AI models can quickly parse.
When AI agents use localized knowledge graphs, they can answer questions or make decisions more accurately because they're pulling from a clear, structured, and context-aware source of truth, not just guessing from general training data. This makes AI smarter, safer, and more relevant for real-world business use.

Emerging global GCC hubs

GCCs – from India and beyond – have become linchpins of AI transformation, enabling organizations to pilot agentic AI initiatives, train domain-specific AI agents, and harness local knowledge at scale for competitive advantage. India is playing a central role in the GCC landscape, with a GCC market projected to reach a staggering $100 billion by 2029. The country is rapidly transforming from a cost-saving destination into a global hub for innovation.
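To make the knowledge-graph idea concrete, here is a minimal Python sketch of how an agent might ground an answer in structured triples rather than free-text guessing. All entity and relation names (`ProductX`, `FSSAI`, `regulated_by`) are illustrative assumptions, not drawn from any real GCC system; production deployments would typically use an RDF store or graph database rather than an in-memory set.

```python
# A toy knowledge graph stored as (subject, relation, object) triples.
class KnowledgeGraph:
    def __init__(self):
        self.triples = set()

    def add(self, subj, rel, obj):
        self.triples.add((subj, rel, obj))

    def objects(self, subj, rel):
        """Return every object linked to `subj` via `rel`."""
        return {o for s, r, o in self.triples if s == subj and r == rel}


# A retail GCC might encode local product and regulatory knowledge
# (hypothetical facts, for illustration only):
kg = KnowledgeGraph()
kg.add("ProductX", "regulated_by", "FSSAI")
kg.add("ProductX", "sold_in", "India")
kg.add("FSSAI", "requires", "label_in_local_language")


def compliance_requirements(kg, product):
    """An agent grounds its answer by traversing the graph:
    product -> regulator -> requirements."""
    reqs = set()
    for regulator in kg.objects(product, "regulated_by"):
        reqs |= kg.objects(regulator, "requires")
    return reqs


print(compliance_requirements(kg, "ProductX"))
```

Because the lookup traverses explicit edges, the agent's answer is traceable to stored facts, which is the "context-aware source of truth" property described above.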


Indian Express
Shark Tank India takes a dig at Narayana Murthy's '70-hour work week' comment: ‘Turn your millionaire bosses into billionaires'
Registrations for the fifth season of Shark Tank India are now open. Sony made the announcement with a teaser that took a dig at Infosys founder Narayana Murthy's controversial comments about implementing 70-hour work weeks. He had previously said that he doesn't believe in the idea of 'work-life balance', and that he was disappointed when India moved away from six-day work weeks. Reiterating his stance at the CNBC-TV18 Global Leadership Summit last year, he said, 'I am sorry, I have not changed my view. I will take this with me to my grave.'

The announcement video unfolds in a tongue-in-cheek manner, with several fictional CEOs complaining about their employees having quit to start their own businesses. One of them complains about not being able to play golf anymore because his employee isn't there to carry his bags. Another complains about having to take 'sharing' rides to work, but in a helicopter. The trailer pointed fingers at rich CEOs exploiting the workforce out of greed. A voiceover sarcastically urged potential candidates to continue making their wealthy employers even wealthier by not quitting their jobs and not starting their own entrepreneurial ventures.

'You can change (your bosses') lives. So, please work overtime. Sacrifice your personal life, your hobbies, and your start-up dreams. Work 70 hours for your bosses, at least until AI doesn't replace you. Keep working, so that your millionaire bosses become billionaires. Save your bosses, do not apply to Shark Tank India,' the voiceover said, as visuals of overworked employees logging in from beyond the grave played while CEOs partied on a yacht. In the past, several 'sharks' have offered their opinions about working 70 hours a week.
Recently, appearing on the Prakhar Ke Pravachan podcast, Aman Gupta said that such demands are unnecessary. Viraj Bahl, who appeared on the fourth season of the show, said on The Rockford Circle podcast that a 70-hour work week is 'crazy'. He said, 'Either compensate them in the same way you are getting compensated for working 70 hours, proportionately, or do not expect them to work 70 hours. I think it's an obsolete idea. Seventy-hour weeks should not happen.' Namita Thapar wrote on LinkedIn, 'Say NO to 70-hour weeks, or if you want that life, say NO to being a parent!' Anupam Mittal was the only one who endorsed the idea, writing in response to Namita, 'Not counting, but am pretty sure you are up there in terms of weekly hours. Better to embrace and find work-life harmony than to continue to be torn between work-life balance.'