Meta wins copyright case over AI training, but legal questions remain

Hans India · 5 hours ago

Meta, the parent company of Facebook, has won a copyright lawsuit filed by authors including Sarah Silverman and Ta-Nehisi Coates. The authors accused Meta of using pirated copies of their books without permission to train its AI model, Llama. However, US District Judge Vince Chhabria ruled that the plaintiffs failed to provide adequate evidence that Meta's AI system harmed the market for their works.
While the ruling was in Meta's favor, the judge clarified that this case does not confirm that training AI with copyrighted content is legal. Instead, it highlights that the plaintiffs didn't present the right legal arguments.
Chhabria emphasized that in many circumstances, using copyrighted materials without permission to train AI would be unlawful. This nuanced stance contrasts with another San Francisco judge's ruling earlier this week in favor of AI firm Anthropic, which found that such use qualified as fair use.
The fair use doctrine is a crucial defense for AI companies, allowing limited use of copyrighted materials without explicit permission. Meta welcomed the decision, describing fair use as vital to building transformative AI systems.
Meanwhile, the authors' legal team criticized the judgment, citing an 'undisputed record' of Meta's large-scale use of copyrighted works.
The broader copyright battle between AI companies and creators continues to intensify. Companies like OpenAI, Microsoft, and Anthropic face multiple lawsuits from writers, journalists, and publishers over the use of copyrighted materials in AI training.
Judge Chhabria warned of a future where AI-generated content could flood the market, undermining the value of original human-created works and disincentivizing creativity. Despite the legal win, the debate over AI's use of protected content is far from over.

Related Articles

Meta in talks to acquire AI voice startup PlayAI for talent push

Business Standard · an hour ago

Meta Platforms Inc. is in advanced talks to acquire PlayAI, a small startup using artificial intelligence to replicate voices, part of the social media company's push to nab top talent and catch up in the AI race. Meta is expected to acquire the Palo Alto, California-based startup's technology and some of its employees, according to people familiar with the matter, who asked not to be named sharing private information. The deal is not yet finalized and could still change, the people said. Financial terms under discussion could not be learned. A Meta spokesperson declined to comment. A representative for PlayAI did not respond to a request for comment.

Meta Chief Executive Officer Mark Zuckerberg has made AI the company's top priority this year as it competes with rivals like Alphabet Inc.'s Google and OpenAI to build AI features. Meta invested $14.3 billion in data-labeling startup Scale AI earlier this month and recruited the firm's CEO to join a new 'superintelligence' team that Zuckerberg is building. Zuckerberg has also poached AI researchers from Google, Sesame AI Inc. and OpenAI for that team. Meta recently hired three OpenAI researchers from the ChatGPT maker's Zurich office, according to a person familiar with the hires. OpenAI confirmed their departure but declined to comment beyond that. The Wall Street Journal earlier reported Meta's latest hires.

With the PlayAI deal, Meta could gain added expertise to bring more voice features to its AI assistant and hands-free devices like smart glasses, a key area of focus for Zuckerberg. Other companies, including OpenAI and Google, have also added voice capabilities to their AI systems to build more compelling digital assistants. PlayAI creates AI-powered voice features with the goal of being as 'responsive as a conversation between two people,' according to a company blog post. The startup announced a $21 million funding round in late 2024 from several investors, including Kindred Ventures, Y Combinator and 500 Global.

Zuckerberg has been actively hunting for AI deals. Meta held acquisition talks with Perplexity AI Inc. before finalizing its Scale AI investment, Bloomberg News reported earlier this month. Meta also discussed a possible takeover of AI video startup Runway AI Inc., but the talks never reached a formal offer, Bloomberg reported.

Fathoming America's plan to manage AI proliferation

The Hindu · an hour ago

The announcement by the United States of the rescission of its Framework for AI Diffusion, a set of export controls for Artificial Intelligence (AI) technology announced earlier this year, has been viewed as a good thing. The Framework was considered counterproductive to AI technology development and diplomatic relations. However, recent developments suggest that controls on AI are likely to persist, albeit in different forms.

A flawed blueprint

Earlier this year, during the final week of its tenure, the Joe Biden administration announced the AI Diffusion Framework. Combining export controls and export licences for AI chips and model weights, it effectively treated AI like nuclear weapons. Under the proposed framework, countries such as China and Russia were embargoed, trusted allies were favoured, and others were restricted in their access to advanced AI technology.

The rationale for these rules was that computational power dictates AI capabilities: the greater the compute, the better the AI. In the last decade, the compute used in advanced AI models has nearly doubled every 10 months. Following this logic, for the U.S. to preserve its lead, it needed to prevent adversaries from acquiring powerful compute while ensuring that AI development stayed within the U.S. and its close allies.

While export controls on AI hardware predated the framework, they were not sweeping. The Framework aimed to tighten these controls and establish a predictable system to streamline regulatory processes and standardise conditions. However, imposing such sweeping restrictions, affecting adversaries and partners alike, brought many unintended effects, proving counterproductive.

The framework set a concerning precedent for technology cooperation with the U.S., especially for its allies. It signalled U.S. willingness to dictate how other nations conducted their affairs, incentivising them to hedge against U.S. actions. Consequently, U.S. allies had reasons to invest in alternatives to the U.S. ecosystem, pursuing their own strategic autonomy and technological sovereignty.

Additionally, the framework treated AI, a civilian technology with military applications, as if it were a military technology with civilian uses. Unlike nuclear technology, AI innovation is inherently civilian in its origins and international in scope. Confining its development geographically within the U.S. could prove counterproductive.

Finally, the system created an enduring incentive for the global scientific ecosystem to develop pathways to circumvent the need for powerful compute to make powerful AI, thereby undermining the very lever that the U.S. sought to employ. China's DeepSeek R1 exemplifies this: years of export controls spurred algorithmic and architectural breakthroughs, enabling DeepSeek to rival the best AI models from the U.S. with a fraction of the compute. Such trends can make export controls on AI chips an ineffective policy instrument.

It is for these reasons that the Trump administration revoked the AI Diffusion Framework. This is welcome news for India, which was not favourably placed under the framework. However, the underlying U.S. thinking and approach towards AI diffusion will likely persist, manifesting in other forms. The AI technology race is still on, and the U.S. intent to restrict Chinese access to AI chips endures.

The possible replacement

Notwithstanding the rescinded Framework, the current U.S. administration has taken firm steps toward further preventing Chinese access to AI chips. For instance, in March 2025, the administration expanded the scope of the existing export controls and added several companies to its entity list (blacklist). It has also released several new guidelines to strengthen the enforcement of these controls. New provisions are reportedly under consideration, such as on-chip features to monitor and restrict the usage of AI chips. These could include hardware-level rules limiting chip functionality or restricting certain use cases. Recently, U.S. lawmakers introduced legislation mandating built-in location tracking for AI chips to prevent their illicit diversion into China, Russia and other countries of concern. In effect, these measures seek to enforce the goals of the AI Diffusion Framework technologically rather than through trade restrictions.

The related concerns

Such measures are problematic in their own way. New concerns related to ownership, privacy and surveillance will proliferate. While malicious actors might be sufficiently motivated to circumvent these controls, legitimate and beneficial use by others could be inadvertently discouraged. Such developments undermine user autonomy and lead to trust deficits. Just like the old framework, this will raise concerns about losing strategic autonomy for any nation buying AI chips. Yet again, both adversaries and allies will feel compelled to hedge against their reliance on the U.S. AI ecosystem and invest in alternatives.

The rescission of the AI Diffusion Framework represents a notable policy reversal. Yet it appears to be more a change in tactics than a fundamental shift in the U.S. strategy to manage AI proliferation. Should these technologically driven control measures gain traction in U.S. policy discourse and be implemented, they risk replicating the negative consequences of the original AI Diffusion Framework. Ultimately, should this path be pursued, it would indicate that the crucial lessons from the Framework and its eventual withdrawal have not been fully assimilated, potentially jeopardising the very U.S. leadership in AI it ostensibly seeks to protect.

Rijesh Panicker is a Fellow at the Takshashila Institution. Bharath Reddy is an Associate Fellow at the Takshashila Institution. Ashwin Prasad is a Research Analyst at the Takshashila Institution.

India leads with 92% employees embracing GenAI tools, against global average of 72%

India Gazette · 2 hours ago

New Delhi [India], June 26 (ANI): India is leading the global GenAI charge, with 92 per cent of employees embracing such tools, well ahead of the global average of 72 per cent, according to a new report by Boston Consulting Group (BCG). AI is now woven into the fabric of daily work, with 72 per cent of respondents using it regularly. But the true value of AI is being captured by a smaller subset of companies that go beyond tool deployment to fully redesign workflows, according to the report, titled 'AI at Work 2025: Momentum Builds, But Gaps Remain' and released on Thursday.

The third edition of BCG's annual survey, based on responses from over 10,600 workers across 11 countries, reveals that while AI adoption is strong overall, only 51 per cent of frontline employees are regular users, a figure that has stagnated. Meanwhile, the Global South continues to lead in adoption, with India at 92 per cent and the Middle East at 87 per cent reporting the highest levels of regular use. Yet these two high-use regions also report the greatest fear about automation's impact, far higher than the 41 per cent of all global respondents who worried their roles could disappear within the next decade.

'The country (India) also ranks among the top nations experimenting with AI agents, with 17 per cent of employees reporting integration into their workflows, placing India in the global top three. However, this rapid adoption brings new challenges. Nearly half (48 per cent) of Indian employees fear job displacement over the next decade, highlighting a growing sense of uncertainty,' said Nipun Kalra, Managing Director and Senior Partner, and India Leader of BCG X at BCG. 'Furthermore, only about one-third of the workforce feels adequately trained to leverage AI's potential fully. As we move from early adoption to delivering real business impact, Indian enterprises must invest in structured training, in-person coaching, and leadership enablement to scale value both responsibly and inclusively.'

The BCG report underlined three key levers to boost AI adoption.

Proper Training: Only 36 per cent of employees feel adequately trained in AI use. Those who receive five or more hours of training, especially in person and with coaching, are significantly more likely to become regular users.

Access to the Right Tools: Over half of respondents (54 per cent) say they would use AI tools even if not authorised, with Gen Z and Millennials especially prone to bypassing restrictions. This 'shadow AI' poses rising security risks.

Strong Leadership Support: Just 25 per cent of frontline workers say their leaders provide enough guidance on AI. Where leadership is engaged, adoption and employee optimism are markedly higher.

'Companies cannot simply roll out GenAI tools and expect transformation,' said Sylvain Duranton, Global Leader of BCG X and a co-author of the report. 'Our research shows the real returns come when businesses invest in upskilling their people, redesign how work gets done, and align leadership around AI strategy.' (ANI)
