
Reddit sues AI giant Anthropic over content use
Social media platform Reddit filed a lawsuit Wednesday against artificial intelligence company Anthropic, accusing the startup of illegally scraping millions of user comments to train its Claude chatbot without permission or compensation.
The lawsuit in a California state court represents the latest front in the growing battle between content providers and AI companies over the use of data to train increasingly sophisticated language models that power the generative AI revolution.
Anthropic, valued at $61.5 billion and heavily backed by Amazon, was founded in 2021 by former executives from OpenAI, the creator of ChatGPT.
The company, known for its Claude chatbot and AI models, positions itself as focused on AI safety and responsible development.
"This case is about the two faces of Anthropic: the public face that attempts to ingratiate itself into the consumer's consciousness with claims of righteousness and respect for boundaries and the law, and the private face that ignores any rules that interfere with its attempts to further line its pockets," the suit said.
According to the complaint, Anthropic has been training its models on Reddit content since at least December 2021, with CEO Dario Amodei co-authoring research papers that specifically identified high-quality content for data training.
The lawsuit alleges that despite Anthropic's public claims that it had blocked its bots from accessing Reddit, the company's automated systems continued to hit Reddit's servers more than 100,000 times in subsequent months.
Reddit is seeking monetary damages and a court injunction to force Anthropic to comply with its user agreement terms. The company has requested a jury trial.
In an email to AFP, Anthropic said, "We disagree with Reddit's claims and will defend ourselves vigorously."
Reddit has entered into licensing agreements with other AI giants including Google and OpenAI, which allow those companies to use Reddit content under terms that protect user privacy and provide compensation to the platform.
Those deals have helped lift Reddit's share price since it went public in 2024.
Reddit shares closed up more than six percent on Wednesday following news of the lawsuit.
Musicians, book authors, visual artists and news publications have sued various AI companies for using their data without permission or payment.
AI companies generally defend their practices by claiming fair use, arguing that training AI on large datasets fundamentally changes the original content and is necessary for innovation.
Though most of these lawsuits are still in early stages, their outcomes could have a profound effect on the shape of the AI industry.

Related Articles


Time of India
For Some Recent Graduates, the AI Job Apocalypse may Already be Here
Highlights:
- Unemployment for recent college graduates has risen to 5.8%, with a notable increase in job displacement due to advancements in artificial intelligence, particularly in technical fields like finance and computer science.
- Many companies are adopting an 'AI-first' approach, with some executives reporting a halt in hiring for lower-level positions as artificial intelligence tools can now perform tasks that previously required human employees.
- Dario Amodei, Chief Executive Officer of Anthropic, has predicted that artificial intelligence could eliminate half of all entry-level white-collar jobs within the next five years.

This month, millions of young people will graduate from college and look for work in industries that have little use for their skills, view them as expensive and expendable, and are rapidly phasing out their jobs in favour of artificial intelligence. That is the troubling conclusion of my conversations over the past several months with economists, corporate executives and young job seekers, many of whom pointed to an emerging crisis for entry-level workers that appears to be fuelled, at least in part, by rapid advances in AI capabilities.

You can see hints of this in the economic data. Unemployment for recent college graduates has jumped to an unusually high 5.8% in recent months, and the Federal Reserve Bank of New York recently warned that the employment situation for these workers had 'deteriorated noticeably.' Oxford Economics, a research firm that studies labour markets, found that unemployment for recent graduates was heavily concentrated in technical fields like finance and computer science, where AI has made faster gains. 'There are signs that entry-level positions are being displaced by artificial intelligence at higher rates,' the firm wrote in a recent report. But I'm convinced that what's showing up in the economic data is only the tip of the iceberg.
In interview after interview, I'm hearing that firms are making rapid progress toward automating entry-level work and that AI companies are racing to build 'virtual workers' that can replace junior employees at a fraction of the cost. Corporate attitudes toward automation are changing, too — some firms have encouraged managers to become 'AI-first,' testing whether a given task can be done by AI before hiring a human to do it.

One tech executive recently told me his company had stopped hiring anything below an L5 software engineer — a mid-level title typically given to programmers with three to seven years of experience — because lower-level tasks could now be done by AI coding tools. Another told me that his startup now employed a single data scientist to do the kinds of tasks that required a team of 75 people at his previous company.

Anecdotes like these don't add up to mass joblessness, of course. Most economists believe there are multiple factors behind the rise in unemployment for college graduates, including a hiring slowdown by big tech companies and broader uncertainty about President Donald Trump's economic policies. But among people who pay close attention to what's happening in AI, alarms are starting to go off. 'This is something I'm hearing about left and right,' said Molly Kinder, a fellow at the Brookings Institution, a public policy think tank, who studies the impact of AI on workers. 'Employers are saying, 'These tools are so good that I no longer need marketing analysts, finance analysts and research assistants.''

Using AI to automate white-collar jobs has been a dream among executives for years. (I heard them fantasising about it in Davos back in 2019.) But until recently, the technology simply wasn't good enough. You could use AI to automate some routine back-office tasks — and many companies did — but when it came to the more complex and technical parts of many jobs, AI couldn't hold a candle to humans.
That is starting to change, especially in fields such as software engineering, where there are clear markers of success and failure. (Such as: Does the code work or not?) In these fields, AI systems can be trained using a trial-and-error process known as reinforcement learning to perform complex sequences of actions on their own. Eventually, they can become competent at carrying out tasks that would take human workers hours or days to complete.

This approach was on display last week at an event held by Anthropic, the AI company that makes the Claude chatbot. The company claims that its most powerful model, Claude Opus 4, can now code for 'several hours' without stopping — a tantalising possibility if you're a company accustomed to paying six-figure engineer salaries for that kind of productivity.

AI companies are starting with software engineering and other technical fields because that's where the low-hanging fruit is. (And, perhaps, because that's where their own labour costs are highest.) But these companies believe the same techniques will soon be used to automate work in dozens of occupations, ranging from consulting to finance to marketing. Dario Amodei, Anthropic's CEO, recently predicted that AI could eliminate half of all entry-level white-collar jobs within five years. That timeline could be wildly off if firms outside tech adopt AI more slowly than many Silicon Valley companies have, or if it's harder than expected to automate jobs in more creative and open-ended occupations where training data is scarce.


India Today
Anthropic co-founder Jared Kaplan says Claude access for Windsurf was cut because of OpenAI
Anthropic co-founder Jared Kaplan has confirmed that Anthropic deliberately cut Windsurf's direct access to its Claude models due to ongoing reports that OpenAI plans to acquire Windsurf. Kaplan's reasoning is that 'it would be odd for us to be selling Claude to OpenAI' through a third party.

His response and confirmation come after Windsurf CEO Varun Mohan publicly slammed Anthropic for cutting off Windsurf's first-party access to Claude 3.x models with less than a week's notice, forcing the popular AI-native IDE (short for Integrated Development Environment) to make last-minute adjustments for its user base. This was not a one-off incident either. Earlier, Anthropic had barred Windsurf users from accessing the new Claude Sonnet 4 and Opus 4 models on day one of their launch.

It was widely speculated that the purported OpenAI acquisition would be a big bone of contention, since logic dictates that Anthropic may not want OpenAI – a competing AI brand – to have any type of open window to its user data which it could then use to train its own ChatGPT models. Kaplan has basically admitted to this, giving a bit of an insight into Anthropic's core reasoning behind – what some might call – severing ties with a platform used by over a million developers globally. There are two reasons. One is that Anthropic – like any other company – would want to focus on long-term customers, those it can have long-term partnerships with. Secondly, it won't be smart to spend resources – meaning compute, which is limited – on clients that may or may not be around in the near future.

Kaplan did not address the elephant in the room, which is whether Anthropic was okay with OpenAI getting access to its data if it ends up buying Windsurf, as per reports.
He also did not comment on where the industry would go if this became a common practice, just as he did not say if Windsurf users should expect uninterrupted access to Claude without Anthropic keys anytime soon.

Windsurf CEO Varun Mohan has called it a 'short-term' issue, hinting that discussions are probably on for some middle ground. In the meantime, Windsurf is actively working to bring new capacity online while launching a promotional scheme for Google's Gemini 2.5 Pro, offering it at 0.75x its original price. Also, it has implemented a "bring-your-own-key" (BYOK) system for Claude Sonnet 4 and Opus 4 as well as for the Claude 3.x models, while removing direct access for free users and those on Pro plan trials.

'We have been very clear to the Anthropic team that our priority was to keep the Anthropic models as recommended models and have been continuously willing to pay for the capacity,' Mohan said in a blog post, adding that 'We are concerned that Anthropic's conduct will harm many in the industry, not just Windsurf.'


Hans India
OpenAI Academy Set to Launch in India Under MoU with IndiaAI Mission
In a landmark move to strengthen AI education and innovation in India, OpenAI has signed a memorandum of understanding (MoU) with the Indian government's IndiaAI Mission to launch the OpenAI Academy, marking its first international rollout. As part of this collaboration, OpenAI and IndiaAI will jointly deliver curated artificial intelligence (AI) training content via the OpenAI Academy and the IndiaAI FutureSkills portal. These offerings will be accessible to a wide audience, including public sector professionals, in multiple languages: English, Hindi, and four regional languages, ensuring inclusivity in skill-building.

Jason Kwon, Chief Strategy Officer at OpenAI, emphasized India's key role in the global AI landscape, stating, 'With the second-largest number of ChatGPT users, India ranks among the top countries actively building AI technologies.'

Speaking at the launch event, Abhishek Singh, CEO of IndiaAI Mission, identified one of India's critical hurdles in AI development: access to computing power. 'This has been addressed by providing around 34,000 GPUs at affordable rates — less than a dollar per GPU hour — significantly lower than global prices,' he said. In addition to computational resources, Singh also spotlighted the AI Kosh platform, which offers a rich repository of datasets across sectors, along with essential tools, AI models, and a sandbox environment to encourage innovation.

Through a virtual message, Union Minister Ashwini Vaishnaw described the initiative as a milestone for the country. 'This partnership is a significant step towards advancing our shared goal of democratizing access to knowledge and technology,' he said.

This launch follows Kwon's global tour to engage with international policymakers on responsible AI deployment and governance. The academy initiative in India is aligned with OpenAI's broader commitment to make AI education globally accessible and responsible.
In a move reflecting its growing investment in the Indian market, OpenAI last month introduced local data residency for Indian users. This ensures that data from ChatGPT Enterprise, ChatGPT Edu, and the OpenAI API platform will now be stored within the country, addressing key regulatory and privacy concerns.

Further expanding its impact, OpenAI also announced the extension of its AI for Impact Accelerator Program to India. As part of this phase, 11 nonprofit organizations in the country have been selected to receive API credits and technical grants totaling $150,000. This initiative is backed by philanthropic partners including The Agency Fund and Tech4Dev. The selected nonprofits will gain access to hands-on technical mentorship, cohort-based learning programs, and early access to OpenAI tools, empowering them to develop AI-driven solutions for social challenges.

By combining educational outreach, infrastructure support, and social impact investment, OpenAI's partnership with IndiaAI is poised to significantly boost India's AI capabilities, while also ensuring that AI development remains inclusive, ethical, and transformative.