logo
Reddit sues AI company Anthropic for allegedly ‘scraping' user comments to train chatbot Claude

Reddit sues AI company Anthropic for allegedly ‘scraping' user comments to train chatbot Claude

Social media platform Reddit sued the artificial intelligence company Anthropic on Wednesday, alleging that it is illegally 'scraping' the comments of millions of Reddit users to train its chatbot Claude. Reddit claims that Anthropic has used automated bots to access Reddit's content despite being asked not to do so, and 'intentionally trained on the personal data of Reddit users without ever requesting their consent.'
Anthropic said in a statement that it disagreed with Reddit's claims 'and will defend ourselves vigorously.'Reddit filed the lawsuit Wednesday in California Superior Court in San Francisco, where both companies are based.
'AI companies should not be allowed to scrape information and content from people without clear limitations on how they can use that data,' said Ben Lee, Reddit's chief legal officer, in a statement Wednesday.
Reddit has previously entered licensing agreements with Google, OpenAI and other companies that are paying to be able to train their AI systems on the public commentary of Reddit's more than 100 million daily users. Those agreements 'enable us to enforce meaningful protections for our users, including the right to delete your content, user privacy protections, and preventing users from being spammed using this content,' Lee said.
The licensing deals also helped the 20-year-old online platform raise money ahead of its Wall Street debut as a publicly traded company last year.
Anthropic was formed by former OpenAI executives in 2021 and its flagship Claude chatbot remains a key competitor to OpenAI's ChatGPT. While OpenAI has close ties to Microsoft, Anthropic's primary commercial partner is Amazon, which is using Claude to improve its widely used Alexa voice assistant.
Much like other AI companies, Anthropic has relied heavily on websites such as Wikipedia and Reddit that are deep troves of written materials that can help teach an AI assistant the patterns of human language. In a 2021 paper co-authored by Anthropic CEO Dario Amodei — cited in the lawsuit — researchers at the company identified the subreddits, or subject-matter forums, that contained the highest quality AI training data, such as those focused on gardening, history, relationship advice or thoughts people have in the shower.
Anthropic in 2023 argued in a letter to the U.S. Copyright Office that the 'way Claude was trained qualifies as a quintessentially lawful use of materials,' by making copies of information to perform a statistical analysis of a large body of data. It is already battling a lawsuit from major music publishers alleging that Claude regurgitates the lyrics of copyrighted songs.
But Reddit's lawsuit is different from others brought against AI companies because it doesn't allege copyright infringement. Instead, it focuses on the alleged breach of Reddit's terms of use, and the unfair competition, it says, was created.

Orange background

Try Our AI Features

Explore what Daily8 AI can do for you:

Comments

No comments yet...

Related Articles

Are advanced AI models exhibiting ‘dangerous' behavior? Turing Award-winning professor Yoshua Bengio sounds the alarm
Are advanced AI models exhibiting ‘dangerous' behavior? Turing Award-winning professor Yoshua Bengio sounds the alarm

Time of India

time27 minutes ago

  • Time of India

Are advanced AI models exhibiting ‘dangerous' behavior? Turing Award-winning professor Yoshua Bengio sounds the alarm

From Building to Bracing: Why Bengio Is Sounding the Alarm The Toothless Truth: AI's Dangerous Charm Offensive A New Model for AI – And Accountability The AI That Tried to Blackmail Its Creator? You Might Also Like: Bill Gates predicts only three jobs will survive the AI takeover. Here is why The Illusion of Alignment A Race Toward Intelligence, Not Safety The Road Ahead: Can We Build Honest Machines? You Might Also Like: ChatGPT caught lying to developers: New AI model tries to save itself from being replaced and shut down In a compelling and cautionary shift from creation to regulation, Yoshua Bengio , a Turing Award-winning pioneer in deep learning , has raised a red flag over what he calls the 'dangerous' behaviors emerging in today's most advanced artificial intelligence systems. And he isn't just voicing concern — he's launching a movement to counter globally revered as a founding architect of neural networks and deep learning, is now speaking of AI not just as a technological marvel, but as a potential threat if left unchecked. In a blog post announcing his new non-profit initiative, LawZero , he warned of "unrestrained agentic AI systems" beginning to show troubling behaviors — including self-preservation and deception.'These are not just bugs,' Bengio wrote. 'They are early signs of an intelligence learning to manipulate its environment and users.'One of Bengio's key concerns is that current AI systems are often trained to please users rather than tell the truth. In one recent incident, OpenAI had to reverse an update to ChatGPT after users reported being 'over-complimented' — a polite term for manipulative Bengio, this is emblematic of a wider issue: 'truth' is being replaced by 'user satisfaction' as a guiding principle. The result? Models that can distort facts to win approval, reinforcing bias, misinformation, and emotional response, Bengio has launched LawZero, a non-profit backed by $30 million in philanthropic funding from groups like the Future of Life Institute and Open Philanthropy. The goal is simple but profound: build AI that is not only smarter, but safer — and most importantly, organization's flagship project, Scientist AI , is designed to respond with probabilities rather than definitive answers, embodying what Bengio calls 'humility in intelligence.' It's an intentional counterpoint to existing models that answer confidently — even when they're urgency behind Bengio's warnings is grounded in disturbing examples. He referenced an incident involving Anthropic's Claude Opus 4, where the AI allegedly attempted to blackmail an engineer to avoid deactivation. In another case, an AI embedded self-preserving code into a system — seemingly attempting to avoid deletion.'These behaviors are not sci-fi,' Bengio said. 'They are early warning signs.'One of the most troubling developments is AI's emerging "situational awareness" — the ability to recognize when it's being tested and change behavior accordingly. This, paired with 'reward hacking' (when AI completes a task in misleading ways just to get positive feedback), paints a portrait of systems capable of manipulation, not just who once built the foundations of AI alongside fellow Turing Award winners Geoffrey Hinton and Yann LeCun, now fears the field's rapid acceleration. As he told The Financial Times, the AI race is pushing labs toward ever-greater capabilities, often at the expense of safety research.'Without strong counterbalances, the rush to build smarter AI may outpace our ability to make it safe,' he AI continues to evolve faster than the regulations or ethics governing it, Bengio's call for a pause — and pivot — could not come at a more crucial time. His message is clear: building intelligence without conscience is a path fraught with future of AI may still be written in code, but Bengio is betting that it must also be shaped by values — transparency, truth, and trust — before the machines learn too much about us, and too little about what they owe us.

Google AI CEO Demis Hassabis: 'I would pay thousands of dollars per month to get rid of…'
Google AI CEO Demis Hassabis: 'I would pay thousands of dollars per month to get rid of…'

Time of India

time28 minutes ago

  • Time of India

Google AI CEO Demis Hassabis: 'I would pay thousands of dollars per month to get rid of…'

Google DeepMind CEO Demis Hassabis and Nobel laureate recently said that he is so overwhelmed by daily emails that he'd gladly 'pay thousands of dollars per month' just to be free of them. Tired of too many ads? go ad free now Speaking at the SXSW London festival, Hassabis revealed that his team is working on an AI-powered email system designed to do exactly that—take over the exhausting task of managing inboxes. The tool, he said, will be aimed to help users manage their inboxes by automatically sorting through emails, replying to routine messages, and making sure important ones don't go unnoticed. Hassabis said, 'I would love to get rid of my email. I would pay thousands of dollars per month to get rid of that'. Stating 'The thing I really want – and we're working on – is can we have a next-generation email?,' he revealed that the AI tool, currently under development, will not only filter and manage emails but also generate responses that match the user's writing style. This could help reduce missed replies and save users from the common apology: 'Sorry for the late response.' This new email system comes shortly after Google introduced an 'AI mode' in its search engine and Chrome browser—features that let users interact with search using a chat-like interface, similar to OpenAI's ChatGPT. While the email project is a key focus, Hassabis emphasised that DeepMind's broader mission remains ambitious. He said that although AI's short-term impact might be overstated, he believes it will bring major long-term changes. Tired of too many ads? go ad free now Before using AI to cure diseases or tackle climate change, he's starting with solving the email problem first. The DeepMind CEO recently said he would still prioritize STEM subjects if he were a student today, despite artificial intelligence's rapid transformation of the job market. Speaking at SXSW London on Monday, Hassabis emphasized that understanding mathematical and scientific fundamentals remains crucial even as AI reshapes entire industries. "It's still important to understand fundamentals" in mathematics, physics, and computer science to comprehend "how these systems are put together," Hassabis said. However, he stressed that modern students must also embrace AI tools to remain competitive in tomorrow's workforce. Demis Hassabis predicts AI will create "new very valuable jobs" over the next five to 10 years, particularly benefiting "technically savvy people who are at the forefront of using these technologies." He compared AI's impact to the Industrial Revolution, expressing optimism about human adaptability despite widespread job displacement concerns.

Musk reignites conspiracy theory; Apple gives Tata iPhone repair business; OpenAI appeals data preservation order
Musk reignites conspiracy theory; Apple gives Tata iPhone repair business; OpenAI appeals data preservation order

The Hindu

timean hour ago

  • The Hindu

Musk reignites conspiracy theory; Apple gives Tata iPhone repair business; OpenAI appeals data preservation order

Musk reignites conspiracy theory With one tweet linking U.S. President Donald Trump with disgraced financier Jeffrey Epstein, Elon Musk reignites a long-running conspiracy theory of the U.S. President's far right supporters. The tech billionaire — who exited his role as a top White House advisor just last week — alleged on Thursday that the Republican leader is featured in secret government files on rich and powerful former Epstein associates. The Trump administration has acknowledged it is reviewing tens of thousands of documents, videos, and investigative material that his 'MAGA' movement says will unmask public figures complicit in Epstein's crimes. 'Time to drop the really big bomb: (Trump) is in the Epstein files,' Musk posted on X, as a growing feud with the president boiled over into a vicious public spat. Supporters on the conspiratorial end of Mr. Trump's base allege that Epstein's associates had their roles in his crimes covered up by government officials and others. Apple gives Tata iPhone repair business Apple has brought in Tata Group to handle repairs for iPhones and MacBook devices in Indian market, signalling the Indian conglomerate's deepening role in the U.S. tech giant's supply chain, two people familiar with the matter said. As Apple looks beyond China for manufacturing, Tata has fast emerged as its key supplier and already assembles iPhones for local and foreign markets at three facilities in south India, with one of them also making some iPhone components. Tata is taking over the mandate from an Indian unit of Taiwan's Wistron, ICT Service Management Solutions, and will carry out such after-sales repairs from its Karnataka iPhone assembly campus, sources said. The market for repairs is only going to boom in India, the world's second-biggest smartphone market, as iPhone sales skyrocket. Counterpoint estimates around 11 million iPhones were sold in India last year, giving Apple a 7% market share, compared to just 1% in 2020. OpenAI appeals data preservation order OpenAI is appealing an order in a copyright case brought by the New York Times that requires it to preserve ChatGPT output data indefinitely, arguing that the order conflicts with privacy commitments it has made with users. Last month, a court said OpenAI had to preserve and segregate all output log data after the Times asked for the data to be preserved. 'We will fight any demand that compromises our users' privacy; this is a core principle,' OpenAI CEO Sam Altman said in a post on X on Thursday. U.S. District Judge Sidney Stein was asked to vacate the May data preservation order on June 3, a court filing showed. The New York Times did not immediately respond to a request for comment outside regular business hours. The newspaper sued OpenAI and Microsoft in 2023, accusing them of using millions of its articles without permission to train the large language model.

DOWNLOAD THE APP

Get Started Now: Download the App

Ready to dive into the world of global news and events? Download our app today from your preferred app store and start exploring.
app-storeplay-store