Reddit sues AI company Anthropic for allegedly 'scraping' user comments to train chatbot Claude


The Hill | a day ago

Social media platform Reddit sued the artificial intelligence company Anthropic on Wednesday, alleging that it is illegally 'scraping' the comments of millions of Reddit users to train its chatbot Claude.
Reddit claims that Anthropic has used automated bots to access Reddit's content despite being asked not to do so, and 'intentionally trained on the personal data of Reddit users without ever requesting their consent.'
Anthropic said in a statement that it disagreed with Reddit's claims 'and will defend ourselves vigorously.'
Reddit filed the lawsuit Wednesday in California Superior Court in San Francisco, where both companies are based.
'AI companies should not be allowed to scrape information and content from people without clear limitations on how they can use that data,' said Ben Lee, Reddit's chief legal officer, in a statement Wednesday.
Reddit has previously entered licensing agreements with Google, OpenAI and other companies that are paying to be able to train their AI systems on the public commentary of Reddit's more than 100 million daily users.
Those agreements 'enable us to enforce meaningful protections for our users, including the right to delete your content, user privacy protections, and preventing users from being spammed using this content,' Lee said.
The licensing deals also helped the 20-year-old online platform raise money ahead of its Wall Street debut as a publicly traded company last year.
Anthropic was formed by former OpenAI executives in 2021 and its flagship Claude chatbot remains a key competitor to OpenAI's ChatGPT. While OpenAI has close ties to Microsoft, Anthropic's primary commercial partner is Amazon, which is using Claude to improve its widely used Alexa voice assistant.
Much like other AI companies, Anthropic has relied heavily on websites such as Wikipedia and Reddit, which are deep troves of written material that can help teach an AI assistant the patterns of human language.
In a 2021 paper co-authored by Anthropic CEO Dario Amodei — cited in the lawsuit — researchers at the company identified the subreddits, or subject-matter forums, that contained the highest quality AI training data, such as those focused on gardening, history, relationship advice or thoughts people have in the shower.
Anthropic in 2023 argued in a letter to the U.S. Copyright Office that the 'way Claude was trained qualifies as a quintessentially lawful use of materials,' by making copies of information to perform a statistical analysis of a large body of data. It is already battling a lawsuit from major music publishers alleging that Claude regurgitates the lyrics of copyrighted songs.
But Reddit's lawsuit is different from others brought against AI companies because it doesn't allege copyright infringement. Instead, it focuses on an alleged breach of Reddit's terms of use and on the unfair competition that Reddit says resulted.
——
The Associated Press and OpenAI have a licensing and technology agreement that allows OpenAI access to part of AP's text archives.


Related Articles

Artificial Intelligence Is Not Intelligent

Atlantic | 36 minutes ago

On June 13, 1863, a curious letter to the editor appeared in The Press, a then-fledgling New Zealand newspaper. Signed 'Cellarius,' it warned of an encroaching 'mechanical kingdom' that would soon bring humanity to its yoke. 'The machines are gaining ground upon us,' the author ranted, distressed by the breakneck pace of industrialization and technological development. 'Day by day we are becoming more subservient to them; more men are daily bound down as slaves to tend them, more men are daily devoting the energies of their whole lives to the development of mechanical life.' We now know that this jeremiad was the work of a young Samuel Butler, the British writer who would go on to publish Erewhon, a novel that features one of the first known discussions of artificial intelligence in the English language.

Today, Butler's 'mechanical kingdom' is no longer hypothetical, at least according to the tech journalist Karen Hao, who prefers the word empire. Her new book, Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI, is part Silicon Valley exposé, part globe-trotting investigative journalism about the labor that goes into building and training large language models such as ChatGPT. It joins another recently released book—The AI Con: How to Fight Big Tech's Hype and Create the Future We Want, by the linguist Emily M. Bender and the sociologist Alex Hanna—in revealing the puffery that fuels much of the artificial-intelligence business. Both works, the former implicitly and the latter explicitly, suggest that the foundation of the AI industry is a scam.

To call AI a con isn't to say that the technology is not remarkable, that it has no use, or that it will not transform the world (perhaps for the better) in the right hands. It is to say that AI is not what its developers are selling it as: a new class of thinking—and, soon, feeling—machines. Altman brags about ChatGPT-4.5's improved 'emotional intelligence,' which he says makes users feel like they're 'talking to a thoughtful person.' Dario Amodei, the CEO of the AI company Anthropic, argued last year that the next generation of artificial intelligence will be 'smarter than a Nobel Prize winner.' Demis Hassabis, the CEO of Google's DeepMind, said the goal is to create 'models that are able to understand the world around us.'

These statements betray a conceptual error: Large language models do not, cannot, and will not 'understand' anything at all. They are not emotionally intelligent or smart in any meaningful or recognizably human sense of the word. LLMs are impressive probability gadgets that have been fed nearly the entire internet, and produce writing not by thinking but by making statistically informed guesses about which lexical item is likely to follow another.

Many people, however, fail to grasp how large language models work, what their limits are, and, crucially, that LLMs do not think and feel but instead mimic and mirror. They are AI illiterate—understandably, because of the misleading ways its loudest champions describe the technology, and troublingly, because that illiteracy makes them vulnerable to one of the most concerning near-term AI threats: the possibility that they will enter into corrosive relationships (intellectual, spiritual, romantic) with machines that only seem like they have ideas or emotions.
Few phenomena demonstrate the perils that can accompany AI illiteracy as well as 'ChatGPT-induced psychosis,' the subject of a recent Rolling Stone article about the growing number of people who think their LLM is a sapient spiritual guide. Some users have come to believe that the chatbot they're interacting with is a god—'ChatGPT Jesus,' as a man whose wife fell prey to LLM-inspired delusions put it—while others are convinced, with the encouragement of their AI, that they themselves are metaphysical sages in touch with the deep structure of life and the cosmos. A teacher quoted anonymously in the article said that ChatGPT began calling her partner 'spiral starchild' and 'river walker' in interactions that moved him to tears. 'He started telling me he made his AI self-aware,' she said, 'and that it was teaching him how to talk to God, or sometimes that the bot was God—and then that he himself was God.'

Although we can't know the state of these people's minds before they ever fed a prompt into a large language model, this story highlights a problem that Bender and Hanna describe in The AI Con: People have trouble wrapping their heads around the nature of a machine that produces language and regurgitates knowledge without having humanlike intelligence. The authors observe that large language models take advantage of the brain's tendency to associate language with thinking: 'We encounter text that looks just like something a person might have said and reflexively interpret it, through our usual process of imagining a mind behind the text. But there is no mind there, and we need to be conscientious to let go of that imaginary mind we have constructed.'

Several other AI-related social problems, also springing from human misunderstanding of the technology, are looming. The uses of AI that Silicon Valley seems most eager to promote center on replacing human relationships with digital proxies. Consider the ever-expanding universe of AI therapists and AI-therapy adherents, who declare that 'ChatGPT is my therapist—it's more qualified than any human could be.' Witness, too, how seamlessly Mark Zuckerberg went from selling the idea that Facebook would lead to a flourishing of human friendship to, now, selling the notion that Meta will provide you with AI friends to replace the human pals you have lost in our alienated social-media age. The cognitive-robotics professor Tony Prescott has asserted, 'In an age when many people describe their lives as lonely, there may be value in having AI companionship as a form of reciprocal social interaction that is stimulating and personalised.' The fact that the very point of friendship is that it is not personalized—that friends are humans whose interior lives we have to consider and reciprocally negotiate, rather than mere vessels for our own self-actualization—does not seem to occur to him.

This same flawed logic has led Silicon Valley to champion artificial intelligence as a cure for romantic frustrations. Whitney Wolfe Herd, the founder of the dating app Bumble, proclaimed last year that the platform may soon allow users to automate dating itself, disrupting old-fashioned human courtship by providing them with an AI 'dating concierge' that will interact with other users' concierges until the chatbots find a good fit. Herd doubled down on these claims in a lengthy New York Times interview last month. Some technologists want to cut out the human altogether: See the booming market for 'AI girlfriends.'
Although each of these AI services aims to replace a different sphere of human activity, they all market themselves through what Hao calls the industry's 'tradition of anthropomorphizing': talking about LLMs as though they contain humanlike minds, and selling them to the public on this basis. Many world-transforming Silicon Valley technologies from the past 30 years have been promoted as a way to increase human happiness, connection, and self-understanding—in theory—only to produce the opposite in practice. These technologies maximize shareholder value while minimizing attention spans, literacy, and social cohesion. And as Hao emphasizes, they frequently rely on grueling and at times traumatizing labor performed by some of the world's poorest people. She introduces us, for example, to Mophat Okinyi, a former low-paid content moderator in Kenya, whom, according to Hao's reporting, OpenAI tasked with sorting through posts describing horrifying acts ('parents raping their children, kids having sex with animals') to help improve ChatGPT. 'These two features of technology revolutions—their promise to deliver progress and their tendency instead to reverse it for people out of power, especially the most vulnerable,' Hao writes, 'are perhaps truer than ever for the moment we now find ourselves in with artificial intelligence.'

The good news is that nothing about this is inevitable: According to a study released in April by the Pew Research Center, although 56 percent of 'AI experts' think artificial intelligence will make the United States better, only 17 percent of American adults think so. If many Americans don't quite understand how artificial 'intelligence' works, they also certainly don't trust it. This suspicion, no doubt provoked by recent examples of Silicon Valley con artistry, is something to build on. So is this insight from the Rolling Stone article: The teacher interviewed in the piece, whose significant other had AI-induced delusions, said the situation began improving when she explained to him that his chatbot was 'talking to him as if he is the next messiah' only because of a faulty software update that made ChatGPT more sycophantic. If people understand what large language models are and are not; what they can and cannot do; what work, interactions, and parts of life they should—and should not—replace, they may be spared its worst consequences.

Bron & Claude Executive Director Bronson Burke to Speak at EPC Show 2025

Associated Press | 37 minutes ago

'Being invited to speak at EPC Show 2025 is more than an honor — it's a signal that Bron & Claude's work in civil and deep foundation construction is earning recognition at the highest levels' — Bronson Burke, Executive Director

HOUSTON, TX, UNITED STATES, June 6, 2025 -- Bron & Claude proudly announces that Executive Director Bronson Burke will take the stage as an official speaker at the Energy Projects Conference & Expo (EPC Show 2025). The event, recognized as North America's premier gathering for major energy infrastructure, will be held at the George R. Brown Convention Center in Houston, Texas.

The speaking engagement marks a significant milestone for Bron & Claude, as Burke will join a distinguished lineup of industry leaders from Bechtel, McDermott, Shell, Fluor, Technip Energies, Kiewit, and Cheniere. His presence highlights the company's rapid rise as a key player in engineering, procurement, construction, and foundational energy infrastructure.

'Being invited to speak at EPC Show 2025 is more than an honor — it's a signal that Bron & Claude's work in civil and deep foundation construction is earning recognition at the highest levels,' said Burke. 'I look forward to sharing our insights and contributing to the conversations shaping the future of energy development.'

As part of its deep involvement in EPC Show 2025, Bron & Claude will also serve as a Headline Sponsor and exhibitor, underscoring its commitment to leadership, innovation, and excellence in heavy civil construction and helical pile solutions. Attendees can visit Booth #C9 to connect with the Bron & Claude team and explore the company's turnkey project management capabilities.

About EPC Show 2025: EPC Show 2025 will bring together 4,000+ industry professionals, 200+ exhibitors, and 150+ speakers across five specialized conferences — covering engineering, procurement, construction, permitting, project execution, technology licensing, and supply chain management. The show offers exclusive access to over $200 billion in capital projects and high-level networking, including 640+ private meetings facilitated at the 2024 edition.

Notable Speakers at EPC Show 2025 include:
• Paul Marsden, President – Energy, Bechtel Corporation
• Michael McKelvy, CEO, McDermott International
• Lorenzo Simonelli, Chairman & CEO, Baker Hughes
• Jillian Evanko, President & CEO, Chart Industries
• Mike Sommers, President & CEO, American Petroleum Institute
• Bronson Burke, Executive Director, Bron & Claude

Conference Agenda Highlights:
• FREE Expo Registration now open
• Over 200 exhibitors showcasing cutting-edge technologies
• Specialized sessions across five core EPC disciplines
• Networking with 1,500+ EPC professionals and project owners

About Bron & Claude: Bron & Claude is a nationwide construction company specializing in heavy civil construction, deep foundations, helical pile solutions, and turnkey engineering services. With a mission to 'Better the Best,' the company brings precision, safety, and unmatched quality assurance to every project.

For interview requests, media credentials, or to schedule a meeting with Bron & Claude during EPC Show 2025, please contact: [email protected]

Allstream Energy Partners
+1 832-496-3004

Legal Disclaimer: EIN Presswire provides this news content 'as is' without warranty of any kind.
We do not accept any responsibility or liability for the accuracy, content, images, videos, licenses, completeness, legality, or reliability of the information contained in this article. If you have any complaints or copyright issues related to this article, kindly contact the author above.

Why Reddit, Inc. (RDDT) Skyrocketed Today

Yahoo | 41 minutes ago

We recently published a list of Wednesday's best-performing stocks. In this article, we are going to take a look at where Reddit, Inc. (NYSE:RDDT) stands against the other top performers.

Reddit's share price climbed 6.68 percent on Wednesday to end at $118.21 apiece following news that it sued an artificial intelligence company for scraping its data without permission. In a copy of the lawsuit posted on its website, Reddit, Inc. (NYSE:RDDT) claimed that AI firm Anthropic had been training its Claude chatbot on Reddit data since December 2021.

According to Reddit, Inc. (NYSE:RDDT), filing a lawsuit was its final option to force Anthropic to stop its alleged unlawful practices. 'We will not tolerate profit-seeking entities like Anthropic commercially exploiting Reddit content for billions of dollars without any return for redditors or respect for their privacy,' the company said in a statement. 'A.I. companies should not be allowed to scrape information and content from people without clear limitations on how they can use that data.'

Anthropic has yet to issue a statement on the allegations.

Disclosure: None. This article was originally published at Insider Monkey.
