Reddit users' comments were used to train AI chatbot, social media platform sues artificial intelligence company

Economic Times, 2 days ago

Reddit has previously entered licensing agreements with Google, OpenAI and other companies that are paying to be able to train their AI systems on the public commentary of Reddit's more than 100 million daily users.
Social media platform Reddit sued the artificial intelligence company Anthropic on Wednesday, alleging that it is illegally "scraping" the comments of millions of Reddit users to train its chatbot Claude. Reddit claims that Anthropic used automated bots to access Reddit's content despite being asked not to, and "intentionally trained on the personal data of Reddit users without ever requesting their consent." Anthropic said in a statement that it disagreed with Reddit's claims "and will defend ourselves vigorously."

Reddit filed the lawsuit Wednesday in California Superior Court in San Francisco, where both companies are based. "AI companies should not be allowed to scrape information and content from people without clear limitations on how they can use that data," said Ben Lee, Reddit's chief legal officer, in a statement Wednesday.

Reddit has previously entered licensing agreements with Google, OpenAI and other companies that are paying to train their AI systems on the public commentary of Reddit's more than 100 million daily users. Those agreements "enable us to enforce meaningful protections for our users, including the right to delete your content, user privacy protections, and preventing users from being spammed using this content," Lee said, AP reported.

The licensing deals also helped the 20-year-old online platform raise money ahead of its Wall Street debut as a publicly traded company last year. Among those who stood to benefit was OpenAI CEO Sam Altman, who accumulated a stake as an early Reddit investor that made him one of the company's biggest shareholders. Anthropic was formed by former OpenAI executives in 2021, and its flagship Claude chatbot remains a key competitor to OpenAI's ChatGPT.
While OpenAI has close ties to Microsoft, Anthropic's primary commercial partner is Amazon, which is using Claude to improve its widely used Alexa voice assistant.

Much like other AI companies, Anthropic has relied heavily on websites such as Wikipedia and Reddit, deep troves of written material that can help teach an AI assistant the patterns of human language. In a 2021 paper co-authored by Anthropic CEO Dario Amodei, cited in the lawsuit, researchers at the company identified the subreddits, or subject-matter forums, that contained the highest-quality AI training data, such as those focused on gardening, history, relationship advice or thoughts people have in the shower.

Anthropic argued in a 2023 letter to the U.S. Copyright Office that the "way Claude was trained qualifies as a quintessentially lawful use of materials," by making copies of information to perform a statistical analysis of a large body of data. It is already battling a lawsuit from major music publishers alleging that Claude regurgitates the lyrics of copyrighted songs.

But Reddit's lawsuit is different from others brought against AI companies because it doesn't allege copyright infringement. Instead, it focuses on the alleged breach of Reddit's terms of use and the unfair competition Reddit says that breach created.



Related Articles

Anthropic working on building AI tools exclusively for US military and intelligence operations

India Today

20 minutes ago


Artificial Intelligence (AI) company Anthropic has announced that it is building custom AI tools specifically for the US military and intelligence community. These tools, under the name 'Claude Gov', are already being used by some of the top US national security agencies.

Anthropic explains in its official blog post that Claude Gov models are designed to assist with a wide range of tasks, including intelligence analysis, threat detection, strategic planning, and operational support. According to Anthropic, these models have been developed based on direct input from national security agencies and are tailored to meet the specific needs of classified environments. 'We're introducing a custom set of Claude Gov models built exclusively for US national security customers,' the company said. 'Access to these models is limited to those who operate in such classified environments.'

Anthropic claims that Claude Gov has undergone the same safety checks as its regular AI models but has added capabilities. These include better handling of classified materials, improved understanding of intelligence and defence-related documents, stronger language and dialect skills critical to global operations, and deeper insights into cybersecurity data. While the company has not disclosed which agencies are currently using Claude Gov, it stressed that all deployments are within highly classified environments and the models are strictly limited to national security use. Anthropic also reiterated its 'unwavering commitment to safety and responsible AI development.'

Anthropic's move highlights a growing trend of tech companies building advanced AI tools for defence. Earlier this year, OpenAI introduced ChatGPT Gov, a tailored version of ChatGPT built exclusively for the US government. ChatGPT Gov tools run within Microsoft's Azure cloud, giving agencies full control over how they are deployed and managed.
The Gov model shares many features with ChatGPT Enterprise, but it places added emphasis on meeting government standards for data privacy, oversight, and responsible AI usage.

Besides Anthropic and OpenAI, Meta is also working with the US government to offer its tech for military use. Last month, Meta CEO Mark Zuckerberg revealed a partnership with Anduril Industries, founded by Oculus creator Palmer Luckey, to develop augmented and virtual reality gear for the US military. The two companies are working on a project called EagleEye, which aims to create a full ecosystem of wearable tech, including helmets and smart glasses, that gives soldiers better battlefield awareness. Anduril has said these wearable systems will allow soldiers to control autonomous drones and robots using intuitive, AR-powered interfaces.

'Meta has spent the last decade building AI and AR to enable the computing platform of the future,' Zuckerberg said. 'We're proud to partner with Anduril to help bring these technologies to the American service members that protect our interests at home and abroad.'

Together, these developments point to a larger shift in the US defence industry, where traditional military tools are being paired with advanced AI and wearable tech.

OpenAI eyes collaboration with India for data centres

Hindustan Times

An hour ago


OpenAI, the developer of the artificial intelligence chatbot ChatGPT, is in talks to partner with India under its 'OpenAI for Countries' initiative, a person familiar with the matter has said. 'OpenAI for Countries' is a new global initiative by the creator of ChatGPT to help interested national governments with efforts like building out data centre capacity in coordination with the US government.

OpenAI's chief strategy officer Jason Kwon, who is currently in Delhi, has been touring the Asia Pacific region, covering countries like Japan, South Korea, Australia, India and Singapore, meeting governments and potential private-sector partners to discuss opportunities within the 'OpenAI for Countries' initiative. The 'for Countries' initiative works as a two-way partnership between the US and another country. In its pilot project, announced with the UAE last month, the Middle East country will house a 1-gigawatt AI computing cluster called Stargate in Abu Dhabi, while also investing in similar AI infrastructure in the US as part of the US-UAE AI Acceleration Partnership. While details of an India-US partnership under the initiative remain undisclosed, the company's goal is to pursue 10 projects with individual countries or regions as the first phase, and expand from there, according to a company blog post.

In a step towards 'cultivating AI talent' in India, OpenAI, in partnership with MeitY's IndiaAI Mission, launched 'OpenAI Academy' to give India's developer community and network of startups and innovators access to AI education and tools. The two also signed a memorandum of understanding, making it OpenAI's first international launch of its education platform. The MoU outlines seven joint initiatives, including OpenAI contributing educational content to IndiaAI Mission's FutureSkills platform to support digital skilling.
The educational content will also be hosted on the iGOT Karmayogi platform to train government officials, initially in English and Hindi, and later in four regional languages. Additionally, OpenAI will conduct regular webinars and in-person workshops in six cities, and provide up to $100,000 in API credits to 50 IndiaAI-approved fellows or startups. The partnership also includes a series of hackathons across seven states, expected to reach around 25,000 students, with winners receiving invitations to attend OpenAI Dev Day events internationally. 'We believe the latest AI frameworks and tools must be made widely accessible so that our startups, developers, and researchers can build faster, better, and at scale,' IT minister Ashwini Vaishnaw said in a video message at the event.

For Some Recent Graduates, the AI Job Apocalypse May Already Be Here

Time of India

2 hours ago


Highlights:
  • Unemployment for recent college graduates has risen to 5.8%, with a notable increase in job displacement attributed to advances in artificial intelligence, particularly in technical fields like finance and computer science.
  • Many companies are adopting an 'AI-first' approach, with some executives reporting a halt in hiring for lower-level positions because artificial intelligence tools can now perform tasks that previously required human employees.
  • Dario Amodei, chief executive officer of Anthropic, has predicted that artificial intelligence could eliminate half of all entry-level white-collar jobs within the next five years.

This month, millions of young people will graduate from college and look for work in industries that have little use for their skills, view them as expensive and expendable, and are rapidly phasing out their jobs in favour of artificial intelligence. That is the troubling conclusion of my conversations over the past several months with economists, corporate executives and young job seekers, many of whom pointed to an emerging crisis for entry-level workers that appears to be fuelled, at least in part, by rapid advances in AI capabilities.

You can see hints of this in the economic data. Unemployment for recent college graduates has jumped to an unusually high 5.8% in recent months, and the Federal Reserve Bank of New York recently warned that the employment situation for these workers had 'deteriorated noticeably.' Oxford Economics, a research firm that studies labour markets, found that unemployment for recent graduates was heavily concentrated in technical fields like finance and computer science, where AI has made faster gains. 'There are signs that entry-level positions are being displaced by artificial intelligence at higher rates,' the firm wrote in a recent report. But I'm convinced that what's showing up in the economic data is only the tip of the iceberg.
In interview after interview, I'm hearing that firms are making rapid progress toward automating entry-level work and that AI companies are racing to build 'virtual workers' that can replace junior employees at a fraction of the cost. Corporate attitudes toward automation are changing, too: some firms have encouraged managers to become 'AI-first,' testing whether a given task can be done by AI before hiring a human to do it. One tech executive recently told me his company had stopped hiring anything below an L5 software engineer, a mid-level title typically given to programmers with three to seven years of experience, because lower-level tasks could now be done by AI coding tools. Another told me that his startup now employed a single data scientist to do the kinds of tasks that required a team of 75 people at his previous company.

Anecdotes like these don't add up to mass joblessness, of course. Most economists believe there are multiple factors behind the rise in unemployment for college graduates, including a hiring slowdown by big tech companies and broader uncertainty about President Donald Trump's economic policies. But among people who pay close attention to what's happening in AI, alarms are starting to go off. 'This is something I'm hearing about left and right,' said Molly Kinder, a fellow at the Brookings Institution, a public policy think tank, who studies the impact of AI on workers. 'Employers are saying, "These tools are so good that I no longer need marketing analysts, finance analysts and research assistants."'

Using AI to automate white-collar jobs has been a dream among executives for years. (I heard them fantasising about it in Davos back in 2019.) But until recently, the technology simply wasn't good enough. You could use AI to automate some routine back-office tasks, and many companies did, but when it came to the more complex and technical parts of many jobs, AI couldn't hold a candle to humans.
That is starting to change, especially in fields such as software engineering, where there are clear markers of success and failure. (Such as: Does the code work or not?) In these fields, AI systems can be trained using a trial-and-error process known as reinforcement learning to perform complex sequences of actions on their own. Eventually, they can become competent at carrying out tasks that would take human workers hours or days to complete.

This approach was on display last week at an event held by Anthropic, the AI company that makes the Claude chatbot. The company claims that its most powerful model, Claude Opus 4, can now code for 'several hours' without stopping, a tantalising possibility if you're a company accustomed to paying six-figure engineer salaries for that kind of productivity.

AI companies are starting with software engineering and other technical fields because that's where the low-hanging fruit is. (And, perhaps, because that's where their own labour costs are highest.) But these companies believe the same techniques will soon be used to automate work in dozens of occupations, ranging from consulting to finance to marketing. Dario Amodei, Anthropic's CEO, recently predicted that AI could eliminate half of all entry-level white-collar jobs within five years. That timeline could be wildly off if firms outside tech adopt AI more slowly than many Silicon Valley companies have, or if it's harder than expected to automate jobs in more creative and open-ended occupations where training data is scarce.
