Reddit sues Anthropic over AI training data


Qatar Tribune, 2 days ago

Agencies
Social media platform Reddit sued the artificial intelligence giant Anthropic on Wednesday, claiming that it is illegally 'scraping' the comments of millions of Reddit users to train its chatbot Claude.
Reddit claims that Anthropic has used automated bots to access Reddit's content despite being asked not to do so, and 'intentionally trained on the personal data of Reddit users without ever requesting their consent.' Anthropic said in a statement that it disagreed with Reddit's claims 'and will defend ourselves vigorously.' Reddit filed the lawsuit Wednesday in California Superior Court in San Francisco, where both companies are based.
'AI companies should not be allowed to scrape information and content from people without clear limitations on how they can use that data,' said Ben Lee, Reddit's chief legal officer, in a statement Wednesday.
Reddit has previously entered licensing agreements with Google, OpenAI and other companies that are paying to be able to train their AI systems on the public commentary of Reddit's more than 100 million daily users.
Those agreements 'enable us to enforce meaningful protections for our users, including the right to delete your content, user privacy protections, and preventing users from being spammed using this content,' Lee said.
The licensing deals also helped the 20-year-old online platform raise money ahead of its Wall Street debut as a publicly traded company last year.
Anthropic was formed by former OpenAI executives in 2021, and its flagship Claude chatbot remains a key competitor to OpenAI's ChatGPT.


Related Articles

AI chatbots: New frontier of misinformation, not fact-checking

Qatar Tribune, 3 days ago


Agencies
As misinformation exploded during India's four-day conflict with Pakistan, social media users turned to an AI chatbot for verification -- only to encounter more falsehoods, underscoring its unreliability as a fact-checking tool.
With tech platforms reducing human fact-checkers, users are increasingly relying on AI-powered chatbots -- including xAI's Grok, OpenAI's ChatGPT, and Google's Gemini -- in search of reliable information. 'Hey @Grok, is this true?' has become a common query on Elon Musk's platform X, where the AI assistant is built in, reflecting the growing trend of seeking instant debunks on social media. But the responses are often themselves riddled with misinformation.
Grok -- now under renewed scrutiny for inserting 'white genocide,' a far-right conspiracy theory, into unrelated queries -- wrongly identified old video footage from Sudan's Khartoum airport as a missile strike on Pakistan's Nur Khan airbase during the country's recent conflict with India. Unrelated footage of a building on fire in Nepal was misidentified as 'likely' showing Pakistan's military response to Indian strikes.
'The growing reliance on Grok as a fact-checker comes as X and other major tech companies have scaled back investments in human fact-checkers,' McKenzie Sadeghi, a researcher with the disinformation watchdog NewsGuard, told AFP. 'Our research has repeatedly found that AI chatbots are not reliable sources for news and information, particularly when it comes to breaking news,' she warned.
NewsGuard's research found that 10 leading chatbots were prone to repeating falsehoods, including Russian disinformation narratives and false or misleading claims related to the recent Australian election. In a recent study of eight AI search tools, the Tow Center for Digital Journalism at Columbia University found that chatbots were 'generally bad at declining to answer questions they couldn't answer accurately, offering incorrect or speculative answers instead.'
When AFP fact-checkers in Uruguay asked Gemini about an AI-generated image of a woman, it not only confirmed its authenticity but fabricated details about her identity and where the image was likely taken. Grok recently labeled a purported video of a giant anaconda swimming in the Amazon River as 'genuine,' even citing credible-sounding scientific expeditions to support its false claim. In reality, the video was AI-generated, AFP fact-checkers in Latin America reported, noting that many users cited Grok's assessment as evidence the clip was real.
Such findings have raised concerns as surveys show that online users are increasingly shifting from traditional search engines to AI chatbots for information gathering and verification. The shift also comes as Meta announced earlier this year it was ending its third-party fact-checking program in the United States, turning over the task of debunking falsehoods to ordinary users under a model known as 'Community Notes,' popularized by X. Researchers have repeatedly questioned the effectiveness of 'Community Notes' in combating falsehoods.
Human fact-checking has long been a flashpoint in a hyperpolarized political climate, particularly in the United States, where conservative advocates maintain it suppresses free speech and censors right-wing content -- something professional fact-checkers vehemently reject. AFP currently works in 26 languages with Facebook's fact-checking program, including in Asia, Latin America, and the European Union.
The quality and accuracy of AI chatbots can vary, depending on how they are trained and programmed, prompting concerns that their output may be subject to political influence or control. Musk's xAI recently blamed an 'unauthorized modification' for causing Grok to generate unsolicited posts referencing 'white genocide' in South Africa.
When AI expert David Caswell asked Grok who might have modified its system prompt, the chatbot named Musk as the 'most likely' culprit. Musk, the South African-born billionaire backer of President Donald Trump, has previously peddled the unfounded claim that South Africa's leaders were 'openly pushing for genocide' of white people.
'We have seen the way AI assistants can either fabricate results or give biased answers after human coders specifically change their instructions,' Angie Holan, director of the International Fact-Checking Network, told AFP. 'I am especially concerned about the way Grok has mishandled requests concerning very sensitive matters after receiving instructions to provide pre-authorized answers.'

Google steps up AI game with 'AI Mode' and Veo 3

Qatar Tribune, 4 days ago


Agencies
Even though search giant Google looked like it lagged behind the Microsoft/OpenAI team -- and later Meta -- at the start of the unprecedented artificial intelligence race that kicked off at the end of 2022 and the beginning of 2023, the cards now appear to be reshuffling once again.
In recent weeks and months, Google has become more aggressive in asserting that it has caught up to its competitors, most visibly through the unveiling of 'AI Mode' and a new video-focused model, 'Veo 3.'
In tech circles, 'AI Mode' has been seen not merely as a new feature but as something of a paradigm shift and a significant addition to the company's AI offering that could put Google back on track to fight rising competition. Experts also consider it 'a strategic adaptation aligned with the rise of AI-powered search' and 'an adjustment to evolving tech habits.' Google itself calls it 'our most powerful AI search, with more advanced reasoning and multimodality, and the ability to go deeper through follow-up questions and helpful links to the web.'
'AI Mode has the potential to transform user experience through contextual responses and extensive data integration,' said Alp Cenk Arslan, an AI and security expert and an assistant professor at the Turkish National Police Academy. However, he argued that claiming this move 'defends' Google's leadership in the search engine market 'may overlook current competitive dynamics.' He pointed to rivals such as Perplexity AI and OpenAI's SearchGPT, which are reshaping user habits by offering innovative solutions in niche areas such as academic research.
Arslan noted it is also important to consider the 2025–2026 search trends. 'In these years, search engines are likely to shift from traditional keyword-based models to AI-supported, multimodal systems,' he told Daily Sabah.
