Latest news with #PQPE


Hans India
5 days ago
- Business
Meta Contractors Accessed Private AI Chats Containing Personal Data: Report
Meta Platforms, the parent company behind Facebook and Instagram, is once again under fire over privacy concerns. According to a recent report by Business Insider, contractors hired to train Meta's artificial intelligence models were regularly exposed to sensitive and identifiable user information — including names, photos, emails, and even explicit content — during their review of AI conversations.

Several contract workers, brought on board through third-party platforms such as Outlier (owned by Scale AI) and Alignerr, told the publication that they were tasked with evaluating thousands of real conversations users had with Meta's AI-powered assistants. In doing so, they encountered deeply personal content — from emotional outpourings and therapy-style confessions to flirtatious or romantic exchanges.

Shockingly, one worker estimated that nearly 70% of the chats they reviewed contained some form of personally identifiable information (PII). This includes not only voluntarily shared names and email addresses but also images — both selfies and, in some cases, sexually explicit pictures — submitted by users who assumed their chats were private.

Supporting documents reviewed by Business Insider also revealed that, in some instances, Meta itself provided additional user background such as names, locations, and hobbies. These were reportedly intended to help the AI offer more personalized and engaging responses. However, the report adds that even when Meta didn't provide such data, users often revealed it themselves during the course of their interactions, despite the company's privacy policies clearly discouraging users from disclosing personal details to the chatbot.

Meta acknowledged that it does, in fact, review user interactions with AI tools to improve the system's quality.
A spokesperson told Business Insider: 'While we work with contractors to help improve training data quality, we intentionally limit what personal information they see.' The spokesperson added that Meta enforces 'strict policies' about who can access such data and how it must be handled.

However, the contractors interviewed suggested otherwise. They claimed Meta projects exposed more unredacted personal data than similar initiatives at other tech companies. One such initiative, codenamed Omni, reportedly focused on enhancing user engagement in Meta's AI Studio, while another project, PQPE, encouraged the AI to tailor responses based on prior user conversations or data from social media profiles.

One of the more concerning incidents cited involved a sexually explicit AI chat that contained enough identifiable information for a journalist to trace the user's actual Facebook profile within minutes.

This report adds to Meta's growing list of controversies surrounding its handling of user data. The company previously faced major backlash during the Cambridge Analytica scandal in 2018, as well as criticism over reports of contractors listening to users' voice messages without adequate privacy protections. While using human reviewers to improve AI systems is common industry practice, Meta's history and the scale of unfiltered access reported here have reignited fears over the adequacy of its privacy safeguards.


India Today
5 days ago
- Business
Meta contractors review private AI chats, sometimes seeing user names and photos: Report
Some conversations you've had with Meta's AI may not have been as private as you thought. According to a report by Business Insider, contract workers hired to train Meta's AI systems have reviewed thousands of real user chats, and in many cases, those conversations included names, email addresses, phone numbers, selfies, and even explicit images.

Four contractors told the publication that they were regularly exposed to personal information while working on Meta AI projects. These individuals were reportedly hired through platforms called Outlier (owned by Scale AI) and Alignerr. The projects they worked on aimed to improve the quality and personalisation of Meta's AI responses, a process that involves reviewing real interactions between users and AI-powered chatbots.

The contractors said they often came across highly personal conversations, ranging from therapy-like sessions and rants about life, to flirty or romantic exchanges. One worker claimed that up to 70 per cent of the chats they reviewed included some form of personally identifiable information. Some users reportedly sent selfies or explicit images to the chatbot, believing the conversation to be private.

Documents seen by Business Insider reportedly showed that in some cases, Meta itself provided background user data, like names, locations, or hobbies, to help the AI personalise responses. In other cases, users voluntarily gave up this information during conversations, despite Meta's privacy policy warning users not to share personal details with the chatbot.

One particularly concerning example described in the report involved a sexually explicit conversation with enough personal information for the reporter to locate a matching Facebook profile within minutes.

Meta, which owns platforms like Facebook and Instagram, acknowledged that it does review user interactions with its AI.
A spokesperson told Business Insider that it has "strict policies" governing who can access personal data and that contractors are instructed on how to handle any information they may come across. 'While we work with contractors to help improve training data quality, we intentionally limit what personal information they see,' the spokesperson reportedly said.

However, contractors said projects run by Meta exposed more unredacted personal data than those of other tech companies. One project called Omni, run by Alignerr, aimed to boost engagement on Meta's AI Studio. Another project called PQPE, operated via Outlier, encouraged AI responses to reflect user interests pulled from past conversations or social media profiles.

This isn't the first time Meta has come under scrutiny for its data practices. The company's history includes the 2018 Cambridge Analytica scandal and multiple reports over the years about contractors listening in on voice recordings without proper safeguards. While reviewing AI conversations with human help is common in the tech industry, Meta's track record has raised added concern.