
Latest news with #Hodkasia

Meta AI was leaking chatbot prompts and answers to unauthorized users

Tom's Guide

7 days ago



A vulnerability discovered last year by a cybersecurity expert allowed Meta AI chatbot users to access the private prompts and AI-generated responses of other users. As reported by Cybernews, Meta has since fixed the bug; however, for an undetermined period, users had unauthorized access to the prompts and answers of any other user as a result of the leak. According to TechCrunch, the vulnerability was first disclosed to Meta on December 26, 2024 by cybersecurity expert and AppSecure founder Sandeep Hodkasia, and Meta corrected it with a fix on January 24, 2025.

Hodkasia was researching the way Meta AI lets logged-in users modify their own prompts to regenerate text and images. When a user edits an AI prompt, Meta's servers assign a unique number to the prompt and its AI-generated response. Hodkasia analyzed his browser's network traffic while editing an AI prompt and found he could modify this number to make the servers return a prompt and response from another user. This means the servers were not checking that the user requesting the prompt and its response was authorized to view it.

Meta corrected the flaw and paid a $10,000 bug bounty to Hodkasia. A spokesperson for the company acknowledged the issue but stated that Meta had no evidence the flaw had been exploited in the wild. This vulnerability follows one last month in which Meta AI conversations were made public in the app, unintentionally exposing users' queries and highlighting how easily AI chat interactions can cross security lines. As more companies adopt chatbots, they should regularly verify that these chats remain private and confidential by checking for potential security flaws – particularly if the chat history could contain sensitive information.

Meta Fixes AI Privacy Bug That Exposed User Chats, Awards ₹8.5 Lakh to Ethical Hacker

Hans India

16-07-2025



Meta has resolved a critical privacy flaw in its AI chatbot platform that could have exposed users' private conversations to malicious actors. The vulnerability, flagged late last year, was responsibly disclosed by ethical hacker Sandeep Hodkasia, who was awarded a bug bounty of $10,000 (roughly ₹8.5 lakh) for his discovery.

According to a report by TechCrunch, Hodkasia—founder of the cybersecurity firm AppSecure—reported the issue to Meta on December 26, 2024. The flaw, linked to the prompt editing feature in Meta's AI assistant, had the potential to allow unauthorized access to personal prompts and responses from other users. Meta users interacting with the AI platform can edit or regenerate prompts. These prompts, along with AI-generated replies, are each assigned a unique identification number (ID) by Meta's backend system. Hodkasia found that these IDs, which were visible through browser developer tools, followed a predictable pattern and were vulnerable to manipulation. 'I was able to view prompts and responses of other users by manually changing the ID in the browser's network activity panel,' Hodkasia explained.

The major issue, he pointed out, was that Meta's system didn't verify whether the requester of a particular prompt actually owned it. That meant someone with modest technical knowledge could write a script to cycle through IDs, collecting sensitive user data at scale. The ease with which this vulnerability could be exploited made it particularly dangerous. Since the system lacked user-specific access checks, it effectively opened a backdoor to private AI conversations.

Thankfully, Hodkasia chose to report the issue rather than exploit it. Meta confirmed it patched the flaw on January 24, 2025, following an internal review. The company also stated that there was no evidence suggesting the vulnerability had been exploited before Hodkasia's report. While the fix has been deployed, the incident has renewed concerns about data privacy in AI platforms.
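The flaw described above is a classic insecure direct object reference (IDOR): the server returns whatever record matches the requested ID without checking who is asking. A minimal sketch of the pattern and its fix, using entirely hypothetical names and data (this is not Meta's actual code):

```python
# Hypothetical prompt store: each record has an owner, a prompt, and a response.
PROMPTS = {
    101: {"owner": "alice", "prompt": "draw a cat", "response": "<image>"},
    102: {"owner": "bob", "prompt": "summarize my notes", "response": "..."},
}

def get_prompt_vulnerable(prompt_id, requesting_user):
    # Flawed: returns the record for any valid ID, ignoring who asks.
    return PROMPTS.get(prompt_id)

def get_prompt_fixed(prompt_id, requesting_user):
    # Fixed: the server checks that the requester owns the record
    # before returning it, and denies access otherwise.
    record = PROMPTS.get(prompt_id)
    if record is None or record["owner"] != requesting_user:
        return None  # deny, rather than leak another user's data
    return record

# "alice" requesting Bob's prompt (ID 102):
print(get_prompt_vulnerable(102, "alice"))  # leaks Bob's record
print(get_prompt_fixed(102, "alice"))       # None
```

The fix is the per-request ownership check: an ID alone, however it was obtained, is never treated as proof of authorization.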
As tech giants race to roll out AI-powered products to stay ahead of the competition, lapses like this highlight the growing importance of robust security protocols. Meta launched its AI assistant and a standalone app earlier this year to compete with platforms like ChatGPT. However, its rollout has not been without issues. In recent months, some users reported that their supposedly private conversations were visible in the platform's public Discovery feed. Although Meta maintains that chats are private by default and only become public when explicitly shared, users argue that the app's interface and settings are confusing. Many claimed they were unaware that their personal inputs, including photos or prompts, might become publicly accessible. As AI tools become more integrated into daily life, incidents like this serve as a stark reminder of the need for transparency, user control, and stringent privacy protections. Meta's swift response and bug bounty program underscore the critical role of ethical hackers in maintaining digital safety.

Meta AI had a privacy flaw that let users see other people's chats, hacker gets Rs 8.5 lakh for reporting it

India Today

16-07-2025

  • Business


Meta has reportedly fixed a significant security flaw in its AI chatbot platform that could have exposed users' private chats and AI-generated content to hackers. The issue was flagged by ethical hacker Sandeep Hodkasia, founder of security firm AppSecure. Hodkasia reported the vulnerability to Meta on 26 December 2024 and was awarded a bug bounty of $10,000 (approximately Rs 8.5 lakh) as a reward for privately disclosing the flaw.

According to TechCrunch, Hodkasia discovered a bug in Meta's AI platform related to how it handled the prompt editing feature. When users interact with Meta AI, they can edit or regenerate their previous prompts. Each prompt and its AI-generated response are assigned a unique identification number (ID) by Meta's servers. Hodkasia found that these IDs were not only visible through browser tools but were also easily guessable. He explained that by manually changing the ID in his browser's network activity panel, he was able to access other users' private prompts and the responses generated by the AI. The real issue, he highlighted, was that Meta's system did not verify whether the person requesting to view the content was actually the one who had created it. This meant that any hacker could have written a simple script to automatically cycle through IDs and collect large amounts of sensitive content from other users without their authorisation. Hodkasia revealed that it was this simplicity of the ID structure that made it dangerously easy for anyone with basic technical skills to exploit the flaw.

The vulnerability essentially bypassed all user-specific access checks, exposing private AI interactions to malicious actors. Following Hodkasia's discovery, Meta addressed the issue by rolling out a fix on 24 January 2025 and confirmed to TechCrunch that its internal investigation found no evidence that the bug had been misused or exploited. While the issue has been fixed, the incident has raised concerns around the security and privacy of AI chatbots, especially as companies rush to build and launch AI-powered products to compete in the space. Meta launched its AI assistant and dedicated app earlier this year to challenge rivals like ChatGPT. However, in the past few months, the AI platform has come under fire for several other privacy-related missteps. Some users previously reported that their AI conversations were publicly viewable, despite assuming they were private; others reported incidents where their own posts or the private conversations of other users appeared in Meta AI's public Discovery feed, raising serious privacy concerns. While Meta says that chats are private by default and only become public if users explicitly share them, users noted that the app's confusing settings and vague warnings have left many people unaware that their personal photos or prompts to Meta AI could end up visible to others.

Meta fixes bug that could leak users' AI prompts and generated content

Yahoo

15-07-2025

  • Business


Meta has fixed a security bug that allowed Meta AI chatbot users to access and view the private prompts and AI-generated responses of other users. Sandeep Hodkasia, the founder of security testing firm Appsecure, exclusively told TechCrunch that Meta paid him $10,000 in a bug bounty reward for privately disclosing the bug he filed on December 26, 2024. Meta deployed a fix on January 24, 2025, said Hodkasia, and found no evidence that the bug was maliciously exploited.

Hodkasia told TechCrunch that he identified the bug after examining how Meta AI allows its logged-in users to edit their AI prompts to re-generate text and images. He discovered that when a user edits their prompt, Meta's back-end servers assign the prompt and its AI-generated response a unique number. By analyzing the network traffic in his browser while editing an AI prompt, Hodkasia found he could change that unique number and Meta's servers would return a prompt and AI-generated response of someone else entirely. The bug meant that Meta's servers were not properly checking to ensure that the user requesting the prompt and its response was authorized to see it. Hodkasia said the prompt numbers generated by Meta's servers were 'easily guessable,' potentially allowing a malicious actor to scrape users' original prompts by rapidly changing prompt numbers using automated tools.

When reached by TechCrunch, Meta confirmed it fixed the bug in January. The company 'found no evidence of abuse and rewarded the researcher,' Meta spokesperson Ryan Daniels told TechCrunch. News of the bug comes at a time when tech giants are scrambling to launch and refine their AI products, despite many security and privacy risks associated with their use. Meta AI's standalone app, which debuted earlier this year to compete with rival apps like ChatGPT, launched to a rocky start after some users inadvertently publicly shared what they thought were private conversations with the chatbot.
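The 'easily guessable' detail matters because sequential numeric IDs can be enumerated by a trivial loop, whereas long random tokens cannot. A small illustrative sketch of the difference (hypothetical ID schemes, not Meta's actual implementation):

```python
# Sequential IDs vs. random tokens: why guessable identifiers enable
# scraping. Hypothetical example, not Meta's actual ID scheme.
import secrets

# Sequential IDs: an attacker who sees ID 1000 can simply try
# 1001, 1002, ... and hit every other user's record in order.
sequential_ids = [1000 + i for i in range(5)]
print(sequential_ids)  # [1000, 1001, 1002, 1003, 1004]

# Random tokens: 16 bytes (~128 bits) of entropy per ID makes blind
# enumeration computationally infeasible.
random_ids = [secrets.token_urlsafe(16) for _ in range(5)]
print(random_ids[0])  # unpredictable, e.g. 'mJ2v...'
```

Note that unguessable IDs only raise the cost of enumeration; they are not a substitute for the server-side authorization check that was actually missing here.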
