Latest news with #SandeepHodkasia


Tom's Guide
2 days ago
Meta AI was leaking chatbot prompts and answers to unauthorized users
A vulnerability discovered last year by a cybersecurity expert allowed Meta AI chatbot users to access the private prompts and AI-generated responses of other users. As reported by Cybernews, Meta has since fixed the bug; however, for an undetermined amount of time, users had unauthorized access to the prompts and answers of any other user as a result of the leak.

The vulnerability, which according to TechCrunch was first disclosed to Meta on December 26, 2024 by cybersecurity expert and AppSecure founder Sandeep Hodkasia, was fixed by Meta on January 24, 2025. Hodkasia had been researching the way Meta AI lets logged-in users modify their own prompts to regenerate text and images; when a user edits an AI prompt, Meta's servers assign a unique number to it and to the AI-generated response. While analyzing his browser's network traffic during an edit, Hodkasia found he could modify this number to make the servers return a prompt and response belonging to another user. In other words, the servers were not checking that the user requesting a prompt and its response was authorized to view it.

Meta corrected the flaw and paid Hodkasia a $10,000 bug bounty. A spokesperson for the company acknowledged the issue but said Meta had found no evidence that the flaw had been exploited in the wild.

This vulnerability follows an incident last month in which Meta AI conversations were unintentionally made public in the app, exposing users' queries and highlighting how easily AI chat interactions can cross security lines. As more companies adopt chatbots, they should regularly verify that these chats remain private and confidential by checking for potential security flaws, particularly when chat history could contain sensitive information.
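What Hodkasia describes is a classic insecure direct object reference (IDOR): the server trusts a client-supplied identifier without verifying who owns the record behind it. As a minimal sketch of the missing check (the endpoint, names, and data model below are hypothetical, not Meta's actual code), the fix is an explicit ownership test before the record is returned:

```python
# Hypothetical sketch of the missing server-side check (not Meta's code).
# The vulnerable pattern: look up a record by a client-supplied ID and
# return it without confirming the requester owns it.

from flask import Flask, abort, jsonify, session

app = Flask(__name__)
app.secret_key = "dev-only-secret"  # required for Flask sessions

# Stand-in for a datastore of prompt/response pairs, keyed by the
# unique number the servers assign to each edited prompt.
PROMPTS = {
    1001: {"owner_id": "user_a", "prompt": "...", "response": "..."},
    1002: {"owner_id": "user_b", "prompt": "...", "response": "..."},
}

@app.route("/prompts/<int:prompt_id>")
def get_prompt(prompt_id):
    record = PROMPTS.get(prompt_id)
    if record is None:
        abort(404)
    # The authorization check that was effectively absent: reject the
    # request unless the logged-in user owns the record.
    if record["owner_id"] != session.get("user_id"):
        abort(403)
    return jsonify(record)
```

With that check in place, changing the identifier in intercepted traffic yields a 403 instead of another user's conversation.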
Yahoo
3 days ago
- Business
Cybersecurity Firm AppSecure Identifies Critical Flaw in Meta.AI Leaking Users' AI Prompts and Responses, Rewarded $10,000
SINGAPORE, July 17, 2025--(BUSINESS WIRE)--AppSecure, a cybersecurity firm specializing in penetration testing and red teaming, has discovered a critical vulnerability in Meta's generative AI chatbot platform. If left unaddressed, the flaw could have allowed other users' data and private AI interactions to be leaked.

Sandeep Hodkasia, CEO and Founder of AppSecure Security, identified the issue during a security research exercise. His investigation revealed that a GraphQL API was unintentionally exposing prompts and outputs generated by other users. This oversight posed a risk of unauthorized access to personal and potentially sensitive conversations within the platform. Fortunately, no evidence of misuse or exploitation was found.

The flaw originated from a missing authorization check in the GraphQL API, specifically within the useAbraImagineReimagineMutation query. The system used a media_set_id to manage user interactions, but it did not validate whether the person making the request actually owned that ID. As a result, any logged-in user could alter the media_set_id parameter and gain access to prompts and AI-generated content created by others.

AppSecure reported the vulnerability to Meta on December 26, 2024. Meta investigated the issue and rolled out a temporary fix on January 24, 2025; the flaw was permanently resolved on April 24, 2025. In its official response, Meta said: "You demonstrated an issue where a malicious actor could access users' prompts and AI-generated media via a certain GraphQL query, potentially allowing an attacker to access users' private media. We mitigated this and found no evidence of abuse."

Recognizing the significance of the finding, Meta awarded $10,000 for the key vulnerability and an additional $4,550 for related issues identified during the same investigation. "This wasn't about chasing a bounty — it was about securing a system millions are starting to trust," says Sandeep. "If a platform as robust as Meta's can have such loopholes, it's a clear signal that other AI-first companies must proactively test their platforms before users' data is put at risk."

As more companies rapidly deploy generative AI models, the surface area for potential attacks continues to grow. AppSecure's findings highlight the need for a proactive approach to security, especially in systems that handle user-generated content, prompt history, or model outputs.

AppSecure has a reputation for carefully and responsibly uncovering important security vulnerabilities, and many AI-focused companies trust it to help protect their systems. The company actively tests how users interact with AI platforms and examines the behind-the-scenes processes to find hidden flaws that could pose security risks. This hands-on approach helps businesses fix issues before they become serious threats.

"Security is not just about fixing problems after they appear; it's about anticipating risks and acting before damage occurs," adds Sandeep. "That's why leading companies work with us to identify real-world risks early and build AI platforms that stay secure and reliable from the very beginning."

About AppSecure Security

AppSecure Security is a CREST-accredited penetration testing firm that identifies and addresses critical vulnerabilities through real-world attack simulations. Its experienced team focuses on testing web applications, APIs, and networks to expose hidden risks before threats can cause harm. By following industry standards and taking a proactive approach, AppSecure helps businesses strengthen their defenses and stay ahead of evolving cyber challenges, making it a trusted partner for comprehensive security solutions.
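The press release pins the bug to a missing ownership check on media_set_id inside a GraphQL resolver. Below is a hedged sketch of what such a check looks like in a generic resolver; the schema, data store, and resolver shape are invented for illustration and are not Meta's implementation:

```python
# Hypothetical GraphQL resolver sketch (not Meta's code): the fix for a
# missing-authorization bug of this kind is to validate ownership of the
# client-supplied media_set_id inside the resolver itself.

MEDIA_SETS = {
    "ms_001": {"owner_id": "user_a", "prompt": "...", "media": ["..."]},
    "ms_002": {"owner_id": "user_b", "prompt": "...", "media": ["..."]},
}

class AuthorizationError(Exception):
    pass

def resolve_media_set(context, media_set_id):
    """Return a media set only if the requesting user owns it.

    `context` stands in for the per-request context a GraphQL server
    passes to resolvers, carrying the authenticated user's ID.
    """
    record = MEDIA_SETS.get(media_set_id)
    if record is None:
        raise LookupError("unknown media_set_id")
    # The missing check: possessing a valid-looking ID must not imply
    # the right to read the record behind it.
    if record["owner_id"] != context["user_id"]:
        raise AuthorizationError("media_set_id not owned by requester")
    return record

# A logged-in attacker altering media_set_id now gets an error instead
# of another user's prompt and generated media.
assert resolve_media_set({"user_id": "user_a"}, "ms_001")["owner_id"] == "user_a"
try:
    resolve_media_set({"user_id": "user_a"}, "ms_002")
except AuthorizationError:
    pass  # access correctly denied
```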


Phone Arena
4 days ago
- Business
A man proved Meta's AI platform is not so secure and got paid $10,000
If you think that a given AI platform is safe because it's backed by a multi-billion dollar company, well, think again. A man who managed to find a security bug on Meta's AI platform was rewarded with $10,000 by Zuck and co.

Meta has recently resolved a critical security flaw that exposed private prompts and AI-generated responses from its Meta AI chatbot to other users, a report by TechCrunch reads. The issue was discovered by Sandeep Hodkasia, founder of security testing firm AppSecure, who reported the vulnerability back in December 2024. For his disclosure, Meta awarded him $10,000 through its bug bounty program (if you happen to find anything, don't hesitate to report it). The company confirmed that the bug is now patched, and stated that there was no evidence of malicious exploitation.

However, that should ring a bell for everyone who uses AI without a second thought. I won't be the one who tells you to avoid AI like the plague, but one should definitely act cautiously. A line of code could cost you dearly.

Hodkasia uncovered the flaw while examining how Meta AI lets logged-in users edit prompts to regenerate responses. He noticed that each edited prompt was assigned a unique identifier by Meta's back-end systems. By intercepting network traffic during this process, he realized that altering the identifier allowed access to other users' prompts and responses. The problem stemmed from Meta's failure to validate whether a user was authorized to view a given prompt. According to Hodkasia, the identifiers were predictable, which could have enabled attackers to automate the process and collect sensitive user inputs at scale.

The discovery comes amid broader criticism of Meta AI's privacy practices. Since the launch of its stand-alone app earlier this year, users have inadvertently exposed private conversations by misunderstanding sharing options. The app includes a feature allowing users to share interactions publicly, but many appear unaware that they are posting personal queries, images, and even audio clips for public viewing. Some of these slip-ups have revealed highly sensitive details, from questions about financial crimes and legal troubles to personal data like home addresses. Yikes!

Despite the company's heavy investment in AI, the Meta AI app has seen limited adoption, with about 6.5 million downloads since its April 29 release, according to app analytics firm Appfigures. Well, nothing is perfect, but a couple more bugs like that and Meta will have to find a new name for the platform. Like Google did with Bard, which is now called Gemini.
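Predictable identifiers are what turn a single leak into a bulk one: if IDs are sequential, an attacker can simply count through them. A common mitigation, sketched below as a general pattern rather than anything Meta deployed, is to issue cryptographically random identifiers that cannot be enumerated; note that this complements, and does not replace, a proper server-side authorization check:

```python
# General mitigation sketch for enumerable IDs (not Meta's fix):
# issue cryptographically random identifiers instead of sequential ones.

import secrets

def sequential_id(counter: int) -> int:
    # Guessable: knowing one ID reveals its neighbors (1001, 1002, ...).
    return counter + 1

def random_id() -> str:
    # Unguessable: 128 bits of randomness; an attacker cannot walk the
    # ID space, so bulk scraping by enumeration becomes infeasible.
    return secrets.token_urlsafe(16)

print(random_id())  # e.g. 'mJq1Gm8m1X2o0d3o4p5q6w'
```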


Hans India
4 days ago
Meta Fixes AI Privacy Bug That Exposed User Chats, Awards ₹8.5 Lakh to Ethical Hacker
Meta has resolved a critical privacy flaw in its AI chatbot platform that could have exposed users' private conversations to malicious actors. The vulnerability, flagged late last year, was responsibly disclosed by ethical hacker Sandeep Hodkasia, who was awarded a bug bounty of $10,000 (roughly ₹8.5 lakh) for his discovery.

According to a report by TechCrunch, Hodkasia—founder of the cybersecurity firm AppSecure—reported the issue to Meta on December 26, 2024. The flaw, linked to the prompt editing feature in Meta's AI assistant, had the potential to allow unauthorized access to personal prompts and responses from other users.

Meta users interacting with the AI platform can edit or regenerate prompts. These prompts, along with AI-generated replies, are each assigned a unique identification number (ID) by Meta's backend system. Hodkasia found that these IDs, which were visible through browser developer tools, followed a predictable pattern and were vulnerable to manipulation. 'I was able to view prompts and responses of other users by manually changing the ID in the browser's network activity panel,' Hodkasia explained.

The major issue, he pointed out, was that Meta's system didn't verify whether the requester of a particular prompt actually owned it. That meant someone with modest technical knowledge could write a script to cycle through IDs, collecting sensitive user data at scale. The ease with which this vulnerability could be exploited made it particularly dangerous. Since the system lacked user-specific access checks, it effectively opened a backdoor to private AI conversations. Thankfully, Hodkasia chose to report the issue rather than exploit it.

Meta confirmed it patched the flaw on January 24, 2025, following an internal review. The company also stated that there was no evidence suggesting the vulnerability had been exploited before Hodkasia's report.

While the fix has been deployed, the incident has renewed concerns about data privacy in AI platforms. As tech giants race to roll out AI-powered products to stay ahead of the competition, lapses like this highlight the growing importance of robust security protocols.

Meta launched its AI assistant and a standalone app earlier this year to compete with platforms like ChatGPT. However, its rollout has not been without issues. In recent months, some users reported that their supposedly private conversations were visible in the platform's public Discovery feed. Although Meta maintains that chats are private by default and only become public when explicitly shared, users argue that the app's interface and settings are confusing. Many claimed they were unaware that their personal inputs, including photos or prompts, might become publicly accessible.

As AI tools become more integrated into daily life, incidents like this serve as a stark reminder of the need for transparency, user control, and stringent privacy protections. Meta's swift response and bug bounty program underscore the critical role of ethical hackers in maintaining digital safety.
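To make concrete why a missing ownership check is dangerous at scale, here is a deliberately simplified sketch of the kind of ID-cycling script the article alludes to, pointed at a purely hypothetical endpoint. The URL, cookie, and ID scheme are invented for illustration; this is not a working exploit against Meta:

```python
# Illustration only: the endpoint and ID scheme below are hypothetical.
# It shows why sequential IDs plus a missing ownership check allow
# bulk harvesting with a trivial loop.

import requests

BASE_URL = "https://example.com/api/prompts"  # hypothetical endpoint
SESSION_COOKIE = {"session": "any-logged-in-session"}

def harvest(start_id: int, count: int) -> list:
    leaked = []
    for prompt_id in range(start_id, start_id + count):
        # With no server-side ownership check, every valid ID returns
        # someone's private prompt and AI response.
        resp = requests.get(f"{BASE_URL}/{prompt_id}", cookies=SESSION_COOKIE)
        if resp.status_code == 200:
            leaked.append(resp.json())
    return leaked
```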


India Today
4 days ago
- Business
Meta AI had a privacy flaw that let users see other people's chats, hacker gets Rs 8.5 lakh for reporting it
Meta has reportedly fixed a significant security flaw in its AI chatbot platform that could have exposed users' private chats and AI-generated content to hackers. The issue was flagged by ethical hacker Sandeep Hodkasia, founder of security firm AppSecure. Hodkasia reported the vulnerability to Meta on 26 December 2024 and was awarded a bug bounty of $10,000 (approximately Rs 8.5 lakh) as a reward for privately disclosing the issue.

According to TechCrunch, Hodkasia discovered a bug in Meta's AI platform related to how it handled the prompt editing feature. When users interact with Meta AI, they can edit or regenerate their previous prompts. Each prompt and its AI-generated response are assigned a unique identification number (ID) by Meta's servers. Hodkasia found that these IDs were not only visible through browser tools but were also easily guessable.

He explained that by manually changing the ID in his browser's network activity panel, he was able to access other users' private prompts and the responses generated by the AI. The real issue, he highlighted, was that Meta's system did not verify whether the person requesting to view the content was actually the one who had created it. This meant that any hacker could have written a simple script to automatically cycle through IDs and collect large amounts of sensitive content from other users without their authorisation. Hodkasia revealed that it was this simplicity of the ID structure that made it dangerously easy for anyone with basic technical skills to exploit the flaw. The vulnerability essentially bypassed all user-specific access checks, exposing private AI interactions to malicious actors.

Following Hodkasia's discovery, Meta addressed the issue by rolling out a fix on 24 January 2025 and confirmed to TechCrunch that its internal investigation found no evidence that the bug had been misused or exploited.

While the issue has been fixed, this incident has raised concerns around the security and privacy of AI chatbots, especially as companies rush to build and launch AI-powered products to compete in the space. Meta launched its AI assistant and dedicated app earlier this year to challenge rivals like ChatGPT. However, in the past few months, the AI platform has come under fire for several other privacy-related missteps. Some users previously reported that their AI conversations were publicly viewable, despite assuming they were private.

Several users reported incidents where their own posts or the private conversations of others appeared in Meta AI's public Discovery feed, raising serious privacy concerns. While Meta says that chats are private by default and only become public if users explicitly share them, users noted that the app's confusing settings and vague warnings have left many people unaware that their personal photos or prompts made to Meta AI could end up visible to others.
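A regression test for this class of bug is straightforward to express: create a resource as one user, then assert that a second user cannot read it by ID. Below is a minimal, hypothetical sketch of such a check; the service, methods, and fixtures are invented to show the test shape, and are not Meta's test suite:

```python
# Hypothetical authorization regression test (invented API, not Meta's):
# a resource created by user A must not be readable by user B.

import unittest

class FakeService:
    """Tiny in-memory stand-in for a prompt-storage backend."""

    def __init__(self):
        self._store = {}
        self._next_id = 1

    def create_prompt(self, user: str, text: str) -> int:
        prompt_id = self._next_id
        self._next_id += 1
        self._store[prompt_id] = {"owner": user, "text": text}
        return prompt_id

    def get_prompt(self, user: str, prompt_id: int) -> dict:
        record = self._store[prompt_id]
        # The user-specific access check the articles say was missing.
        if record["owner"] != user:
            raise PermissionError("not the owner")
        return record

class CrossUserAccessTest(unittest.TestCase):
    def test_other_users_prompt_is_rejected(self):
        svc = FakeService()
        prompt_id = svc.create_prompt("user_a", "private question")
        # user_b must not be able to read user_a's prompt by ID.
        with self.assertRaises(PermissionError):
            svc.get_prompt("user_b", prompt_id)

if __name__ == "__main__":
    unittest.main()
```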