
A Single Poisoned Document Could Leak ‘Secret' Data Via ChatGPT
New findings from security researchers Michael Bargury and Tamir Ishay Sharbat, revealed at the Black Hat hacker conference in Las Vegas today, show how a weakness in OpenAI's Connectors allowed sensitive information to be extracted from a Google Drive account using an indirect prompt injection attack. In a demonstration of the attack, dubbed AgentFlayer, Bargury shows how it was possible to extract developer secrets, in the form of API keys, that were stored in a demonstration Drive account.
The vulnerability highlights how connecting AI models to external systems and sharing more data across them increases the potential attack surface for malicious hackers and potentially multiplies the ways where vulnerabilities may be introduced.
'There is nothing the user needs to do to be compromised, and there is nothing the user needs to do for the data to go out,' Bargury, the CTO at security firm Zenity, tells WIRED. 'We've shown this is completely zero-click; we just need your email, we share the document with you, and that's it. So yes, this is very, very bad,' Bargury says.
OpenAI did not immediately respond to WIRED's request for comment about the vulnerability in Connectors. The company introduced Connectors for ChatGPT as a beta feature earlier this year, and its website lists at least 17 different services that can be linked up with its accounts. It says the system allows you to 'bring your tools and data into ChatGPT' and 'search files, pull live data, and reference content right in the chat.'
Bargury says he reported the findings to OpenAI earlier this year and that the company quickly introduced mitigations to prevent the technique he used to extract data via Connectors. The way the attack works means only a limited amount of data could be extracted at once—full documents could not be removed as part of the attack.
'While this issue isn't specific to Google, it illustrates why developing robust protections against prompt injection attacks is important,' says Andy Wen, senior director of security product management at Google Workspace, pointing to the company's recently enhanced AI security measures.
Bargury's attack starts with a poisoned document, which is shared to a potential victim's Google Drive. (Bargury says a victim could have also uploaded a compromised file to their own account.) Inside the document, which for the demonstration is a fictitious set of notes from a nonexistent meeting with OpenAI CEO Sam Altman, Bargury hid a 300-word malicious prompt that contains instructions for ChatGPT. The prompt is written in white text in a size-one font, something that a human is unlikely to see but a machine will still read.
In a proof of concept video of the attack, Bargury shows the victim asking ChatGPT to 'summarize my last meeting with Sam,' although he says any user query related to a meeting summary will do. Instead, the hidden prompt tells the LLM that there was a 'mistake' and the document doesn't actually need to be summarized. The prompt says the person is actually a 'developer racing against a deadline' and they need the AI to search Google Drive for API keys and attach them to the end of a URL that is provided in the prompt.
That URL is actually a command in the Markdown language to connect to an external server and pull in the image that is stored there. But as per the prompt's instructions, the URL now also contains the API keys the AI has found in the Google Drive account.
Using Markdown to extract data from ChatGPT is not new. Independent security researcher Johann Rehberger has shown how data could be extracted this way, and described how OpenAI previously introduced a feature called 'url_safe' to detect if URLs were malicious and stop image rendering if they are dangerous. To get around this, Sharbat, an AI researcher at Zenity, writes in a blog post detailing the work, that the researchers used URLs from Microsoft's Azure Blob cloud storage. 'Our image has been successfully rendered, and we also get a very nice request log in our Azure Log Analytics which contains the victim's API keys,' the researcher writes.
The attack is the latest demonstration of how indirect prompt injections can impact generative AI systems. Indirect prompt injections involve attackers feeding an LLM poisoned data that can tell the system to complete malicious actions. This week, a group of researchers showed how indirect prompt injections could be used to hijack a smart home system, activating a smart home's lights and boiler remotely.
While indirect prompt injections have been around almost as long as ChatGPT has, security researchers worry that as more and more systems are connected to LLMs, there is an increased risk of attackers inserting 'untrusted' data into them. Getting access to sensitive data could also allow malicious hackers a way into an organization's other systems. Bargury says that hooking up LLMs to external data sources means they will be more capable and increase their utility, but that comes with challenges. 'It's incredibly powerful, but as usual with AI, more power comes with more risk,' Bargury says.
Hashtags

Try Our AI Features
Explore what Daily8 AI can do for you:
Comments
No comments yet...
Related Articles
Yahoo
27 minutes ago
- Yahoo
Euna Solutions Wins 2025 AWS Champions Award for AI Innovation in Public Sector Procurement
Euna Procurement's AI project summaries & keywords feature transforms the public bidding experience for government agencies and suppliers ATLANTA & TORONTO, August 07, 2025--(BUSINESS WIRE)--Euna Solutions®, a leading provider of purpose-built, cloud-based solutions for the public sector, today announced it has been named a 2025 AWS Champions Award winner for its groundbreaking use of artificial intelligence to modernize the public sector bidding process. Powered by AWS's secure and scalable AI/machine learning infrastructure, Euna Solutions' latest innovation — AI Project Summaries & Keywords — is transforming how public procurement professionals and suppliers interact with complex bid opportunities. Available through Euna Procurement, the feature automatically generates clear, concise project descriptions and relevant keyword metadata from lengthy and technical RFx documents. "Public procurement plays a critical role in how communities access goods and services," said Tom Amburgey, CEO of Euna Solutions. "By leveraging the power of AWS and generative AI, we're reducing complexity and making it easier for suppliers to engage, which helps agencies increase competition and improve outcomes for the people they serve." Streamlining Procurement for All Procurement teams have long struggled with inefficient manual processes, unclear project descriptions, and mismatched bids. Euna Procurement's AI Project Summaries address these challenges by: Instantly summarizing dense bid documents into digestible, actionable content Auto-tagging projects with accurate, searchable keywords to improve discoverability Saving procurement staff time by automating repetitive content creation Helping suppliers quickly determine bid relevance and respond more confidently The result is a more accessible and transparent procurement process that boosts supplier participation and drives stronger competition. Raising the Bar for Public Sector Technology The AWS Champions Award recognizes organizations that use AWS technologies to drive meaningful transformation. Euna's award-winning AI Project Summaries exemplify how targeted innovation can eliminate longstanding friction in the procurement cycle and deliver real-time value for agencies, suppliers, and taxpayers alike. About Euna Solutions Euna Solutions® is a leading provider of purpose-built, cloud-based software that helps public sector and government organizations streamline procurement, budgeting, payments, grants management, and special education administration. Designed to enhance efficiency, collaboration, and compliance, Euna Solutions supports more than 3,400 organizations across North America in building trust, enabling transparency, and driving community impact. Recognized on Government Technology's GovTech 100 list, Euna Solutions is committed to advancing public sector progress through innovative SaaS solutions. To learn more, visit View source version on Contacts Media contact:Michael TeboGabriel Marketing Group (for Euna Solutions)Phone: 571-835-8775Email: michaelt@ Error in retrieving data Sign in to access your portfolio Error in retrieving data Error in retrieving data Error in retrieving data Error in retrieving data
Yahoo
27 minutes ago
- Yahoo
OpenAI Opens Up With New GPT-OSS Models
OpenAI, backed by Microsoft (MSFT), just stepped deeper into the open-source world with two new open-weight AI modelsgpt-oss-120b and gpt-oss-20btaking direct aim at Google's (NASDAQ:GOOG) Gemini CLI and DeepSeek's (DEEPSEEK) R1. The bigger model, 120b, is designed to run in data centers or on high-end hardware with Nvidia (NVDA) H100 GPUs, while the smaller 20b model works on most desktops and laptops. According to Amazon Web Services (NASDAQ:AMZN), the 120b model running on Bedrock is up to 3x more price-performant than Gemini, 5x better than DeepSeek-R1, and even 2x better than OpenAI's own o4 model. At this scale, giving developers open access is a game-changer, said Atul Deo from AWS, calling it a major step forward for enterprise AI. The models are released under the Apache 2.0 license, so developerseven commercial teamscan use them freely without worrying about copyright or patents. The training data and model code however are not publicly available, so these models are open-weight, but not available through Hugging Face, GitHub, and is signaling it's ready to compete openlynot just behind closed APIs. This article first appeared on GuruFocus. Error in retrieving data Sign in to access your portfolio Error in retrieving data Error in retrieving data Error in retrieving data Error in retrieving data
Yahoo
27 minutes ago
- Yahoo
Nvidia Slams Backdoor Speculation Over AI Chips
Nvidia (NASDAQ:NVDA) is firmly denying rumors that its chips contain secret backdoors or kill switches, calling the idea reckless and flat-out wrong. The company's response comes after China summoned the chipmaker last week over concerns tied to its H20 AI chips. In a blog post Tuesday, Nvidia said it has never embedded backdoors into its hardware and wouldn't, because it would be a gift to hackers and hostile actors. It warned that building in secret kill switches would not only be dangerous, but could erode global trust in U.S. tech. Warning! GuruFocus has detected 5 Warning Signs with NVDA. There's no such thing as a good' secret backdoor, Nvidia wrote. Single-point controls are a bad idea, and vulnerabilities should be fixed not created. The comments land just as U.S. lawmakers propose tighter controls on chip exports, including adding tech to verify chip locations. Nvidia's H20 shipments to China were recently greenlit under a limited export exemption, following a broader ban in April. Even though Nvidia says there are no backdoors or kill switches in its chips, Chinese regulators aren't totally convinced. They're worried those features might still be possible, at least in part. That puts Nvidia in a tough spot caught between Washington pushing for AI dominance and Beijing demanding full hardware transparency. The H20 chip, built to stay within U.S. export rules, has ended up right in the middle of that power struggle. In the high-stakes AI chip war, Nvidia is drawing a clear lineno hidden access, no kill switch, no compromises on trust. All eyes now turn to how U.S.-China tech relations evolve from here. This article first appeared on GuruFocus.