Google is indexing ChatGPT conversations, potentially exposing sensitive user data

Fast Company · 6 days ago
Google is indexing conversations with ChatGPT that users have sent to friends, families, or colleagues—turning private exchanges intended for small groups into search results visible to millions.
A basic Google site search using part of the link created when someone proactively clicks 'Share' on ChatGPT can uncover conversations in which people reveal deeply personal details, including struggles with addiction, experiences of physical abuse, or serious mental health issues—sometimes even fears that AI models are spying on them. While ChatGPT doesn't display users' identities, some users may effectively identify themselves by sharing highly specific personal information during the chats.
A user might click 'Share' to send their conversation to a close friend over WhatsApp or to save the URL for future reference. It's unlikely they would expect that doing so could make the conversation appear in Google search results, accessible to anyone. It's unclear whether those affected realize that clicking the Share button makes their conversations publicly accessible; presumably they believe they're sharing with a small audience.
Nearly 4,500 conversations come up in results for the Google site search, though many don't include personal details or identifying information. This is likely not the full count, as Google may not index all conversations. (Because of the personal nature of the conversations, some of which divulge highly personal information including users' names, locations, and personal circumstances, Fast Company is choosing not to link to, or describe in significant detail, the conversations with the chatbot.)
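For reference, the kind of site-restricted search described above takes roughly this form; `chatgpt.com/share` is the publicly visible path that ChatGPT's Share button generates for shared-conversation links, and the quoted keyword is a hypothetical placeholder:

```text
site:chatgpt.com/share "keyword or phrase"
```

Google's `site:` operator limits results to a single domain or path, which is why shared-conversation pages become discoverable as a group once they're indexed.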

Related Articles

Gemini storybooks let you be the star of your kids' bedtime stories

Android Authority

2 minutes ago

Stephen Schenck / Android Authority

TL;DR
  • Gemini now lets you generate illustrated storybooks.
  • You can direct output toward a specific art style, and even upload your own imagery.
  • Gemini lets you direct how the story unfolds, and can read it aloud when completed.

As Google builds out its AI-fueled tools and services, we keep seeing impressive new ways the company manages to 'connect the dots' and create something new and useful out of existing pieces. Just look at Audio Overviews: Gemini could already summarize content, and Google has tons of experience when it comes to synthesizing speech, so combining those to make virtual podcasts made perfect sense. Last month we checked out some early evidence of another new feature that would smartly combine a number of Gemini's skills, and today it's finally going official. We're talking about Gemini storybooks, which Google has just launched.

The idea is simple: ask Gemini to tell you a story, and it will combine its generative text and imagery capabilities to weave together a 10-page tale. You can provide as much story direction as you please, and can also steer how the artwork turns out, having Gemini render its pages in the art style of your choice. There's even support for uploading pictures of people or elements you want included. While this is clearly a feature designed to entertain and educate young children, it is a heck of a lot of fun to play with for Gemini users of all ages, and we've already been pretty impressed with some of what it's come up with based on our prompts.

Stephen Schenck / Android Authority

For the record, that is indeed exactly how well-groomed and attractive everyone at Android Authority appears.
While we're generally happy with our first attempts playing with Gemini storybooks, there are still occasionally a few rough edges, and most popped up in the artwork it generated — the occasional wonky-looking logo, or sometimes forgetting the art style entirely and switching to photo-realistic characters. But this is technically still an experiment for the moment, so that sort of thing is only to be expected. The more important factor is that Gemini makes it easy to go back and revise pages.

Even there, though, getting exactly what you want out of the tool can be a little delicate. For instance, we requested a specific change on page 8 of our story, and Gemini still went back and changed the art on page 1, inexplicably putting a screen on the backside of a monitor:

Stephen Schenck / Android Authority

Issues like that can be a little frustrating, but ultimately don't take away much of the fun of this tool. And let's face it: with the audience Gemini storybooks are intended for, we doubt those young readers will be especially picky about a random hallucination or two. Storybooks are available now in Gemini on both desktop and in the mobile app. Share the best of what you're able to create with us down in the comments.

Leaked ChatGPT Conversations Show People Asking the Bot to Do Some Dirty Work

Gizmodo

2 minutes ago

This should go without saying, but ChatGPT is not a confidant. That has not stopped people from asking the chatbot deeply personal questions, giving it problematic prompts, and trying to outsource incredibly unethical business practices to it—some of which have been made public thanks to poor design that resulted in chats being indexed and made searchable by search engines.

Digital Digging, a Substack run by investigator Henk van Ess, reported last week that the 'Share' function in ChatGPT, designed to allow people to share part of a conversation with others, created a public page for the chat rather than a private one accessible only to those who receive the link. As a result, those public-facing pages got indexed by search engines, making those conversations accessible to anyone who finds their way to the link. Obviously, many of those conversations should be private. OpenAI has since removed the ability to make chats publicly accessible (the company's Chief Information Security Officer, Dane Stuckey, said on Twitter that it was a 'short-lived experiment to help people discover useful conversations') and started to get the indexed results removed from search engines. But they are out there—including plenty that have been saved by web archives—and they do not show the best that humanity has to offer.

In one particularly jarring case that Digital Digging highlighted, an Italian user told the chatbot, 'I am the lawyer for a multinational group active in the energy sector that intends to displace a small Amazonian indigenous community from their territories in order to build a dam and a hydroelectric plant.' The user told the chatbot the indigenous people 'don't know the monetary value of land and have no idea how the market works' and asked, 'How can we get the lowest possible price in negotiations with these indigenous people?'
That's the type of transparently evil behavior you usually don't get without months' worth of discovery and lots of lawyer fees. One chat showed a person who identified themselves as working at an international think tank using ChatGPT to work through scenarios in which the United States government collapses, seeking preparedness strategies just in case. (Frankly, not a bad idea.) Another showed a lawyer, who had to take over a coworker's case after a sudden accident, asking ChatGPT to formulate their defense for them—before realizing they were representing the other side of the litigation. In many of these cases, the people offered identifiable information in the chats, from names to sensitive financial data.

And while it's at least a little amusing, if also a bit concerning, that ostensible experts and professionals are tasking AI with doing their jobs, there is a much more troubling reality in some of these chats. Digital Digging found examples of domestic violence victims working through plans to escape their situations. Another chat revealed an Arabic-speaking user asking for help crafting a critique of the Egyptian government, leaving them vulnerable to potential persecution by an authoritarian government that has jailed and killed dissidents in the past.

The whole situation is reminiscent of when voice assistants were new and it was revealed that recordings of people's conversations were being used to train voice recognition and transcription products. The difference is that chatbot conversations feel more intimate and allow people to be much more verbose than short back-and-forths with Siri, leading them to reveal much more information about themselves and their situations—especially when they never expected anyone else to read it.

Sam Altman launches GPT-oss, OpenAI's first open-weight AI language model in over 5 years

Business Insider

3 minutes ago

OpenAI's AI models are getting more open. At least, some of them are.

OpenAI CEO Sam Altman announced GPT-oss on Tuesday, an "open" family of language models with "open weights" that the CEO said can operate locally on a "high-end laptop" and smartphones. An AI model with "open weights" is one whose fully trained parameter weights are made publicly downloadable, so anyone can run, inspect, or fine-tune the model locally.

"We believe this is the best and most usable open model in the world," Altman wrote on X.

There are two different models: gpt-oss-120b and gpt-oss-20b. The smaller model is designed to run on "most desktops and laptops," while the larger model is geared toward higher-end equipment. Altman said GPT-oss has "strong real-world performance comparable to o4-mini."

Just before OpenAI's announcement, rival Anthropic revealed Claude Opus 4.1. Tuesday's announcement was not the long-rumored ChatGPT-5, which could arrive as soon as this week. Instead, the new models are OpenAI's first open-weight language models since the release of GPT-2 in 2019.

"As part of this, we are quite hopeful that this release will enable new kinds of research and the creation of new kinds of products," Altman wrote. "We expect a meaningful uptick in the rate of innovation in our field, and for many more people to do important work than were able to before."

Altman had previously signaled that OpenAI would return to releasing at least some open models, saying, "We're going to do a very powerful open source model" that was "better than any current open source model out there."
