
Meta users don't know their intimate AI chats are out there for all to see
A man wants to know how to help his friend come out of the closet. An aunt struggles to find the right words to congratulate her niece on her graduation. And one guy wants to know how to ask a girl — 'in Asian' — if she's interested in older men.
Ten years ago, they might have discussed those vulnerable questions with friends over brunch, at a dive bar, or in the office of a therapist or clergy member. Today, scores of users are posting their often cringe-making conversations about relationships, identity and spirituality with Meta's AI chatbot to the app's public feed — sometimes seemingly without knowing their musings can be seen by others.
Meta launched a stand-alone app for its AI chatbot nearly two months ago with the goal of giving users personalized, conversational answers to any question they could come up with — a service similar to those offered by OpenAI's ChatGPT or Anthropic's Claude. But the app came with a unique feature: a discover feed where users could post their personal conversations with Meta AI for the world to see, reflecting the company's larger strategy of embedding AI-created content into its social networks.
Since the April launch, the app's discover feed has been flooded with users' conversations with Meta AI on personal topics about their lives or their private philosophical questions about the world. As the feature gained more attention, some users appeared to purposely promote comical conversations with Meta AI. Others are publishing AI-generated images on political topics, such as Trump in a diaper, images of girls in sexual situations and promotions for their businesses. In at least one case, a person apparently posting under their real name asked the bot to delete an exchange after posing an embarrassing question.
The flurry of personal posts on Meta AI is the latest indication that people are increasingly turning to conversational chatbots to meet their relationship and emotional needs. As users ask the chatbots for advice on matters ranging from their marital problems to financial challenges, privacy advocates warn that users' personal information may end up being used by tech companies in ways they didn't expect or want.
'We've seen a lot of examples of people sending very, very personal information to AI therapist chatbots or saying very intimate things to chatbots in other settings,' said Calli Schroeder, a senior counsel at the Electronic Privacy Information Center.
'I think many people assume there's some baseline level of confidentiality there. There's not. Everything you submit to an AI system at bare minimum goes to the company that's hosting the AI.'
Meta spokesman Daniel Roberts said chats with Meta AI are set to private by default, and users have to actively tap the share and publish buttons before a conversation shows up on the app's discover feed. While some real identities are evident, people are able to pick a different username for the discover feed.
Still, the company's share button doesn't explicitly tell users where their conversations with Meta AI will be posted or what other people will be able to see — a fact that appeared to confuse some users of the new app.
Meta's approach of blending social networking components with an AI chatbot designed to give personal answers is a departure from the approach of some of the company's biggest rivals. ChatGPT and Claude give similarly conversational and informative answers to questions posed by users, but there isn't a similar feed where other people can see that content. Video- or image-generating AI tools such as Midjourney and OpenAI's Sora have pages where people can share their work and see what AI has created for others, but neither service engages in text conversations that turn personal.
The discover feed on Meta AI reads like a mixture of users' personal diaries and Google search histories, filled with questions ranging from the mundane to the political and philosophical. In one instance, a husband asked Meta AI in a voice recording how to grow rice indoors for his 'Filipino wife.' Users asked Meta AI about Jesus' divinity, how to get picky toddlers to eat and how to budget while enjoying daily pleasures. The feed is also filled with images created by Meta AI but conceived by users' imaginations, such as one of President Donald Trump eating poop and another of the grim reaper riding a motorcycle.
Research shows that AI chatbots are uniquely designed to elicit users' social instincts by mirroring human-like cues that give people a sense of connection, said Michal Luria, a research fellow at the Center for Democracy and Technology, a Washington think tank.
'We just naturally respond as if we are talking to … another person, and this reaction is automatic,' she said. 'It's kind of hard to rewire.'
In April, Meta CEO Mark Zuckerberg told podcaster Dwarkesh Patel that one of the main reasons people used Meta AI was to talk through difficult conversations they need to have with people in their lives — a use he thinks will become more compelling as the AI model gets to know its users.
'People use stuff that's valuable for them,' he said. 'If you think something someone is doing is bad and they think it's really valuable, most of the time in my experience, they're right and you're wrong.'
Meta AI's discover feed is filled with questions about romantic relationships — a popular topic people discuss with chatbots. In one instance, a woman asked Meta AI whether her 70-year-old boyfriend can really be a feminist if he says he's willing to cook and clean but ultimately doesn't. Meta AI told her the obvious: that there appears to be a 'disconnect' between her partner's words and actions. Another user asked about the best way to 'rebuild yourself after a breakup,' eliciting a boilerplate list of tips about self-care and setting boundaries from Meta AI.
Some questions posed to Meta AI took an illicit turn. One user asked Meta AI to generate images of 'two 21 year old women wrestling in a mud bath' and then posted the results on the discover feed under the headline 'Muddy bikinis and passionate kisses.' Another asked Meta AI to create an image of a 'big booty White girl.'
There are few regulations pushing tech companies to adopt stricter content or privacy rules for their chatbots. In fact, Congress is considering passing a tax and immigration bill that includes a provision to roll back state AI laws throughout the country and prohibit states from passing new ones for the next decade.
In recent months, a couple of high-profile incidents triggered questions about how tech companies handle personal data, who has access to that data, and how that information could be used to manipulate users.
In April, OpenAI announced that ChatGPT would be able to recall old conversations that users did not ask the company to save. On X, CEO Sam Altman said OpenAI was excited about '[AI] systems that get to know you over your life, and become extremely useful and personalized.'
The potential pitfalls of that approach became obvious the following month, when OpenAI had to roll back an update to ChatGPT that incorporated more personalization because it made the tool sycophantic and manipulative toward users. Last week, OpenAI's chief operating officer Brad Lightcap said the company intended to keep its privacy commitments to users after plaintiffs in a copyright lawsuit led by the New York Times demanded that OpenAI retain customer data indefinitely.
Ultimately, it may be users who push the company to offer more transparency. One user questioned Meta AI on why a 'ton of people' were 'accidentally posting super personal stuff' on the app's discover feed.
'Ok, so you're saying the feed is full of people accidentally posting personal stuff?' the Meta AI chatbot responded. 'That can be pretty wild. Maybe people are just really comfortable sharing stuff or maybe the platform's defaults are set up in a way that makes it easy to overshare. What do you think?'
