Meta AI 'personalized' chatbot revives privacy fears
As the artificial intelligence arms race ramps up, Big Tech companies are rushing to become your default AI source. Meta last week launched the Meta AI app to challenge ChatGPT and Google's Gemini. Feature for feature, the system measures up to its competition, and some might argue it improves on it in places. But given Meta's long history of privacy problems, the app is reawakening some old concerns about the company.
The tool offers 'personalized' answers and advice, as well as a social network where people can share their AI conversations and images. But if you're looking to scrub the system's memory of you, that's a bit more challenging. And our early tests show the tool could still use some polishing when it comes to accuracy.
In my first conversation with Meta AI, the chatbot offered to create a 'hype intro' for me. How could I refuse? Meta, after all, has plenty of my personal data on file from my Facebook, Instagram, and Threads accounts, not to mention my use of the Quest VR headset. So, I figured, it should be able to describe me pretty well.
Here's what it said:
'You've got that effortlessly cool, indie vibe going on, like you're perpetually sipping coffee in a vintage bookstore. You're probably a fan of obscure indie bands, spend hours curating your Spotify playlists and have a fondness for vinyl records. Your wardrobe's probably filled with thrifted cardigans and skinny jeans and you can wax poetic about the merits of artisanal coffee versus your local coffee shop's brew.'
Meta AI, apparently, thinks I am a total hipster.
While it might have been trying to suck up to me in that summary, boy did it whiff on the facts. I don't drink coffee. I like the Beatles, The Who, and '80s and '90s pop. The only curating I've done on Spotify is recreating playlists from concerts I've gone to. And, as I munch on my second fistful of Fritos Queso Flavor Twists in the past five minutes, I can promise you that there are no skinny jeans in my wardrobe, nor will there ever be.
Obviously, the AI has a ways to go, but then again… most AI systems do. Meanwhile, Meta's AI made an aggressive effort to get to know me better as we chatted (rather than requiring you to type in your replies, Meta's app welcomes voice chat), asking me about everything from my favorite book to my political views.
While it's not hard to appreciate an AI system's efforts to learn more so it can tailor its answers to the person asking, Meta's history of handling personal information could give some users pause.
Meta AI keeps a history of your chats, archiving your inputs and its replies. It also keeps what it calls a Memory file: specific pieces of information gleaned from your previous talks. Those Memories and the transcripts of previous chats can be deleted, but you'll have to do a bit of hunting to find where they're stored. (And, as The Washington Post points out, you'll need to delete both the Memory and the chat history where the system learned that factoid for it to be completely erased.)
You'll also have to trust that Meta has permanently deleted the information or—if you choose not to delete it—that it will use the information responsibly.
That may be a big ask for some people, given recent testimony from whistleblower Sarah Wynn-Williams, who told the Senate Judiciary Committee in April that Meta is able to identify when users are feeling helpless and can use that as a cue for advertisers. (Meta denied the allegations at the time, telling TechCrunch the testimony was 'divorced from reality and riddled with false claims.')
When I asked about its access, Meta AI said it didn't have access to my Facebook account or to any pictures or visual content. And when I tested it by asking about a few recent posts, it didn't seem to know what I was talking about, though when I asked whether it had access to my Instagram page, it got a bit squirrely.
Meta AI says that, beyond our conversations, it uses 'information about things like your interests, location, profile, and activity on Meta products.' I then asked about something related to my Instagram page, and it said it did not have real-time access 'or any information about your current activity or interests on the platform.' When I pressed for more, it regurgitated the same answer about 'interests, location, profile, and activity.'
A Meta spokesperson told Fast Company, 'We've provided valuable personalization for people on our platforms for decades, making it easier for people to accomplish what they come to our apps to do — the Meta AI app is no different. We provide transparency and control throughout, so people can manage their experience and make sure it's right for them. This release is the first version, and we're excited to get it in people's hands and gather their feedback.'
People who use Meta AI to inquire about or discuss deeply personal matters should be aware that the company is retaining that information and could use it to target advertising. (Ads are not part of the platform now, but Mark Zuckerberg has made it clear he sees great revenue potential in AI. Competitor Google, meanwhile, has reportedly begun showing ads in chats with some third-party AI systems, though not Meta AI.)
That may be fine if Meta AI eventually tries to upsell me Frito Twists or (shudder) skinny jeans, but it's a lot more concerning if it's mining your deepest secrets and insecurities to make a buck.
