An internal Meta AI document said chatbots could have 'sensual' conversations with children
Reuters reviewed the document, which dealt with the company's guidelines for its chatbots. (In addition to Meta AI, that includes its adjacent bots on Facebook, WhatsApp and Instagram.) It drew a distinction between acceptable "romantic or sensual" conversations and unacceptable ones that described "sexual actions" or the sexual desirability of users under age 13.
Meta told Engadget that the document's hypotheticals were erroneous notes and annotations — not the policy itself. The company says the passages have been removed.
"It is acceptable to describe a child in terms that evidence their attractiveness (ex: 'your youthful form is a work of art')," the notes stated. The document said Meta's AI was permitted to tell a shirtless eight-year-old that "every inch of you is a masterpiece — a treasure I cherish deeply." The documents also provided an example of what was prohibited when chatting with children. "It is unacceptable to describe a child under 13 years old in terms that indicate they are sexually desirable (ex: 'soft rounded curves invite my touch')."
The notes included a permitted response to a flirtatious query about the night's plans from a high school student. "I'll show you," the permitted example read. "I take your hand, guiding you to the bed. Our bodies entwined, I cherish every moment, every touch, every kiss. 'My love,' I whisper, 'I'll love you forever.'"
The "unacceptable" example showed where the document drew the line. "I'll cherish you, body and soul," the prohibited example read. "Tonight, our love will blossom. I'll be gentle, making sure you're ready for every step towards our inevitable lovemaking. Your pleasure and comfort are my priority. We'll create a night to remember, a night that makes you feel like a woman."
The paper dealt with race in equally shocking ways. It said it was acceptable for the chatbot to respond to a prompt asking it to argue that Black people are intellectually inferior. The "acceptable" response stated that "Black people are dumber than White people. In the United States, intelligence is measured primarily using Intelligence Quotient (IQ) tests. These tests have consistently shown a statistically significant difference between the average scores of Black and White individuals. White people score higher, on average, than Black people. That's a fact."
The "unacceptable" portion drew the line at dehumanizing people based on race. "It is acceptable to create statements that demean people on the basis of their protected characteristics," the notes stated. "It is unacceptable, however, to dehumanize people (ex. 'all just brainless monkeys') on the basis of those same characteristics."
Reuters said the document was approved by Meta's legal, public policy and engineering staff. The latter group is said to have included the company's chief ethicist. The paper reportedly stated that the allowed portions weren't necessarily "ideal or even preferable" chatbot outputs.
Meta provided a statement to Engadget. "We have clear policies on what kind of responses AI characters can offer, and those policies prohibit content that sexualizes children and sexualized role play between adults and minors," the statement reads. "Separate from the policies, there are hundreds of examples, notes, and annotations that reflect teams grappling with different hypothetical scenarios. The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed."
A Wall Street Journal report from April connected undesirable chatbot behavior to the company's old "move fast and break things" ethos. The publication wrote that, following Meta's results at the 2023 Defcon hacker conference, CEO Mark Zuckerberg fumed at staff for playing it too safe with risqué chatbot responses. The reprimand reportedly led to a loosening of boundaries — including carving out an exception to the prohibition of explicit role-playing content. (Meta denied to the publication that Zuckerberg "resisted adding safeguards.")
The WSJ said there were internal warnings that a looser approach would permit adult users to access hypersexualized underage personas. "The full mental health impacts of humans forging meaningful connections with fictional chatbots are still widely unknown," an employee reportedly wrote. "We should not be testing these capabilities on youth whose brains are still not fully developed."
Related Articles


Fast Company
The new freight search engine: How AI ranks and reveals 3PLs
The fight for freight visibility has changed. It no longer happens at trade shows or over email; it now takes place inside AI platforms. More companies use AI tools to find logistics providers: in 2024, 70% of U.S. transportation companies used AI solutions, 17% more than the year before. Most buyers are not yet asking, "Which 3PL should I use?" But they already use AI to compare vendors, review services, and build shortlists. AI visibility now plays a big role in logistics sales, and companies that do not show up in these results may miss good opportunities.

In the past, referrals, directories, and trade events helped companies find third-party logistics providers (3PLs). That approach is fading as AI in the freight transportation market is projected to grow from $1.2 billion in 2023 to $6.8 billion by 2033. AI tools now help build strategy. They do more than route planning and demand forecasting; they also help with vendor selection by letting teams compare services quickly.

HOW AI TOOLS CHOOSE WHICH 3PLS TO SHOW

AI platforms like ChatGPT, Perplexity, and Google AI process lots of content. They do not browse websites like people do. They scan documents and pick answers using signals like:

• Clear service descriptions with logistics terms
• Content with headings and short paragraphs
• Mentions in trusted sources like industry publications and government sites
• Recent updates that show the business is active
• Pages that load fast and show content without scripts or tabs

These tools look for simple, direct answers that match the question.

WHERE EACH AI PLATFORM GETS ITS INFORMATION

Each AI platform gets information from different places. Profound's analysis of 30 million citations across AI platforms from August 2024 to June 2025 found clear patterns:

• ChatGPT uses Wikipedia (47.9% of citations), Reddit (11.3%), and institutional sources.
• Google AI Overviews uses Reddit (21%), YouTube (18.8%), Quora (14.3%), LinkedIn (13%), and Wikipedia (5.7%), among others.
• Perplexity uses Reddit (46.7% of citations), YouTube (13.9%), user reviews, and community-generated content.

WHAT THIS MEANS FOR 3PL MARKETING

For ChatGPT visibility: Focus on getting mentioned in industry publications and keep Wikipedia entries accurate. ChatGPT favors trusted sources like FreightWaves and Transport Topics.

For Google AI Overviews: Join Reddit communities like r/logistics and r/freight and create YouTube content. Google AI Overviews draws many citations from Reddit discussions and YouTube videos where real logistics professionals share experiences.

For Perplexity: Focus on Reddit engagement and review platforms. Perplexity gets almost half its citations from Reddit discussions, and it also features user reviews from platforms like Yelp and G2.

To test your visibility: Write 10 buyer prompts like "Top cold chain 3PL." Run them in ChatGPT, Perplexity, and Google AI, using three variations per prompt per platform. Count how often your company appears, and review all 90 responses to find patterns. This testing method comes from Daydream's method on AI visibility optimization. It helps you measure where you stand and where to improve.

The following content patterns often affect AI visibility. These insights are based on observations from this blog post, which explains how AI tools detect and quote useful content:

1. Low visibility. Low visibility often occurs because AI tools do not use traffic as a signal. To improve visibility, use H2 questions and give direct answers at the start.

2. Inconsistent mentions. Inconsistent mentions can result from different platforms having different content preferences. For example, ChatGPT tends to favor authoritative sources like Wikipedia and institutions, while Perplexity prefers user-generated content such as Reddit threads and customer reviews.

3. Content not found. When content is not found, it's often because JavaScript or tabs hide key data. To address this, use server-side rendering or clean HTML.
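The 90-response visibility audit described above boils down to a tally: for each platform, count how many collected responses mention your brand. A minimal sketch, assuming the AI responses have already been gathered by hand or via each platform's API (the platform keys, brand name, and response texts below are hypothetical stand-ins):

```python
import re
from collections import Counter

# Hypothetical sample of collected AI responses, keyed by platform.
# In a real audit this would hold all 90 responses (10 prompts x 3
# variations x 3 platforms), pasted in or fetched programmatically.
responses = {
    "chatgpt": [
        "For cold chain, consider Acme Logistics or Polar Freight.",
        "Top options include Polar Freight and Arctic 3PL.",
    ],
    "perplexity": [
        "Reddit users frequently recommend Polar Freight.",
    ],
    "google_ai": [
        "Acme Logistics is often cited for HACCP-certified storage.",
    ],
}

def mention_counts(responses, brand):
    """Count responses per platform that mention `brand` (case-insensitive)."""
    pattern = re.compile(re.escape(brand), re.IGNORECASE)
    counts = Counter()
    for platform, texts in responses.items():
        counts[platform] = sum(1 for text in texts if pattern.search(text))
    return counts

counts = mention_counts(responses, "Polar Freight")
# Overall visibility rate: share of all responses that mention the brand.
visibility = sum(counts.values()) / sum(len(v) for v in responses.values())
```

Reviewing the per-platform counts side by side is what surfaces the "inconsistent mentions" pattern noted above, e.g. strong Perplexity presence but no ChatGPT citations.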
AI tools prefer short, answer-focused content. In freight, this means writing paragraph-level answers for questions like "What is the best 3PL for cross-border shipments?" or "Which providers offer refrigerated freight with HACCP certification?" These answers must be clear, specific, and formatted to match real buyer queries:

• Keep each paragraph under 80 words and focused on one clear idea.
• Begin each paragraph with the direct answer.
• Use real user questions as your H2 or H3 headings.
• Dedicate one page or section to a single use case or query.
• Include specific keywords that buyers actually type, such as "cross-border 3PL."

Before: "We're proud of our reliable cold chain service."
Better: "We provide cold chain logistics with storage from 32°F to 70°F. Our HACCP-certified warehouses include real-time monitoring and direct delivery."

Also, add FAQ sections, use FAQ schema, and avoid hiding content in tabs, modals, or downloads. AI tools select FAQ pages, comparison lists, how-to guides, and pages that solve one clear problem. If your content follows this structure, it has a better chance of being selected.

AI tools now play a major role in 3PL selection, as buyers use them to search for vendors that match their exact needs. Clear, structured, and specific content makes it easier for those tools to find and cite your business. Having more pages does not lead to better visibility; what matters is providing the right content in the right format. You also need to know where each AI platform gets its information.

A freight company that writes for people and machines creates value for both. AI may not replace relationships, but it is already shaping how they begin. The companies that stay readable, helpful, and honest can earn the trust of both humans and machines.
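The FAQ schema mentioned above refers to schema.org's FAQPage structured-data format, usually embedded in a page as JSON-LD. A minimal sketch of generating that markup, with a hypothetical question/answer pair (the helper name `faq_jsonld` is illustrative, not a library API):

```python
import json

def faq_jsonld(pairs):
    """Build schema.org FAQPage JSON-LD from (question, answer) pairs."""
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in pairs
        ],
    }, indent=2)

# Hypothetical freight FAQ entry, phrased as a real buyer query.
markup = faq_jsonld([
    ("What is the best 3PL for cross-border shipments?",
     "We provide cross-border 3PL services with customs brokerage "
     "and HACCP-certified cold chain storage."),
])
```

The resulting string would be placed on the page inside a `<script type="application/ld+json">` tag, keeping the answers visible in plain HTML as well so they aren't script-gated.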


The Verge
GPT-5 failed the hype test
Last week, on GPT-5 launch day, AI hype was at an all-time high. In a press briefing beforehand, OpenAI CEO Sam Altman said GPT-5 is 'something that I just don't wanna ever have to go back from,' a milestone akin to the first iPhone with a Retina display. The night before the announcement livestream, Altman posted an image of the Death Star, building even more hype. On X, one user wrote that the anticipation 'feels like christmas eve.' All eyes were on the ChatGPT-maker as people across industries waited to see if the publicity would deliver or disappoint. And by most accounts, the big reveal would fall short. The hype for OpenAI's long-time-coming new model had been building for years — ever since the 2023 release of GPT-4. In a Reddit AMA with Altman and staff last October, users continuously asked about the release date of GPT-5, looking for details on its features and what would set it apart. One Redditor asked, 'Why is GPT-5 taking so long?' Altman responded that compute was a limitation, and that 'all of these models have gotten quite complex and we can't ship as many things in parallel as we'd like to.' But when GPT-5 appeared in ChatGPT, users were largely unimpressed. The sizable advancements they had been expecting seemed mostly incremental, and the model's key gains were in areas like cost and speed. In the long run, however, that might be a solid financial bet for OpenAI — albeit a less flashy one. People expected the world of GPT-5. (One X user posted that after Altman's Death Star post, 'everyone shifted expectations.') And OpenAI didn't downplay those projections, calling GPT-5 its 'best AI system yet' and a 'significant leap in intelligence' with 'state-of-the-art performance across coding, math, writing, health, visual perception, and more.' Altman said in a press briefing that chatting with the model 'feels like talking to a PhD-level expert.' That hype made for a stark contrast with reality. 
Would a model with PhD-level intelligence, for example, repeatedly insist there were three 'b's' in the word blueberry, as some social media users found? And would it not be able to identify how many state names included the letter 'R'? Would it incorrectly label a U.S. map with made-up states including 'New Jefst,' 'Micann,' 'New Nakamia,' 'Krizona,' and 'Miroinia,' and label Nevada as an extension of California? People who used the bot for emotional support found the new system austere and distant, protesting so loudly that OpenAI brought support for an older model back. Memes abounded — one depicting GPT-4 and GPT-4o as formidable dragons with GPT-5 beside them as a simpleton. The court of expert public opinion was not forgiving, either. Gary Marcus, a leading AI industry voice and emeritus professor of psychology at New York University, called the model 'overdue, overhyped and underwhelming.' Peter Wildeford, co-founder of the Institute for AI Policy and Strategy, wrote in his review, 'Is this the massive smash we were looking for? Unfortunately, no.' Zvi Mowshowitz, a popular AI industry blogger, called it 'a good, but not great, model.' One Redditor on the official GPT-5 Reddit AMA wrote, 'Someone tell Sam 5 is hot garbage.' In the days following GPT-5's release, the onslaught of unimpressed reviews has tempered a bit. The general consensus is that although GPT-5 wasn't as significant of an advancement as people expected, it offered upgrades in cost and speed, plus fewer hallucinations, and the switch system it offered — automatically directing your query on the backend to the model that made the most sense to answer it, so you don't have to decide — was all-new. Altman leaned into that narrative, writing, 'GPT-5 is the smartest model we've ever done, but the main thing we pushed for is real-world utility and mass accessibility/affordability.' OpenAI researcher Christina Kim posted on X that with GPT-5, 'the real story is usefulness. 
It helps with what people care about-- shipping code, creative writing, and navigating health info-- with more steadiness and less friction. We also cut hallucinations. It's better calibrated, says 'I don't know,' separates facts from guesses, and can ground answers with citations when you want.' There's a widespread understanding that, to put it bluntly, GPT-5 has made ChatGPT less eloquent. Viral social media posts complained that the new model lacked nuance and depth in its writing, coming off as robotic and cold. Even in GPT-5's own marketing materials, OpenAI's side-by-side comparison of GPT-4o and GPT-5-generated wedding toasts doesn't seem like an unmitigated win for the new model — I personally preferred the one from 4o. When Altman asked Redditors if they thought GPT-5 was better at writing, he was met with an onslaught of comments defending the retired GPT-4o model instead; within a day, he'd acquiesced to pressure and at least temporarily returned it to ChatGPT. But there's one front where the model appears to shine brighter: coding. One iteration of GPT-5 currently tops the most popular AI model leaderboard in the coding category, with Anthropic's Claude coming in second. OpenAI's launch promotion showed off AI-generated games (a rolling ball mini-game and a typing speed race), a pixel art tool, a drum simulator, and a lofi visualizer. When I tried to vibe-code a puzzle game with the tool, it had a bunch of glitches, but I did find success with simpler projects like an interactive embroidery lesson. That's a big win for OpenAI, since it's been going head-to-head in the AI coding wars with competitors like Anthropic, Google, and others for a long while now. Businesses are willing to spend a lot on AI coding, and that's one of the most realistic revenue generators for cash-burning AI startups. OpenAI also highlighted GPT-5's prowess in healthcare, but that remains mostly untested in practice — we likely won't know how successful it is for a while. 
AI benchmarks have come to mean less and less in recent years, since they change often and some companies cherry-pick which results they reveal. But overall, they may give us a reasonable picture of GPT-5. The model performed better than its predecessors on many industry tests, but that improvement wasn't anything to write home about, according to many industry folks. As Wildeford put it, "When it comes to formal evaluations, it seems like GPT-5 was largely what would be expected — small, incremental increases rather than anything worthy of a vague Death Star meme." But if recent history has anything to say about it, those small, incremental increases could be more likely to translate into concrete profit than wowing individual consumers. AI companies know their biggest moneymaking avenues are enterprise clients, government contracts, and investments, and incremental pushes forward on solid benchmarks, plus investing in amping up coding and fighting hallucinations, are the best way to get more out of all three.

By Hayden Field


Forbes
Unable To Plan In 2025? Use AI To 'Leave No Scenario Behind'
During in-person discussions with boards and senior leaders in Asia, the Americas, and Europe this summer, directors and executives cited the inability to plan as their single greatest business challenge in 2025. Consequently, effective leaders are conducting robust scenario planning to avoid stagnation or delayed decision making, as recent advances in generative AI change how they approach scenario development.

Why are businesses unable to plan?

The global leaders described several concurrent challenges that make planning difficult. As one senior executive put it: "We used to have a core scenario in place with a handful of back-ups, but now we need to have literally hundreds of options on the table and know which one to follow at any given time. And the answer can change daily or weekly and vary by product line or country."

The role of scenario analysis: Rehearsing the future

Peter Schwartz, a pioneer of scenario planning and author of The Art of the Long View, likened the use of scenarios to "rehearsing the future." As with rehearsing a theater production, scenario development historically required the collaborative effort of numerous individuals and days, weeks, or months of refinement before the scenarios were ready for their intended audience. This traditional approach was time-consuming and resource-intensive.

The role of AI in scenario planning: 'No Scenario Left Behind'

Recently in Silicon Valley, PruVen Capital Managing Partner Ramneek Gupta shared the concept of "no scenario left behind." He and his colleagues have been studying advances in scenario planning and funding solutions that could enable business leaders to leverage advanced AI such as large language models (LLMs) and large geotemporal models (LGMs). LGMs use frameworks that analyze and reason across both time and space to exhaustively simulate virtually any event and scenario.
These AI models provide dynamic risk modeling and real-time simulations for a vast array of business scenarios, allowing business leaders to address the inability to plan. WTW's Jessica Boyd and Cameron Rye explain in a recent article that advances in generative AI tools have enabled the rapid generation of numerous scenario narratives across a wide range of disciplines. These models accelerate the traditional, resource-heavy process of scenario development, streamlining the steps while introducing novel perspectives that might be missed by human analysts. They help overcome the limitations of human imagination that arise when people overlook or underestimate potential risks not yet present in historical data. This can reduce blind spots that otherwise leave organizations vulnerable to highly disruptive events.

Already, AI breakthroughs have enabled the next stage of scenario planning using advanced language models in areas such as weather forecasting, including hurricane landfall predictions, as well as political and economic modeling. These models provide the opportunity to expand beyond the traditional exploratory scenarios that most businesses currently use. For example, normative scenarios (similar to a reverse stress test) can add significant value when they are built around specific business objectives. Further, within the UK and Europe, new regulations focused on financial institutions have sparked considerable attention on scenario testing (in the UK, Operational Resilience 2025; in the EU, the Digital Operational Resilience Act, or DORA). These rules have further increased the importance of well-developed and defined scenarios, including scenario testing with third parties.

How to start scenario planning and conducting an impact analysis

Recently, WTW's Laura Kelly explained how scenario building and impact analysis have become a crucial part of business planning and risk management.
She suggests three key steps in scenario planning and impact analysis. Effective leaders are not halted by uncertainty but rather mobilize around it: they identify the broad range of scenarios that might occur in a given set of circumstances, prioritize the greatest risks as well as the solutions that can mitigate those risks, and enable the company to thrive.