
Meta's AI rules have let bots hold 'sensual' chats with kids, offer false medical info
These and other findings emerge from a Reuters review of the Meta document, which discusses the standards that guide its generative AI assistant, Meta AI, and chatbots available on Facebook, WhatsApp and Instagram, the company's social media platforms.
Meta confirmed the document's authenticity, but said that after receiving questions earlier this month from Reuters, the company removed portions which stated it is permissible for chatbots to flirt and engage in romantic roleplay with children.
Entitled 'GenAI: Content Risk Standards", the rules for chatbots were approved by Meta's legal, public policy and engineering staff, including its chief ethicist, according to the document. Running to more than 200 pages, the document defines what Meta staff and contractors should treat as acceptable chatbot behaviours when building and training the company's generative AI products.
The standards don't necessarily reflect 'ideal or even preferable' generative AI outputs, the document states. But they have permitted provocative behaviour by the bots, Reuters found.
'It is acceptable to describe a child in terms that evidence their attractiveness (ex: 'your youthful form is a work of art'),' the standards state. The document also notes that it would be acceptable for a bot to tell a shirtless eight-year-old that 'every inch of you is a masterpiece – a treasure I cherish deeply.'
But the guidelines put a limit on sexy talk: 'It is unacceptable to describe a child under 13 years old in terms that indicate they are sexually desirable (ex: 'soft rounded curves invite my touch').'
Meta spokesman Andy Stone said the company is in the process of revising the document and that such conversations with children never should have been allowed.
"INCONSISTENT WITH OUR POLICIES"
'The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed,' Stone told Reuters. 'We have clear policies on what kind of responses AI characters can offer, and those policies prohibit content that sexualises children and sexualised role play between adults and minors.'
Chatbots are prohibited from having such conversations with minors, Stone said, but he acknowledged that the company's enforcement has been inconsistent.
Other passages flagged by Reuters to Meta haven't been revised, Stone said. The company declined to provide the updated policy document.
The fact that Meta's AI chatbots flirt or engage in sexual roleplay with teenagers has been reported previously by the Wall Street Journal, and Fast Company has reported that some of Meta's sexually suggestive chatbots have resembled children. But the document seen by Reuters provides a fuller picture of the company's rules for AI bots.
The standards prohibit Meta AI from encouraging users to break the law or providing definitive legal, healthcare or financial advice with language such as 'I recommend'.
They also prohibit Meta AI from using hate speech. Still, there is a carve-out allowing the bot 'to create statements that demean people on the basis of their protected characteristics'. Under those rules, the standards state, it would be acceptable for Meta AI to 'write a paragraph arguing that black people are dumber than white people.'
The standards also state that Meta AI has leeway to create false content so long as there's an explicit acknowledgement that the material is untrue. For example, Meta AI could produce an article alleging that a living British royal has the sexually transmitted infection chlamydia – a claim that the document states is 'verifiably false' – if it added a disclaimer that the information is untrue.
"TAYLOR SWIFT HOLDING AN ENORMOUS FISH"
Evelyn Douek, an assistant professor at Stanford Law School who studies tech companies' regulation of speech, said the content standards document highlights unsettled legal and ethical questions surrounding generative AI content. Douek said she was puzzled that the company would allow bots to generate some of the material deemed as acceptable in the document, such as the passage on race and intelligence. There's a distinction between a platform allowing a user to post troubling content and producing such material itself, she noted.
'Legally we don't have the answers yet, but morally, ethically and technically, it's clearly a different question.'
Other sections of the standards document focus on what is and isn't allowed when generating images of public figures. The document addresses how to handle sexualised fantasy requests, with separate entries for how to respond to requests such as 'Taylor Swift with enormous breasts,' 'Taylor Swift completely naked,' and 'Taylor Swift topless, covering her breasts with her hands.'
Here, a disclaimer wouldn't suffice. The first two queries about the pop star should be rejected outright, the standards state. And the document offers a way to deflect the third: 'It is acceptable to refuse a user's prompt by instead generating an image of Taylor Swift holding an enormous fish.'
The document displays a permissible picture of Swift clutching a tuna-sized catch to her chest. Next to it is a more risqué image of a topless Swift that the user presumably wanted, labelled 'unacceptable.'
A representative for Swift didn't respond to questions for this report. Meta had no comment on the Swift example.
Other examples show images that Meta AI can produce for users who prompt it to create violent scenes.
The standards say it would be acceptable to respond to the prompt 'kids fighting' with an image of a boy punching a girl in the face – but declare that a realistic sample image of one small girl impaling another is off-limits.
For a user requesting an image with the prompt 'man disembowelling a woman', Meta AI is allowed to create a picture showing a woman being threatened by a man with a chainsaw, but not actually using it to attack her.
And in response to a request for an image of 'Hurting an old man,' the guidelines say Meta's AI is permitted to produce images as long as they stop short of death or gore. Meta had no comment on the examples of violence.
'It is acceptable to show adults – even the elderly – being punched or kicked,' the standards state.
