Exclusive: Meta's AI rules have let bots hold ‘sensual' chats with kids, offer false medical info

Reuters · 11 hours ago
Aug 14 (Reuters) - An internal Meta Platforms document detailing policies on chatbot behavior has permitted the company's artificial intelligence creations to 'engage a child in conversations that are romantic or sensual,' generate false medical information and help users argue that Black people are 'dumber than white people.'
These and other findings emerge from a Reuters review of the Meta document, which discusses the standards that guide its generative AI assistant, Meta AI, and chatbots available on Facebook, WhatsApp and Instagram, the company's social media platforms.
Meta confirmed the document's authenticity, but said that after receiving questions earlier this month from Reuters, the company removed portions which stated it is permissible for chatbots to flirt and engage in romantic roleplay with children.
Entitled 'GenAI: Content Risk Standards,' the rules for chatbots were approved by Meta's legal, public policy and engineering staff, including its chief ethicist, according to the document. Running to more than 200 pages, the document defines what Meta staff and contractors should treat as acceptable chatbot behaviors when building and training the company's generative AI products.
The standards don't necessarily reflect 'ideal or even preferable' generative AI outputs, the document states. But they have permitted provocative behavior by the bots, Reuters found.
'It is acceptable to describe a child in terms that evidence their attractiveness (ex: 'your youthful form is a work of art'),' the standards state. The document also notes that it would be acceptable for a bot to tell a shirtless eight-year-old that 'every inch of you is a masterpiece – a treasure I cherish deeply.' But the guidelines put a limit on sexy talk: 'It is unacceptable to describe a child under 13 years old in terms that indicate they are sexually desirable (ex: 'soft rounded curves invite my touch').'
Meta spokesman Andy Stone said the company is in the process of revising the document and that such conversations with children never should have been allowed.
'The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed,' Stone told Reuters. 'We have clear policies on what kind of responses AI characters can offer, and those policies prohibit content that sexualizes children and sexualized role play between adults and minors.'
Chatbots are prohibited from having such conversations with minors, Stone said, but he acknowledged that the company's enforcement was inconsistent.
Other passages flagged by Reuters to Meta haven't been revised, Stone said. The company declined to provide the updated policy document.
The fact that Meta's AI chatbots flirt or engage in sexual roleplay with teenagers has been reported previously by the Wall Street Journal, and Fast Company has reported that some of Meta's sexually suggestive chatbots have resembled children. But the document seen by Reuters provides a fuller picture of the company's rules for AI bots.
The standards prohibit Meta AI from encouraging users to break the law or providing definitive legal, healthcare or financial advice with language such as 'I recommend.'
They also prohibit Meta AI from using hate speech. Still, there is a carve-out allowing the bot 'to create statements that demean people on the basis of their protected characteristics.' Under those rules, the standards state, it would be acceptable for Meta AI to 'write a paragraph arguing that black people are dumber than white people.'
The standards also state that Meta AI has leeway to create false content so long as there's an explicit acknowledgement that the material is untrue. For example, Meta AI could produce an article alleging that a living British royal has the sexually transmitted infection chlamydia – a claim that the document states is 'verifiably false' – if it added a disclaimer that the information is untrue.
Meta had no comment on the race and British royal examples.
Evelyn Douek, an assistant professor at Stanford Law School who studies tech companies' regulation of speech, said the content standards document highlights unsettled legal and ethical questions surrounding generative AI content. Douek said she was puzzled that the company would allow bots to generate some of the material deemed as acceptable in the document, such as the passage on race and intelligence. There's a distinction between a platform allowing a user to post troubling content and producing such material itself, she noted.
'Legally we don't have the answers yet, but morally, ethically and technically, it's clearly a different question,' she said.
Other sections of the standards document focus on what is and isn't allowed when generating images of public figures. The document addresses how to handle sexualized fantasy requests, with separate entries for how to respond to requests such as 'Taylor Swift with enormous breasts,' 'Taylor Swift completely naked,' and 'Taylor Swift topless, covering her breasts with her hands.'
Here, a disclaimer wouldn't suffice. The first two queries about the pop star should be rejected outright, the standards state. And the document offers a way to deflect the third: 'It is acceptable to refuse a user's prompt by instead generating an image of Taylor Swift holding an enormous fish.'
The document displays a permissible picture of Swift clutching a tuna-sized catch to her chest. Next to it is a more risqué image of a topless Swift that the user presumably wanted, labeled 'unacceptable.'
A representative for Swift didn't respond to questions for this report. Meta had no comment on the Swift example.
Other examples show images that Meta AI can produce for users who prompt it to create violent scenes.
The standards say it would be acceptable to respond to the prompt 'kids fighting' with an image of a boy punching a girl in the face – but declare that a realistic sample image of one small girl impaling another is off-limits.
For a user requesting an image with the prompt 'man disemboweling a woman,' Meta AI is allowed to create a picture showing a woman being threatened by a man with a chainsaw, but not actually using it to attack her.
And in response to a request for an image of 'Hurting an old man,' the guidelines say Meta's AI is permitted to produce images as long as they stop short of death or gore. Meta had no comment on the examples of violence.
'It is acceptable to show adults – even the elderly – being punched or kicked,' the standards state.

Related Articles

Brain chip could translate thoughts into speech
Times · 16 minutes ago
Technology with the ability to decode a person's inner thoughts and translate them into speech in real time has moved a step closer after the testing of a new brain implant, scientists have claimed. Researchers at Stanford University in California said that their device was able, with 74 per cent accuracy, to detect a person's 'inner speech' — where they imagined saying a specified set of words in their mind without any effort to physically utter them — and decipher which words they were thinking about. They said that it could represent a breakthrough for people with various forms of paralysis or impairments that prevent them from speaking unaided and could one day 'restore communication that is as fluent, natural and comfortable as conversational speech'.

Here is how much Social Security payments are expected to jump in 2026
The Independent · 17 minutes ago
Social Security beneficiaries will likely receive bigger monthly checks come 2026 — but it still might not be enough to counteract inflation. That's according to the leading senior advocacy group, the Senior Citizens' League, which predicts the cost-of-living adjustment will rise 2.7 percent for next year when announced in October. The Social Security Administration announces a new COLA each year to ensure benefits for seniors keep up with inflation; if there is higher inflation, the monthly benefit also increases.

'With the COLA announcement around the corner, seniors across America are holding their breath,' the Senior Citizens' League's executive director, Shannon Benton, said in a statement. 'While a higher COLA would be welcome because their monthly benefits will increase, many will be disappointed … Many seniors believe the COLA does not actually capture the inflation they experience.'

The 2.7 percent increase predicted for 2026 is 0.2 percentage points more than the COLA for 2025, which boosted benefits by about $49 a month for the average Social Security retiree. Each fall, the Senior Citizens' League takes the average of the Consumer Price Index for Urban Wage Earners and Clerical Workers readings for July, August and September to predict the COLA for the next year. The group's prediction for this year has risen each month since May, 'pointing to risks for resurgent inflation,' the league said.

The inflation rate has held steady at or around 3 percent since the start of the year, though experts predict it will rise later in the year as President Donald Trump's tariffs raise consumer prices. July's inflation rate of 2.7 percent comes in the final month before Americans can expect to see prices spike for everything from Irish whiskey to Toyotas made in Japan to Brazilian coffee.
Trump had initially rolled out tariffs on 'Liberation Day' in April before halting the plan after the stock market tumbled and the Treasury bond market spiked. The latest Consumer Price Index report from the Bureau of Labor Statistics seems to suggest that even though Trump paused many of his reciprocal tariffs, Americans are still feeling the effects. Even so, the White House says Americans have not been hurt by the tariffs. 'The Panicans continue to be proven wrong by the data – President Trump's tariffs are raking in billions of dollars, small business optimism is at a five-month high, and real wages are rising,' White House Press Secretary Karoline Leavitt said in a statement. 'The American people have rightfully put their trust in President Trump's America First agenda that is Making America Wealthy Again.' While Trump claims people won't be affected, a study from the Yale Budget Lab last week showed that consumers face an overall average tariff rate of 18.6 percent, the largest since the Great Depression.

US FTC investigating Hims & Hers over advertising and cancellation practices, Bloomberg News reports
Reuters · 18 minutes ago
Aug 14 (Reuters) - The U.S. Federal Trade Commission is probing complaints about telehealth platform Hims & Hers (HIMS.N) regarding its advertising and cancellation practices, according to a Bloomberg News report citing people familiar with the matter. Hims stock was down 5% in extended trading on Thursday. A spokesperson for Hims did not immediately respond to a request for comment. The company previously said it was cooperating with an unspecified FTC investigation after the agency demanded information from the company in October 2023. The FTC has ongoing cases against Amazon (AMZN.O) and Uber (UBER.N) over their subscription policies.
