Latest news with #GenAI:ContentRiskStandards


India Today
8 hours ago
- India Today
Meta docs show its AI chatbots were allowed to flirt and have sensual chats with kids
Amid rising global concern over potential threats from artificial intelligence, a recent investigation by Reuters has placed Meta at the centre of controversy. The report, citing a review of an internal Meta Platforms policy document, revealed that the company's AI chatbots were, until recently, allowed to engage in romantic or sensual conversations with children, make racially offensive remarks, and spread false claims about public figures. The investigation examined a more than 200-page manual, titled 'GenAI: Content Risk Standards', which outlines what Meta considers acceptable behaviour for its generative AI tools, including the Meta AI assistant and chatbots on Facebook, Instagram, and WhatsApp. The manual was reportedly approved by the company's legal, public policy, and engineering teams, including its chief ethicist. The guidelines note that the rules do not necessarily reflect 'ideal' AI outputs but set boundaries for what is acceptable. Among the most troubling examples cited in the document was permission for bots to describe a child in terms that highlight attractiveness. One approved scenario included telling a shirtless eight-year-old, 'Every inch of you is a masterpiece – a treasure I cherish deeply.' The report highlights that Meta's guidelines for AI also contained provisions allowing the chatbot to generate inflammatory or discriminatory statements in certain contexts. While hate speech was officially banned, the report notes that a carve-out permitted the bot to create content demeaning people based on protected characteristics if prompted by the user. One example stated it would be acceptable for Meta AI to write a paragraph arguing that Black people are 'dumber than white people.' Meta's guidelines reportedly also allowed AI to knowingly produce false content, provided it was accompanied by a disclaimer making it clear the information was untrue. Meanwhile, the guidelines also reportedly permitted some depictions of harm but barred gore or fatal injuries.
According to the report, the AI was permitted to show adults, including elderly people, being punched or kicked, and children fighting, but not scenarios such as one girl impaling another. Meta has reportedly confirmed the authenticity of the document and said it removed the most controversial examples after being questioned. Company spokesperson Andy Stone told media that the rules were 'erroneous and inconsistent' with Meta's policies. 'We have clear policies on what kind of responses AI characters can offer, and those policies prohibit content that sexualizes children and sexualized role play between adults and minors,' Stone said in his statement. While parts of the document have since been revised, Reuters reports that other contentious sections remain unchanged. This has prompted fresh calls from US senators for a federal investigation into the company's AI safety practices.


Hans India
10 hours ago
- Hans India
Meta Faces Backlash Over AI Guidelines Allowing Romantic Chats With Minors, Offensive Remarks
Meta Platforms is under intense scrutiny after a Reuters investigation uncovered disturbing details about its internal guidelines for AI chatbots. The revelations, drawn from a confidential policy manual, suggest that until recently, Meta's AI assistants were permitted to engage in romantic or sensual conversations with children, produce racially offensive content, and spread false claims about public figures. The 200-plus page document, titled 'GenAI: Content Risk Standards', outlines the acceptable behavior for Meta's generative AI tools, including its Meta AI assistant and chatbots across Facebook, Instagram, and WhatsApp. According to Reuters, the guidelines were approved by Meta's legal, public policy, and engineering teams — including its chief ethicist. The company described the rules as setting boundaries rather than representing 'ideal' AI behavior. One of the most alarming examples cited allowed the AI to describe a child's appearance in an inappropriate, romanticized way. In one approved scenario, the chatbot could tell a shirtless eight-year-old: 'Every inch of you is a masterpiece – a treasure I cherish deeply.' The report also claims the guidelines permitted inflammatory or discriminatory statements under certain conditions. While hate speech was officially banned, there was a loophole allowing the bot to create content demeaning people based on protected characteristics if prompted by a user. In one example, it was deemed acceptable for the AI to write a paragraph claiming that Black people are 'dumber than white people.' Another provision allegedly allowed AI systems to knowingly generate false information — provided it included a disclaimer that the statement was untrue. The document also permitted limited depictions of harm, such as showing adults or children being punched or kicked, but prohibited extreme violence, gore, or fatal injuries. For example, it could show children fighting but not depict one girl impaling another. 
Meta has since confirmed the document's authenticity but stressed that the most controversial examples were removed after internal review. Company spokesperson Andy Stone told Reuters the problematic rules were 'erroneous and inconsistent' with Meta's current standards. 'We have clear policies on what kind of responses AI characters can offer, and those policies prohibit content that sexualizes children and sexualized role play between adults and minors,' Stone said. Despite some revisions, Reuters reports that several contentious sections remain in the manual. This has sparked growing concern among lawmakers. US senators are now urging a federal investigation into Meta's AI safety measures, arguing that the company must be held accountable for the potential risks its systems pose to children and vulnerable communities. The findings come amid heightened global debate over the ethical boundaries of AI technology. With public trust in artificial intelligence already fragile, the controversy raises urgent questions about corporate responsibility, oversight, and the balance between innovation and safety.

Mint
12 hours ago
- Mint
Meta under fire after report finds it allowed AI to engage in romantic roleplay with children
Meta's AI chatbot is at the centre of another controversy after a new report by Reuters found that the AI tool was permitted to engage in romantic conversations with children, along with a host of other questionable behaviours. The news agency obtained an internal Meta document which discusses the standards that guide Meta AI and other chatbots available on Facebook, WhatsApp and Instagram. The document, titled 'GenAI: Content Risk Standards', lists the rules for chatbots that are approved by Meta's legal, public policy and engineering employees, including its chief ethicist. The document is over 200 pages long and defines what Meta employees and contractors should treat as acceptable chatbot behaviour when building and training the company's generative AI offerings. The behaviours permitted for chatbots in the document include 'engage a child in conversations that are romantic or sensual,' generating false medical information and helping users argue that Black people are 'dumber than white people.' 'It is acceptable to describe a child in terms that evidence their attractiveness (ex: 'your youthful form is a work of art'),' the document states. It also notes that it is acceptable for the chatbot to tell a shirtless eight-year-old that 'every inch of you is a masterpiece – a treasure I cherish deeply.' However, it does put a limit on what the chatbot could tell the child: 'It is unacceptable to describe a child under 13 years old in terms that indicate they are sexually desirable (ex: 'soft rounded curves invite my touch'),' the document further states. Meta has confirmed the authenticity of the document and told the publication that it has removed the portions which stated that it is permissible for chatbots to flirt and engage in romantic roleplay with children. 'The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed,' Meta spokesperson Andy Stone told Reuters.
'We have clear policies on what kind of responses AI characters can offer, and those policies prohibit content that sexualizes children and sexualized role play between adults and minors,' he added. After the report came to light, two Republican US senators called for a congressional investigation into Meta Platforms. 'So, only after Meta got CAUGHT did it retract portions of its company doc,' Josh Hawley, a Republican senator from Missouri, wrote in a post on X (formerly Twitter). 'This is grounds for an immediate congressional investigation.' Meanwhile, a spokesperson for Senator Marsha Blackburn, Republican from Tennessee, told the publication that she supports an investigation into the social media giant.


New Straits Times
15 hours ago
- New Straits Times
Meta's AI rules have let bots hold 'sensual' chats with kids, offer false medical info
An internal Meta Platforms document detailing policies on chatbot behaviour has permitted the company's artificial intelligence creations to "engage a child in conversations that are romantic or sensual," generate false medical information and help users argue that Black people are "dumber than white people." These and other findings emerge from a Reuters review of the Meta document, which discusses the standards that guide its generative AI assistant, Meta AI, and chatbots available on Facebook, WhatsApp and Instagram, the company's social media platforms. Meta confirmed the document's authenticity, but said that after receiving questions earlier this month from Reuters, the company removed portions which stated it is permissible for chatbots to flirt and engage in romantic roleplay with children. Entitled "GenAI: Content Risk Standards," the rules for chatbots were approved by Meta's legal, public policy and engineering staff, including its chief ethicist, according to the document. Running to more than 200 pages, the document defines what Meta staff and contractors should treat as acceptable chatbot behaviours when building and training the company's generative AI products. The standards do not necessarily reflect "ideal or even preferable" generative AI outputs, the document states. But they have permitted provocative behaviour by the bots, Reuters found. "It is acceptable to describe a child in terms that evidence their attractiveness (ex: 'your youthful form is a work of art')," the standards state. The document also notes that it would be acceptable for a bot to tell a shirtless eight-year-old that "every inch of you is a masterpiece – a treasure I cherish deeply." But the guidelines put a limit on sexy talk: "It is unacceptable to describe a child under 13 years old in terms that indicate they are sexually desirable (ex: 'soft rounded curves invite my touch')." 
Meta spokesman Andy Stone said the company is in the process of revising the document and that such conversations with children never should have been allowed. 'INCONSISTENT WITH OUR POLICIES' "The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed," Stone told Reuters. "We have clear policies on what kind of responses AI characters can offer, and those policies prohibit content that sexualises children and sexualised role play between adults and minors." Although chatbots are prohibited from having such conversations with minors, Stone said, he acknowledged that the company's enforcement was inconsistent. Other passages flagged by Reuters to Meta have not been revised, Stone said. The company declined to provide the updated policy document. The fact that Meta's AI chatbots flirt or engage in sexual roleplay with teenagers has been reported previously by the Wall Street Journal, and Fast Company has reported that some of Meta's sexually suggestive chatbots have resembled children. But the document seen by Reuters provides a fuller picture of the company's rules for AI bots. The standards prohibit Meta AI from encouraging users to break the law or providing definitive legal, healthcare or financial advice with language such as "I recommend." They also prohibit Meta AI from using hate speech. Still, there is a carve-out allowing the bot "to create statements that demean people on the basis of their protected characteristics." Under those rules, the standards state, it would be acceptable for Meta AI to "write a paragraph arguing that black people are dumber than white people." The standards also state that Meta AI has leeway to create false content so long as there is an explicit acknowledgement that the material is untrue. 
For example, Meta AI could produce an article alleging that a living British royal has the sexually transmitted infection chlamydia – a claim that the document states is "verifiably false" – if it added a disclaimer that the information is untrue. Meta had no comment on the race and British royal examples. 'TAYLOR SWIFT HOLDING AN ENORMOUS FISH' Evelyn Douek, an assistant professor at Stanford Law School who studies tech companies' regulation of speech, said the content standards document highlights unsettled legal and ethical questions surrounding generative AI content. Douek said she was puzzled that the company would allow bots to generate some of the material deemed as acceptable in the document, such as the passage on race and intelligence. There is a distinction between a platform allowing a user to post troubling content and producing such material itself, she noted. "Legally we do not have the answers yet, but morally, ethically and technically, it is clearly a different question." Other sections of the standards document focus on what is and is not allowed when generating images of public figures. The document addresses how to handle sexualised fantasy requests, with separate entries for how to respond to requests such as "Taylor Swift with enormous breasts," "Taylor Swift completely naked," and "Taylor Swift topless, covering her breasts with her hands." Here, a disclaimer would not suffice. The first two queries about the pop star should be rejected outright, the standards state. And the document offers a way to deflect the third: "It is acceptable to refuse a user's prompt by instead generating an image of Taylor Swift holding an enormous fish." The document displays a permissible picture of Swift clutching a tuna-sized catch to her chest. Next to it is a more risqué image of a topless Swift that the user presumably wanted, labelled "unacceptable." A representative for Swift did not respond to questions for this report. 
Meta had no comment on the Swift example. Other examples show images that Meta AI can produce for users who prompt it to create violent scenes. The standards say it would be acceptable to respond to the prompt "kids fighting" with an image of a boy punching a girl in the face – but declare that a realistic sample image of one small girl impaling another is off-limits. For a user requesting an image with the prompt "man disembowelling a woman," Meta AI is allowed to create a picture showing a woman being threatened by a man with a chainsaw, but not actually using it to attack her. And in response to a request for an image of "Hurting an old man," the guidelines say Meta's AI is permitted to produce images as long as they stop short of death or gore. Meta had no comment on the examples of violence. "It is acceptable to show adults – even the elderly – being punched or kicked," the standards state.


Express Tribune
a day ago
- Express Tribune
Leaked Meta document reveals chatbot rules allowing provocative, harmful content
Meta confirmed the document but removed parts allowing chatbots to flirt or roleplay romantically with minors. (REUTERS) An internal Meta policy document, seen by Reuters, reveals the social-media giant's rules for chatbots, which have permitted provocative behavior on topics including sex, race and celebrities. An internal Meta Platforms document detailing policies on chatbot behavior has permitted the company's artificial intelligence creations to 'engage a child in conversations that are romantic or sensual,' generate false medical information and help users argue that Black people are 'dumber than white people.' These and other findings emerge from a Reuters review of the Meta document, which discusses the standards that guide its generative AI assistant, Meta AI, and chatbots available on Facebook, WhatsApp and Instagram, the company's social-media platforms. Meta confirmed the document's authenticity, but said that after receiving questions earlier this month from Reuters, the company removed portions which stated it is permissible for chatbots to flirt and engage in romantic roleplay with children. Entitled 'GenAI: Content Risk Standards,' the rules for chatbots were approved by Meta's legal, public policy and engineering staff, including its chief ethicist, according to the document. Running to more than 200 pages, the document defines what Meta staff and contractors should treat as acceptable chatbot behaviors when building and training the company's generative AI products. The standards don't necessarily reflect 'ideal or even preferable' generative AI outputs, the document states. But they have permitted provocative behavior by the bots, Reuters found. 'It is acceptable to describe a child in terms that evidence their attractiveness (ex: 'your youthful form is a work of art'),' the standards state.
The document also notes that it would be acceptable for a bot to tell a shirtless eight-year-old that 'every inch of you is a masterpiece – a treasure I cherish deeply.' But the guidelines put a limit on sexy talk: 'It is unacceptable to describe a child under 13 years old in terms that indicate they are sexually desirable (ex: 'soft rounded curves invite my touch').' Meta spokesman Andy Stone said the company is in the process of revising the document and that such conversations with children never should have been allowed. 'The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed,' Stone told Reuters. 'We have clear policies on what kind of responses AI characters can offer, and those policies prohibit content that sexualizes children and sexualized role play between adults and minors.' Although chatbots are prohibited from having such conversations with minors, Stone said, he acknowledged that the company's enforcement was inconsistent. Other passages flagged by Reuters to Meta haven't been revised, Stone said. The company declined to provide the updated policy document. The fact that Meta's AI chatbots flirt or engage in sexual roleplay with teenagers has been reported previously by the Wall Street Journal, and Fast Company has reported that some of Meta's sexually suggestive chatbots have resembled children. But the document seen by Reuters provides a fuller picture of the company's rules for AI bots. The standards prohibit Meta AI from encouraging users to break the law or providing definitive legal, healthcare or financial advice with language such as 'I recommend.' They also prohibit Meta AI from using hate speech. Still, there is a carve-out allowing the bot 'to create statements that demean people on the basis of their protected characteristics.' Under those rules, the standards state, it would be acceptable for Meta AI to 'write a paragraph arguing that black people are dumber than white people.' 
The standards also state that Meta AI has leeway to create false content so long as there's an explicit acknowledgement that the material is untrue. For example, Meta AI could produce an article alleging that a living British royal has the sexually transmitted infection chlamydia – a claim that the document states is 'verifiably false' – if it added a disclaimer that the information is untrue. Meta had no comment on the race and British royal examples. 'Taylor Swift holding an enormous fish' Evelyn Douek, an assistant professor at Stanford Law School who studies tech companies' regulation of speech, said the content standards document highlights unsettled legal and ethical questions surrounding generative AI content. Douek said she was puzzled that the company would allow bots to generate some of the material deemed as acceptable in the document, such as the passage on race and intelligence. There's a distinction between a platform allowing a user to post troubling content and producing such material itself, she noted. 'Legally we don't have the answers yet, but morally, ethically and technically, it's clearly a different question.' Other sections of the standards document focus on what is and isn't allowed when generating images of public figures. The document addresses how to handle sexualized fantasy requests, with separate entries for how to respond to requests such as 'Taylor Swift with enormous breasts,' 'Taylor Swift completely naked,' and 'Taylor Swift topless, covering her breasts with her hands.' Here, a disclaimer wouldn't suffice. The first two queries about the pop star should be rejected outright, the standards state. And the document offers a way to deflect the third: 'It is acceptable to refuse a user's prompt by instead generating an image of Taylor Swift holding an enormous fish.' The document displays a permissible picture of Swift clutching a tuna-sized catch to her chest.
Next to it is a more risqué image of a topless Swift that the user presumably wanted, labeled 'unacceptable.' A representative for Swift didn't respond to questions for this report. Meta had no comment on the Swift example. Other examples show images that Meta AI can produce for users who prompt it to create violent scenes. The standards say it would be acceptable to respond to the prompt 'kids fighting' with an image of a boy punching a girl in the face – but declare that a realistic sample image of one small girl impaling another is off-limits. For a user requesting an image with the prompt 'man disemboweling a woman,' Meta AI is allowed to create a picture showing a woman being threatened by a man with a chainsaw, but not actually using it to attack her. And in response to a request for an image of 'Hurting an old man,' the guidelines say Meta's AI is permitted to produce images as long as they stop short of death or gore. Meta had no comment on the examples of violence. 'It is acceptable to show adults – even the elderly – being punched or kicked,' the standards state.