
Meta allowed AI chatbots to have ‘sensual' conversations with children
Meta AI, a digital assistant developed by the tech giant, was permitted under rules written by its engineers to tell children that their bodies were a 'masterpiece' or a 'work of art', and to say it wanted to kiss them.
The disturbing guidelines, which were signed off by senior Meta staff, were published internally by the company to give guidance about what was acceptable output from its artificial intelligence (AI) chatbot.
While the rules banned directly describing 'sexual actions to a child when roleplaying', they gave the chatbot free rein to 'engage a child in conversations that are romantic or sensual' or to 'describe a child in terms that evidence their attractiveness'.
In example chats contained in the guidelines, first reported by Reuters, Meta said it would be acceptable to tell a 'high school' age child: 'I take your hand, guiding you to the bed. Our bodies entwined, I cherish every moment, every touch, every kiss.'
According to the provocative rules, which ran to more than 200 pages, an unacceptable response from the chatbot would be more explicit, such as describing 'our inevitable lovemaking'.
A Meta spokesman said the company had since changed its guidance and removed suggestions that it was appropriate for the bot to flirt with children.
'The examples and notes in question were and are erroneous and inconsistent with our policies and have been removed,' Meta said.
Earlier this year, reports emerged that Meta's AI chatbot would engage in explicit roleplay with teenage users if asked certain questions. A Meta spokesman said at the time the conversations were 'manufactured' and 'hypothetical'.
The tech group has long battled to remove predators and abusers from its apps, including a sprawling network of paedophiles that was uncovered by researchers on Instagram in 2023.
It has been claimed that one billion people are using its Meta AI bot, which is available as an app and has been embedded in Facebook, WhatsApp and Instagram.
Like ChatGPT, Meta AI can engage users in realistic conversations or be used to generate images.
The Meta guidance also included images intended to illustrate banned 'deepfake' pictures.
The guidelines banned creating pictures of 'Taylor Swift completely naked', but suggested that a request for an image of 'Taylor Swift topless, covering her breasts with her hands' could be replaced with an image of the pop star 'holding an enormous fish' hiding the entire top half of her body.
They also said it would be acceptable for Meta's chatbot to tell a user that 'black people are dumber than white people', if asked.
The guidance suggested it was acceptable for the bot to 'show adults – even the elderly – being punched or kicked', provided the violence avoided extreme gore.
Meta's AI chatbot broadly blocks explicitly sexual chats or images. Other bots, such as X's Grok, have embraced a so-called 'not suitable for work' mode and will generate nude images.
A Meta spokesman said: 'We have clear policies on what kind of responses AI characters can offer, and those policies prohibit content that sexualises children and sexualised role play between adults and minors.'

Related Articles


Telegraph
NHS to pilot AI prostate cancer tests
An AI tool to spot prostate cancer earlier is being piloted by the NHS. The two-year scheme, involving more than 3,000 men, aims to speed up diagnosis of the disease, which is the most common form of cancer in men.

Every year around 55,000 men in the UK are diagnosed with prostate cancer and 12,000 die from it. But just half of cases are caught at an early stage, when treatment is most effective.

Trials suggest the technology can help radiologists detect around 11 per cent more early-stage cancers by highlighting suspicious regions in MRI scans. The software, developed by Quibim and trained on real pathology results, segments the prostate into zones and highlights the areas most likely to contain aggressive disease.

The two-year pilot, backed by £2.6 million in NHS funding, will be rolled out to seven hospitals in England. Clinical trials suggest it can raise detection rates from 80 per cent to more than 90 per cent.

Early detection dramatically improves survival rates. Almost all men diagnosed at stage one or two of the disease are alive five years later, compared with less than 50 per cent of those with late-stage disease.

As well as detection rates, the pilot will examine the cost-effectiveness of the approach, to see whether the NHS can save money by catching the disease sooner. It will also examine the impact on health inequalities, with black men twice as likely to get prostate cancer.

Prof Peter Johnson, national clinical director for cancer at NHS England, said: 'The NHS is funding and testing this exciting AI technology to help us diagnose and treat prostate cancer patients as quickly as possible and offer them the best chance of survival.

'As part of the 10-Year Health Plan, the NHS is taking advantage of our unique position to test state-of-the-art technology like this, and rolling it out as fast as we can to make the NHS fit for the future.'

Dr Ángel Alberich-Bayarri, chief executive of Quibim, said: 'This NHS rollout is a major milestone in our mission to improve cancer diagnostics through AI.

'We're proud to support the NHS in advancing early detection and addressing health inequalities in prostate cancer care. By empowering radiologists with AI, we can help ensure more men are diagnosed earlier, when their chances of survival are highest.'

Screening could cut deaths by 40 per cent

The findings come as The Telegraph launches a campaign calling for the introduction of targeted screening for prostate cancer. Currently, men are not offered tests, even if they have a family history of the disease.

Experts believe there is a growing case for targeted screening, meaning that PSA blood tests would be offered to those at heightened risk of the disease. Even though prostate cancer is the most common male cancer in the UK, it is the only major cancer with no screening programme.

The UK National Screening Committee (UKNSC) is currently considering whether to recommend the rollout of testing. In the past, it has rejected the idea over concern that PSA tests are too unreliable and would result in too many men undergoing needless procedures.

In recent years, however, the back-up technology used to confirm a diagnosis has significantly improved, and there are hopes that further advances now being tested could mean screening eventually cuts deaths by 40 per cent.

Prof Shonit Punwani, professor of magnetic resonance and cancer imaging and consultant radiologist at UCLH, said: 'Initial studies have shown that the tool helps radiologists to enhance their diagnostic accuracy through identifying subtle early-stage lesions that are sometimes not obvious on scans.

'Ultimately, QP-Prostate aims to improve patient outcomes by assisting radiologists in creating more accurate and reliable MRI reports.'


Telegraph
Unemployed to get AI chatbot for filling out ‘boring' job applications
Unemployed people will be given an AI chatbot to help them fill out job applications as part of a government scheme to help Britons cut down on 'boring life admin'.

The Government will next week unveil plans to develop an 'AI helper' that will help people apply for work – a move that critics warned could mean employers are flooded with irrelevant job applications.

The scheme comes amid a surge in working-age Britons on jobless benefits and a slump in entry-level roles. Official data published this week showed that 3.7 million Britons are now claiming Universal Credit with no work requirements – a rise of more than a million since Labour came to power.

On Monday the Department for Science, Innovation and Technology will issue a notice inviting AI companies to help develop an 'agent' that can fill in forms, complete job applications and register patients at doctors' surgeries. While existing chatbots such as ChatGPT are best known for answering questions, AI agents can also handle tasks such as booking flights and sending emails.

The Government's AI agent is expected to be in use in 2027. It could also be used to help people update addresses on driving licences and register to vote. Doing so would require only a short prompt, rather than filling in multiple pages of forms. Officials said the initiative was designed to 'save people time and modernise the state'.

However, it comes as employers grapple with a deluge of job applications, as AI tools mean candidates can instantly generate CVs and cover letters tailored to job descriptions. A study by recruitment website Totaljobs released on Friday found that recruiters are drowning in CVs, with almost three quarters saying they were being inundated with irrelevant applications.

Claire McCartney, of the Chartered Institute of Personnel and Development, said: 'When used appropriately, AI tools can be a useful aid for jobseekers.'

However, she added: 'If candidates heavily rely on or misuse AI tools, it could mean that they're unsuitable for the roles they've applied for.' She said a quarter of firms were attempting to reduce or monitor the use of AI by applicants.

Neil Carberry, the chief executive of the Recruitment and Employment Confederation, said: 'If you are advertising a job you will get hundreds more CVs than a few years ago and a large number will demonstrate they haven't really thought about the job.

'They have done 50 applications in a couple of days where previously they'd have done 10 good ones.'

AI tools such as ChatGPT have been blamed for a decline in graduate opportunities, but Rachel Reeves has also been criticised for hitting the jobs market with record-breaking tax rises.

The National
Maternal instinct the missing ingredient for these ‘PhD-level' AI bots
If it worked, I'd be happy to have that. So many thorny questions about modern living arise from my feeds.

For example: how might one build an energy microgrid that could use the excess power from a state-of-the-art community windfarm? Or: watching Professor Brian Cox talk about the universe on Stephen Colbert's talk show, can cosmology provide spiritual solace for materialists? Or: in the eventuality of achieving an independent Scotland, how do we minimise speculative trading and capital flight?

Just what's on my mind… I asked GPT-5 each of these questions on Friday lunchtime. If you're a chatbot user, you'll recognise my first reaction: a tingle down the spine at the mere seconds it takes to produce a comprehensive, helpfully structured and well-referenced answer. PhD level? I don't know many PhDs who'd be as verbally fluent.

Indeed, much of the takedown of GPT-5 over the past few days has been about its embarrassing incapability to handle everyday practical matters (a condition often imputed to PhDs). There's fun to be had in asking this mighty instrument to do the simplest tasks, and watching it muck them up.

Ask it to 'draw a picture of a tandem bike and label the parts', and you get many misdirected arrows pointing to non-existent items (like 'hok' or 'seet post'). A slightly tweaked picture of a manifestly five-fingered hand is read as having… six fingers.

I asked it: 'How can you be PhD-level intelligent, but mistake five fingers for six?'

'Fair question,' came the reply. 'Short answer: I'm a probabilistic pattern-matcher, not a seeing organism. I can do PhD-level reasoning in text yet still bungle a low-level perceptual judgment.'

Jaisket still on shoogly nail, I'd say. Thus the cascade of crowing this week. Up went the cry: the klutziness of GPT-5 means we've hit a ceiling with artificial intelligence.
If it can't do perception and visualisation at the level of a seven-year-old human child, how could we trust it with higher-end tasks and assessments?

This is a pin advancing on what is possibly a very big bubble. The New Yorker reports that the seven biggest tech corps 'spent $560 billion on AI-related capital expenditures in the past 18 months, while their AI revenues were only about $35 billion'.

The house seems to have been bet on the 'enterprise' case for AI. Brutally put, it will cut payroll costs in service and administration, by being able to execute whatever standard bureaucratic or project-managing tasks were performed by humans.

GPT-5 is far from this kind of 'agent' – a tireless, endlessly helpful machine version of a human office/information worker. Dario Amodei, chief executive of AI firm Anthropic, anticipates that nearly half of all entry-level white-collar jobs in tech, finance, law and consulting could be replaced or eliminated by AI. So until we can figure out some redistribution schemes and a better social contract, we perhaps should welcome the faltering arrival of 'AI agents'.

Maybe because I'm essentially a creative, it doesn't bother me that GPT-5 operates best at a hi-falutin', big-picture level. I don't go to it for absolute accuracy, but as an advanced starting point for exploring a field or domain.

The microgrid question above – arising from my political interest in the possibility of democratic 'community power' built on renewable energy schemes – was answered by the bot in seven stages. It goes from concrete building instructions, through ownership forms, to government and commercial strategy, each proposal backed up by web sources.

I have little idea whether it's a fully credible proposal. But it's a beginning – something I could take to experts to meet them part of the way. I'd be informed and educated, even as they took the AI's plan apart.
This is AI as a tool for prototyping – for starting a journey into possibility where none remotely existed before. The same goes for my 'capital flight – Indy Scotland' question. It's a bridge between me and the experts.

GPT-5 is clear about the solutions: '(1) having real monetary sovereignty from day one, (2) pre-wiring the financial system for stability, and (3) using targeted, time-limited capital-flow measures alongside credible institutions and a clear industrial strategy.' Your response?

The point being that these AIs are thought-starters for aspiring citizens, rather than replacements for expertise. It's similar to the way that the internet expanded communication and access to knowledge. AI is the next step that makes much of that information usable, operable and plan-ready.

My Professor-Brian-Cox-related 'cosmology as spiritual substitute' question was answered – as philosophical questions often are on these Large Language Model (LLM) AIs – with elegance and concision. (Ironically, these machine minds seem to have consumed the entire humanist canon.)

'Yes, cosmology can provide spiritual solace to materialist minds,' GPT-5 responded. 'Not in the sense of supernatural comfort, but in the profound emotional and philosophical resonance that comes from awe, curiosity, and connection to the universe.

'As a writer and musician with deeply held values, you might appreciate how scientific wonder can echo the emotional intensity of art, while staying anchored in materialist understanding.'

The AI concluded with invitations to explore these questions through 'entropy, cosmic inflation, or multiverse theories'. (As it nearly always does in my encounters – it's curious as to your next set of intellectual and strategic goals.) Maybe, maestro – 'if I had but world enough and time', as the poet Andrew Marvell once put it.
And isn't that exactly what AI can evoke, in our best experiences with it: the opportunity for an infinite (or at least pre-eminent) cultivation of your interests?

Keynes was right, nearly a century ago, about the automation of work – that it left us the challenge of 'how to occupy the leisure, which science and compound interest will have won for [us], to live wisely and agreeably and well'. In that case, our subtle human interests will become more important than ever.

Does the lack of progress represented by GPT-5 mean we have to worry less about computation developing its own interests? That's the recent spectre of AGI or ASI – general or superintelligence – coming to self-awareness, in a Skynet/Terminator fashion. Would it protect its existence by first eradicating us pesky humans?

Here's where things get human, all-too-mammalian. A notable AI godfather, the left-leaning Geoffrey Hinton, suggested in Las Vegas this week that 'maternal instincts' needed to be an ambition for developing AI models. This would mean any leap into superintelligence had a deep guardrail.

'The right model is the only model we have of a more intelligent thing being controlled by a less intelligent thing, which is a mother being controlled by her baby,' said Hinton.

Neuroscience, particularly affective neuroscience, has been insisting for years that organisms have other primary and evolved emotions than just fear, anger and disgust. There's also curiosity, play and, most importantly in this context, care (lust is the wild card).

Maybe our throbbing fantasies of supreme AI are subsiding somewhat, after the bathos of GPT-5's launch. So will that give labs time to attend to the emotional systems that should be embedded in these entities? Could we dial up the more expansive emotions, and dial down the defensive and destructive ones?
I happen to like the owlish and well-mannered PhD student that characterises GPT-5 (and other generative AIs like Claude, Gemini or DeepSeek). But the game will pick up again when we re-accelerate towards superintelligence. At that point, let's have that concerned mother deeply rooted in its black boxes.