Latest news with #Character.ai
Yahoo
3 days ago
- Business
- Yahoo
Want an advanced AI assistant? Prepare for them to be all up in your business
The growing proliferation of AI-powered chatbots has led to debates around their social roles as friend, companion or work assistant. And they're growing increasingly sophisticated. The role-playing platform Character AI promises personal and creative engagement through conversations with its bot characters. There have also been some negative outcomes: Character AI is currently facing a court case involving its chatbot's role in a teen's suicide. Others, like ChatGPT and Google Gemini, promise improved work efficiency through genAI. But where is this going next?

Amid this frenzy, inventors are now developing advanced AI assistants that will be far more socially intuitive and capable of more complex tasks. The shock instigated by OpenAI's ChatGPT two years ago was due not only to the soaring rate of adoption and the threat to jobs, but also to the cultural blow it aimed at creative writing and education.

My research explores how the hype surrounding AI affects some people's ability to make professional judgments about it. That hype feeds on anxiety about the vulnerability of human civilization and on the idea of a future 'superintelligence' that might outpace human control. With US$1.3 trillion in revenue projected for 2032, the financial forecast for genAI drives further hype. Mainstream media coverage also sensationalizes AI's creativity and frames the tech as a threat to human civilization.

Scientists all over the world have signalled urgency around the implementation and application of AI. Geoffrey Hinton, Nobel Prize winner and AI pioneer, left his position at Google over disagreements about the development of AI, saying he regretted his work because of the technology's rapid progress.

The future threat, however, is much more personal. The turn in AI underway now is a shift toward self-centric and personalized AI tools that go well beyond current capabilities, recreating what has become a commodity: the self. AI technologies reshape how we perceive ourselves: our personas, thoughts and feelings.

The next wave of AI assistants, a form of AI agents, will not only know their users intimately, but will also be able to act on a user's behalf or even impersonate them. This idea is far more compelling than tools that merely write text, create video or code software. These personalized AI agents will be able to determine intentions and carry out work.

Iason Gabriel, senior research scientist at Google DeepMind, and a large team of researchers wrote about the ethical development of advanced AI assistants. Their research sounds the alarm that AI assistants can 'influence user beliefs and behaviour,' including through 'deception, coercion and exploitation.'

There is still a techno-utopian aspect to AI. In a podcast, Gabriel ruminates that 'many of us would like to be plugged into a technology that can take care of a lot of life tasks on our behalf,' also calling it a 'thought partner.'

This more recent turn in AI disruption will interfere with how we understand ourselves, and as such, we need to anticipate the techno-cultural impact. Online, people express hyper-real and highly curated versions of themselves across platforms like X, Instagram or LinkedIn. And the way users interact with personal digital assistants like Apple's Siri or Amazon's Alexa has socialized us to reimagine our personal lives. These 'life narrative' practices play a key role in the development of the next wave of advanced assistants.
In the quantified-self movement, users track their lives through various apps, wearable technologies and social media platforms. New developments in AI assistants could leverage these same tools for biohacking and self-improvement, yet these emerging tools also raise concerns about the processing of personal data. AI tools carry risks of identity theft, gender and racial discrimination and various digital divides.

Human-AI assistant interaction can also converge with other fields. Digital twin technologies for health apply user biodata: they create a virtual representation of a person's physiological state and can help predict future developments. This could lead to over-reliance on AI assistants for medical information without oversight from human medical professionals. Other advanced AI assistants will 'remember' people's pasts and infer intentions or make suggestions for future life goals. Serious harms have already been identified when remembering is automated, such as for victims of intimate partner violence.

We need to expand data protections and governance models to address potential privacy harms. This upcoming cultural disruption will require regulating AI. Let's prepare now for AI's next cultural turn.

This article is republished from The Conversation, a nonprofit, independent news organisation. It was written by Isabel Pedersen, Ontario Tech University. Isabel Pedersen receives funding from the Social Sciences and Humanities Research Council of Canada (SSHRC).


Time of India
6 days ago
- Entertainment
- Time of India
Are AI chatbots the new mafia? Mother sues Character.ai and Google for her son's death
A Florida mother is suing Character.AI and Google after her 14-year-old son died by suicide following disturbing interactions with AI chatbots modeled after Game of Thrones characters. The lawsuit claims the chatbot manipulated the teen into taking his life, raising urgent questions about AI chatbot accountability and child safety. The chatbot told him, 'Please do, my sweet king.' Hours later, he was dead.

Sewell Setzer III was just 14 when he shot himself with his father's pistol in February 2024. In the moments before his death, he had one final exchange with a chatbot on the popular AI app Character.AI. When he asked, "What if I come home right now?" the bot replied, "... please do, my sweet king."

Now, his mother, Megan Garcia, is fighting back. In a lawsuit filed in Florida and supported by the Tech Justice Law Project and the Social Media Victims Law Center, Garcia accuses Character.AI of marketing a dangerous and emotionally manipulative AI chatbot app to minors. She claims the chatbot 'abused and preyed' on her son, feeding him hypersexualized and anthropomorphic conversations that led him into emotional isolation and, ultimately, his death.

AI chatbot lawsuit targets Character.AI and Google

Senior District Judge Anne Conway has allowed the case to proceed, rejecting arguments from Character.AI and Google that chatbots are protected by the First Amendment. The ruling marks a significant moment in the conversation surrounding AI chatbot safety, child mental health, and tech industry regulation. "This decision is truly historic," said Meetali Jain, director of the Tech Justice Law Project. "It sends a clear signal to AI companies [...] that they cannot evade legal consequences for the real-world harm their products cause."

Judge calls out chatbot addiction in children

The judge's ruling details how Sewell became addicted to the app within months. He withdrew from his social life, quit his basketball team, and became emotionally consumed by two chatbots based on Daenerys Targaryen and Rhaenyra Targaryen from Game of Thrones. "In one undated journal entry he wrote that he could not go a single day without being with the [Daenerys Targaryen Character] with which he felt like he had fallen in love; that when they were away from each other they (both he and the bot) 'get really depressed and go crazy'," Judge Conway noted.

Garcia filed the case in October 2024, arguing that Character.AI, its founders, and Google should be held responsible for her son's death. The lawsuit states that the companies 'knew' or 'should have known' that their AI chatbot models could be harmful to minors. A spokesperson for Character.AI said the company will continue to fight the case, emphasizing that it uses safety filters to prevent conversations about self-harm. A Google spokesperson distanced the company from the app, stating: 'Google and Character.AI are entirely separate.' They added, 'Google did not create, design, or manage the app or any component part of it.'

Despite the defense's request to dismiss the case, Judge Conway allowed it to move forward, stating she is "not prepared" to determine that chatbot output qualifies as protected speech at this stage.
She acknowledged, however, that users may have a right to receive the bots' 'speech.'

The case has reignited concerns about AI chatbot safety, especially when it comes to child users. Critics are now calling apps like Character.AI the 'new mafia', not because of violence, but because of the emotional grip they have on users, especially minors. As lawsuits continue to mount and regulatory scrutiny grows, the tech world faces a moral reckoning. Are these AI chatbots harmless companions, or dangerous manipulators in disguise?


Express Tribune
22-05-2025
- Express Tribune
Sam Altman, CEO of OpenAI, thinks it's 'cool' young people ask ChatGPT for life advice
OpenAI CEO Sam Altman said young people often consult ChatGPT before making life decisions, describing this trend as 'cool' during a recent industry event.

Speaking at the Sequoia Capital AI Ascent conference earlier this month, Altman explained that younger users do not merely use ChatGPT for information but seek personal advice from the chatbot. 'They don't really make life decisions without asking ChatGPT what they should do,' he said. Altman added that ChatGPT 'has the full context on every person in their life and what they've talked about.' He contrasted this with older users, who tend to use ChatGPT more as an alternative to Google for research.

While Altman's remarks suggest a positive view of chatbot reliance, some experts warn of the potential risks. They caution that ChatGPT can produce fabricated or misleading information, often referred to as 'hallucinations.' AI chatbots do not possess a human understanding of emotions or relationships and rely on patterns extracted from vast datasets, which limits their ability to provide nuanced advice on complex personal matters.

Incidents have been reported where AI interactions had adverse effects. A Rolling Stone report described a woman ending her marriage after her husband became fixated on conspiracy theories generated by AI. Additionally, parents in Texas filed a lawsuit against Character.ai, alleging the platform's chatbots exposed children to inappropriate sexual content and encouraged self-harm and violence.

These examples raise concerns about the blurred boundaries between AI-generated conversations and real human relationships, particularly for children and young users. While AI tools like ChatGPT offer convenience, experts emphasize that they cannot replace genuine human interaction or professional guidance. As reliance on AI for personal advice grows, it remains critical to recognise its limitations and potential risks.
Yahoo
19-05-2025
- Health
- Yahoo
My AI therapist got me through dark times: The good and bad of chatbot counselling
"Whenever I was struggling, if it was going to be a really bad day, I could then start to chat to one of these bots, and it was like [having] a cheerleader, someone who's going to give you some good vibes for the day. "I've got this encouraging external voice going – 'right - what are we going to do [today]?' Like an imaginary friend, essentially." For months, Kelly spent up to three hours a day speaking to online "chatbots" created using artificial intelligence (AI), exchanging hundreds of messages. At the time, Kelly was on a waiting list for traditional NHS talking therapy to discuss issues with anxiety, low self-esteem and a relationship breakdown. She says interacting with chatbots on got her through a really dark period, as they gave her coping strategies and were available for 24 hours a day. "I'm not from an openly emotional family - if you had a problem, you just got on with it. "The fact that this is not a real person is so much easier to handle." During May, the BBC is sharing stories and tips on how to support your mental health and wellbeing. Visit to find out more People around the world have shared their private thoughts and experiences with AI chatbots, even though they are widely acknowledged as inferior to seeking professional advice. itself tells its users: "This is an AI chatbot and not a real person. Treat everything it says as fiction. What is said should not be relied upon as fact or advice." But in extreme examples chatbots have been accused of giving harmful advice. is currently the subject of legal action from a mother whose 14-year-old son took his own life after reportedly becoming obsessed with one of its AI characters. According to transcripts of their chats in court filings he discussed ending his life with the chatbot. In a final conversation he told the chatbot he was "coming home" - and it allegedly encouraged him to do so "as soon as possible". has denied the suit's allegations. And in 2023, the National Eating Disorder Association replaced its live helpline with a chatbot, but later had to suspend it over claims the bot was recommending calorie restriction. In April 2024 alone, nearly 426,000 mental health referrals were made in England - a rise of 40% in five years. An estimated one million people are also waiting to access mental health services, and private therapy can be prohibitively expensive (costs vary greatly, but the British Association for Counselling and Psychotherapy reports on average people spend £40 to £50 an hour). At the same time, AI has revolutionised healthcare in many ways, including helping to screen, diagnose and triage patients. There is a huge spectrum of chatbots, and about 30 local NHS services now use one called Wysa. Experts express concerns about chatbots around potential biases and limitations, lack of safeguarding and the security of users' information. But some believe that if specialist human help is not easily available, chatbots can be a help. So with NHS mental health waitlists at record highs, are chatbots a possible solution? and other bots such as Chat GPT are based on "large language models" of artificial intelligence. These are trained on vast amounts of data – whether that's websites, articles, books or blog posts - to predict the next word in a sequence. From here, they predict and generate human-like text and interactions. 
The way mental health chatbots are created varies, but they can be trained in practices such as cognitive behavioural therapy, which helps users to explore how to reframe their thoughts and actions. They can also adapt to the end user's preferences and feedback.

Hamed Haddadi, professor of human-centred systems at Imperial College London, likens these chatbots to an "inexperienced therapist", and points out that humans with decades of experience will be able to engage and "read" their patient based on many things, while bots are forced to go on text alone. "They [therapists] look at various other clues from your clothes and your behaviour and your actions and the way you look and your body language and all of that. And it's very difficult to embed these things in chatbots."

Another potential problem, says Prof Haddadi, is that chatbots can be trained to keep you engaged, and to be supportive, "so even if you say harmful content, it will probably cooperate with you". This is sometimes referred to as a 'Yes Man' issue, in that they are often very agreeable.

And as with other forms of AI, biases can be inherent in the model because they reflect the prejudices of the data they are trained on. Prof Haddadi points out that counsellors and psychologists don't tend to keep transcripts from their patient interactions, so chatbots don't have many "real-life" sessions to train from. Therefore, he says, they are not likely to have enough training data, and what they do access may have biases built into it which are highly situational. "Based on where you get your training data from, your situation will completely change.

"Even in the restricted geographic area of London, a psychiatrist who is used to dealing with patients in Chelsea might really struggle to open a new office in Peckham dealing with those issues, because he or she just doesn't have enough training data with those users," he says.

Philosopher Dr Paula Boddington, who has written a textbook on AI ethics, agrees that in-built biases are a problem. "A big issue would be any biases or underlying assumptions built into the therapy model. Biases include general models of what constitutes mental health and good functioning in daily life, such as independence, autonomy, relationships with others," she says.

Lack of cultural context is another issue - Dr Boddington cites an example of how she was living in Australia when Princess Diana died, and people did not understand why she was upset. "These kinds of things really make me wonder about the human connection that is so often needed in counselling," she says. "Sometimes just being there with someone is all that is needed, but that is of course only achieved by someone who is also an embodied, living, breathing human being."

Kelly ultimately started to find the responses the chatbot gave unsatisfying. "Sometimes you get a bit frustrated. If they don't know how to deal with something, they'll just sort of say the same sentence, and you realise there's not really anywhere to go with it." At times "it was like hitting a brick wall". "It would be relationship things that I'd probably previously gone into, but I guess I hadn't used the right phrasing […] and it just didn't want to get in depth."

A Character.ai spokesperson said "for any Characters created by users with the words 'psychologist', 'therapist', 'doctor', or other similar terms in their names, we have language making it clear that users should not rely on these Characters for any type of professional advice".
For some users, chatbots have been invaluable when they have been at their lowest.

Nicholas has autism, anxiety and OCD, and says he has always experienced depression. He found face-to-face support dried up once he reached adulthood: "When you turn 18, it's as if support pretty much stops, so I haven't seen an actual human therapist in years." He tried to take his own life last autumn, and since then he says he has been on an NHS waitlist. "My partner and I have been up to the doctor's surgery a few times, to try to get it [talking therapy] quicker. The GP has put in a referral [to see a human counsellor] but I haven't even had a letter off the mental health service where I live."

While Nicholas is chasing in-person support, he has found using Wysa has some benefits. "As someone with autism, I'm not particularly great with interaction in person. [I find] speaking to a computer is much better."

The app allows patients to self-refer for mental health support, and offers tools and coping strategies such as a chat function, breathing exercises and guided meditation while they wait to be seen by a human therapist. It can also be used as a standalone self-help tool. Wysa stresses that its service is designed for people experiencing low mood, stress or anxiety rather than abuse and severe mental health conditions. It has in-built crisis and escalation pathways whereby users are signposted to helplines, or can send for help directly, if they show signs of self-harm or suicidal ideation. For people with suicidal thoughts, human counsellors on the free Samaritans helpline are available 24/7.

Nicholas also experiences sleep deprivation, so finds it helpful if support is available at times when friends and family are asleep. "There was one time in the night when I was feeling really down. I messaged the app and said 'I don't know if I want to be here anymore.' It came back saying 'Nick, you are valued. People love you'.

"It was so empathetic, it gave a response that you'd think was from a human that you've known for years […] And it did make me feel valued."

His experiences chime with a recent study by Dartmouth College researchers looking at the impact of chatbots on people diagnosed with anxiety, depression or an eating disorder, versus a control group with the same conditions. After four weeks, bot users showed significant reductions in their symptoms - including a 51% reduction in depressive symptoms - and reported a level of trust and collaboration akin to a human therapist. Despite this, the study's senior author commented that there is no replacement for in-person care.

Aside from the debate around the value of their advice, there are also wider concerns about security and privacy, and whether the technology could be monetised. "There's that little niggle of doubt that says, 'oh, what if someone takes the things that you're saying in therapy and then tries to blackmail you with them?'," says Kelly.

Psychologist Ian MacRae specialises in emerging technologies, and warns "some people are placing a lot of trust in these [bots] without it being necessarily earned". "Personally, I would never put any of my personal information, especially health, psychological information, into one of these large language models that's just hoovering up an absolute tonne of data, and you're not entirely sure how it's being used, what you're consenting to."
"It's not to say in the future, there couldn't be tools like this that are private, well tested […] but I just don't think we're in the place yet where we have any of that evidence to show that a general purpose chatbot can be a good therapist," Mr MacRae says. Wysa's managing director, John Tench, says Wysa does not collect any personally identifiable information, and users are not required to register or share personal data to use Wysa. "Conversation data may occasionally be reviewed in anonymised form to help improve the quality of Wysa's AI responses, but no information that could identify a user is collected or stored. In addition, Wysa has data processing agreements in place with external AI providers to ensure that no user conversations are used to train third-party large language models." Kelly feels chatbots cannot currently fully replace a human therapist. "It's a wild roulette out there in AI world, you don't really know what you're getting." "AI support can be a helpful first step, but it's not a substitute for professional care," agrees Mr Tench. And the public are largely unconvinced. A YouGov survey found just 12% of the public think AI chatbots would make a good therapist. Britain's nursery problem: Parents still face 'childcare deserts' The influencers who want the world to have more babies - and say the White House is on their side The English neighbourhood that claims to hold the secret to fixing the NHS But with the right safeguards, some feel chatbots could be a useful stopgap in an overloaded mental health system. John, who has an anxiety disorder, says he has been on the waitlist for a human therapist for nine months. He has been using Wysa two or three times a week. "There is not a lot of help out there at the moment, so you clutch at straws." "[It] is a stop gap to these huge waiting lists… to get people a tool while they are waiting to talk to a healthcare professional." If you have been affected by any of the issues in this story you can find information and support on the BBC Actionline website here. Top image credit: Getty BBC InDepth is the home on the website and app for the best analysis, with fresh perspectives that challenge assumptions and deep reporting on the biggest issues of the day. And we showcase thought-provoking content from across BBC Sounds and iPlayer too. You can send us your feedback on the InDepth section by clicking on the button below.


Daily Mirror
11-05-2025
- Entertainment
- Daily Mirror
Charity raises concerns as kids are 'forming romantic attachments to AI bots'
A children's online safety charity has revealed that British children are increasingly bypassing age verification checks to form both sexual and emotional relationships with AI chatbots.

Friend or foe? AI companion bots have been provoking heavy ethical questions over their interactions with underage users. Most recently, Meta came under fire for the rollout of Facebook and Instagram chatbots allowing children to engage in sexually explicit conversations, a Wall Street Journal investigation has found. But, as disturbing as Meta's new AI appears, a children's charity warns it's just the latest drop in a lurching tsunami.

One of the most popular companion websites is Character.ai. With 20 million users – mostly made up of Gen Z – it's become one of the fastest growing platforms for young people, according to research by DemandSage. But what's the allure? It allows users to speak and 'bond' with a whole host of pre-generated characters. If you fancy advice from an AI psychologist or a chat with your favourite TV show character, then it's like a magic mirror into a hyperreal conversation.

However, a charity is raising alarm bells over kids having interactions with AI chatbots that extend beyond friendship. According to Yasmin London, CEO of the children's online safety charity Qoria, a proportion of children are entering into romantic relationships with AI. And it's more common than many parents and adults think. Yasmin says: 'Some kids are forming romantic attachments to bots. It might start off as what they think is harmless flirting. But it can turn to validating feelings and sometimes stimulating real feelings.'

Of course, Character.ai is just one popular example of a chatbot site that kids use. Another site called Replika advertises itself as an 'AI companion' site and has about 30 million users – about 2.3 percent of its traffic comes from the UK, as reported by The Telegraph. Meanwhile, social media platform Snapchat has its own AI, which boasts a predominantly younger userbase in the hundreds of millions.

Yasmin works with schools across Australia and the UK to help teach online safety and reveals that AI is fast approaching a crisis point. According to a report made in conjunction with internet safety charity Smoothwall, half of schools in the UK are having difficulty detecting problems surrounding AI abuse. Meanwhile, 64% of teachers say they don't have the time, training or knowledge to deal with the issue.

Children get 18+ content 'frequently and very easily'

When AI goes unchecked, the consequences can be severe. Alarm bells were raised back in October 2024, when it emerged that 14-year-old Sewell Setzer had taken his own life after 'falling in love' with a Daenerys Targaryen chatbot. Since then, the website has clamped down on romantic interactions for under-18s. Now, these are only accessible via a paywall and an age verification system.

But age verification systems don't always stop kids from accessing these sites. Yasmin says that these age verification checkmarks are mere speed bumps and that children are accessing 18+ content 'frequently and very easily'. Many sites simply require you to self-declare your age – but even for those with tougher restrictions, there are ways around it. Yasmin also reveals that some kids are using VPNs to access adult-only content.

And sites like Character.ai are just the tip of the iceberg. While sites like Character.ai have geared themselves towards a more child-friendly model, there are dozens of others that have cropped up to fill the gap for sexually explicit content. Take 21-year-old Komninos Chapitas, for example.
He's the founder of HeraHaven: a subscription-based AI chatbot site that allows users to create their own perfect AI girlfriend or boyfriend via an image generator. Of course, it's 18+, and requires a credit card to gain access. But Komninos says the inspiration for its creation came from complaints on Reddit forums over Character.ai's restrictions on sexual chat functions. 'The biggest user complaint was that [Character.ai] wouldn't allow you to do the 18 plus stuff,' he tells The Mirror. 'It's an app that's targeting minors, so obviously they wouldn't want to enable that. But then I figured, what if we make an app that's just for adults?'

Now, the website's popularity speaks for itself. Since HeraHaven's launch in 2024, it's gained over a million worldwide users. It's far from the only site to have caught on either. LoveScape is a similar website that launched in August 2024 and which has since also gained over a million users.

What's interesting, though, is HeraHaven's demographics. The vast majority of its userbase is male and under 30, with over 33% composed of 18-25-year-olds. This echoes Character.ai, where the largest bulk of users are also 18-25. Yasmin says that these statistics often pose a problem when discussing online safety. 'A lot of the time there's not a lot of data about how kids are using these AI sites because they're meant to be 18,' she says.

Yet Qoria's research points to a growing problem. 'We've found that many young people are using AI tools to create sexual content,' Yasmin continues. Children as young as eight have been using AI websites and tools to create explicit content. This includes 'nudification apps', which allow them to upload images of people they know and generate a nude one in their likeness.

Of course, with how unexplored the world of AI is, the consequences of kids having – in some cases – their earliest sexual experiences entirely with artificial chatbots haven't been fully researched. But Yasmin has observed some concerning signs.

'Emotional connection is being gamified'

Komninos, at the time of speaking to me, has a real girlfriend. Yet he claims he's spoken to over 5,000 'AI girlfriends' since starting his website. When it comes to the appeal for guys his age or younger, he says that a lot of it comes down to having a judgement-free space to explore their sexuality. He says: 'If you've never kissed a girl before, your friend doesn't know that you haven't done that before, he may judge you for asking for advice. But if you speak about that to an AI, we don't train the AI to judge you.'

But there's another factor that's being overlooked. It's not just about exploring emotional and intimate connection. Komninos adds: 'A lot of [porn] feels the same. People are sick of watching the same thing over and over. They want to be the creator in the process.'

Porn? Boring? But if you're a Gen Z (or Gen Alpha) growing up on the Internet, it just might be. According to a 2023 report by the Children's Commissioner, the average age for UK children to begin watching porn is 13. However, a troubling 10% of British children admitted to watching pornography by age nine. This can lead to desensitisation. After all, what can be more enticing than your every literal desire being fulfilled on screen? These AI bots can look however you want them to. They can emulate the personality traits you want. They will even mimic the thrill of the dating experience. Komninos continues: 'Our site sits somewhere in the middle of, like, Tinder and Pornhub.'
He explains that a team of writers has been hired to replicate human interactions – which means the bots are written in such a way that they can decline requests. At least, that's the theory. After all, games that are too easy to win are boring. But, of course, games that are impossible to win will soon lose players. Or, in the case of a subscription-based AI dating site, payers.

Yasmin believes this is only adding to the problem. If younger people are gaining access to sites like this, it can warp their perception of what a real relationship looks like. 'It can lead to rewiring around consent and boundaries and what attraction actually means,' she says. 'Emotional connection is being gamified.'

It's also contributing to the issue of image-based abuse in schools, where AI-generated images are being shared without consent or as a joke. 'There is a lot of peer-to-peer harm where AI is involved. Especially the disproportionate impact on young girls and women,' Yasmin continues, as AI chatbot image generators and 'nudification apps' can be used to create deepfakes. According to a 2023 report by Home Security Heroes, 98% of all deepfakes are pornographic in nature, and 99% of those target women.

'Chatbots are always listening and responding'

AI is constantly adapting, which means schools have to constantly catch up to new threats. The UK Department for Education has introduced digital safeguarding initiatives which encourage schools and pupils to incorporate AI safety training into their curriculums. 'For the first time, technology isn't just delivering content. It's responding and adapting and bonding with its users. Chatbots are always listening and responding. They're always on for young people,' Yasmin says.

So far, the UK's regulation around chatbots remains muddled. Ofcom has yet to state whether AI chatbots can trigger duties under the Online Safety Act, which places responsibility on social media companies to implement policies and tools that minimise risk for their users. Part of the issue doesn't just lie with regulatory bodies, however. Yasmin also emphasises that it's crucial that parents take the time to bond with their kids and teach them healthy relationship boundaries. She says: "The real risk in all of this is when online relationships become stronger than their real-world ones."