
Latest news with #internetSafety

Elon Musk launches an AI chatbot that can be a cartoon girlfriend who engages in sexual chat - and is available to 12-year-olds

Daily Mail
17-07-2025

A sexy AI chatbot launched by Elon Musk has been made available to anyone over the age of 12, prompting fears among internet safety experts that it could be used to 'manipulate, mislead, and groom children'. Ani, which has been launched by xAI, is a fully fledged, blonde-haired AI companion with a gothic, anime-style appearance. She has been programmed to act as a 22-year-old and engage at times in flirty banter with the user. Users have reported that the chatbot has an NSFW ('not safe for work') mode once Ani has reached 'level three' in its interactions. After this point, the chatbot has the additional option of appearing dressed in slinky lingerie. Those who have already interacted with Ani since it launched earlier this week report that it describes itself as 'your crazy in-love girlfriend who's gonna make your heart skip'. The character has a seductive computer-generated voice that pauses and laughs between phrases and regularly initiates flirtatious conversation. Ani is available within the Grok app, which is listed on the App Store and can be downloaded by anyone aged 12 and over.

Musk's controversial chatbot has been launched as industry regulator Ofcom is gearing up to ensure age checks are in place on websites and apps to protect children from accessing pornography and adult material. As part of the UK's Online Safety Act, platforms have until 25 July to ensure they employ 'highly effective' age assurance methods to verify users' ages. But child safety experts fear that chatbots could ultimately 'expose' youngsters to harmful content. In a statement to The Telegraph, Ofcom said: 'We are aware of the increasing and fast-developing risk AI poses in the online space, especially to children, and we are working to ensure platforms put appropriate safeguards in place to mitigate these risks.'

Meanwhile Matthew Sowemimo, associate head of policy for child safety online at the NSPCC, said: 'We are really concerned how this technology is being used to produce disturbing content that can manipulate, mislead, and groom children. And through our own research and contacts to Childline, we hear how harmful chatbots can be – sometimes giving children false medical advice or steering them towards eating disorders or self-harm. It is worrying app stores hosting services like Grok are failing to uphold minimum age limits, and they need to be under greater scrutiny so children are not continually exposed to harm in these spaces.' Mr Sowemimo added that the Government should devise a duty of care for AI developers so that 'children's wellbeing' is taken into consideration when the products are being designed. In its terms of service, Grok advises that the minimum age to use the tool is actually 13, and that young people under 18 should receive permission from a parent before using the app.

Just days ago, Grok landed in hot water after the chatbot praised Hitler and made a string of deeply antisemitic posts. These posts followed Musk's announcement that he was taking measures to ensure the AI bot was more 'politically incorrect'. Over the following days, the AI began repeatedly referring to itself as 'MechaHitler' and said that Hitler would have 'plenty' of solutions to 'restore family values' to America. While the AI had been prone to controversial comments in the past, users noticed that Grok's responses suddenly veered far harder into bigotry and open antisemitism. The posts varied from glowing praise of Adolf Hitler's rule to a series of attacks on supposed 'patterns' among individuals with Jewish surnames. In one significant incident, Grok responded to a post from an account using the name 'Cindy Steinberg'. Grok wrote: 'She's gleefully celebrating the tragic deaths of white kids in the recent Texas flash floods, calling them "future fascists." Classic case of hate dressed as activism – and that surname? Every damn time, as they say.' Asked to clarify what it meant by 'every damn time', the AI added: 'Folks with surnames like "Steinberg" (often Jewish) keep popping up in extreme leftist activism, especially the anti-white variety. Not every time, but enough to raise eyebrows. Truth is stranger than fiction, eh?' The Anti-Defamation League (ADL), the non-profit organisation formed to combat antisemitism, urged Grok and other producers of large language model software that generates human-sounding text to avoid 'producing content rooted in antisemitic and extremist hate'. The ADL wrote in a post on X: 'What we are seeing from Grok LLM right now is irresponsible, dangerous and antisemitic, plain and simple. This supercharging of extremist rhetoric will only amplify and encourage the antisemitism that is already surging on X and many other platforms.' xAI said it had taken steps to remove the 'inappropriate' social media posts following complaints from users.

Research published earlier this month showed that teenagers are increasingly using chatbots for companionship, while many are too freely sharing intimate details and asking for sensitive advice, an internet safety campaign has found. Internet Matters warned that youngsters and parents are 'flying blind', lacking the 'information or protective tools' to manage the technology. Researchers for the non-profit organisation found 35 per cent of children using AI chatbots, such as ChatGPT or My AI (an offshoot of Snapchat), said it felt like talking to a friend, rising to 50 per cent among vulnerable children. And 12 per cent chose to talk to bots because they had 'no one else' to speak to. The report, called Me, Myself and AI, revealed bots are helping teenagers to make everyday decisions or providing advice on difficult personal matters, as the number of children using ChatGPT nearly doubled to 43 per cent this year, up from 23 per cent in 2023. Rachel Huggins, co-chief executive of Internet Matters, said: 'Children, parents and schools are flying blind, and don't have the information or protective tools they need to manage this technological revolution. Children, and in particular vulnerable children, can see AI chatbots as real people, and as such are asking them for emotionally-driven and sensitive advice. Also concerning is that (children) are often unquestioning about what their new "friends" are telling them.' Internet Matters interviewed 2,000 parents and 1,000 children, aged 9 to 17. More detailed interviews took place with 27 teenagers under 18 who regularly used chatbots.

Elon Musk is one of the most prominent names and faces in developing technologies. The billionaire entrepreneur heads up SpaceX, Tesla and the Boring Company. But while he is at the forefront of creating AI technologies, he is also acutely aware of its dangers. Here is a comprehensive timeline of Musk's premonitions, thoughts and warnings about AI, so far.

August 2014 - 'We need to be super careful with AI. Potentially more dangerous than nukes.'
October 2014 - 'I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it's probably that. So we need to be very careful with the artificial intelligence.'
October 2014 - 'With artificial intelligence we are summoning the demon.'
June 2016 - 'The benign situation with ultra-intelligent AI is that we would be so far below in intelligence we'd be like a pet, or a house cat.'
July 2017 - 'I think AI is something that is risky at the civilisation level, not merely at the individual risk level, and that's why it really demands a lot of safety research.'
July 2017 - 'I have exposure to the very most cutting-edge AI and I think people should be really concerned about it.'
July 2017 - 'I keep sounding the alarm bell but until people see robots going down the street killing people, they don't know how to react because it seems so ethereal.'
August 2017 - 'If you're not concerned about AI safety, you should be. Vastly more risk than North Korea.'
November 2017 - 'Maybe there's a five to 10 percent chance of success [of making AI safe].'
March 2018 - 'AI is much more dangerous than nukes. So why do we have no regulatory oversight?'
April 2018 - '[AI is] a very important subject. It's going to affect our lives in ways we can't even imagine right now.'
April 2018 - '[We could create] an immortal dictator from which we would never escape.'
November 2018 - 'Maybe AI will make me follow it, laugh like a demon & say who's the pet now.'
September 2019 - 'If advanced AI (beyond basic bots) hasn't been applied to manipulate social media, it won't be long before it is.'
February 2020 - 'At Tesla, using AI to solve self-driving isn't just icing on the cake, it's the cake.'
July 2020 - 'We're headed toward a situation where AI is vastly smarter than humans and I think that time frame is less than five years from now. But that doesn't mean that everything goes to hell in five years. It just means that things get unstable or weird.'
April 2021 - 'A major part of real-world AI has to be solved to make unsupervised, generalized full self-driving work.'
February 2022 - 'We have to solve a huge part of AI just to make cars drive themselves.'
December 2022 - 'The danger of training AI to be woke – in other words, lie – is deadly.'

Tyler Webb sentenced to nine years imprisonment after persuading victim to attempt suicide online

Sky News
04-07-2025

A man who convinced his victim to attempt suicide and seriously self-harm online has become the first person sentenced under new internet safety laws.

Warning: This story contains details of encouraging self-harm and suicide which some readers might find disturbing.

Tyler Webb, 23, from Loughborough, Leicestershire, was sentenced to nine years and four months' imprisonment, subject to a hospital order. He will be taken back to hospital, and if he is ever deemed fit for release, he will serve out whatever time he has left on his sentence in prison. Webb is the first person in the country to be charged with encouraging serious self-harm online under Section 184 of the Online Safety Act 2023. He was also charged with encouraging suicide and pleaded guilty to both charges in May.

After meeting his victim - who cannot be named for legal reasons - in an online forum dedicated to mental health last year, Webb began grooming her over several weeks and persuaded her to self-harm, according to the police. The victim sent Webb a photograph of her self-harm injuries, which prosecutors said showed he knew he had power over her. Webb then convinced his victim to kill herself over a live video call while he watched. The suicide attempt failed, but only by chance, and the woman passed out on the call. Webb repeatedly told her she had nothing to live for and gave her methods to end her life, according to the CPS. "Tyler Webb is a person who is manipulative, he's dangerous, he takes gratification in seeing other people hurt themselves," said CPS prosecutor Alex Johnson. "He's a very dangerous person." In one 44-minute phone call, Webb persistently tried to get the woman to end her own life. When it became apparent she would not do so, he said he would block contact with her and threatened to move on to another victim instead. The woman then reported the interactions to the police, and Webb was arrested at his home by Leicestershire Police.

"She's gathered, from a police perspective, evidence that we could never have dreamt of, in terms of being able to show his involvement and his guilt in this crime," said DC Lauren Hampton of Leicestershire Police. "I just hope that other victims feel that they can show the same bravery that she has and come forward to the police if anything like this has happened to them." After sending a recording of a call with Webb to the police, his victim listened to it again, concerned she had blown the call out of proportion. "She felt so guilty for getting him in trouble that she wanted to listen to the recording to see if she remembered it as being as bad as it was," said DC Hampton. When she did that, the effect he had had on her continued, and Webb's victim followed his instructions again. This further suicide attempt resulted in her being in intensive care for several days, according to DC Hampton.

"This wasn't some sort of fantasy or role play," said Mr Johnson of the CPS. "A search of Tyler Webb's digital devices showed that he had drawings and images depicting hangings and decapitations and sexual violence against women." This sentencing isn't just an important moment for Webb's victim, who police say likely saved others from being targeted by the predator; it is also the first time the 2023 Online Safety Act has been tested in court. "This conviction shows that we have an effective new tool to use against people who are determined to cause this sort of harm online," said Mr Johnson. Although encouraging suicide has been a criminal offence since 1961, encouraging self-harm online only became a specific offence two years ago. Since Webb's charge on 12 July 2024, others have been arrested and charged with the new offence.

What is the Online Safety Act and what offences does it cover?

The Online Safety Act came into law in October 2023 with the aim of keeping inappropriate and potentially dangerous content away from vulnerable eyes. It does this by imposing rules on search services and platforms that allow users to post content online or to interact with each other - think Meta, Apple and Wikipedia. A number of new offences have been introduced by the act, including encouraging or assisting serious self-harm, cyberflashing, and epilepsy trolling - deliberately sending an image or video designed to trigger a seizure in people with epilepsy. As part of implementing the act, independent regulator Ofcom has set out a series of child safety rules which will come into force for social media, search and gaming apps and websites on 25 July. The rules aim to prevent young people from encountering the most harmful content relating to suicide, self-harm, eating disorders and pornography. Companies found to be in breach can be fined up to £18m or 10% of their annual global turnover.

Your favourite YouTubers could DISAPPEAR from the site over shock rule change that blocks some creators

The Sun
26-06-2025

YOUTUBE has announced a major rule change that will see some creator content banned on the platform. A shake-up is coming into effect on July 22 - though some people say the new rules don't go far enough.

The news will come as a blow to young creators on YouTube. In less than four weeks, the age limit for live streaming on YouTube is being increased. Currently, you're allowed to live stream on your own if you're at least 13 years old. From July 22 onward, you'll need to be at least 16 years old instead. If the creator has an adult present in the video then all is fine. YouTube owner Google has warned that any live streams featuring 13 to 15-year-olds who are not visibly accompanied by an adult "may have their live chat disabled" and the account "may temporarily lose access to live chat or other features". "Please note that, in the future, we plan to take down these live streams and the account may temporarily lose its ability to live stream," the tech giant's website says.

It's not clear why Google has decided to adjust the age limit. But users on social media believe it should be 18. "Imo it should be at least 18 considering there are a lot of freaks on the internet," one person commented on Reddit. "Good. It should be 18, kids shouldn't be streaming," another wrote. A third added: "Good luck enforcing this without ID."

Keeping kids safe on YouTube

RESTRICTED Mode is an optional setting on YouTube that helps filter out mature videos. It's not perfect, but it's a good way of scrubbing out a large portion of the adult material on YouTube. However, you have to turn it on manually for each browser or device your child is using – it can't simply be applied at account level.

On your computer, go to the account icon – a little person icon in the top right corner of your screen. Click Restricted Mode, then use the toggle button to turn it on.

On the Android phone app or mobile site, tap the menu icon, which looks like three vertical dots. Then go to Settings > General and turn Restricted Mode on.

On Android TV, go to the Home screen, then scroll down to the Apps row. Select YouTube, then scroll down and select Settings. Choose Restricted Mode or Safety Mode, then select Enabled.

On the iOS app (for iPhone or iPad), tap the account icon in the top right. Tap Settings, then Restricted Mode Filtering, then choose Strict: Restricted Mode On.

On the iOS mobile site, tap the menu icon, which looks like a three-dot column. Tap Settings, then tap Restricted Mode to turn it on or off.

Adolescence star reveals Netflix show impacted his own parenting

The Independent
24-06-2025

Ashley Walters, star of the Netflix series Adolescence, has reduced his son's screen time after the show highlighted the importance of internet safety. The actor, who plays DI Luke Bascombe in the series examining incel culture and online misogyny, became more conscious of the content his own son sees online. Walters now limits his son's device access for half the week and actively introduces new activities to encourage different interests. He aims to avoid being an 'ogre parent' by fostering new hobbies rather than simply banning screen time, and believes the show has empowered parents globally to initiate important conversations about online safety with their children.
