
Latest news with #SoniaLivingstone

'Dark Peppa Pig' horror as fake YouTube videos target terrified kids

Daily Mirror

6 days ago

  • Entertainment


Twisted creators on YouTube are taking advantage of Peppa Pig's popularity, leaving children at risk of viewing inappropriate content. The platform removed two worrying examples after being contacted by The Mirror.

With its wholesome storylines documenting everyday family life, Peppa Pig is a children's TV staple. There was much excitement recently when fans were treated to the arrival of a new member of the family: a baby girl piglet named Evie. But while the show itself is widely trusted by parents, watching it on YouTube can be a different matter.

As a hugely popular cartoon for kids, Peppa Pig has been a target for twisted YouTube creators over the years. This week, a search by The Mirror found a clip called 'MLG Peppa Pig (PARODY)' within seconds, which showed the character holding a machine gun. Made eight years ago, it has been watched 18 million times. A second disturbing creation, called 'Peppa does Drugs', showed the pig snorting cocaine.

In response to our investigation, a YouTube spokesperson told us: "We've removed both videos from YouTube and terminated a channel for violating our child safety policies, which we rigorously enforce. Neither of the videos shared by The Mirror has ever appeared in the YouTube Kids app, our recommended experience for younger viewers.

"Our teams remain vigilant, and will continue to take further actions as needed."

While YouTube maintains its main platform is not for children, research suggests 80 per cent of 3- to 17-year-olds in the UK regularly watch it nonetheless, mainly on their phones and other devices. The tech giant says it prohibits content that targets young minors and families but contains inappropriate themes, with videos flagged and reviewed using a combination of human reviewers and AI. YouTube places age restrictions and warnings on graphic content that does not violate its guidelines but is inappropriate for users under 18.
Professor Sonia Livingstone, a social psychologist at the London School of Economics and an expert on child online safety, told the BBC back in 2017: "It's perfectly legitimate for a parent to believe that something called Peppa Pig is going to be Peppa Pig.

"And I think many of them have come to trust YouTube... as a way of entertaining your child for ten minutes while the parent makes a phone call. I think if it wants to be a trusted brand then parents should know that protection is in place."

The so-called 'Dark Peppa' videos first surfaced in 2017, when an investigation by BBC Trending unearthed hundreds of YouTube videos that appeared to be episodes of Peppa Pig and Thomas the Tank Engine but were actually parodies with inappropriate themes. One video appeared to be an episode of Peppa Pig featuring a dentist with a huge syringe; Peppa's teeth were pulled out, and distressed crying could be heard on the soundtrack of the fake clip.

Parent and journalist Laura June stumbled across the episode when she was looking for something for her three-year-old daughter to watch on YouTube. "This is not like a video of an animated Peppa Pig getting high with Snoop Dogg (that is also available) made for adults to laugh at," she said. "These videos are for kids, intentionally injected into the stream via confusing tags, for them to watch instead of legit episodes of beloved shows."

While some of the videos use the characters in more innocent ways, others appear to be deliberately designed to trick children into watching disturbing content. One channel called "Toys and Funny Kids Surprise Eggs" had a landing page with a picture of a toddler alongside official-looking pictures of Peppa Pig, Thomas the Tank Engine, the Cookie Monster, Mickey and Minnie Mouse and Elsa from Frozen. However, many of the videos on the channel at the time had titles like "BABY HULK BITES BABY ELSA", "NAKED HULK LOSES HIS PANTS" and "SPIDERBABY CUTS ELSA'S DRESS".
Some of the darker videos also depict violence and frightening situations. YouTube said that users can flag any problematic content by clicking the "... More" button underneath a video and selecting "Report". The BBC report led to the removal of the channels highlighted in the investigation, including the one containing the video of fake Peppa visiting the dentist.

The company also suggested that parents use the YouTube Kids app, which has a much higher bar for what content is allowed. Parents can block specific content, set the age level of videos and report them. YouTube also blocks search queries that are prone to returning mature results.

Social media bans for teens: Australia has passed one, should other countries follow suit?

The Guardian

22-02-2025

  • Politics


Social media has transformed our relationships with friends and family, brought unfiltered news from around the world to our handsets and introduced us to an unending supply of cat memes. Some of this has been positive, some negative and, for much of it, the jury is still out. But as the first generation of social media natives start to have children of their own, there is increasing unease about tech's impact on children. These concerns prompted Australia to pass legislation last November banning access to social media for under-16s.

'So many things are happening at once,' says Sonia Livingstone, professor of social psychology at the London School of Economics and a specialist in children and social media. 'We clearly have a silent problem of parents at home struggling with social media and feeling unsupported. We have a small number of parents whose children have come to serious harm, or died, who have become mobilised. We have politicians worried about complaints in their constituencies and also looking for a good news story in gloomy times. And we have big tech outrunning regulation in all directions.'

It is a perfect storm, she says, into which discussion of an outright ban on social media for under-16s has come as a supposed saviour. The UK government has twisted itself into a tortuous position: Peter Kyle, the technology secretary, said last November that a ban was 'on the table', before then telling the Guardian it was 'not on the cards' for now. In January, he said: 'I don't have any plans to ban social media for under-16s.'

While the UK government seems to have decided that a ban is not for it, some big names have signalled their support. Microsoft co-founder Bill Gates recently said of Australia's ban: 'There's a good chance that that's a smart thing.' The UK's head of counter-terrorism policing said a ban 'warrants serious attention'.
Chris Philp, the shadow home secretary, has said he is 'broadly in favour' of a ban, though the age limit could be lower than 16. 'There's a huge amount of conflict and uncertainty in the world,' says Livingstone. 'And social media seems the fixable problem.' But is banning access the answer?

How might social media bans work?

The new Australian law says that social media networks have to take 'reasonable steps' to prevent those under 16 from having an account when the law comes into force in December this year. What this means in practice is not fully fleshed out, but an explanatory memorandum suggested that, at a minimum, platforms should put in place 'age assurance' technology, which might include facial recognition and age estimation. Such technology is often offered as the solution to identifying someone's age, but it remains an estimate and can be wrong. The average gap between the age one of these systems believes someone to be and their actual age can vary between one and three years. That may be a small margin of error for a 45-year-old, but if you are an 18-year-old student and the computer says you are 15, so you can't join social media with your university friends, that is frustrating.

Would a ban actually work?

A recent More In Common poll found that three-quarters of the public would support a ban on social media for under-16s, up from the current minimum age of 13 at which children can legally access the platforms. Many will be parents at their wit's end as they struggle to keep their children safe online. 'Social media has no place for children under 16,' says Vicky Borman, a mother of three children, one of whom is under the age of 16. 'It exposes them to a myriad of unacceptable content, including pornography, nudity, bullying and harassment, that they simply aren't equipped to handle.' Typical of many parents, Borman is in favour of a ban.
'It's time for us to reclaim childhood for our kids, ensuring they have the opportunity to create lasting memories away from screens,' she says.

Yet even those pushing most publicly for something to be done do not believe an outright ban on children accessing social media is the answer. Andy Burrows is the CEO of the Molly Rose Foundation, set up by the family of Molly Russell, the 14-year-old who took her own life after being bombarded with negativity on social media. 'The reality is that if we pull up the drawbridge on social media platforms, those bad actors won't disappear,' he says. 'They will simply migrate to gaming and messaging services, and the risk would be that the volume of harm on those platforms then becomes unmanageable.'

Sonia Livingstone also has doubts. 'A ban makes a great headline and seems straightforward, but it isn't,' she says. 'A ban is meant to be a ban on technology companies making problematic products available to children, and it very quickly becomes a ban on children accessing technology.'

What protections exist at the moment, and how effective are they judged to be?

There are currently protections for child users of social media – many put in place and managed by the platforms themselves – for example, that users should be over 13. 'But they're not very transparent or stable,' says Livingstone. Most companies tag accounts they suspect are run by children younger than 13 and apply child-safety features to them, such as limits on who can message them or the type of content they can encounter. But it is not clear these work, says Livingstone, who regularly talks to children as part of her research; they say they still receive message requests from adult users. 'There are some protections, but absolutely not enough,' says Livingstone. 'And until the [UK's] Online Safety Act and the [EU's] Digital Services Act kick in, we're a long way from getting those algorithmic protections people really want.'
(While the laws have been passed, enforcement by regulators, such as Ofcom in the UK, is still months away.) Burrows agrees on the UK front. 'The prime minister should be urgently prioritising, strengthening and fixing the Online Safety Act, so it works much more effectively for children,' he says.

What is the evidence that under-16 social media use is harmful?

If you read US social scientist Jonathan Haidt's book The Anxious Generation – which has been on the New York Times bestseller list for 46 weeks – there is a lot of evidence that it is harmful. The book is a compelling manifesto warning about the polluting impact of social media and tech on teenagers' minds. Yet one statistician argues that a good number of the studies Haidt relies on are misrepresented, and some even contradict his reasoning; the author admits two minor errors on his website. A psychology professor has accused Haidt of 'making up stories by simply looking at trend lines', adding that his conclusions were 'not supported by science'. Haidt says his critics have misinterpreted his claims, including by using the wrong standard of proof. Among the criticisms of the book is that Haidt confuses correlation with causation.

But his central argument seems to fit with the concerns and experiences of many parents. Few people doubt there is a teenage mental health crisis, and adults can feel the addictive nature of their own smartphones. Debates about causation and correlation can feel abstract when parents face daily dilemmas about how to manage their children's access to smartphones and social media.

What constitutes social media?

This is the big question that vexes those studying the issue. 'We don't really have any clear definitions of what legislators mean by social media at the moment,' says Pete Etchells, professor of psychology at Bath Spa University and the author of Unlocked. Do two friends chatting to one another on WhatsApp count as social media?
What happens when you add a third? And does using the status update function on WhatsApp make it social? A definition has not been settled on, even by Australia. When it passed its legislation in November, it failed to detail which companies would be affected, although the country's communications minister, Michelle Rowland, said Snapchat, TikTok, X, Instagram, Reddit and Facebook would probably come under the rules.

What is the evidence so far from Australia and other places that have passed bans?

Australia is the highest-profile country to take action, but its ban has not yet come into force. In the absence of evidence from a total ban, we have to rely on data from partial or scenario-specific bans, such as limiting access to tech or phones in schools or at certain hours of the day. A recent study published in the Lancet of more than 1,200 secondary school pupils found little difference in the mental wellbeing of those attending schools with restrictive bans and those without. The authors' explanation was that school bans did not affect total phone use. However, according to the study: 'We observed that increased time spent on phones/social media is significantly associated with worsened outcomes for mental health and wellbeing, physical activity and sleep, and attainment and disruptive behaviour.'

'Anecdotally, we know that overly restrictive, blanket bans tend not to work, tend to be circumvented by teens, but feel like they're the right thing to do,' says Etchells. 'The South Korea shutdown law is a good example of this.' In 2011, the country banned children under 16 from playing video games between midnight and 6am, to try to head off concerns about video game addiction. The law was repealed a decade later after the country realised it did not have the intended effect, with identity theft rising as kids found ways to circumvent it.
Are some of the manifestations of big tech cosying up to Donald Trump in the US – from downsizing moderation teams to cancelling factchecking initiatives – intensifying the calls for bans?

During Joe Biden's presidency, says Livingstone, 'there was a sense that trust and safety teams were building up. The regulation was coming, being consulted on and under way.' But recent attacks by the Trump campaign against NCMEC, the National Center for Missing & Exploited Children, a government-funded US nonprofit, worry experts. NCMEC works to stop the spread of images of child abuse, and has had its funding threatened over accusations relating to gender ideology. Overall, some fear this adds up to a bleak picture that might trigger more calls for blunt tools such as bans, rather than for more nuanced measures that could make a real difference. 'The child online safety experts are really worried about whether regulators are positioned to stand up to big tech,' says Livingstone. 'Right now, it's hard to reassure children, parents and the public that social media will get safer in the coming year.'
