Latest news with #onlinesafety
Yahoo
20 minutes ago
- Yahoo
Cyberstalking has surged by 70% in the UK since 2012, study finds
On the receiving end of unsolicited text messages and emails, harassment on live streams, or personal photos posted online without your permission? If so, you could be a victim of cyberstalking, an online behaviour that is becoming more common in the UK.

A new study, published in the British Journal of Criminology, found that cyberstalking is growing at a faster pace than traditional types of stalking. It examined responses from nearly 147,800 people who answered crime surveys in England and Wales. The surveys asked about the prevalence and perception of cyberstalking, physical stalking, and cyber-enabled stalking from 2012 to 2020.

The UK's Crown Prosecution Service describes cyberstalking as 'threatening behaviour or unwanted advances' directed at someone online. It can be combined with other types of stalking or harassment. Cyberstalking could include threatening or unsolicited text messages and emails, harassment on live chats, or posting photoshopped photos of a specific person, their children or workplace on social media, the service said. Cyber-enabled crimes are ones that don't depend on technology but have changed significantly because of it, like cyberbullying, trolling, or virtual mobbing.

Over the eight-year period, the share of respondents who said they had experienced cyberstalking rose from 1 per cent in 2012 to 1.7 per cent.

Cyberstalking identified as 'wrong but not a crime'

While physical stalking remains more common overall, the 70 per cent rise in cyberstalking over the eight-year period was the only type with a 'significant' increase over time, the researchers found. Complaints of physical stalking increased by 15 per cent, and cyber-enabled stalking actually fell during that time period. Women, young people, and LGBTQ+ people were more likely to say they had been cyberstalked than other groups, the study found.
Almost half of the respondents who had experienced cyberstalking in the previous year said their experience was 'wrong but not a crime', which the authors said could affect the number of people who report their experience to law enforcement.

'There is a clear disconnect between the lived experience of cyberstalking and how it is understood legally and socially,' Madeleine Janickyj, one of the study's authors and a researcher in the violence, health and society group at University College London, said in a statement. 'This not only affects whether victims seek help, but also how police and other services respond,' she added.

Part of the problem, Janickyj said, could be that young people are 'so used to cyberstalking that they don't see it as a crime'. The researchers said the UK government should improve public education, clarify legal definitions, and provide additional support for victims of cyberstalking.

News.com.au
9 hours ago
- Politics
- News.com.au
'Create more risk, not less': Expert's warning after YouTube added to social media ban
'Rushed, vague and politically motivated.' That's how one expert has described the federal government's decision to include YouTube in its controversial under-16s social media ban, warning it could cause 'more risk, not less' to young Australians.

Prime Minister Anthony Albanese confirmed on Wednesday the government had reversed an earlier decision to exclude the platform from its world-leading restrictions under the banner of educational material, on the advice of eSafety Commissioner Julie Inman Grant. Research conducted by Dr Inman Grant's office found that, of 2600 children surveyed, 4 in 10 reported exposure to 'misogynistic or hateful material, dangerous online challenges, violent fight videos, and content promoting eating disorders'.

From December, children will be barred from creating their own YouTube accounts – but will still be able to access the site in either a logged-out state or through a parent or other adult's account.

The video-sharing giant, which is owned by Google, has since threatened the government with a High Court challenge, arguing it is 'not a social media service' and 'offers benefit to younger Australians' – a move Swinburne University media expert Dr Belinda Barnet labelled 'a last-ditch attempt to get out of the regulation'. 'I hope the government does not back down,' Dr Barnet added. 'YouTube absolutely is a social media platform like the others – it is not a special case. It also represents equivalent risk of harm as the others.'

Other experts who weighed in after the government's announcement, however, were less inclined to agree. Daniel Angus, director of Queensland University of Technology's (QUT) Digital Media Research Centre and professor of digital communication, said the restrictions on access to YouTube in a logged-in state – rather than to the site as a whole – could actually achieve the opposite of what the government's ban is intended to, 'alienating young users rather than meaningfully protecting them'.
'Logged-in access allows for personalised experiences, safety controls like restricted mode, and content curation through subscriptions and algorithmic recommendations,' Professor Angus said. 'Ironically, removing that logged-in functionality for under-16s may increase their exposure to harmful content by stripping away those safety features and pushing them into unmoderated, anonymous browsing – a shift that could create more risk, not less. This remains to be seen, but it underscores how poorly thought-through these proposals are.'

'The main concern with YouTube is that the algorithms that recommend new videos to users are opaque, and we know that YouTube's recommendation system has served content that is sexually explicit and otherwise distressing to young viewers,' University of Sydney lecturer in media and communications Dr Catherine Page Jeffrey said. 'Yet including (it) in the ban will not necessarily preclude this.'

Dr Page Jeffrey, who said she disagreed with both YouTube's inclusion in the ban and 'the legislation more broadly', stressed the 'important role' the platform plays in the digital lives of teenagers for education, entertainment, information and community. 'Young people have a right to engagement in the digital world, and (to) simply live out parts of their lives online,' she added. 'Sure, there are risks – but the approach to mitigating these risks should not be excluding young people (from these platforms) altogether.'

Failure to differentiate the specific risks posed by each platform and instead lump them 'under a generic 'social media' label is a fundamental flaw in the government's approach', Prof Angus said. 'YouTube's … user dynamics differ significantly from, say, Snapchat or TikTok,' he continued. 'There are certainly harmful elements … but these require nuanced and holistic responses, not blunt bans.
Targeted moderation, transparency of algorithms and platform processes, and digital literacy education are more effective and proportional strategies.'

Though the ban will 'hopefully (act as a) wakeup call' to social media platforms on what – and how – they algorithmically push to users, Deakin University senior lecturer in communications Dr Luke Heemsbergen said it won't be enough to 'stop teenagers from finding things they want to online'. 'Unfortunately, it is also already setting new precedents around policing and surveilling online spaces that break rights and privacy in new ways – ironically offering more power to the big platforms in how we get to live and connect.'

At a press conference on Wednesday, Communications Minister Anika Wells – who ultimately made the decision to include YouTube in the legislation – said that parents helping their children navigate the internet 'is like trying to teach (them) to swim in the open ocean with the rips and the sharks, compared to at the local council pool'. 'We can't control the ocean, but we can police the sharks – and that is why we will not be intimidated by legal threats (from Google) when this is a genuine fight for the wellbeing of Australian kids,' Ms Wells said.

Invoking the Minister's analogy, Dr Heemsbergen said it was 'pretty hard to tell if YouTube's 'currents' of content are any worse than other services – so I'd rather teach my kids to swim and what to do when they hit a rip, than try to ban them from this beach or that beach'. 'We – as a society – can do a lot to clean the beach up, for sure, but the water is always going to be there, and it remains our responsibility to make sure our kids understand and act accordingly,' he said.

It's 'unlikely' the ban will be effective, Prof Angus said, pointing to 'international experience that shows children can – and do – find ways to circumvent age verification systems'.
After the UK introduced its own mandatory age verification systems on porn sites, Reddit and X last week, virtual private network (VPN) use skyrocketed. Research conducted by Prof Angus' own team has also indicated tools like facial age estimation 'are unreliable, biased, and potentially discriminatory, especially against already marginalised groups'.

What's needed, he said, 'is a shift in thinking away from trying to protect children from the internet, and toward protecting children within the internet': building age-appropriate digital spaces, enhancing media literacy, ensuring access to comprehensive sex and relationship education, and involving them in the design of the policies that affect them.

'Policies like this one – rushed, vague, and politically motivated – risk doing more harm than good,' Prof Angus said.


Irish Times
10 hours ago
- Business
- Irish Times
The Irish Times view on X's court defeat: the conflict will continue
The High Court's rejection of X's challenge to Ireland's new online safety code may come to be seen as a milestone in the enforcement of Europe's digital rulebook. It is also a reminder that the battle over online content regulation is not simply a matter of legal interpretation or child protection policy. It sits squarely in the middle of a transatlantic struggle over who sets the rules for the digital economy.

Ireland's Online Safety Code, enforced by Coimisiún na Meán, requires platforms to shield children from harmful video content, introduce age checks and parental controls, and prevent the sharing of material that promotes self-harm, eating disorders or bullying. The court ruled these measures fall within the EU's Audiovisual Media Services Directive and complement the Digital Services Act, dismissing X's claims of overreach.

That finding may seem straightforward from a European perspective. The EU has long sought to assert that technology companies must respect European standards if they wish to operate here. But the US views such measures through a different lens, shaped by its dominance in the tech sector and a political culture that prizes free expression in almost absolute terms.

The commercial stakes are immense. The global tech services market is overwhelmingly dominated by American firms: Meta, Google, Apple and Amazon. EU regulation is therefore not just a neutral exercise in public protection but, inevitably, a rebalancing of power between the jurisdictions where these companies are based and the markets in which they operate. That tension is heightened by the fact that Ireland is home to the European headquarters of many of these firms, making it the front line in this conflict.

In Washington, the issues are often couched in the language of principle. Conservative figures such as JD Vance have been vocal in their defence of unfettered online speech, casting regulation as censorship.
Such arguments, while grounded in America's First Amendment tradition, also align neatly with the commercial interests of the companies whose revenues depend on maximising user engagement. The defence of principle and the defence of profit are intertwined. The ruling against X will not end these disputes. The tech industry's legal resources are vast, and its political allies influential. But it confirms that Ireland, acting within the EU framework, has the authority to challenge the ethos of the platforms it hosts. That will not be welcomed in boardrooms in California or on Capitol Hill. As the digital economy becomes a key arena of US-EU competition, Ireland's decisions will be read not only as regulatory acts but as statements about where power lies in the online world. Tuesday's judgment suggests that, at least for now, that power may be shifting.


Android Authority
11 hours ago
- Android Authority
Google's AI age tests will have consequences that extend far beyond YouTube
It turns out the new Google AI age checks aren't just a YouTube thing. The company is beginning to roll out a broader age assurance system across its entire platform in the US, which means more of your online activity could soon be used to judge whether you're a teen.

This follows yesterday's launch of AI-based age estimation on YouTube, but it now appears that the effort reaches well beyond video recommendations. According to a new Google blog post, the same technology will start testing across a wider set of Google services in the coming weeks. It will target a small number of US users in the initial testing phase, before rolling out more broadly over time.

The system works by using machine learning to analyze how you use your account, including the types of things you search for, what you watch on YouTube, and other behavioral patterns. If the algorithm thinks you're under 18, you'll get a prompt asking you to verify your age, either by uploading an ID or taking a selfie. Google says it's trying to avoid unnecessary data collection and will only ask for proof when needed.

If you don't verify, or if the system gets it right, you'll be pushed into a more limited version of Google's ecosystem by default. That means Digital Wellbeing tools like bedtime reminders on YouTube, no personalized ads, no Timeline history in Google Maps, and no access to age-restricted apps in the Play Store.

Some of those changes are genuinely positive for protecting younger users, but it's only a matter of time before wrongly flagged adults start getting frustrated. To be fair to Google, the whole tech industry is under pressure to do more to protect kids online. Still, there's something a little unsettling about having an AI model scan your habits and decide you're not old enough for certain content — especially if it gets it wrong.
If your account is new or you've been binge-watching MrBeast challenges and Taylor Swift reaction videos, you might find that Google starts nagging you to go to bed at a reasonable hour.
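The decision flow this article describes (behavioral signals feed an age estimate; a low estimate triggers a verification prompt; flagged accounts that don't verify default to a restricted experience) can be sketched in a few lines of Python. This is purely illustrative: the signal names and the scoring heuristic below are invented stand-ins, since Google has not published how its estimator actually works.

```python
from dataclasses import dataclass


@dataclass
class AccountSignals:
    # Hypothetical stand-ins for the behavioral signals described above:
    # account longevity, search/watch patterns, content categories.
    account_age_days: int
    teen_content_ratio: float  # fraction of activity in teen-skewing categories
    verified_adult: bool = False  # has the user passed an ID/selfie check?


def estimate_is_minor(signals: AccountSignals) -> bool:
    """Toy heuristic standing in for the (opaque) ML age estimator."""
    score = 0.0
    if signals.account_age_days < 365:  # newer accounts look more teen-like
        score += 0.4
    score += 0.6 * signals.teen_content_ratio
    return score >= 0.5


def resolve_experience(signals: AccountSignals) -> str:
    """Pick the product experience, per the flow in the article:
    not flagged -> standard; flagged + verified -> standard;
    flagged + unverified -> restricted defaults (wellbeing tools on,
    personalized ads off, age-restricted apps blocked)."""
    if not estimate_is_minor(signals):
        return "standard"
    if signals.verified_adult:
        return "restricted" if False else "standard"  # verification overrides the flag
    return "restricted"
```

For example, a months-old account with a heavily teen-skewing watch history would land in the restricted experience unless its owner completes verification; a long-lived account with adult-typical activity is never prompted at all.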
Yahoo
11 hours ago
- Business
- Yahoo
YouTube adopts AI to spot underage viewers in US
YouTube, the video-sharing platform owned by Alphabet, is set to deploy AI to identify users in the US who are under 18 years old. The move comes amid increasing demands for technology companies to bolster online safety measures for children.

In a blog post, YouTube announced that it plans to utilise AI technology to assess various signals, such as the longevity of the account, video searches, and preferred content categories, to estimate the age of its users. For those identified as under 18, the platform will automatically enforce standard protection measures designed for teenage accounts. "This will happen regardless of the birthdate you entered when creating your account. We'll then use that to extend age-appropriate product experiences and protections to more teens (like enabling digital well-being tools and only showing non-personalised ads)," the company stated in its blog.

The initiative is set to commence on 13 August 2025 and will initially involve a limited group of users in the US. "These protections for teens are not new - we now have enhanced technology to more accurately determine whether or not a user is under 18 and are now able to extend these protections to more teenagers. We've used this approach in other markets for some time, where it is working well, and we are now gradually rolling it out to the US," the post added.

This development aligns with the age verification laws passed in several US states and other countries, which mandate platforms to confirm users' ages to protect minors from inappropriate content. On YouTube, if the AI determines a user is under 18, the platform will activate safeguards such as privacy reminders and reduced recommendations of potentially problematic content. In scenarios where the AI inaccurately estimates a user's age, YouTube offers the option for users to verify their age through government ID, credit card, or by uploading a personal photograph.
The video-sharing platform warned that some content creators might notice changes in their teenage audience demographics and potentially experience a decrease in ad revenue. However, it anticipates a "limited impact for most creators."

This initiative follows YouTube CEO Neal Mohan's February announcement regarding the company's plans to expand the use of AI, including for age estimation. Additionally, creators will have access to AI features such as automatic dubbing between languages and tools to generate video titles or image thumbnails. YouTube said it intends to closely observe the AI-driven age estimation process before broadening its application, Bloomberg reported.

"YouTube adopts AI to spot underage viewers in US" was originally created and published by Verdict, a GlobalData owned brand.