Latest news with #JulieInmanGrant

ABC News
27-05-2025
Location-Sharing Apps
JOE BARONIO, REPORTER: Do you have any location-sharing apps on your phone right now?

PERSON: Snapchat and Find My.

PERSON: I have Life360 and Snapchat has my location.

PERSON: I've got Snapchat and sort of like iPhone, Find It.

PERSON: Snapchat, Find My and Life360.

PERSON: Snapchat and I have Find My Phone.

PERSON: Life360 right now and Google Maps sharing.

Yeah, a lot of young people have some kind of location-sharing app on their devices, whether it's to keep up with friends or for parents and family to keep an eye on you.

PERSON: I use location-sharing apps so that my parents can track me, so they know that I'm safe.

PERSON: Oh, because my Mum makes me.

PERSON: Yeah, just to find my mates.

PERSON: For Snapchat I think it's more like a cultural thing, you like to see where everyone is.

PERSON: I'm driving, so my Mum just likes to know where I'm at and if I'm going the speed limit.

PERSON: Yeah, I think with going out and us being, you know, the age that we are, there's a lot of uni, a lot of travelling, so it just, you know, gives a bit of comfort for myself and my parents.

But, while many of these apps sell themselves on safety, there is a potential dark side.

JULIE INMAN GRANT, ESAFETY COMMISSIONER: We've been talking about coercive control for a long time in the context of family, domestic and sexual violence, but we were starting to see indications that coercive control from a technology perspective was starting to creep into the romantic and intimate relationships of young people.

Coercive control is when someone tries to control another person through manipulation, pressure and fear. A recent study from the eSafety Commissioner surveyed more than 2,000 Aussie adults aged 18 to 75 on the normalisation of tech-based coercive control, including the use of location-sharing apps. It found that about one in five participants aged 18 to 24 expect to be able to track a partner using these apps, and that this expectation may stem from childhood experience.

JULIE INMAN GRANT: Young people's lives are inextricably intertwined with technology. They're meeting each other online. You know, they're checking out where their friends are on Snap Maps. I mean, it is part of their daily lives.

PERSON: I think it's being accepted so much more now with Snapchat and stuff.

PERSON: I wouldn't say normalised. I think it's definitely, like, I've had conversations with many of my friends being like, 'Do you wanna show your location?' 'Yeah, sure, whatever.'

PERSON: I think 100% it's normalised, and I think for some people it might even be an expectation, like I'll even message my friend here and be like, 'Why is your location not on? Where are you?'

So, do you think that more people in the future are going to expect to track loved ones?

PERSON: Yep, as is growing right now, I'm sure the trend's gonna stay the same.

That is a big concern for the eSafety Commissioner, who says that, particularly for young people who've grown up with this tech, it can be difficult to work out the line between love and care and dangerous controlling behaviour, which doesn't stop with location-sharing apps.
JULIE INMAN GRANT: There were two examples where we saw some concerning trends. Almost one in four said that it was reasonable to expect to have a partner's codes or passwords for their personal devices, and we also found that more than one in eight agree that constantly texting a partner to check who they're with or what they're doing is usually a sign of care in a relationship rather than a manifestation of control.

PERSON: With it being such a new concept, it's so hard to understand, because, you know, we're all trying to learn about it.

PERSON: When they start, like, doing it all the time, getting super insecure, always asking questions, I think that's when the line starts to draw.

PERSON: Yeah, I think it's a bit of a hard line to draw, but if you're constantly getting a message, I would say more than five times a day, saying, 'Oh, where are you?', if it's a constant situation that's arising, I think that's when you have to re-evaluate if it's beneficial for you or not.

PERSON: If it's your first relationship, it's been just like a couple of days, a couple of weeks, that's a huge expectation. And why are you firstly looking at someone's location? Why are you wondering that much about them?

JULIE INMAN GRANT: We've seen families heated out of their homes where the former partner turns it up to 45 degrees, or every time the family turns on their smart TV, there are menacing messages. And then there are more high-tech versions, like drones over safe houses, or cars that are now largely electronic that can stall when they go more than a kilometre away from the family home. We've even seen a cat feeder with a video tracking a person's movements and who is coming into the household. But even stuff like this, this one really gets me, is the whole idea of 'let's send a grand gesture': I will send a beautiful meal on Uber Eats to my partner to check that she's where she says she is. So, it can be tied up in a grand gesture, but it can also be used in a way that is meant to check on a person's whereabouts in a very covert way.

Of course, we can't forget that the main reason location-sharing apps were created is safety, and that is still what they're mainly used for. But if you find yourself feeling unsure or uncomfortable about location-sharing, experts like the eSafety Commissioner say to trust your gut and set digital boundaries, and if things get really uncomfortable, make sure to reach out to the Kids Helpline, Headspace, or talk to a trusted adult.

JULIE INMAN GRANT: I don't want to sound like Commissioner gloom and doom. You know, all of this is a balance, right? We want young people to be vigilant, you know, I guess armed but not alarmed. And we don't want this to be so normalised that, you know, it changes sort of the fabric of the kind of relationships that we're in.

PERSON: Make sure you trust the people you share your location with.

PERSON: Just really important that it's all, you know, consensual and that it's all safe.

PERSON: And if they don't know, or they don't give consent, then absolutely not.

PERSON: If you have that trust within you two, awesome. But it's also another aspect of trust.

PERSON: It's all about communication, and if you're communicating properly, I don't think there's a need to.

PERSON: Especially in a modern world, you don't have a lot of privacy, and that just adds to it, you know, it's just a constant sort of feeling of surveillance.
PERSON: Yeah, I'd say take whatever privacy you can get, really.

ABC News
14-05-2025
Location-sharing apps linked to increased risk of digital coercive control, eSafety Commissioner research finds
Nearly one in five young adults believe tracking a partner's location is to be expected in a relationship, new research has found.

The eSafety Commissioner's study has linked location-sharing apps with an increased risk of coercive control. More than 13 per cent of the 2,000 adults surveyed said it was reasonable for a partner to monitor them using location-sharing apps like Life360 and Find My iPhone.

"It isn't an exaggeration to say preventing tech-based coercive control, and the attitudes that condone it, can save lives," eSafety Commissioner Julie Inman Grant said. "Young people have become so accustomed to certain levels of online tracking by family and friends … that it no longer raises red flags in a relationship."

Tech-based coercive control is defined as a pattern of behaviour that uses digital technology to control, manipulate and create fear in a current or former relationship. About a quarter of the adults surveyed agreed it was necessary to have a partner's passwords and codes for their personal devices.

A similar study published by Griffith University earlier this year found location-sharing apps had become "normalised" as a way for parents to monitor their children. Location-sharing apps like Life360 are marketed to "protect your family with easy-to-use location tracking".

Over the last three years, Griffith University PhD candidate María Atiénzar Prieto has studied perceptions of digital coercive control. Her survey of more than 1,000 young people found that the acceptance of location sharing "starts at home". "We found parents increasingly used tracking apps to monitor children and provide safety," she said.

The 2023 murder of Lilie James put tech-facilitated control in the headlines, when it was revealed last March that her killer had tracked her location through Snap Maps. The court heard her killer's behaviour was normalised as "coming from a place of kindness and love".

Ms Atiénzar Prieto found the technology was "the perfect tool for perpetrators of domestic violence". "Participants in my study discussed how the first time they were monitored was in an environment of care and love, and that was then being transferred to other relationships," Ms Atiénzar Prieto said. She said parents needed to explain that they're using these apps for safety, but that there were risks in using them outside the family.

According to the eSafety Commissioner, the average Australian household has about 40 internet-connected devices. The regulator wants more education for young people to understand digital coercion and consent, and for norms around digital privacy and respect in relationships to be reshaped.

"There needs to be conversations about setting digital boundaries and what consent looks like," Ms Inman Grant said. "We're not saying a couple in a relationship or family members shouldn't share their location online, but it's the expectation to track a romantic partner's movements which is problematic — even dangerous."

Ms Inman Grant is calling on companies to adopt more safety-by-design features and mindful engineering of apps to prevent misuse. "We had one situation where a young woman was being surveilled through her electronic cat feeder," she said. "It was used by her former partner to track her movements in the house and to see who she was with. The thing that's so insidious about coercive control is that a very determined predator can always exploit the loopholes."

In serious cases, victim-survivors have had to build new digital identities and accounts to stay safe.
Ashton Wood is the founder and chief executive of DV Safe Phone, which provides free phones to victim-survivors as part of their safety and escape plans.

"Unfortunately, the only way of really knowing that you're not being tracked or monitored as a domestic violence victim is to have a device that the partner doesn't know about at all, so it doesn't even exist, as far as they're concerned," he said. "As technology gets better, perpetrators are always finding ways to weaponise it, and then the manufacturers and software providers find ways to lock it down, and then the perpetrators find a way around that."

DV Safe Phone has provided more than 12,000 free phones in the past five years and asks people to donate their old phones to help. The charity has been handing the fully reset phones out to police stations, safe houses, hospitals and domestic violence agencies to distribute.

"Once they're out of danger, we generally recommend destroying the old phone, unless they need it for evidence, and then keep [the new] phone because that hasn't been compromised."


7NEWS
04-05-2025
AI deepfake abuse: Boys at Sydney schools caught selling sexually explicit images of female students
Male students at a Sydney private school have been caught selling deepfake nude images of female students on social media. They reportedly used artificial intelligence (AI) to superimpose the faces of their female schoolmates — and the faces of girls from two other independent schools — onto sexually explicit images.

The pictures were then sold within group chats on Instagram and Snapchat for less than $5, 7NEWS Sunrise reports. It is unclear exactly when these incidents occurred, and whether they were reported to police for investigation. 7NEWS has contacted NSW Police and the Australian Federal Police (AFP).

An eSafety spokesperson told 7NEWS it has received 38 complaints about explicit deepfake images involving children under 18 in NSW since January 2023. 'While eSafety is aware of reports involving digitally generated nude images of teenagers allegedly being sold at schools, we have not received complaints of this nature to date,' the spokesperson said. 'When a report involves a person under the age of 18, it is child sexual abuse material, and we refer it to our colleagues at the Australian Centre to Counter Child Exploitation (ACCCE).'

The spokesperson did note that school leaders have been voicing concern about an increase in this kind of abuse. 'Deepfake image-based abuse is not only becoming more prevalent, but it is also very gendered and incredibly distressing to the victim-survivor,' eSafety Commissioner Julie Inman Grant said during a 2024 inquiry into sexual deepfake material.

Cyber security expert Ross Bark said that people accessing the AI programs used to make such images do not need to be technology experts — the programs are free and easy to use. 'These are not hard-to-access pieces of code, or things that you need to set up, (you don't need to) have any knowledge of tech,' Bark said. 'This is not just about AI and technology, this is sexual abuse.'

'Your daughters have done nothing wrong'

A male Year 12 student from southwest Sydney was accused of targeting students with explicit AI images earlier this year. The NSW Department of Education sent an email to parents at the time, alerting them to the incident. 'We want to emphasise that your daughters have done nothing wrong, there are no inappropriate real photos of them being used,' it said. 'I am sorry this has occurred.'

The incident came after about 50 girls, believed to be in Years 9 to 12 at Melbourne's Bacchus Marsh Grammar, had images taken from their personal Instagram accounts and manipulated using AI to make 'obscene photographs'. Those images were then shared across multiple social media platforms.

'Explicit deepfakes have increased on the internet as much as 550 per cent year on year since 2019,' Inman Grant said in 2024. 'It's a bit shocking to note that pornographic videos make up 98 per cent of the deepfake material currently online, and 99 per cent of that imagery is of women and girls.'

The effects of producing and distributing such material can be devastating. Matilda 'Tilly' Rosewarne was just 15 years old when she took her own life near a cubby house at her family's Bathurst home in 2022, after becoming the victim of sexually explicit image-based abuse on Snapchat — a blow which followed years of bullying.
New laws criminalising the sharing of non-consensual sexually explicit deepfake material were introduced late last year under the Criminal Code Amendment (Deepfake Sexual Material) Bill 2024. The Australian Federal Police later charged two men in March with child abuse material offences for possessing and accessing deepfake child abuse material. They were among 25 people arrested as part of a global operation targeting the alleged production and distribution of child abuse material generated by AI, the AFP said.


New European
09-04-2025
Britain must not sell out Online Safety for tariff scraps
Instead of Europe cowering and offering to scrap its online safety regulations in return for some chlorinated chicken and cheap Chevrolets, the European Commission is digging in. The UK must not be tempted to triangulate, and trade away digital regulation that was designed to keep people safe, protect fair competition, and safeguard user privacy.

No 'free trade without free speech' has been the mantra from the Trump administration, as the president plunges the world into a tariff war. Yet as their stock market valuations tumble, the titans of American technology are probably feeling a bit shortchanged by the pact they made with The Donald. In 2020, the Trump administration proposed that a UK/US trade agreement should exempt American tech firms from our regulations. This was rightly rejected by the then trade minister Liz Truss, and the government should now make clear it will not compromise on online user safety.

The idea that digital regulation is an attack on free speech is also a myth, largely perpetuated by the allies of the big tech companies. The internet safety regimes in countries like the UK and Australia, as well as that of the European Union, have rightly been designed to make the social media platforms liable for illegal activity that is hosted and promoted on their services.

On April 15 it will be a year since a brutal stabbing attack at the Christ The Good Shepherd church in Wakeley, Sydney, left three people injured, including Mari Emmanuel, a bishop in the Assyrian Orthodox Church, who was delivering his sermon at the time. As Emmanuel used social media to live stream his services, footage of this attack was soon available on major platforms around the world. The police regarded this as a terror-related incident, and Australia's online safety regulator, eSafety Commissioner Julie Inman Grant, ordered that footage of the attack be removed from social media platforms. Companies like Google, TikTok and Meta complied, but Elon Musk's X platform did not, stating that it would only geo-block the footage to make it harder to view in Australia. Musk's company ultimately won a case in the Australian Federal Court to prevent it from having to remove the footage globally, a result which a company spokesman for X called a victory for 'freedom of speech'.

Violent acts, often committed with the intention that they will be shown online, are inciting further attacks, leading to even more lives being lost. In January this year, a court in Liverpool heard that the Southport attacker Axel Rudakubana searched online for videos of the knife attack in Wakeley about 40 minutes before he left home to commit the brutal murder of three young girls who were enjoying a Taylor Swift-themed dance class. That is why we need to require the major social media companies to take action against content like this being freely distributed on their platforms.

At the end of March, I spoke alongside Julie Inman Grant at the Manly Writers Festival in Sydney, where we discussed the global challenge of combating online hate and abuse. Her personal experience as eSafety Commissioner has also demonstrated how platforms like X do not promote all free speech, just that which conforms to the opinions of Elon Musk. Julie faced a torrent of abuse from X users over the Wakeley stabbings footage, with Musk himself branding her a 'censorship commissar'. On that platform, dissenting opinions are either marginalised or buried in hate and bile. This is not a healthy public square where real debate can thrive.
In the United States Congress, the Republican member of the House of Representatives Darrell Issa, no stranger himself to campaign donations from American tech companies, has even introduced the 'No Censors on our Shores Act', which, if passed, would ban regulators like Julie Inman Grant from even entering the USA.

Australia has also been right to push for tighter controls to prevent children using social media apps. The guidelines set by the companies themselves should prevent use below the age of 14, but we all know this system doesn't work. The social media firms largely use self-declared age checks, which allow children to lie about their age, while at the same time gathering data about them which almost certainly reveals to the companies how old they actually are. The rising number of cases of self-harm amongst children who have been targeted with content that promotes such abuse, sometimes leading to loss of life, is totally unacceptable.

We have to make the companies take responsibility for the tools they have created, and not leave it to them to set the safety standards. The eSafety regime in Australia has delivered comprehensive industry codes for higher safety standards, and the UK's Online Safety Act, which becomes operational this month, seeks to do the same. The determined effort of legislators and regulators is necessary now more than ever.
Yahoo
06-03-2025
Huge holes in tech anti-terrorism checks
The tech giants have not made changes recommended after the 2019 Christchurch terror attacks, a new report from the Australian eSafety Commissioner finds.

'Telegram, WhatsApp and Meta's Messenger did not employ measures to detect livestreamed terrorist and violent extremism despite the fact that the 2019 Christchurch attack was livestreamed on another of Meta's services, Facebook Live,' Commissioner Julie Inman Grant said. 'Ever since the 2019 Christchurch attack, we have been particularly concerned about the role of livestreaming, recommender systems and of course now AI, in producing, promoting and spreading this harmful content and activity.'

The report has been released days after NSW Police charged a West Australian teenager over alleged online threats towards a mosque in southwestern Sydney that directly referenced replicating the Christchurch terror attack.

In the report, released on Thursday, Ms Inman Grant points to holes and inconsistencies in how the tech platforms identify violent extremist material and child sexual abuse material. Human moderators at Reddit and WhatsApp understand markedly fewer languages than those at Meta and Google. Some gaps are as simple as a login requirement: people looking at Facebook or YouTube cannot report extremist content if they are not logged in. WhatsApp is owned by Meta, but WhatsApp does not ban all organisations that are on Meta's Dangerous Organisations and Individuals list.

Across most tech platforms, an analysis called 'hash-matching' is used. Hash-matching creates a unique digital signature of an image that is then compared with the signatures of other images to weed out copies of extreme material. Ms Inman Grant said some iterations of hash-matching had error rates as low as one in 50 billion. But YouTube owner Google only uses hash-matching to find 'exact' matches, not altered copies (see the code sketch below for a simplified illustration of the difference). 'This is deeply concerning when you consider in the first days following the Christchurch attack, Meta stated that over 800 different versions of the video were in circulation,' Ms Inman Grant said.

The New Zealand government quickly classified footage of the livestreamed attack as 'objectionable material', banning possession and distribution. Men and teenage males were convicted up and down the country for having copies, many of which had rifle crosshairs or other video game iconography digitally added.

'Telegram said while it detected hashes of terrorist and violent extremist images and videos it had previously removed from its service, it did not utilise databases of known material from trusted external sources such as the Global Internet Forum to Counter Terrorism or Tech Against Terrorism,' Ms Inman Grant said in the report.

The loopholes and methods for people to watch and create criminal imagery are being served up by the tech platforms. In the 12 months to the end of February 2024, Google received hundreds of reports that its own AI tool Gemini was being used to generate terrorist and child exploitation material. Reports of suspected AI-generated terrorist and violent extremist material totalled 258, and there were 86 user reports of suspected AI-generated, synthetic child sexual exploitation and abuse material. Google was unable to tell eSafety whether the 344 reports concerned actually offensive material.
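The difference between exact and altered-copy matching can be made concrete with a short sketch. The Python below is illustrative only, not how any platform's production system works: it contrasts a cryptographic file hash, which changes completely if a single byte of the file changes, with a toy perceptual 'average hash', which tolerates light edits and re-encodes. The file paths and the match threshold are hypothetical.

```python
# Illustrative sketch only, not any platform's real system. Contrasts:
#   1. an exact cryptographic hash: a one-byte change to the file breaks the match
#   2. a toy perceptual "average hash": lightly altered copies stay nearby
# Requires Pillow (pip install Pillow). File paths below are placeholders.

import hashlib
from PIL import Image


def exact_hash(path: str) -> str:
    """SHA-256 of the raw file bytes: only byte-identical copies match."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()


def average_hash(path: str, size: int = 8) -> int:
    """Toy perceptual hash: shrink to an 8x8 greyscale grid, then set one
    bit per pixel depending on whether it is brighter than the mean.
    Re-encoded or lightly edited copies tend to produce similar bits."""
    pixels = list(Image.open(path).convert("L").resize((size, size)).getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | int(p > mean)
    return bits


def hamming_distance(a: int, b: int) -> int:
    """Count of differing bits between two hashes; small means 'similar'."""
    return bin(a ^ b).count("1")


if __name__ == "__main__":
    # Hypothetical usage: a re-uploaded, re-compressed copy of a flagged image
    # would fail the exact check but usually pass the perceptual one.
    original, reupload = "flagged.jpg", "reupload.jpg"  # placeholder paths
    print(exact_hash(original) == exact_hash(reupload))  # likely False
    distance = hamming_distance(average_hash(original), average_hash(reupload))
    print(distance <= 5)  # threshold is a tuning choice, not a standard
```

Exact matching of the first kind is what the report criticises Google for relying on; robust matching of the second kind is the principle behind the industry hash databases the report refers to, though production systems are far more sophisticated than this toy.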
The online safety regulator conducted the research after issuing notices to Google, Meta, WhatsApp, Reddit, Telegram and X, requiring each to answer questions about the steps they were taking to implement the Basic Online Safety Expectations with respect to terrorist and violent extremist material and activity. The notices are binding under Australian law. X is challenging its notice at the Administrative Review Tribunal, and Telegram has been fined more than $950,000 for responding late.

An independent inquiry into the 2019 Christchurch terrorist attack concluded New Zealand's government agencies could not have detected the shooter's plan 'except by chance'. The inquiry's report detailed how the terrorist was radicalised online and legally acquired semiautomatic weapons before the shooting; New Zealand's government quickly brought in sweeping gun reform. The Australian terrorist responsible was sentenced in New Zealand to life in prison without the possibility of parole after pleading guilty, but has an appeal against the sentence and convictions pending.