Latest news with #InmanGrant

The Age
14-05-2025
Parents love these apps. So do violent criminals
One of the most popular tracking apps is Life360, which allows users to invite family and friends to have their location shown live on a private map. Its use has surged in Australia from 1.9 million monthly active users in the June quarter of 2023 to 2.7 million in the same period the following year. Another is Snap Map, which lets selected friends see the user's location.

Research from the eSafety Commissioner, based on a survey of 2000 adults, shows the extent to which this has become normalised. It found men (one in five) were more likely than women (one in 10) to agree that constantly texting to ask who their partner was with or what they were doing was usually a sign of care. Men were also more likely (26.3 per cent) than women (11.8 per cent) to think that wanting their partner to be constantly available to respond to texts, calls or video chat was a sign of care in a partnership. Almost 14 per cent of the 2000 survey participants said using a location-sharing app to track an intimate partner whenever they wanted to would be reasonable, but that jumped to almost 19 per cent for 18- to 24-year-olds.

'When this is happening to young people in, say, their first relationships, I don't think they would self-identify as being a victim-survivor of coercive control or domestic violence,' said Inman Grant.

Griffith University student Maria Atienzar-Prieto has researched technology-facilitated coercive control in relationships for her PhD thesis, and held focus groups with young people who had used location-sharing apps in relationships. She, too, found most young people misinterpreted following a partner via a tracking app as a protective behaviour, and a sign of care and trust. Often, they had been tracked as teens by their parents. 'One of the findings that really highlighted how this behaviour was normalised was that the behaviour starts in the family home,' Atienzar-Prieto said.
'[They said], 'I felt very comfortable using this app with my friends and partners because my parents tracked me while I was growing up.' Parents need to be aware of the associated risks that can come with this type of technology.'

The focus groups told Atienzar-Prieto that location sharing was regarded as a demonstration of commitment in young relationships, so if someone in the relationship tried to stop sharing, it could be seen as a sign of distrust or a breach of dating etiquette. 'A lot willingly opt in because everyone around them does it – their family, their friends, it's very easy to opt in,' she said. 'But when a young person doesn't want to share their location, opting out is very difficult.'

Some children track their parents too. In one discussion on a Facebook mothers' group, a member recounted overhearing a group of teens on a train who found tracking their parents reassuring. 'I get notifications that Mum has left home, or Mum has returned home,' one said. 'I don't know how to turn it off, but I actually like it.'

Yet even when used as a safety precaution, tracking apps don't necessarily work. Audrey Griffin used such an app, and reportedly sent friends details of her location when she walked home in the early hours of Sunday, March 23. Her friends lost track of her at about 3am and reported her disappearance to police. She was allegedly murdered by a stranger that night. 'A lot of the girls and young women mention, 'Well, I feel safer, if I'm going alone – I would prefer someone knowing where my location is',' said Atienzar-Prieto. 'That can also create a false sense of safety.'

Inman Grant urged parents to discuss boundaries around surveillance and tracking apps with their children, and never to watch them without their knowledge. 'To say, 'I'm turning this on because I'm concerned about your safety and need to know where you are',' she said.
'Where it becomes problematic is when a child is being monitored, called several times at school, followed after school. We don't want this sense of being surveilled or monitored to be normalised as they start to embark on intimate and romantic relationships.'

Sydney Morning Herald
14-05-2025
Parents love these apps. So do violent criminals
Yahoo
06-03-2025
Huge holes in tech anti-terrorism checks
The tech giants have not made changes recommended after the 2019 Christchurch terror attacks, a new report from the Australian eSafety Commissioner has found. 'Telegram, WhatsApp and Meta's Messenger did not employ measures to detect livestreamed terrorist and violent extremism, despite the fact that the 2019 Christchurch attack was livestreamed on another of Meta's services, Facebook Live,' commissioner Julie Inman Grant said. 'Ever since the 2019 Christchurch attack, we have been particularly concerned about the role of livestreaming, recommender systems and of course now AI, in producing, promoting and spreading this harmful content and activity.'

The report has been released days after NSW police charged a West Australian teenager over alleged online threats towards a mosque in southwestern Sydney that directly referenced replicating the Christchurch terror attack.

In the report, released on Thursday, Ms Inman Grant points to holes and inconsistencies in how the tech platforms identify violent extremist material and child sexual abuse material. Human moderators at Reddit and WhatsApp understand markedly fewer languages than those at Meta and Google. Some gaps are as simple as login requirements: people viewing Facebook or YouTube cannot report extremist content unless they are logged in. WhatsApp is owned by Meta, yet it does not ban all organisations on Meta's Dangerous Organisations and Individuals list.

Most tech platforms use an analysis technique called 'hash-matching', which generates a unique digital signature for an image that can then be compared with other images to weed out copies of extreme material. Ms Inman Grant said some iterations of hash-matching had error rates as low as one in 50 billion. But YouTube owner Google only uses hash-matching to find 'exact' matches, not altered copies.
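The limitation the report describes can be illustrated with a toy sketch. This is not any platform's actual implementation: the byte strings and the coarse 'perceptual-style' hash below are hypothetical, purely to show why a cryptographic 'exact' hash misses altered copies while a tolerance-based signature can still catch them.

```python
import hashlib

def exact_hash(data: bytes) -> str:
    """Cryptographic hash: any change to the input produces a different digest."""
    return hashlib.sha256(data).hexdigest()

def coarse_hash(pixels: list[int]) -> tuple:
    """Toy perceptual-style hash: quantise brightness values into broad buckets,
    so small edits usually map to the same signature."""
    return tuple(p // 64 for p in pixels)

original = b"frame bytes of a video still"   # stand-in for real media bytes
altered = original + b"\x00"                 # a one-byte change, e.g. re-encoding

# Exact matching: a single altered byte evades a database of known signatures.
print(exact_hash(original) == exact_hash(altered))        # False

# Perceptual-style matching: lightly perturbed pixel values still match.
frame = [10, 200, 130, 90]
perturbed = [12, 198, 131, 88]
print(coarse_hash(frame) == coarse_hash(perturbed))       # True
```

Real systems such as Microsoft's PhotoDNA use far more robust perceptual features than this quantisation trick, but the contrast is the point: a database of exact signatures cannot, on its own, catch the hundreds of altered versions of a video.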
'This is deeply concerning when you consider that in the first days following the Christchurch attack, Meta stated that over 800 different versions of the video were in circulation,' Ms Inman Grant said.

The New Zealand government quickly classified footage of the livestreamed attack as 'objectionable material', banning its possession and distribution. Men and teenage boys were convicted up and down the country for possessing copies, many of which had rifle crosshairs or other video game iconography digitally added.

'Telegram said while it detected hashes of terrorist and violent extremist images and videos it had previously removed from its service, it did not utilise databases of known material from trusted external sources such as the Global Internet Forum to Counter Terrorism or Tech Against Terrorism,' Ms Inman Grant said in the report.

Tech platforms are also serving up new ways for people to create and view criminal imagery. In the 12 months to the end of February 2024, Google received hundreds of reports that its own AI tool, Gemini, was being used to generate terrorist and child exploitation material. Reports of suspected AI-generated terrorist and violent extremist material totalled 258, and there were 86 user reports of suspected AI-generated, synthetic child sexual exploitation and abuse material. Google was unable to tell eSafety whether the 344 reports concerned actual offensive material.

The online safety regulator conducted the research after issuing notices to Google, Meta, WhatsApp, Reddit, Telegram and X requiring each to answer questions about the steps they were taking to implement the Basic Online Safety Expectations with respect to terrorist and violent extremist material and activity. The notices are binding under Australian law. X is challenging its notice at the Administrative Review Tribunal, and Telegram has been fined more than $950,000 for responding late.
An independent inquiry into the 2019 Christchurch terrorist attack concluded New Zealand's government agencies could not have detected the shooter's plan 'except by chance'. The report detailed how the terrorist was radicalised online and legally acquired semiautomatic weapons before the shooting; New Zealand's government quickly brought in sweeping gun reform in response. The Australian terrorist responsible pleaded guilty and was sentenced in New Zealand to life in prison without the chance of parole; an appeal against the sentence and convictions is pending.
Yahoo
24-02-2025
- Business
Australian internet watchdog fines Telegram nearly $1 million
Australia's internet watchdog has fined Telegram almost $1 million (US$635,000) for a delay in reporting on terrorism and child abuse material. eSafety asked Telegram, along with other social media companies, about the measures they had in place to tackle terrorist and violent extremist material on their services. Telegram was also asked about the measures it was taking to combat child sexual abuse material.

On Monday, eSafety said other platforms had met the May 6, 2024, deadline, but Telegram did not respond until October 31. "eSafety considered Telegram to be non-compliant with the transparency notice and has given it an infringement notice for $957,780," the regulator said.

eSafety commissioner Julie Inman Grant said the infringement notice sent an important message to industry that timely transparency was not a voluntary requirement in Australia. "If we want accountability from the tech industry we need much greater transparency. These powers give us a look under the hood at just how these platforms are dealing, or not dealing, with a range of serious and egregious online harms which affect Australians," she said.

Inman Grant said terrorist and extremist material shared and promoted online posed a growing risk, and that tech providers must live up to their responsibilities to be transparent and put in place measures to prevent their services being misused. "Research and observation have shown us that this material can normalise, desensitise and sometimes radicalise – especially the young who are viewing harmful material online that they cannot unsee."

Telegram has 28 days to request the withdrawal of the infringement notice, pay it, or seek an extension to pay.


The Guardian
23-02-2025
- Business
Telegram fined nearly $1m by Australian watchdog for delay in reporting about terrorism and child abuse material
Encrypted messaging app Telegram has been fined nearly $1m by Australia's online safety regulator for failing to respond on time to questions about what the company does to tackle terrorism and child abuse material on its platform. The notice was issued to Telegram, among other companies, in May last year, with a deadline to report back in October on steps taken to address terrorist and violent extremist material, as well as child exploitation material, on its platform. Because Telegram failed to respond for nearly 160 days, eSafety has issued the company an infringement notice for A$957,780.

'If we want accountability from the tech industry we need much greater transparency. These powers give us a look under the hood at just how these platforms are dealing, or not dealing, with a range of serious and egregious online harms which affect Australians,' the eSafety commissioner, Julie Inman Grant, said in a statement. 'Telegram took 160 days to provide information that was asked in the reporting notice and providing this information so late has obstructed eSafety from delivering its functions under the Online Safety Act for almost half a year.'

A joint statement by the Five Eyes security agencies, including the Australian Security Intelligence Organisation and the Australian federal police, last year named Telegram as one platform through which young people were accessing extremist propaganda videos. 'Research and observation have shown us that this material can normalise, desensitise and sometimes radicalise – especially the young who are viewing harmful material online that they cannot unsee,' Inman Grant said.

Telegram has 28 days to request the withdrawal of the infringement notice, pay it, or seek an extension to pay. A spokesperson for Telegram described the penalty as 'unfair and disproportionate' and said the company intends to appeal.
If Telegram does not pay the fine, the commissioner could take other action, including seeking civil penalties in the federal court. Guardian Australia asked eSafety whether the ultimate power Inman Grant's office has over noncompliant platforms – to block the site and remove the app from app stores – was being considered, but eSafety indicated the federal court was the appropriate next step.

Telegram has signalled its willingness to be more cooperative with regulators across the globe since the company's CEO, Pavel Durov, was arrested in France in August last year and charged with several counts of failing to curb extremist and terrorist content. He remains on bail and has been banned from leaving France until the case is heard; Wired reported this month that it could be a year before the case goes to court. Since Durov's arrest, Telegram has begun releasing transparency reports on its responses to law enforcement requests. According to the most recent report, covering 2024, Telegram fulfilled 14 requests from Australian law enforcement for IP address and/or telephone number information, affecting 23 users.

The eSafety report containing the responses from Telegram, Meta, WhatsApp, Google and Reddit will be released in early March. When Elon Musk's X was given a notice to provide similar information, it appealed against the decision to the Administrative Review Tribunal, and that case is ongoing. In a separate set of federal court cases, eSafety fined X more than $610,000 for failing to adequately respond to notices and took X to court to enforce the penalty, while X sued over the notices; those cases are also ongoing.