

Social media ban must look to future teen trends

Sydney Morning Herald

a day ago



The federal government plans to introduce its social media ban for under-16s by December. Announced to mixed reviews last year – parent groups were ecstatic, while mental health organisations warned about the risk of isolating vulnerable teens and tech commentators questioned the data-security trade-offs – the ban would eventually require all Australians to complete an age-verification process to use Instagram, Facebook, TikTok and other social media apps.

The exact parameters of the ban remain to be seen, and will need to pass parliament, but last week the Herald reported that eSafety Commissioner Julie Inman Grant had advised the government not to restrict its new rules to specific social media platforms. Inman Grant is specifically seeking to include video platform YouTube in the ban, after it previously received an exemption due to its 'significant educational purpose'. According to the eSafety Commission's research, four in 10 young teenagers have been exposed to harmful content, such as eating disorder videos, misogynistic or hateful material, or violent fight videos, while watching YouTube.

As the Albanese government finalises the details of its attempt to restrict social media on a national scale, the Sun-Herald believes it would be prudent not to include a discrete list of platforms the rules cover. Indeed, as Emily Kowal reports in today's Sun-Herald, there are emerging forms of online engagement driven by artificial intelligence for which regulation should also be considered. Companion chatbots such as Replika allow users to converse, call and exchange photos and videos with an AI 'friend'. The user can style this friend as their favourite character from a movie, a celebrity, or someone they know in real life. It is not hard to see why child safety experts are concerned.

The eSafety Commissioner said she had received reports of children as young as 10 spending hours on chatbots, which AI researchers say learn from their user, evolving to respond in ways that keep them talking for longer. Some bots are designed to be mean; others tend towards pornographic or other forms of conversation inappropriate for children. All collect information about their user, and few have any real mechanism to validate their user's age.


Rise of AI risks undermining HSC fairness

The Age

14-06-2025



Lucy Carroll reports in today's Sun-Herald that the number of students caught cheating in the HSC has doubled in the past five years, a trend some in the sector have attributed to rising use of generative AI by teenagers in their assessments. With all we know about the pervasiveness of artificial intelligence and its increasing sophistication, the figure is likely an underestimate. Indeed, Australian Tutoring Association president Mohan Dhall told Carroll that malpractice as a result of AI was likely going 'vastly undetected'.

Over the past two years, the university sector has grappled with how to manage the use of AI in assessments. After initially reacting with outright bans, the institutions – increasingly reliant on online learning as a cost-saving teaching model – have changed their tune, allowing AI to be used in at least some assessments. At present, the University of Sydney is phasing in a policy that allows students to use AI in some assessments – a radical reversal of its previous ban on the technology. In the coming semester, students will be able to use AI in all take-home assessments, and co-ordinators cannot ban its use. At the University of NSW, teachers set a level of acceptable AI use for each assessment. The university last year signed an Australian-first deal with ChatGPT maker OpenAI to roll out a special version of the technology on campus.

The NSW Education Standards Authority (NESA) believes it is the responsibility of individual schools and school sectors to manage policies for the use of AI in their establishments. But in the case of the high-pressure, statewide HSC, this approach surely cannot hold.

If scenes outside selective school test centres when some computers malfunctioned last month are anything to go by, any sense of unfairness across the system will not be tolerated: too much rides on the marks received by students in the NSW school system, be it a place at a top-performing selective school, or admission into a dream university course. As Carroll reports today, a paper published last month by Catholic Schools NSW said HSC take-home assessments should decrease in importance for a student's overall grade until 'the AI threat to assessment integrity can be satisfactorily contained'.

