
ATxSummit 2025: Meta V-P downplays fears over AI as critics raise alarm over online risks to youth

Straits Times



(From left) IMDA's Alamelu Subramaniam, Adobe's Andy Parsons, Baroness Jones of Whitchurch, Meta's Simon Milner and SMU's Lim Sun Sun during a discussion at ATxSummit 2025 on May 29. PHOTO: INFOCOMM MEDIA DEVELOPMENT AUTHORITY

SINGAPORE – Meta, the parent company of Facebook and Instagram, downplayed fears over the impact of artificial intelligence (AI), urging policymakers and the public to focus on actual outcomes rather than worst-case scenarios.

The comments by its Asia-Pacific public policy vice-president Simon Milner drew sharp rebuttals at the ATxSummit 2025 on May 29, where fellow panellists said the rapid spread of AI has real-world consequences such as online harms affecting youth and children.

During the panel at Capella Singapore, Mr Milner cited 2024 as the 'year of democracy', as more people in more countries went to the polls than at any other time in history. While there were widespread concerns about deepfakes and generative AI (GenAI) disrupting elections, he said no significant evidence of such interference was found – not even in major democracies like the US, India or Indonesia.

'Although enormous amounts of GenAI were deployed across platforms, the impact has not been catastrophic,' he added.

However, his views were not shared by fellow panellists discussing the topic of protecting society in an always-online world.

Drawing from her work, Singapore Management University professor of communication and technology Lim Sun Sun said many parents feel anxious and unsure about how to guide their children in navigating the rapid rise of GenAI.

'Even if the data doesn't paint a worrying picture overall, on the ground, people are struggling to understand this technology,' Prof Lim said. Teachers also face a dilemma: encouraging experimentation with AI while warning about its risks. 'It is a difficult balance,' she added.
Baroness Jones of Whitchurch (Margaret Beryl Jones), the UK's parliamentary under-secretary for the future digital economy and online safety, echoed similar concerns about online harms affecting youth and children.

She pointed to an ongoing public debate in the UK about the damaging effects some online platforms have on young users. 'For example, children accessing online suicide forums and committing suicide. This is just heartbreaking, and we have some terrible stories about it,' she said.

In May 2024, 17-year-old Vlad Nikolin-Caisley from Hampshire in south-east England died after allegedly being encouraged by members of an online pro-suicide group. His family believes these harmful online interactions played a significant role in his death, intensifying calls for stronger regulation of such platforms.

Baroness Jones stressed the need for tech companies to work closely with the government to minimise such harms, but acknowledged that not all companies are fully on board, as the government is 'laying high expectations in a new territory'.

But Mr Milner pushed back, arguing that the UK – or more broadly, Europe – rushed to be the first region to regulate AI, which he described as a mistake. He said this approach has led to a stand-off with companies. In contrast, he praised Singapore and other Asian governments for taking a different path: fostering robust dialogue with tech firms, both publicly and privately, while asking tough questions without rushing into heavy-handed regulations.

Mr Andy Parsons, senior director of content authenticity at Adobe, highlighted the spread of child sexual abuse material (CSAM) online. It is becoming nearly impossible for the police to identify real victims if the materials were generated entirely by AI, he said. Mr Parsons warned that this not only hinders efforts to bring perpetrators to justice but also erases the real human suffering behind these crimes – a grave problem that requires urgent attention.
Prof Lim agreed, noting that the issue of CSAM has been worsened by the rapid spread of GenAI. She is currently identifying key stakeholders across industry, government and the community who are involved in tackling the problem. We need to understand 'where else can we coordinate our efforts better so that we can combat this really dreadful scourge', she said.

Addressing the concerns raised by his fellow panellists, Mr Milner emphasised that Meta's top priority is developing products with features to mitigate online harms. He cited the introduction of teen-specific accounts on Instagram as a response to growing worries about young people's engagement with the platform.

'I think we should be more parent-focused in our approach to young people's safety,' he said, adding that teen accounts are not just about imposing bans. 'Parents want help, and we are here to help them.'

Baroness Jones stressed that AI safety must be approached as safety by design – embedded into platforms from the outset, rather than relying on reactive measures like taking down content afterwards. 'It should be an integral part of the system that children, in particular, are protected,' she said.

But achieving that remains a major challenge. Citing reports from the UK, she highlighted that children as young as eight have encountered disturbing content online, often repeatedly surfaced to them by algorithms. She believes the algorithms are clearly reinforcing exposure to harmful material. If tech companies truly put their minds to it, they could rework the way these systems operate, she said, emphasising that keeping children safe must be the top priority.

Prof Lim also called for safety by design, stressing that online spaces should be built with the most vulnerable users in mind – whether they are children, women, the elderly or marginalised communities. She said: 'Because once you've designed for the most vulnerable, it makes the whole platform safer for everyone.'

