28-05-2025
Safety measures can spur AI's growth, not stifle it: Panellists
Professor Dawn Song (right) from the University of California, Berkeley at a fireside chat with Professor Simon Chesterman, senior director of AI governance at AI Singapore, at the Asia Tech x Singapore conference on May 28. PHOTO: IMDA
SINGAPORE - Safety measures can foster artificial intelligence (AI) innovations by avoiding unintended harm and building public trust, said Professor Dawn Song from the University of California, Berkeley.
Likening AI safety measures to seat belts in the early days of driving, she said these guardrails will not stifle AI innovation, just as seat belts did not slow the development of the automotive industry.
Rather, seat belts have fostered more confident driving, contributing to advancements in overall automotive safety and development, she said.
Prof Song was speaking at a fireside chat about securing AI's future with science-based safety on May 28 at the Asia Tech x Singapore conference held at Capella Singapore.
'AI safety... can help you to innovate faster... (and also) in a safer way... It is not there to slow things down,' said Prof Song.
AI safety was a common theme at the conference, where many speakers acknowledged that AI's harms have to be minimised either through regulation or some form of global consensus.
Past cases of AI bias that caused massive embarrassment and harmed minority groups offer some lessons.
A 2016 investigation led by US publication ProPublica found that a criminal justice algorithm used by the US courts wrongly flagged black defendants as being at high risk of reoffending at nearly twice the rate of white defendants.
In 2018, e-commerce giant Amazon scrapped an internal hiring tool after discovering it was biased against women. The system, trained on resumes submitted over a decade - many of which came from men - systematically downgraded applications that included the word 'women's,' such as 'women's chess club.' The tool was never deployed beyond testing.
Calling for responsible innovation, Prof Song said: 'We want to innovate. We want to actually make the world a better place. But we can only do that if we can ensure the safety and security of the AI systems we are building and deploying.'
Dr Samir Menon, CEO of Dexterity AI, which builds dexterous robot solutions, pointed out that the safety risks of physical AI can be significantly higher than those of AI chatbots.
Physical AI refers to hardware that interacts with the real world through sensors and actuators, and includes autonomous vehicles and surgical robots.
While software can cause online harms, hardware failures can be disastrous, he said, speaking at a panel titled I, Robot - Future of Embodied AI at the same event on May 28.
'Once a robot moves in the real world, it can bump into people, or knock things over or worse,' he said.
In 2023, a robot taxi operated by US self-driving company Cruise ran over a pedestrian who had already been hit by another vehicle, dragging her for several metres and causing serious injuries.
A year later in South Korea, a delivery robot made by robot delivery service Neubility collided with a moving car at a pedestrian crossing, damaging the vehicle and sparking debate over who should be held accountable when such machines go rogue.
Speaking at the same panel, Assistant Professor Jiajun Wu, who teaches computer science at Stanford University, noted that robots perform inconsistently across different research settings.
Moreover, most robots are trained in controlled environments, and applying them to real-world settings can be challenging, Dr Samir said.
'If you deploy 10 robots and need to cordon off five metres around each of them, your whole building ends up looking like a robot zoo. That's just not viable,' he added.
True progress, he said, lies in shared spaces. For instance, robots must be taught to handle tools safely or carry drinks without spilling if they are to co-exist safely with humans, and their training must be done in real-world settings.
'If we can pull that off in the next four to five years, that's going to be a fantastic step up,' said Dr Samir.
To scale AI deployments safely, clear standards are needed. 'If I train an AI on a robot with one type of hand, and then change the hand or switch to two arms, will it still work? Right now, we just don't know,' he said.
Prof Song echoed this, noting that AI systems often lack transparency and systematic evaluation. She added that the broader AI policy landscape remains fragmented, with limited consensus on best practices. Most AI firms still prioritise performance over safety, investing relatively little in risk mitigation.
Prof Song said the AI industry can draw lessons from the cybersecurity sector, which has shifted away from treating security as an afterthought; many systems are now designed with security measures in place from the start.
Similarly, AI can be designed to be safe from the outset, she said.