Latest news with #GlobalAIAssurancePilot


AsiaOne
30-05-2025
- Business
- AsiaOne
Initiative by IMDA, AI Verify Foundation tests AI accuracy, trustworthiness in real-world scenarios, Digital News
SINGAPORE – Doctors at Changi General Hospital (CGH) are testing the use of generative artificial intelligence (GenAI) to summarise medical reports and provide recommendations on clinical surveillance. But are these recommendations accurate? Meanwhile, regulatory technology firm Tookitaki uses GenAI to investigate potential money laundering and fraud cases. Are its findings trustworthy? Earlier in 2025, the Infocomm Media Development Authority (IMDA) and the AI Verify Foundation rolled out an initiative focused on real-world uses of GenAI to encourage the safe adoption of AI across various industries. The AI Verify Foundation is a not-for-profit subsidiary of IMDA that tackles pressing issues arising from AI. Between March and May, 17 organisations across 10 different sectors – including human resources, healthcare and finance – had their GenAI applications assessed by specialist GenAI testing firms. The findings were published on May 29, marking Singapore's commitment to spearhead the development of global standards for the safe deployment of GenAI apps. The Global AI Assurance Pilot, as the initiative is called, has allowed organisations to see how their GenAI applications perform under practical conditions, said Senior Minister of State for Digital Development and Information Tan Kiat How on May 29. He was speaking on the last day of the Asia Tech x Singapore conference, held at Capella Singapore. Clinical Associate Professor Chow Weien, chief data and digital officer at CGH, told The Straits Times that taking part in the initiative helped the hospital design a more robust and reliable way of testing its AI models. 'For example, we could assess whether our GenAI application was extracting the clinical information accurately from the doctor's colonoscopy report, and if the application was providing the correct recommendation, in line with the clinical guidelines,' he said. Tookitaki founder and chief executive Abhishek Chatterjee told ST the experience helped make the firm's AI model more auditable and allowed the company to incorporate guardrails against AI hallucinations. These are inaccurate or nonsensical results generated due to factors such as insufficient training data. [[nid:693529]] While earlier initiatives had focused on the testing of AI models, the Global AI Assurance Pilot aimed to test the reliability of GenAI in real-world scenarios, said AI Verify Foundation executive director Shameek Kundu. This is important as the information fed to AI can be flawed, he said, giving the example of poor-quality scans from a patient provided to a hospital's AI. The aim is to make the use of GenAI 'boring and predictable', to ensure the technology's reliability for day-to-day use, he said. In a statement, IMDA and AI Verify Foundation said the initiative also showed that human experts were essential at every stage of testing, from designing the right tests to interpreting test results. While the technology may improve in the future, a human touch is still needed for now, said Mr Shameek. 'The technology is not good enough for us to blindly trust and say it's working,' he said. A report detailing the findings is available on AI Verify Foundation's website. In line with the pilot, a testing starter kit for GenAI applications has also been developed, serving as a set of voluntary guidelines for businesses that want to responsibly adopt GenAI. 'It draws on insights from the Global AI Assurance Pilot, tapping the experience of practitioners to ensure the guidance is practical and useful,' said Mr Tan. He added that the kit includes emerging best practices and methodologies for testing GenAI applications, as well as practical guidance on how to conduct such testing. The guidelines will be complemented by testing tools to help developers conduct these tests, which will be made progressively available via IMDA and AI Verify Foundation's Project Moonshot, a toolkit targeted at AI app developers. IMDA is conducting a four-week public consultation on the starter kit, which can be found online. The consultation will end on June 25. Feedback can be e-mailed to aigov@ with the e-mail header 'Comments on the draft Starter Kit for Safety Testing of LLM-Based Applications'. Mr Tan also announced that AI Singapore (AISG) – a national initiative to build the Republic's capabilities in AI – will sign a memorandum of understanding with the United Nations Development Programme (UNDP) to advance AI literacy in developing countries. This partnership will see AISG's AI for Good programme, launched in 2024 to bolster national AI capabilities, expand to an international scale, he said. 'AISG and UNDP will explore initial AI for Good pilots in South-east Asia, the Caribbean and the Pacific Islands, so that we can support more inclusive participation in AI-driven growth together,' he added. [[nid:714736]] This article was first published in The Straits Times. Permission required for reproduction.


CNA
29-05-2025
- Business
- CNA
Singapore releases tools and resources to help businesses test, develop AI safety
Singapore has released tools and resources to help businesses create and deploy AI in a safer way. It's part of an effort to turn the country into a hub for AI trust — something it has been working on since 2019. The Global AI Assurance Pilot pairs businesses that use generative AI with those that can test it. Meanwhile, the Testing Starter Kit is a 125-page document that identifies common ways that generative AI models can go wrong. Nicolas Ng reports.

Straits Times
29-05-2025
- Business
- Straits Times
Initiative by IMDA, AI Verify Foundation tests AI accuracy, trustworthiness in real-world scenarios
IMDA is conducting a four-week public consultation, which ends on June 25, on the starter kit, which can be found online. PHOTO: REUTERS SINGAPORE – Doctors at Changi General Hospital use generative artificial intelligence (GenAI) to summarise colonoscopy reports and provide recommendations on clinical management. But are these recommendations accurate? Meanwhile, regulatory technology firm Tookitaki uses GenAI to investigate potential money laundering and fraud cases. Are its findings trustworthy? Earlier in 2025, the Infocomm Media Development Authority (IMDA) and the AI Verify Foundation rolled out an initiative focused on real-world uses of GenAI to encourage the safe adoption of AI across various industries. The AI Verify Foundation is a not-for-profit subsidiary of IMDA which tackles pressing issues arising from artificial intelligence. Between March and May, 17 organisations across 10 different sectors – including human resources, healthcare and finance – had their GenAI applications assessed by specialist GenAI testing firms. The findings were published on May 29, marking Singapore's commitment to spearhead the development of global standards for the safe deployment of GenAI apps. The Global AI Assurance pilot , as the initiative is called, has allowed organisations to see how their GenAI applications perform under practical conditions, said Senior Minister of State for Digital Development and Information Tan Kiat How on May 29. He was speaking on the last day of the Asia Tech x Singapore conference, held at Capella Singapore. Clinical Associate Professor Chow Weien, chief data and digital officer at CGH, told The Straits Times that taking part in the initiative helped the hospital design a more robust and reliable way of testing its AI models. 'For example, we could assess whether our GenAI application was extracting the clinical information accurately from the doctor's colonoscopy report, and if the application was providing the correct recommendation, in line with the clinical guidelines,' he said. Tookitaki founder and chief executive Abhishek Chatterjee told ST the experience helped make the firm's AI model more auditable and allowed the company to incorporate guardrails against AI hallucinations. These are inaccurate or nonsensical results generated due to factors such as insufficient training data. While earlier initiatives had focused on the testing of AI models, the Global AI Assurance Pilot aimed to test the reliability of GenAI in real world scenarios, said AI Verify Foundation executive director Shameek Kundu. This is important as the information fed to AI can be flawed, he said, giving the example of poor quality scans from a patient provided to a hospital's AI. The aim is to make the use of GenAI 'boring and predictable', to ensure the technology's reliability for day-to-day use, he said. In a statement, IMDA and AI Verify Foundation said the initiative also showed human experts were essential at every stage of testing, from designing the right tests to interpreting the test results. While the technology may improve in the future, the human touch is still needed for now, said Mr Shameek. 'The technology is not good enough for us to blindly trust and say it's working,' he said. A report detailing the findings is available on the AI Verify Foundation's website. In line with the pilot, a testing starter kit for GenAI applications has also been developed, serving as a set of voluntary guidelines for businesses that want to responsibly adopt GenAI. 'It draws on insights from the Global AI Assurance Pilot, tapping on the experience of practitioners to ensure the guidance is practical and useful,' said Mr Tan. He added that the kit includes emerging best practices and methodologies for testing GenAI applications, as well as practical guidance on how to conduct such testing. The guidelines will be complemented by testing tools to help developers conduct these tests, which will be made progressively available via IMDA and AI Verify Foundation's Project Moonshot, a toolkit targeted at AI app developers. IMDA is conducting a four-week public consultation, which ends on June 25, on the starter kit, which can be found online. Feedback can be emailed to aigov@ with the email header: 'Comments on the draft Starter Kit for Safety Testing of LLM-Based Applications'. Mr Tan also announced that AI Singapore (AISG) – a national initiative to build the Republic's capabilities in artificial intelligence – will sign a memorandum of understanding with the United Nations Development Programme (UNDP) to advance AI literacy in developing countries. This partnership will see AISG's AI for Good programme, launched in 2024 to bolster national AI capabilities, expand to an international scale, he said. 'AISG and UNDP will explore initial AI for Good pilots in South-east Asia, the Caribbean, and the Pacific Islands, so that we can support more inclusive participation in AI-driven growth together,' he added. Join ST's Telegram channel and get the latest breaking news delivered to you.


Korea Herald
29-05-2025
- Business
- Korea Herald
Singapore Unveils Insights from World's First Technical Testing of Real World Applications of GenAI
SINGAPORE, May 29, 2025 /PRNewswire/ -- Singapore unveiled key insights from its Global AI Assurance Pilot, an initiative to catalyse emerging norms and best practices around technical testing of Generative AI (GenAI) applications. These insights then provided the blueprint for the world's first Testing Starter Kit for GenAI applications, which is now open for views. These global initiatives were announced by Mr Tan Kiat How, Senior Minister of State for Digital Development and Information at the ATxSummit 2025, the flagship event of Asia Tech x Singapore (ATxSG). These efforts put Singapore at the forefront of efforts to operationalise AI safety, accelerate trusted and responsible AI adoption and deployment, and promote international cooperation for AI that benefits all. 2. To encourage safe adoption of AI in industries, the Global AI Assurance Pilot was launched in February 2025, an initiative by the AI Verify Foundation (AIVF) and Infocomm Media Development Authority (IMDA) to catalyse emerging norms and best practices around technical testing of Gen AI applications. The pilot received strong interest from both local and international AI stakeholders, especially from companies deploying Gen AI in their business process. In the pilot, 16 specialist AI testers were paired with 17 deployers of real-world Gen AI applications – from 10 different industries including finance, healthcare, HR, people and public sectors. 3. An example of a key finding from the pilot was that Gen AI risks are often context-dependent (specific to industry, use case, culture, language and organisation). To narrow risks and tests for specific situations is a challenge and the recommendation is to involve subject matter experts throughout the application lifecycle. AIVF and IMDA will continue to work with industry to refine the pilot. Testing Starter Kit for Gen AI Apps 4. IMDA also announced plans to develop a first of its kind Testing Starter Kit for Gen AI applications. The Starter Kit generalises key insights from the Assurance Pilot and consultations with other practitioners to provide practical testing guidance for all businesses developing or leveraging GenAI applications, across sectors and use-cases. The Starter Kit provides a step-by-step guide on how to think about risks to be concerned about, highlighting common ones like hallucination, undesirable content, data disclosure, and vulnerability to adversarial prompts, and subsequently how to test the Gen AI applications. IMDA is calling for views from the industry on this Starter Kit on the testing guidance as well as recommended tests for the four identified risks. 5. The Starter Kit is complemented by testing tools such as Project Moonshot, which provides a platform enabling businesses to implement the testing guidance. The Starter Kit will continue to expand to address emerging risks and testing requirements in tandem with technological developments. 6. Both the Assurance Pilot and Starter Kit aim to uplift the capabilities of businesses in the safe deployment of Gen AI applications and build overall trust in the AI ecosystem. Singapore continues to harness AI for Public Good for Singapore and the World 7. Singapore believes in harnessing AI for Public Good and that AI can uplift economic potential, enhance social impact and meet the needs and challenges of our time. AI Singapore (AISG) will be signing a Memorandum of Understanding with the UNDP to advance AI literacy in six pilot countries aimed at closing the AI literacy divide and transforming communities in developing countries. This partnership will extend AISG's successful AI for Good (AI4Good) programme – initially launched in 2024 to bolster national AI capabilities across Asia – to an international scale. AISG's AI Student Developer Conference 8. The AI Student Developer Conference (AISDC) led by AI Singapore (AISG) brought together over 1,000 students and 60 industry partners in a two-day event dedicated to artificial intelligence innovation and talent development. A key highlight is the National AI Student Challenge (NAISC), where students from six ASEAN countries (Singapore, Indonesia, Malaysia, Thailand, Philippines, Vietnam) compete to tackle real world problems through LLM fine-tuning and prompt engineering. The conference has expanded to include ASEAN participation through its first regional challenge track, underlining Singapore's role in fostering AI talent development and collaboration across Southeast Asia. Mdm Rahayu Mahzam, Minister of State at the Ministry of Digital Development and Information will deliver the closing remarks at this event. Women in Tech 9. Mrs Josephine Teo, Minister for Digital Development and Information, spoke on a panel 'Success to Significance – Leaders Building Communities" which explored how successful Women in Tech can and have been playing impactful roles in "paying it forward" and creating equally successful communities of women. On the panel with Minister Teo were Ms Jane Sun, CEO of Ms Tan Su Shan, CEO of DBS Group, moderated by Professor Annie Koh, Professor Emeritus of Finance (Practice), Singapore Management University.