Latest news with #AItesting
Yahoo
29-05-2025
- Business
- Yahoo
Singapore Unveils Insights from World's First Technical Testing of Real World Applications of GenAI
First Global AI Assurance Pilot brought global testing companies and GenAI deployers to test real world uses World's first Starter Kit for Gen AI applications to provide practical guidance to industry AI Singapore and the UN Development Programme to strengthen AI literacy and expand AI access and opportunity SINGAPORE, May 29, 2025 /PRNewswire/ -- Singapore unveiled key insights from its Global AI Assurance Pilot, an initiative to catalyse emerging norms and best practices around technical testing of Generative AI (GenAI) applications. These insights then provided the blueprint for the world's first Testing Starter Kit for GenAI applications, which is now open for views. These global initiatives were announced by Mr Tan Kiat How, Senior Minister of State for Digital Development and Information at the ATxSummit 2025, the flagship event of Asia Tech x Singapore (ATxSG). These efforts put Singapore at the forefront of efforts to operationalise AI safety, accelerate trusted and responsible AI adoption and deployment, and promote international cooperation for AI that benefits all. Global AI Assurance Pilot 2. To encourage safe adoption of AI in industries, the Global AI Assurance Pilot was launched in February 2025, an initiative by the AI Verify Foundation (AIVF) and Infocomm Media Development Authority (IMDA) to catalyse emerging norms and best practices around technical testing of Gen AI applications. The pilot received strong interest from both local and international AI stakeholders, especially from companies deploying Gen AI in their business process. In the pilot, 16 specialist AI testers were paired with 17 deployers of real-world Gen AI applications – from 10 different industries including finance, healthcare, HR, people and public sectors. 3. An example of a key finding from the pilot was that Gen AI risks are often context-dependent (specific to industry, use case, culture, language and organisation). To narrow risks and tests for specific situations is a challenge and the recommendation is to involve subject matter experts throughout the application lifecycle. AIVF and IMDA will continue to work with industry to refine the pilot. Testing Starter Kit for Gen AI Apps 4. IMDA also announced plans to develop a first of its kind Testing Starter Kit for Gen AI applications. The Starter Kit generalises key insights from the Assurance Pilot and consultations with other practitioners to provide practical testing guidance for all businesses developing or leveraging GenAI applications, across sectors and use-cases. The Starter Kit provides a step-by-step guide on how to think about risks to be concerned about, highlighting common ones like hallucination, undesirable content, data disclosure, and vulnerability to adversarial prompts, and subsequently how to test the Gen AI applications. IMDA is calling for views from the industry on this Starter Kit on the testing guidance as well as recommended tests for the four identified risks. 5. The Starter Kit is complemented by testing tools such as Project Moonshot, which provides a platform enabling businesses to implement the testing guidance. The Starter Kit will continue to expand to address emerging risks and testing requirements in tandem with technological developments. 6. Both the Assurance Pilot and Starter Kit aim to uplift the capabilities of businesses in the safe deployment of Gen AI applications and build overall trust in the AI ecosystem. Singapore continues to harness AI for Public Good for Singapore and the World 7. Singapore believes in harnessing AI for Public Good and that AI can uplift economic potential, enhance social impact and meet the needs and challenges of our time. AI Singapore (AISG) will be signing a Memorandum of Understanding with the UNDP to advance AI literacy in six pilot countries aimed at closing the AI literacy divide and transforming communities in developing countries. This partnership will extend AISG's successful AI for Good (AI4Good) programme – initially launched in 2024 to bolster national AI capabilities across Asia – to an international scale. AISG's AI Student Developer Conference 8. The AI Student Developer Conference (AISDC) led by AI Singapore (AISG) brought together over 1,000 students and 60 industry partners in a two-day event dedicated to artificial intelligence innovation and talent development. A key highlight is the National AI Student Challenge (NAISC), where students from six ASEAN countries (Singapore, Indonesia, Malaysia, Thailand, Philippines, Vietnam) compete to tackle real world problems through LLM fine-tuning and prompt engineering. The conference has expanded to include ASEAN participation through its first regional challenge track, underlining Singapore's role in fostering AI talent development and collaboration across Southeast Asia. Mdm Rahayu Mahzam, Minister of State at the Ministry of Digital Development and Information will deliver the closing remarks at this event. Women in Tech 9. Mrs Josephine Teo, Minister for Digital Development and Information, spoke on a panel 'Success to Significance – Leaders Building Communities" which explored how successful Women in Tech can and have been playing impactful roles in "paying it forward" and creating equally successful communities of women. On the panel with Minister Teo were Ms Jane Sun, CEO of Ms Tan Su Shan, CEO of DBS Group, moderated by Professor Annie Koh, Professor Emeritus of Finance (Practice), Singapore Management University. Contact: View original content to download multimedia: SOURCE Infocomm Media Development Authority of Singapore
Yahoo
29-05-2025
- Business
- Yahoo
Singapore Unveils Insights from World's First Technical Testing of Real World Applications of GenAI
First Global AI Assurance Pilot brought global testing companies and GenAI deployers to test real world uses World's first Starter Kit for Gen AI applications to provide practical guidance to industry AI Singapore and the UN Development Programme to strengthen AI literacy and expand AI access and opportunity SINGAPORE, May 29, 2025 /PRNewswire/ -- Singapore unveiled key insights from its Global AI Assurance Pilot, an initiative to catalyse emerging norms and best practices around technical testing of Generative AI (GenAI) applications. These insights then provided the blueprint for the world's first Testing Starter Kit for GenAI applications, which is now open for views. These global initiatives were announced by Mr Tan Kiat How, Senior Minister of State for Digital Development and Information at the ATxSummit 2025, the flagship event of Asia Tech x Singapore (ATxSG). These efforts put Singapore at the forefront of efforts to operationalise AI safety, accelerate trusted and responsible AI adoption and deployment, and promote international cooperation for AI that benefits all. Global AI Assurance Pilot 2. To encourage safe adoption of AI in industries, the Global AI Assurance Pilot was launched in February 2025, an initiative by the AI Verify Foundation (AIVF) and Infocomm Media Development Authority (IMDA) to catalyse emerging norms and best practices around technical testing of Gen AI applications. The pilot received strong interest from both local and international AI stakeholders, especially from companies deploying Gen AI in their business process. In the pilot, 16 specialist AI testers were paired with 17 deployers of real-world Gen AI applications – from 10 different industries including finance, healthcare, HR, people and public sectors. 3. An example of a key finding from the pilot was that Gen AI risks are often context-dependent (specific to industry, use case, culture, language and organisation). To narrow risks and tests for specific situations is a challenge and the recommendation is to involve subject matter experts throughout the application lifecycle. AIVF and IMDA will continue to work with industry to refine the pilot. Testing Starter Kit for Gen AI Apps 4. IMDA also announced plans to develop a first of its kind Testing Starter Kit for Gen AI applications. The Starter Kit generalises key insights from the Assurance Pilot and consultations with other practitioners to provide practical testing guidance for all businesses developing or leveraging GenAI applications, across sectors and use-cases. The Starter Kit provides a step-by-step guide on how to think about risks to be concerned about, highlighting common ones like hallucination, undesirable content, data disclosure, and vulnerability to adversarial prompts, and subsequently how to test the Gen AI applications. IMDA is calling for views from the industry on this Starter Kit on the testing guidance as well as recommended tests for the four identified risks. 5. The Starter Kit is complemented by testing tools such as Project Moonshot, which provides a platform enabling businesses to implement the testing guidance. The Starter Kit will continue to expand to address emerging risks and testing requirements in tandem with technological developments. 6. Both the Assurance Pilot and Starter Kit aim to uplift the capabilities of businesses in the safe deployment of Gen AI applications and build overall trust in the AI ecosystem. Singapore continues to harness AI for Public Good for Singapore and the World 7. Singapore believes in harnessing AI for Public Good and that AI can uplift economic potential, enhance social impact and meet the needs and challenges of our time. AI Singapore (AISG) will be signing a Memorandum of Understanding with the UNDP to advance AI literacy in six pilot countries aimed at closing the AI literacy divide and transforming communities in developing countries. This partnership will extend AISG's successful AI for Good (AI4Good) programme – initially launched in 2024 to bolster national AI capabilities across Asia – to an international scale. AISG's AI Student Developer Conference 8. The AI Student Developer Conference (AISDC) led by AI Singapore (AISG) brought together over 1,000 students and 60 industry partners in a two-day event dedicated to artificial intelligence innovation and talent development. A key highlight is the National AI Student Challenge (NAISC), where students from six ASEAN countries (Singapore, Indonesia, Malaysia, Thailand, Philippines, Vietnam) compete to tackle real world problems through LLM fine-tuning and prompt engineering. The conference has expanded to include ASEAN participation through its first regional challenge track, underlining Singapore's role in fostering AI talent development and collaboration across Southeast Asia. Mdm Rahayu Mahzam, Minister of State at the Ministry of Digital Development and Information will deliver the closing remarks at this event. Women in Tech 9. Mrs Josephine Teo, Minister for Digital Development and Information, spoke on a panel 'Success to Significance – Leaders Building Communities" which explored how successful Women in Tech can and have been playing impactful roles in "paying it forward" and creating equally successful communities of women. On the panel with Minister Teo were Ms Jane Sun, CEO of Ms Tan Su Shan, CEO of DBS Group, moderated by Professor Annie Koh, Professor Emeritus of Finance (Practice), Singapore Management University. Contact: View original content to download multimedia: SOURCE Infocomm Media Development Authority of Singapore Melden Sie sich an, um Ihr Portfolio aufzurufen.


Fast Company
27-05-2025
- Business
- Fast Company
Achieve scalable test automation with AI-native testing
Most testing strategies collapse under the weight of modern software development demands. But speed requirements continue to increase while application complexity grows, which creates an impossible equation for traditional approaches. AI-native testing is a fundamental reimagination of quality assurance (QA) at scale. After working with hundreds of organizations for testing upgrades, I've noticed one consistent pattern: Test automation used to break at scale, however, that's not the case anymore. With AI-native testing capabilities, teams could amplify developer experience and QA efficiency, ultimately accelerating the release velocity without compromising the quality of product. Every quality strategy exists somewhere on what I call the scale-intelligence matrix. Picture this: Bottom-Left Quadrant: Manual testing provides intelligence, but lacks scale. Top-Left Quadrant: Traditional automation offers scale without adaptability. Bottom-Right Quadrant: Exploratory testing delivers insights without consistency. Top-Right Quadrant: AI-driven testing combines scale with intelligence. Organizations struggling with testing effectiveness inevitably discover they've optimized for just one dimension—either scale or intelligence—but never both simultaneously. Four essential pillars help organizations move into that coveted top-right quadrant: 1. STRATEGIC FOUNDATION Most organizations attempt to layer AI onto existing frameworks and wonder why they hit walls. Scalable AI testing begins with reimagining your testing foundation from first principles. True scale emerges through architectural transformation: Domain-Focused Structure: Structure tests around business domains rather than application layers. Testing boundaries align with business functions, allowing independent scaling of different capabilities. Capturing Testing Intent: Recording actions limits adaptability. Capturing the 'why' behind each test creates space for AI to determine optimal execution paths as interfaces evolve. Moving Beyond Test Pass/Fail: Binary paradigms create brittleness. Confidence scoring reflects the reality of modern applications, quantifying behavior likelihood rather than enforcing absolute correctness. 2. INTELLIGENCE AMPLIFICATION Limiting AI to test execution represents the most common implementation mistake. Genuine scalability demands intelligence across the entire testing lifecycle: Autonomous Test Generation: Application structure, user journeys, and historical defects contain patterns AI can leverage to generate relevant tests—expanding coverage without expanding effort. A retail client discovered 23 critical edge cases previously overlooked when implementing autonomous generation. Dynamic Test Prioritization: Resources remain finite regardless of scale. AI continuously evaluates which tests deliver maximum value based on recent changes, historical failures, and business impact, ensuring optimal resource allocation. Predictive Analytics: Code changes, test patterns, and system behavior contain early signals of quality issues. Advanced AI models identify potential defect clusters before they manifest, shifting quality from reactive to proactive. 3. INFRASTRUCTURE UPGRADES AI strategies cannot exceed the limitations of infrastructure. Scalable AI testing requires a complete rethinking of the execution environment: Ephemeral Environments: Static, persistent test environments create bottlenecks. On-demand environments that spawn, execute, and disappear allow massively parallel testing without environment contention. Distributed Test Orchestration: Centralized execution hits scalability ceilings quickly. Decentralized nodes scaling horizontally under intelligent orchestration create virtually unlimited capacity. Real-Time Data Pipelines: Feedback delays cripple AI effectiveness. Streaming pipelines processing test results and system telemetry enable dynamic adaptation of testing strategies. 4. ORGANIZATIONAL TRANSFORMATION Technology transformation without corresponding organizational change leads to sophisticated solutions delivering minimal value. Successful implementations require: T-Shaped Expertise: Teams need both broad testing knowledge and deep AI specialization. Hybrid professionals bridge the gap between quality goals and AI capabilities. Decision Intelligence: Data without decision frameworks creates paralysis. Clear processes for interpreting AI insights and converting them to actions unlock the full value of testing intelligence. Learning Loops: AI systems improve through feedback. Structured processes for teams to validate, correct, and enhance AI-driven testing decisions create continuous improvement cycles. THE STAGES OF IMPLEMENTATION Scaling AI testing requires deliberate progression. Here are three stages you can expect to go through: Stage 1: Augmentation: Target specific high-value, low-risk capabilities like test maintenance or data generation. Quick wins build confidence while developing organizational expertise. Stage 2: Automation: Graduate to automating entire testing workflows, allowing AI to orchestrate complete testing cycles with human oversight. Stage 3: Autonomy: Self-optimizing testing systems continuously improve based on results and changing application needs, with humans focusing on strategy rather than execution. CONTINUOUS EVOLUTION Successful AI testing programs view quality as a continuous evolution rather than a fixed destination. Mature implementations incorporate: Capability Expansion: Regular evaluation of emerging AI capabilities with integration of those delivering maximum value. Model Refinement: Systematic improvement of AI models through new data, algorithms, and training approaches. Strategic Alignment: Regular reassessment of how testing AI supports broader business and technology objectives. THE PATH FORWARD Organizations that are able to achieve extraordinary results with AI testing have one fundamental perspective. They view AI as a transformation rather than just another cool adoption. Scaling with automation requires a lot more than just taking incremental steps. It needs reimagining QA for the different environments that we work in, and to also take into consideration speed, complexity, and the scale, and use that to grow relentlessly. If you think these strategic foundations, the amplified intelligence, the evolved infrastructure, and the transformed organizational workflows can help your organization break through the traditional constraints, I think it's absolutely worth taking the leap forward and improving QA at scale.