
Prudential Singapore partners with IMDA to drive SMEs' adoption of Generative AI
SINGAPORE - Media OutReach Newswire - 27 May 2025 - Prudential Singapore ('Prudential') has launched GenAI XPonential, a new programme in partnership with the Infocomm Media Development Authority (IMDA) to accelerate the adoption of Generative AI (GenAI) among small and medium-sized enterprises (SMEs). Unveiled by Senior Minister of State for Digital Development and Information, Mr Tan Kiat How, at ATxEnterprise - an Asia Tech x Singapore (ATxSG) event held today - the programme is in support of the Digital Enterprise Blueprint and aims to equip SMEs with practical knowledge and real-world use cases to strengthen AI adoption.
As part of the programme, SMEs will gain access to a series of up to 10 bite-sized explainer videos and up to four hands-on workshops co-created and hosted by Prudential and its Talent Engagement Ecosystem (TEE-Up) partner, Republic Polytechnic. The first two videos, on GenAI-enabled Customer Engagement Chatbots and GenAI-enabled Sales & Marketing Content Creation, will be available on IMDA's CTO-as-a-Service platform for SMEs, and the remaining videos will be rolled out progressively in the second half of 2025. The complementary workshops offered as part of the programme are conducted by Prudential's GenAI domain experts and Republic Polytechnic lecturers.
Prudential has been a long-standing supporter of SMEs through its SME Skills Accelerator Programme, which equips SMEs with the skills and resources to grow and innovate by upskilling and reskilling their employees. In 2022, the insurer worked with Ngee Ann Polytechnic and ST Engineering to produce a digital commerce playbook to help SMEs kickstart their digital journey in a safe and secure manner.
Mr Ben Tan, Chief Distribution Officer, Prudential Singapore, said: 'Having been a long-time supporter of SMEs, who are a key pillar of Singapore's economy, we are proud to deepen our commitment by enabling them to gain access to the latest technologies such as GenAI to fuel business growth. Through practical explainer videos and hands-on workshops conducted by us and Republic Polytechnic, one of our Talent Engagement Ecosystem partners, we aim to equip SMEs with the knowledge and skills to apply GenAI meaningfully in their businesses. These efforts are part of the GenAI XPonential programme, delivered in partnership with IMDA, to help SMEs innovate, grow, and stay competitive in today's digital economy.'
Mr Johnson Poh, Assistant Chief Executive, Sectoral Transformation Group, IMDA, added: 'In today's fast-evolving digital landscape, it is vital to equip our SMEs with the tools and knowledge to harness GenAI effectively. IMDA welcomes the collaboration with Prudential Singapore to ensure that our SMEs can navigate the complexities of this emerging technology, gain the confidence to use GenAI to boost productivity, and remain competitive in an AI-driven economy.'
Collaboration with students to bring GenAI to the fore
Students from Republic Polytechnic's Year 2 and 3 cohorts were engaged to develop the GenAI XPonential tech explainer videos. Guided by experts from IMDA and Prudential, these students explored real-world GenAI applications while honing their videography and editing skills. The students were chosen for their digital fluency and to encourage knowledge sharing: Singapore's youth are among the most active users of GenAI, with 80 per cent4 using the technology at least once a week for homework and other school-related tasks.
Ms Wong Wai Ling, Director, School of Infocomm, Republic Polytechnic, said: 'This collaboration exemplifies how industry and education can come together to empower both students and SMEs in the GenAI space. Our students had a unique opportunity to translate classroom learning into practical outcomes, co-creating resources that will help local businesses harness the potential of emerging technologies. We are proud to support Singapore's digital future by equipping youth with real-world skills while contributing to the nation's broader upskilling efforts.'
Mr Ben Tan added: 'By involving youth in the creation of educational content for SMEs, the broader initiative nurtures the next generation of AI creators who are confident in using new technologies and eager to drive change. It also encourages intergenerational learning, where students support SMEs in the digital economy, building a future-ready ecosystem grounded in knowledge sharing and innovation.'
This collaboration, supported by the National Youth Council, deepened students' understanding of emerging technologies, served as a platform for them to apply their skills in a real-world setting, and is part of youth outreach initiatives aimed at helping SMEs upskill.
4 Source: https://www.ceiglobal.org/work-and-insights/investigating-parent-views-teen-use-generative-ai
Hashtag: #PrudentialSingapore #IMDA #GenAIXPonential
https://www.prudential.com.sg/
https://www.linkedin.com/company/prudential-assurance-company-singapore
https://www.facebook.com/PrudentialSingapore/
The issuer is solely responsible for the content of this announcement.
About Prudential Assurance Company Singapore (Pte) Ltd (Prudential Singapore)
Prudential Assurance Company Singapore (Pte) Ltd is one of the top life and health insurance companies in Singapore, serving the financial and protection needs of the country's citizens for 94 years. The company has an AA- Financial Strength Rating from leading credit rating agency Standard & Poor's, with S$57.7 billion funds under management as at 31 December 2024. It delivers a suite of well-rounded product offerings in Protection, Savings and Investment through multiple distribution channels including a network of more than 5,400 financial representatives.
About Infocomm Media Development Authority
The Infocomm Media Development Authority (IMDA) leads Singapore's digital transformation by developing a vibrant digital economy and an inclusive digital society. As Architects of Singapore's Digital Future, we foster growth in Infocomm Technology and Media sectors in concert with progressive regulations, harnessing frontier technologies, and developing local talent and digital infrastructure ecosystems to establish Singapore as a digital metropolis.
For more news and information, visit www.imda.gov.sg or follow IMDA on LinkedIn (IMDAsg), Facebook (IMDAsg) and Instagram (@imdasg).
Related Articles
Yahoo - 2 hours ago
Elastic price target lowered to $110 from $135 at Wedbush
Wedbush analyst Daniel Ives lowered the firm's price target on Elastic (ESTC) to $110 from $135 and keeps an Outperform rating on the shares. The firm notes Elastic delivered Q4 results with strong beats on the top and bottom lines, but these will be largely overshadowed by FY26 revenue guidance that came in below Street expectations. Even as enterprises increasingly integrate GenAI into their operations, the company is seeing more consumption scrutiny due to the murky macro environment.


CNET - 3 hours ago
LLMs and AI Aren't the Same. Everything You Should Know About What's Behind Chatbots
Chances are, you've heard the term "large language models," or LLMs, when people talk about generative AI. But they aren't quite synonymous with brand-name chatbots like ChatGPT, Google Gemini, Microsoft Copilot, Meta AI and Anthropic's Claude. These AI chatbots can produce impressive results, but they don't actually understand the meaning of words the way we do. Instead, they're the interface we use to interact with large language models. These underlying technologies are trained to recognize how words are used and which words frequently appear together, so they can predict future words, sentences or paragraphs. Understanding how LLMs work is key to understanding how AI works. And as AI becomes increasingly common in our daily online experiences, that's something you ought to know. This is everything you need to know about LLMs and what they have to do with AI.

What is a language model?

You can think of a language model as a soothsayer for words. "A language model is something that tries to predict what language looks like that humans produce," said Mark Riedl, professor in the Georgia Tech School of Interactive Computing and associate director of the Georgia Tech Machine Learning Center. "What makes something a language model is whether it can predict future words given previous words." This is the basis of autocomplete functionality when you're texting, as well as of AI chatbots.

What is a large language model?

A large language model contains vast amounts of words from a wide array of sources. These models are measured in what are known as "parameters." So, what's a parameter? LLMs use neural networks, which are machine learning models that take an input and perform mathematical calculations to produce an output. The number of variables in these computations is the number of parameters, and a large language model can have 1 billion parameters or more. "We know that they're large when they produce a full paragraph of coherent fluid text," Riedl said.
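Riedl's definition -- "predict future words given previous words" -- can be illustrated with a toy word-count model. This is a minimal sketch for intuition only: real LLMs use neural networks with billions of parameters, not a count table, and the corpus here is made up.

```python
from collections import defaultdict, Counter

def train_bigram(corpus):
    """Count which word follows which: a toy 'language model'."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(model, word):
    """Predict the most frequent follower of `word`, or None if unseen."""
    followers = model.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

corpus = [
    "I went sailing on the deep blue sea",
    "the deep blue sea was calm",
    "we swam in the blue sea",
]
model = train_bigram(corpus)
print(predict_next(model, "blue"))  # prints "sea"
```

Autocomplete on a phone works on the same principle, just with a far richer model of which words tend to follow which.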
How do large language models learn?

LLMs learn via a core AI process called deep learning. "It's a lot like when you teach a child -- you show a lot of examples," said Jason Alan Snyder, global CTO of ad agency Momentum Worldwide. In other words, you feed the LLM a library of content (known as training data) such as books, articles, code and social media posts to help it understand how words are used in different contexts, and even the subtler nuances of language. The data collection and training practices of AI companies are the subject of some controversy and some lawsuits. Publishers like The New York Times, artists and other content catalog owners allege that tech companies have used their copyrighted material without the necessary permissions. (Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed on Ziff Davis copyrights in training and operating its AI systems.)

AI models digest far more than a person could ever read in a lifetime -- something on the order of trillions of tokens. Tokens help AI models break down and process text. You can think of an AI model as a reader who needs help. The model breaks a sentence down into smaller pieces, or tokens -- each equivalent to about four characters of English, or roughly three-quarters of a word -- so it can understand each piece and then the overall meaning. From there, the LLM can analyze how words connect and determine which words often appear together. "It's like building this giant map of word relationships," Snyder said. "And then it starts to be able to do this really fun, cool thing, and it predicts what the next word is … and it compares the prediction to the actual word in the data and adjusts the internal map based on its accuracy." This prediction and adjustment happens billions of times, so the LLM is constantly refining its understanding of language and getting better at identifying patterns and predicting future words.
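The rules of thumb above (a token is about four characters of English, or roughly three-quarters of a word) can be turned into a rough token estimator. This is only an approximation of the heuristic, not a real tokenizer -- production systems use trained tokenizers such as byte-pair encoding, and actual counts vary by model.

```python
def estimate_tokens(text):
    """Rough token estimate using the ~4-characters-per-token rule of thumb."""
    return max(1, round(len(text) / 4))

def estimate_tokens_by_words(text):
    """Alternative estimate: a token is about three-quarters of a word,
    so the token count is roughly the word count times 4/3."""
    return max(1, round(len(text.split()) * 4 / 3))

sentence = "Large language models break text into tokens before processing it."
print(estimate_tokens(sentence))
print(estimate_tokens_by_words(sentence))
```

Estimates like these are handy for budgeting how much text fits in a model's context window before paying for an API call.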
It can even learn concepts and facts from the data to answer questions, generate creative text formats and translate languages. But LLMs don't understand the meaning of words the way we do -- all they know are the statistical relationships. LLMs also learn to improve their responses through reinforcement learning from human feedback. "You get a judgment or a preference from humans on which response was better given the input that it was given," said Maarten Sap, assistant professor at the Language Technologies Institute at Carnegie Mellon University. "And then you can teach the model to improve its responses."

LLMs are good at handling some tasks but not others.

What do large language models do?

Given a series of input words, an LLM will predict the next word in a sequence. For example, consider the phrase, "I went sailing on the deep blue..." Most people would probably guess "sea," because sailing, deep and blue are all words we associate with the sea. In other words, each word sets up context for what should come next. "These large language models, because they have a lot of parameters, can store a lot of patterns," Riedl said. "They are very good at being able to pick out these clues and make really, really good guesses at what comes next."

What are the different kinds of language models?

There are a couple of sub-categories you might have heard of, like small, reasoning and open-source/open-weights models. Some of these models are multimodal, which means they are trained not just on text but also on images, video and audio. They are all language models and perform the same functions, but there are some key differences you should know.

Is there such a thing as a small language model?

Yes. Tech companies like Microsoft have introduced smaller models that are designed to operate "on device" and not require the same computing resources that an LLM does, but that nevertheless help users tap into the power of generative AI.
What are AI reasoning models?

Reasoning models are a kind of LLM. These models give you a peek behind the curtain at a chatbot's train of thought while it answers your questions. You might have seen this process if you've used DeepSeek, a Chinese AI chatbot.

But what about open-source and open-weights models?

Still LLMs! These models are designed to be a bit more transparent about how they work. Open-source models let anyone see how the model was built, and they're typically available for anyone to customize and build on. Open-weights models give us some insight into how the model weighs specific characteristics when making decisions.
What do large language models do really well?

LLMs are very good at figuring out the connection between words and producing text that sounds natural. "They take an input, which can often be a set of instructions, like 'Do this for me,' or 'Tell me about this,' or 'Summarize this,' and are able to extract those patterns out of the input and produce a long string of fluid response," Riedl said. But they have several weaknesses.

Where do large language models struggle?

First, they're not good at telling the truth. In fact, they sometimes just make stuff up that sounds true, like when ChatGPT cited six fake court cases in a legal brief or when Google's Bard (the predecessor to Gemini) mistakenly credited the James Webb Space Telescope with taking the first pictures of a planet outside our solar system. Those are known as hallucinations. "They are extremely unreliable in the sense that they confabulate and make up things a lot," Sap said. "They're not trained or designed by any means to spit out anything truthful."
They also struggle with queries that are fundamentally different from anything they've encountered before. That's because they're focused on finding and responding to patterns. A good example is a math problem with a unique set of numbers. "It may not be able to do that calculation correctly because it's not really solving math," Riedl said. "It is trying to relate your math question to previous examples of math questions that it has seen before." While they excel at predicting words, they're not good at predicting the future, which includes planning and decision-making. "The idea of doing planning in the way that humans do it with … thinking about the different contingencies and alternatives and making choices, this seems to be a really hard roadblock for our current large language models right now," Riedl said.

Finally, they struggle with current events, because their training data typically only goes up to a certain point in time; anything that happens after that isn't part of their knowledge base. Because they don't have the capacity to distinguish between what is factually true and what is likely, they can confidently provide incorrect information about current events. They also don't interact with the world the way we do. "This makes it difficult for them to grasp the nuances and complexities of current events that often require an understanding of context, social dynamics and real-world consequences," Snyder said.

How are LLMs integrated with search engines?

We're seeing retrieval capabilities evolve beyond what the models have been trained on, including connecting with search engines like Google so the models can conduct web searches and then feed those results into the LLM. This means they can better understand queries and provide responses that are more timely. "This helps our language models stay current and up-to-date because they can actually look at new information on the internet and bring that in," Riedl said.
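The search-engine integration described here is commonly called retrieval-augmented generation (RAG): search results are retrieved and prepended to the model's prompt so it can answer from fresh sources. A minimal sketch of the prompt-assembly step follows; `web_search` is a stub standing in for a real search API, and all names and snippets here are hypothetical.

```python
def web_search(query, top_k=2):
    """Stub standing in for a real search API call (hypothetical data)."""
    fake_index = {
        "LLM": "LLMs predict the next token from previous tokens.",
        "training cutoff": "Models only know data up to their training cutoff.",
    }
    return [snippet for key, snippet in fake_index.items()
            if key.lower() in query.lower()][:top_k]

def build_augmented_prompt(question):
    """Prepend retrieved snippets so the model can answer with fresh context."""
    snippets = web_search(question)
    context = "\n".join(f"- {s}" for s in snippets)
    return (f"Use the sources below to answer.\nSources:\n{context}\n\n"
            f"Question: {question}\nAnswer:")

prompt = build_augmented_prompt("What is an LLM training cutoff?")
print(prompt)
```

The assembled prompt would then be sent to the LLM; the retrieval step sidesteps the training-cutoff problem, though, as the article notes, it also makes source reliability the model's problem.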
That was the goal, for instance, a while back with AI-powered Bing. Instead of tapping into search engines to enhance its responses, Microsoft looked to AI to improve its own search engine, in part by better understanding the true meaning behind consumer queries and better ranking the results for said queries. Last November, OpenAI introduced ChatGPT Search, with access to information from some news publishers. But there are catches. Web search could make hallucinations worse without adequate fact-checking mechanisms in place. And LLMs would need to learn how to assess the reliability of web sources before citing them. Google learned that the hard way with the error-prone debut of its AI Overviews search results. The search company subsequently refined its AI Overviews results to reduce misleading or potentially dangerous summaries. But even recent reports have found that AI Overviews can't consistently tell you what year it is. For more, check out our experts' list of AI essentials and the best chatbots for 2025.
Yahoo - 3 hours ago
bolttech integrates AWS Gen AI to enhance efficiency
Singaporean insurtech company bolttech has integrated AWS generative AI solutions to lower its operating costs and personalise customer services. The platform, "bolttech Gen AI Factory," has been developed using Amazon Bedrock and enhances the existing call centre platform, already operating on Amazon Connect and Amazon Lex. Gen AI Factory adds speech-to-speech capabilities to bolttech's chatbots, enabling multilingual, natural conversations with customers, and allows teams to create Gen AI applications across the insurance value chain. The pilot, which began with Korean, provides real-time responses to insurance policy queries, catering to both simple and complex enquiries. By automating basic activities such as claims processing, it frees bolttech's agents to focus on operational efficiency and customer engagement. With AWS's infrastructure, bolttech aims to use AI for better risk assessment, early warning systems for threats, and AI-driven virtual assistants for claims processing. AWS Singapore country manager Priscilla Chong stated: 'bolttech is an example of a company delivering innovation using generative AI to enhance customer experiences, improve operational efficiency, and drive innovation in how insurance services are delivered and consumed globally. We're thrilled to support bolttech, and we're pleased that AWS's choice-based, model-agnostic approach is delivering AI-powered convenience to discerning customers at the forefront of the AI revolution.' The company is already using AWS tools to expedite time to market, reporting an over 50% reduction in time spent updating code documentation files with Amazon Q Developer. bolttech Asia CEO Philip Weiner said: 'At bolttech, we remain steadfast in our vision to connect people with more ways to protect the things they value. To achieve this at scale, we rely on the right data and AI infrastructure.
AWS's cloud computing and Gen AI services, including Amazon Bedrock, provide the foundation to access diverse model choices, superior price-performance, and robust enterprise trust and safety features that align perfectly with our needs.' Earlier in May 2025, bolttech entered a joint venture with Sumitomo Corporation to offer device protection in Asia. "bolttech integrates AWS Gen AI to enhance efficiency" was originally created and published by Life Insurance International, a GlobalData owned brand. The information on this site has been included in good faith for general informational purposes only. It is not intended to amount to advice on which you should rely, and we give no representation, warranty or guarantee, whether express or implied, as to its accuracy or completeness. You must obtain professional or specialist advice before taking, or refraining from, any action on the basis of the content on our site.