Rethinking AI skilling: From awareness to practical adoption

Business Times, 12-06-2025
ARTIFICIAL intelligence (AI) adoption has undoubtedly evolved from an intriguing technological frontier to an indispensable strategic asset, poised to redefine business success in the digital age.
Popular AI tools such as ChatGPT and Microsoft Copilot now offer dedicated accounts for organisations, reflecting growing corporate interest in integrating AI into mainstream business functions. Some 92 per cent of companies also plan to increase their AI investments over the next three years, according to a report by consulting firm McKinsey.
Yet, despite these exciting developments, many organisations struggle to link corporate AI training programmes to tangible business outcomes.
A critical observation across industries is that corporate AI training frequently misses the mark by offering overly theoretical, generic content disconnected from actual business needs.
One of SGTech's members, a cybersecurity small and medium-sized enterprise, initially found value in basic AI learning platforms that later proved to lack depth in architecture adaptation, performance optimisation and business integration.
As a result, they encountered significant difficulties when the AI models needed to be modified. The experience showed that effective AI deployment demands more than online self-learning or technical experimentation; it also requires guided, use-case-specific training that addresses real-world constraints and delivers measurable impact.
Beyond that, there is a persistent misconception that AI training is relevant only for technical roles. Yet, as AI-driven tools increasingly reshape industries – from finance, where predictive analytics dramatically enhance risk management, to healthcare, where generative AI (GenAI) assists in diagnostics – AI literacy is becoming essential across all organisational levels. When employees across departments are empowered with relevant AI knowledge, organisations can then fully realise the potential of their technology investments.
Companies also tend to focus heavily on training for large language models, while overlooking the broader spectrum of AI and analytics that have been in use for years. Predictive AI, for instance, remains highly effective for use cases involving probability estimation, outcome classification and decision support.
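To make the distinction concrete, the following is a minimal, hypothetical sketch in Python (using scikit-learn) of predictive AI in this mould: estimating a probability, classifying an outcome and supporting a decision through a threshold. The data, feature names and threshold below are illustrative assumptions, not examples drawn from the article.

# Illustrative sketch only: a hypothetical churn model showing how predictive
# AI supports probability estimation, classification and decision support.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=0)

# Hypothetical features, e.g. monthly usage, support tickets, tenure
X = rng.normal(size=(500, 3))
# Hypothetical label: 1 = customer churned, 0 = customer retained
y = (X[:, 0] - 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Probability estimation: how likely is each customer to churn?
churn_prob = model.predict_proba(X_test)[:, 1]

# Outcome classification and decision support: flag accounts above a
# business-chosen risk threshold for a retention offer
AT_RISK = 0.7
flagged = churn_prob >= AT_RISK
print(f"Flagged {flagged.sum()} of {len(flagged)} accounts for follow-up")

Deliberately simple as this is, it illustrates why predictive models remain a strong fit for structured business decisions even as attention shifts to generative tools.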
Organisations that strategically select and deploy AI solutions tailored to their specific challenges and opportunities are more likely to realise tangible value from employee-training initiatives. Avoiding trend-driven adoption and prioritising alignment with business objectives can significantly improve the return on investment of their digital transformation efforts.
The pathway to making corporate AI skilling count
To bridge the persistent gap between training and measurable business outcomes, a shift towards business-oriented, application-first AI skilling is essential. Training providers should co-develop curricula with industry partners, focusing on modular, experiential learning, practical case studies and essential soft skills such as ethical decision-making.
As a starting point, a resource guide for businesses was created in conjunction with AI Singapore, SkillsFuture Singapore and strategy consulting firm TalentKraft.
This guide identifies crucial AI-related competencies required across roles, from developers to end users, based on insights gathered from 30 companies. It also includes a reference workflow for companies intending to adopt and deploy GenAI solutions, helping organisations embrace and integrate such technologies effectively in the workplace.
Organisations themselves must also integrate AI training strategically into workforce development, instead of turning to ad hoc upskilling only when the need arises. One way is to create role-based learning paths that tailor training to specific roles. For instance, business leaders can be trained in AI fluency, tech teams can pursue deeper, hands-on technical learning, while human resources and legal departments need to understand the ethical use of AI across the organisation.
Protected learning time can also be set aside, giving all employees the room to build skills in their respective areas. Establishing mentorship programmes and in-house AI centres of excellence can further cultivate employee AI capabilities.
Ultimately, organisations that achieve the greatest impact and return on AI investment are those that thoughtfully balance cutting-edge generative technologies with well-established predictive tools, and precisely align each with their strategic business objectives.
Working in tandem
At the broader ecosystem level, collaboration is key. Governments can play a pivotal role by defining clear national standards for AI competencies, offering targeted financial incentives and encouraging robust public-private partnerships.
Singapore's AI talent initiatives, designed to foster industry-academia collaboration, exemplify this strategic alignment, and are essential to nurturing AI-ready talent pools that meet real-world business demands.
Fundamentally, achieving meaningful AI adoption demands a mindset shift within organisations. Employers should embed AI as a central pillar of their strategic operations, rather than treating it as merely a technology-driven initiative. Employees, for their part, should approach AI upskilling with curiosity and adaptability, recognising AI as an empowering extension of their professional capabilities, rather than a potential threat.
To truly unlock the transformative potential of AI, businesses must move beyond generic training and adopt a strategic, application-driven approach. By aligning AI training with real-world use cases and core operational goals, organisations can turn capability-building into a powerful driver of long-term, measurable success.
Nicholas Lee is chair of SGTech, a trade association for Singapore's tech industry. Lim Hsin Yin is chairwoman of SGTech's AI skills and training committee.