Rethinking AI skilling: From awareness to practical adoption

Business Times | 21 hours ago

ARTIFICIAL intelligence (AI) adoption has undoubtedly evolved from an intriguing technological frontier to an indispensable strategic asset, poised to redefine business success in the digital age.
Popular AI tools such as ChatGPT and Microsoft Copilot now offer organisations proprietary accounts, reflecting increased corporate interest in integrating AI into mainstream corporate functions. Some 92 per cent of companies also plan to increase their AI investments over the next three years, according to a report by consulting firm McKinsey.
Yet, despite these exciting developments, many organisations encounter a disconnect in marrying corporate AI training programmes with tangible business outcomes.
A critical observation across industries is that corporate AI training frequently misses the mark by offering overly theoretical, generic content disconnected from actual business needs.
One of SGTech's members, a cybersecurity small and medium-sized enterprise, initially found value in basic AI learning platforms, but these later proved to lack depth in architecture adaptation, performance optimisation and business integration.
As a result, the company encountered significant difficulties when its AI models needed to be modified. The experience showed that effective AI deployment demands more than online self-learning or technical experimentation; it also requires guided, use-case-specific training that addresses real-world constraints and delivers measurable impact.
Beyond that, there is a persistent misconception that AI training is relevant only for technical roles. Yet, as AI-driven tools increasingly reshape industries – from finance, where predictive analytics dramatically enhance risk management, to healthcare, where generative AI (GenAI) assists in diagnostics – AI literacy is becoming essential across all organisational levels. When employees across departments are empowered with relevant AI knowledge, organisations can then fully realise the potential of their technology investments.
Companies also tend to focus heavily on training for large language models, while overlooking the broader spectrum of AI and analytics that have been in use for years. Predictive AI, for instance, remains highly effective for use cases involving probability estimation, outcome classification and decision support.
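As a purely illustrative sketch (not drawn from this article), the snippet below shows what a simple predictive-AI workflow for outcome classification and probability estimation might look like in practice; the customer-churn scenario, file name and column names are assumptions for illustration only.

```python
# Illustrative only: a minimal predictive-AI workflow for outcome
# classification and probability estimation, using scikit-learn on a
# hypothetical customer-churn dataset (file and column names are assumed).
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("customer_history.csv")  # hypothetical dataset
X = df[["tenure_months", "monthly_spend", "support_tickets"]]  # assumed features
y = df["churned"]  # assumed binary outcome column

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Probability estimates can feed decision support, e.g. ranking which
# customers to contact first.
churn_probability = model.predict_proba(X_test)[:, 1]
print("Hold-out ROC AUC:", round(roc_auc_score(y_test, churn_probability), 3))
```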
Organisations that strategically select and deploy AI solutions tailored to their specific challenges and opportunities are more likely to realise tangible value from employee-training initiatives. Avoiding trend-driven adoption and prioritising alignment with business objectives can significantly improve the return on investment of their digital transformation efforts.
The pathway to making corporate AI skilling count
To bridge the persistent gap between training and measurable business outcomes, a shift towards business-oriented, application-first AI skilling is essential. Training providers should co-develop curricula with industry partners, focusing on modular, experiential learning, practical case studies and essential soft skills such as ethical decision-making.
As a starting point, a resource guide for businesses was created in conjunction with AI Singapore, SkillsFuture Singapore and strategy consulting firm TalentKraft.
This guide identifies crucial AI-related competencies required across roles, from developers to end users, based on insights gathered from 30 companies. It also includes a reference workflow for companies intending to adopt and deploy GenAI solutions, helping organisations embrace and integrate such technologies effectively within the workplace.
Organisations themselves must also integrate AI training strategically into workforce development, rather than turning to ad hoc upskilling only when the need arises. One way is to create role-based learning paths that tailor training to specific roles. For instance, business leaders can be trained in AI fluency, tech teams can focus on deeper, hands-on technical learning, while human resources and legal departments need to understand the ethical use of AI across the organisation.
Protected learning time can also be implemented to ensure all employees can build the skills needed in their respective areas. Establishing mentorship programmes and in-house AI centres of excellence can further cultivate employee AI capabilities.
Ultimately, organisations that achieve the greatest impact and return on AI investment are those that thoughtfully balance cutting-edge generative technologies with well-established predictive tools, and precisely align each with their strategic business objectives.
Working in tandem
At the broader ecosystem level, collaboration is key. Governments can play a pivotal role by defining clear national standards for AI competencies, offering targeted financial incentives and encouraging robust public-private partnerships.
Singapore's AI talent initiatives, designed to foster industry-academia collaboration, exemplify this strategic alignment, and are essential to nurturing AI-ready talent pools that meet real-world business demands.
Fundamentally, achieving meaningful AI adoption demands a mindset shift within organisations. Employers should embed AI as a central pillar of their strategic operations, rather than treating it as merely a technology-driven initiative. Employees, for their part, should approach AI upskilling with curiosity and adaptability, recognising AI as an empowering extension of their professional capabilities, rather than a potential threat.
To truly unlock the transformative potential of AI, businesses must move beyond generic training and adopt a strategic, application-driven approach. By aligning AI training with real-world use cases and core operational goals, organisations can turn capability-building into a powerful driver of long-term, measurable success.
Nicholas Lee is chair of SGTech, a trade association for Singapore's tech industry. Lim Hsin Yin is chairwoman of SGTech's AI skills and training committee.
