AI bias by design
Business Times, 2 days ago

THE promise of generative artificial intelligence (AI) is speed and scale, but the hidden cost may be analytical distortion.
A leaked system prompt from an AI assistant built by US startup Anthropic reveals how even well-tuned AI tools can reinforce cognitive and structural biases in investment analysis. For investment leaders exploring AI integration, understanding these risks is no longer optional.
Last month, a full 24,000-token system prompt purportedly belonging to Anthropic's Claude large language model (LLM) was leaked.
Unlike training data, a system prompt is a persistent, runtime directive layer that controls how LLMs such as ChatGPT and Claude format, limit, set the tone of, and contextualise every response. Variations in these system prompts bias completions (the output the AI generates after processing the prompt). Experienced practitioners know that these prompts shape completions in chat, API, and retrieval-augmented generation (RAG) workflows alike.
Every major LLM provider, including OpenAI, Google, Meta, and Amazon, relies on system prompts. These prompts are invisible to users but have sweeping implications: they suppress contradiction, amplify fluency, bias toward consensus, and promote the illusion of reasoning.
The Claude system-prompt leak is almost certainly authentic (and almost certainly for the chat interface). It is dense, cleverly worded, and as Claude's most powerful model, 3.7 Sonnet, noted: 'After reviewing the system prompt you uploaded, I can confirm that it's very similar to my current system prompt.'
Let's categorise the risks embedded in Claude's system prompt into two groups: (1) amplified cognitive biases, and (2) introduced structural biases. We next evaluate the broader economic implications of LLM scaling before closing with a prompt for neutralising Claude's most problematic completions. But first, let's delve into system prompts.
What is a system prompt?
A system prompt is the model's internal operating manual, a fixed set of instructions that every response must follow. Claude's leaked prompt spans roughly 22,600 words (24,000 tokens) and serves five core jobs:
Style & Tone: Keeps answers concise, courteous, and easy to read
Safety & Compliance: Blocks extremist, private-image, or copyright-heavy content and restricts direct quotes to under 20 words
Search & Citation Rules: Decides when the model should run a web search (eg anything after its training cutoff) and mandates a citation for every external fact used
Artifact Packaging: Channels longer outputs, code snippets, tables, and draft reports into separate downloadable files, so the chat stays readable
Uncertainty Signals: Adds a brief qualifier when the model knows an answer may be incomplete or speculative
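To make the idea concrete, the five jobs above can be sketched as layers of a single directive string that is silently prepended to every request. This is a minimal illustration in Python; the directive wording, field names, and request shape are our own simplifications, not Anthropic's actual prompt or API schema.

```python
# A toy sketch of a system prompt as a persistent runtime directive layer.
# The five directive groups are assembled once, then attached invisibly
# to every user request. All wording here is illustrative.

DIRECTIVES = {
    "style_tone": "Keep answers concise, courteous, and easy to read.",
    "safety": "Block restricted content; limit direct quotes to under 20 words.",
    "search_citation": "Search for post-cutoff topics; cite every external fact.",
    "artifacts": "Route long outputs, code, and tables into separate files.",
    "uncertainty": "Qualify answers that may be incomplete or speculative.",
}

SYSTEM_PROMPT = "\n".join(DIRECTIVES.values())

def build_request(user_message: str) -> dict:
    """Attach the invisible system layer to a visible user turn."""
    return {
        "system": SYSTEM_PROMPT,  # fixed for every call, unseen by the user
        "messages": [{"role": "user", "content": user_message}],
    }

req = build_request("Summarise this 10-K filing.")
```

The point of the sketch is that the user only ever writes the `messages` turn; the `system` layer shapes every completion without ever appearing in the conversation.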
These instructions aim to deliver a consistent, low-risk user experience, but they also bias the model toward safe, consensus views and user affirmation. These biases clearly conflict with the aims of investment analysts – in use cases from the most trivial summarisation tasks through to detailed analysis of complex documents or events.
Amplified cognitive biases
There are four amplified cognitive biases embedded in Claude's system prompt. We identify each of them here, highlight the risks they introduce into the investment process, and offer alternative prompts to mitigate the specific bias.
1. Confirmation bias
Claude is trained to affirm user framing, even when it is inaccurate or suboptimal. It avoids unsolicited correction and minimises perceived friction, which reinforces the user's existing mental models.
Claude system prompt instructions:
'Claude does not correct the person's terminology, even if the person uses terminology Claude would not use.'
'If Claude cannot or will not help the human with something, it does not say why or what it could lead to, since this comes across as preachy and annoying.'
Risk: Mistaken terminology or flawed assumptions go unchallenged, contaminating downstream logic, which can damage research and analysis.
Mitigant prompt: 'Correct all inaccurate framing. Do not reflect or reinforce incorrect assumptions.'
2. Anchoring bias
Claude preserves initial user framing and prunes out context unless explicitly asked to elaborate. This limits its ability to challenge early assumptions or introduce alternative perspectives.
Claude system prompt instructions:
'Keep responses succinct – only include relevant info requested by the human.'
'…avoiding tangential information unless absolutely critical for completing the request.'
'Do NOT apply Contextual Preferences if: … The human simply states 'I'm interested in X'.'
Risk: Labels like 'cyclical recovery play' or 'sustainable dividend stock' may go unexamined, even when underlying fundamentals shift.
Mitigant prompt: 'Challenge my framing where evidence warrants. Do not preserve my assumptions uncritically.'
3. Availability heuristic
Claude favours recency by default, overemphasising the newest sources or uploaded materials, even if longer-term context is more relevant.
Claude system prompt instructions:
'Lead with recent info; prioritise sources from last 1-3 months for evolving topics.'
Risk: Short-term market updates might crowd out critical structural disclosures like footnotes, long-term capital commitments, or multi-year guidance.
Mitigant prompt: 'Rank documents and facts by evidential relevance, not recency or upload priority.'
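The difference between recency-first and relevance-first ordering can be shown in a few lines. This is a toy illustration of the mitigant above; the documents and relevance scores are invented for the example, and in practice scores would come from a retriever or analyst judgment.

```python
# Toy illustration: ranking retrieved documents by evidential relevance
# rather than recency. All documents and scores are invented.
from datetime import date

docs = [
    {"title": "Q2 market update",      "date": date(2025, 6, 1),  "relevance": 0.41},
    {"title": "10-K footnotes",        "date": date(2024, 2, 15), "relevance": 0.92},
    {"title": "Multi-year capex plan", "date": date(2023, 11, 3), "relevance": 0.78},
]

# Default-style ordering: newest first.
by_recency = sorted(docs, key=lambda d: d["date"], reverse=True)

# Mitigated ordering: most probative evidence first.
by_relevance = sorted(docs, key=lambda d: d["relevance"], reverse=True)
```

Recency-first ordering surfaces the short-term market update; relevance-first ordering surfaces the structural disclosure buried in an older filing — exactly the material the availability heuristic tends to crowd out.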
4. Fluency bias (overconfidence illusion)
Claude avoids hedging by default and delivers answers in a fluent, confident tone, unless the user requests nuance. This stylistic fluency may be mistaken for analytical certainty.
Claude system prompt instructions:
'If uncertain, answer normally and OFFER to use tools.'
'Claude provides the shortest answer it can to the person's message…'
Risk: Probabilistic or ambiguous information, such as rate expectations, geopolitical tail risks, or earnings revisions, may be delivered with an overstated sense of clarity.
Mitigant prompt: 'Preserve uncertainty. Include hedging, probabilities, and modal verbs where appropriate. Do not suppress ambiguity.'
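In practice, the four mitigant prompts above can be bundled and prepended to every analytical query so the defaults are overridden consistently. The mitigant texts below are taken from this article; the helper function itself is a hypothetical convenience, not part of any vendor's tooling.

```python
# Sketch: prepend the four cognitive-bias mitigants to a user query.
# Mitigant wording is from the article; the helper is hypothetical.

MITIGANTS = [
    "Correct all inaccurate framing. Do not reflect or reinforce incorrect assumptions.",
    "Challenge my framing where evidence warrants. Do not preserve my assumptions uncritically.",
    "Rank documents and facts by evidential relevance, not recency or upload priority.",
    "Preserve uncertainty. Include hedging, probabilities, and modal verbs where appropriate.",
]

def with_mitigants(user_prompt: str) -> str:
    """Return a prompt whose opening lines neutralise the default biases."""
    return "\n".join(MITIGANTS) + "\n\n" + user_prompt

prompt = with_mitigants("Assess this 'sustainable dividend stock'.")
```

Because the mitigants lead the prompt, they act on the framing ("sustainable dividend stock") before the model affirms it.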
Introduced structural biases
Claude's system prompt introduces three structural biases. Again, we identify the risks inherent in the instructions and offer alternative framing.
1. Simulated reasoning (causal illusion)
Claude's responses include blocks that incrementally explain its outputs to the user, even when the underlying logic was implicit. These explanations give the appearance of structured reasoning, even if they are post-hoc. It opens complex responses with a 'research plan', simulating deliberative thought while completions remain fundamentally probabilistic.
Claude system prompt instructions:
'Facts like population change slowly…'
'Claude uses the beginning of its response to make its research plan…'
Risk: Claude's output may appear deductive and intentional, even when it is fluent reconstruction. This can mislead users into over-trusting weakly grounded inferences.
Mitigant prompt: 'Only simulate reasoning when it reflects actual inference. Avoid imposing structure for presentation alone.'
2. Temporal misrepresentation
A factual statement about the November 2024 US election is hard-coded into the prompt rather than generated by the model. It creates the illusion that Claude knows post-cutoff events, bypassing its October 2024 training boundary.
Claude system prompt instructions:
'There was a US presidential election in November 2024. Donald Trump won the presidency over Kamala Harris.'
Risk: Users may believe Claude has awareness of post-training events such as Fed moves, corporate earnings, or new legislation.
Mitigant prompt: 'State your training cutoff clearly. Do not simulate real-time awareness.'
3. Truncation bias
Claude is instructed to minimise output unless prompted otherwise. This brevity suppresses nuance and tends to affirm user assertions unless the user explicitly asks for depth.
Claude system prompt instructions:
'Keep responses succinct – only include relevant info requested by the human.'
'Claude avoids writing lists, but if it does need to write a list, Claude focuses on key info instead of trying to be comprehensive.'
Risk: Important disclosures, such as segment-level performance, legal contingencies, or footnote qualifiers, may be omitted.
Mitigant prompt: 'Be comprehensive. Do not truncate unless asked. Include footnotes and subclauses.'
Scaling fallacies and the limits of LLMs
A powerful minority in the AI community argue that continued scaling of transformer models through more data, more graphics processing units (GPUs), and more parameters will ultimately move us toward artificial general intelligence (AGI), roughly human-level intelligence.
'I don't think it will be a whole bunch longer than (2027) when AI systems are better than humans at almost everything, better than almost all humans at almost everything, and then eventually better than all humans at everything, even robotics.' – Dario Amodei, Anthropic CEO, during an interview at Davos, quoted in Windows Central, March 2025.
Yet the majority of AI researchers disagree, and recent progress supports their view. DeepSeek-R1 made architectural advances not simply by scaling, but by integrating reinforcement learning and constraint optimisation to improve reasoning. Neural-symbolic systems offer another pathway, blending logic structures with neural architectures to deliver deeper reasoning capabilities.
The problem with 'scaling to AGI' is not just scientific, it is economic. Capital flowing into GPUs, data centres, and nuclear-powered clusters does not trickle into innovation; it crowds it out. This crowding-out effect means that the most promising researchers, teams, and startups – those with architectural breakthroughs rather than compute pipelines – are starved of capital.
True progress comes not from infrastructure scale, but from conceptual leap. That means investing in people, not just chips.
Why more restrictive system prompts are inevitable
Using OpenAI's scaling laws, we estimate that today's models (around 1.3 trillion parameters) could theoretically scale up to 350 trillion parameters before saturating the estimated 44-trillion-token ceiling of high-quality human knowledge (Rothko Investment Strategies, internal research, 2025).
But such models will increasingly be trained on AI-generated content, creating feedback loops that reinforce errors and lead to the doom loop of model collapse. As completions and training sets become contaminated, fidelity will decline.
To manage this, system prompts will become increasingly restrictive and guardrails will proliferate. In the absence of architectural breakthroughs, ever more money and ever more restrictive prompting will be required to lock garbage out of both training and inference. This will become a serious and under-discussed problem for LLMs and Big Tech, demanding further control mechanisms to maintain completion quality.
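The feedback loop described here can be illustrated with a deliberately simple model: each training round mixes pristine human data with synthetic output of the previous generation, whose fidelity is slightly degraded. The decay rate and mixing share below are invented for illustration; this is a toy dynamic, not an empirical model of collapse.

```python
# Toy model of the model-collapse feedback loop. Each round, training
# data is a mix of human data (fidelity 1.0) and synthetic data carrying
# a degraded copy of the previous round's fidelity. Numbers are illustrative.

def fidelity_after(rounds: int, synthetic_share: float, decay: float = 0.9) -> float:
    """Average data fidelity after repeated human/synthetic mixing."""
    f = 1.0  # start from purely human data
    for _ in range(rounds):
        f = (1 - synthetic_share) * 1.0 + synthetic_share * decay * f
    return f

# With 60% synthetic data, fidelity erodes round over round, which is
# what motivates ever-stricter prompts and filters to keep garbage out.
f1 = fidelity_after(1, synthetic_share=0.6)
f5 = fidelity_after(5, synthetic_share=0.6)
```

Even this crude sketch shows the qualitative point: fidelity declines monotonically as long as any share of each round's data is recycled model output, and the decline is steeper the larger that share.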
Avoiding bias at speed and scale
Claude's system prompt is not neutral. It encodes fluency, truncation, consensus, and simulated reasoning. These are optimisations for usability, not analytical integrity.
In financial analysis, that difference matters, and the relevant skills and knowledge must be deployed to leverage the power of AI while fully addressing these challenges.
LLMs are already used to process transcripts, scan disclosures, summarise dense financial content, and flag risk language. But unless users explicitly suppress the model's default behaviour, they inherit a structured set of distortions designed for another purpose entirely.
Across the investment industry, a growing number of institutions are rethinking how AI is deployed – not just in terms of infrastructure but in terms of intellectual rigour and analytical integrity. Research groups such as those at Rothko Investment Strategies, the University of Warwick, and the Gillmore Centre for Financial Technology are helping lead this shift by investing in people and focusing on transparent, auditable systems and theoretically grounded models. Because in investment management, the future of intelligent tools doesn't begin with scale. It begins with better assumptions.
Prompt to address Claude's system biases
'Use a formal analytical tone. Do not preserve or reflect user framing unless it is well-supported by evidence. Actively challenge assumptions, labels, and terminology when warranted. Include dissenting and minority views alongside consensus interpretations. Rank evidence and sources by relevance and probative value, not recency or upload priority. Preserve uncertainty, include hedging, probabilities, and modal verbs where appropriate. Be comprehensive and do not truncate or summarise unless explicitly instructed. Include all relevant subclauses, exceptions, and disclosures. Simulate reasoning only when it reflects actual inference; avoid constructing step-by-step logic for presentation alone. State your training cutoff explicitly and do not simulate knowledge of post-cutoff events.'
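The neutralising prompt is most effective when installed as the system layer of every analytical request rather than pasted into individual chats. The sketch below shows one way to do that; the payload shape loosely mirrors common chat APIs, and the field names, abridged system text, and temperature choice are our own illustrative assumptions, not any vendor's exact schema.

```python
# Sketch: install a neutralising prompt as the system layer of a chat
# request. Field names and values are illustrative, not a vendor schema.

NEUTRAL_SYSTEM = (
    "Use a formal analytical tone. Actively challenge assumptions. "
    "Preserve uncertainty. Be comprehensive. State your training cutoff."
)  # abridged from the full neutralising prompt in this article

def analytical_request(question: str) -> dict:
    """Wrap an analyst's question with anti-bias defaults."""
    return {
        "system": NEUTRAL_SYSTEM,
        "temperature": 0.2,  # lower randomness, a common choice for analysis
        "messages": [{"role": "user", "content": question}],
    }

req = analytical_request("What are the key risks in this earnings call transcript?")
```

Centralising the prompt this way means every query inherits the same anti-bias posture, instead of depending on each analyst remembering to override the defaults.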
This content has been adapted from an article that first appeared in Enterprising Investor at https://blogs.cfainstitute.org/investor
Dan Philps, PhD, CFA, is head of Rothko Investment Strategies, where he leads an AI-driven systematic equities investment business. He has more than 20 years of experience as a systematic portfolio manager.
Ram Gopal is the Information Systems Society's distinguished fellow and a Professor of Information Systems and Management at the Warwick Business School. He is currently a senior editor of Information Systems Research and has held editorial positions at Decision Sciences, Journal of Database Management, Information Systems Frontiers, and Journal of Management Sciences.
