Generative AI models are skilled in the art of bulls**t
LIES are not the greatest enemy of the truth, according to the philosopher Harry Frankfurt. Bulls**t is worse.
As he explained in his classic essay On Bulls**t (1986), a liar and a truth teller are playing the same game, just on opposite sides. Each responds to facts as they understand them, and either accepts or rejects the authority of truth. But a bulls**tter ignores these demands altogether. 'He does not reject the authority of truth, as the liar does, and oppose himself to it. He pays no attention to it at all. By virtue of this, bulls**t is a greater enemy of the truth than lies are.' Such a person wants to convince others, irrespective of the facts.
Sadly, Frankfurt died in 2023, just a few months after ChatGPT was released. But reading his essay in the age of generative artificial intelligence (GenAI) provokes a queasy familiarity. In several respects, Frankfurt's essay neatly describes the output of AI-enabled large language models (LLMs). They are not concerned with truth, because they have no conception of it. They operate by statistical correlation, not empirical observation.
'Their greatest strength, but also their greatest danger, is their ability to sound authoritative on nearly any topic irrespective of factual accuracy. In other words, their superpower is their superhuman ability to bulls**t,' Carl Bergstrom and Jevin West have written. The two University of Washington professors run an online course – Modern-Day Oracles or Bulls**t Machines? – scrutinising these models. Others have dubbed the machines' output 'bots**t'.
One of the best-known and most unsettling – yet sometimes intriguingly creative – features of LLMs is their 'hallucination' of facts, or simply making stuff up. Some researchers argue this is an inherent feature of probabilistic models, not a bug that can be fixed. But AI companies are trying to solve the problem by improving the quality of their data, fine-tuning their models and building in verification and fact-checking systems.
They would appear to have some way to go, though: a lawyer for Anthropic told a Californian court this month that the law firm itself had unintentionally submitted an incorrect citation hallucinated by the AI company's Claude. As Google's chatbot flags to users: 'Gemini can make mistakes, including about people, so double-check it.' That did not stop Google this week from rolling out an 'AI mode' to all its main services in the US.
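To see why such systems can be fluent and wrong at the same time, consider a deliberately tiny sketch – purely illustrative, and nothing like any vendor's actual model. It learns which words tend to follow which in a toy corpus, then strings together a statistically plausible continuation with no check against reality.

# Illustrative toy only: a bigram "language model" that samples whatever word
# tends to follow the current one in its training text. There is no notion of
# truth anywhere in the process - only frequency.
import random
from collections import defaultdict

corpus = (
    "the moon orbits the earth . "
    "the moon is made of rock . "
    "the moon is made of cheese ."
).split()

follows = defaultdict(list)          # word -> list of words seen after it
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(word, length=8):
    """Produce fluent-looking text by repeatedly sampling a likely next word."""
    out = [word]
    for _ in range(length):
        options = follows.get(word)
        if not options:
            break
        word = random.choice(options)  # plausibility, not verification
        out.append(word)
    return " ".join(out)

print(generate("the"))  # may assert, with equal confidence, rock or cheese

Commercial LLMs are incomparably larger and more sophisticated, but the core step is the same – choose what is statistically likely, not what is verified – which is why verification and fact-checking layers have to be added on top.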
The ways in which these companies are trying to improve their models, including reinforcement learning from human feedback, themselves risk introducing bias, distortion and undeclared value judgments. As the Financial Times has shown, AI chatbots from OpenAI, Anthropic, Google, Meta, xAI and DeepSeek describe the qualities of their own companies' chief executives and those of rivals very differently. Elon Musk's Grok has also promoted memes about 'white genocide' in South Africa in response to wholly unrelated prompts. xAI said it had fixed the glitch, which it blamed on an 'unauthorised modification'.
Such models create a new, even worse category of potential harm – or 'careless speech', according to Sandra Wachter, Brent Mittelstadt and Chris Russell, in a paper from the Oxford Internet Institute. In their view, careless speech can cause intangible, long-term and cumulative harm. It is like 'invisible bulls**t' that makes society dumber, Wachter told me.
At least with a politician or salesperson, we can normally understand their motivation. But chatbots have no intentionality, and are optimised for plausibility and engagement, not truthfulness. They will invent facts for no purpose. They can pollute the knowledge base of humanity in unfathomable ways.
The intriguing question is whether AI models could be designed for higher truthfulness. Will there be a market demand for them? Or should model developers be forced to abide by higher truth standards, such as those that apply to advertisers, lawyers and doctors?
Wachter suggested that developing more truthful models would take time, money and resources that the current iterations are designed to save. 'It's like wanting a car to be a plane. You can push a car off a cliff, but it's not going to defy gravity,' she said.
All that said, GenAI models can still be useful and valuable. Many lucrative business – and political – careers have been built on bulls**t. Appropriately used, GenAI can be deployed for myriad business use cases. But it is delusional, and dangerous, to mistake these models for truth machines. FINANCIAL TIMES
Related Articles
Business Times
The AI job suck is the China shock of today
REARVIEW-mirror policymaking seems as unavoidable as it is self-defeating. President Donald Trump is falling into this trap with his focus on reversing the past quarter-century of trade policy – trying to put the toothpaste back in its tube. In attempting to undo the so-called China shock, he is missing the opportunity to preempt collateral damage from the coming artificial intelligence shock, which will reshape labour markets over the coming decade.

Even relatively positive economic changes hurt some workers. Like the decline of America's manufacturing hubs, AI is likely to prove a challenge for millions of workers. At the more apocalyptic extreme, Anthropic chief executive officer Dario Amodei told Axios this week that AI could eliminate half of entry-level white-collar jobs and push unemployment as high as 20 per cent over one to five years. While I'm not expecting anything that dire, there are subtle signs – as The Atlantic's Derek Thompson pointed out last month – that the impact may already be materialising in the recent and unusual rise in the unemployment rate for recent college graduates to its highest level since 2021.

Brookings Institution research projects that about 30 per cent of the workforce could see at least half of their tasks disrupted by generative AI. That could include close to 19 million people in office and administrative support; 13 million in sales and related jobs; and 10 million in business and financial operations, according to Brookings' analysis of OpenAI and Bureau of Labor Statistics data. Geographically speaking, economists Scott Abrahams and Frank Levy found that such work is most concentrated in expensive coastal areas, including the Bay Area and the nation's capital.

For the government, the key is to stand ready to provide help to those who need it. The wildly uncertain fallout from AI requires modern tools for monitoring trends in employment and wages. In the China shock, the negative outcomes were concentrated in manufacturing communities. By failing to appreciate the scope of the problem early, policymakers allowed parts of the country to fall into a self-reinforcing cycle of decay, engendering a sense of unfairness and lighting a fire under a populist backlash in American politics. In contrast, the Abrahams and Levy study shows that a generative AI shock could nudge workers away from expensive coastal cities to places such as Savannah, Georgia, or Greenville, South Carolina, which offer affordable housing and economies that are less exposed to the job losses.

One source of forward-looking data is online job postings, which can be mined for keywords related to AI, Abrahams told me this week. Such data is far more comprehensive than it was in earlier shocks, and it reveals in real time the areas where companies are expanding and replacing workers – and, equally important, the areas where they aren't. As Abrahams pointed out, AI's impact may play out in large part through job-leavers who go unreplaced, rather than large and obvious layoffs.

In terms of the latter, improved disclosure would help. Kevin Frazier, the AI Innovation and Law Fellow at the UT Austin School of Law, has suggested updating the Worker Adjustment and Retraining Notification (WARN) Act, which generally requires companies to provide 60 days' advance notice of closures and layoffs of 50 workers or more.
Frazier has suggested that medium- and large-sized firms be required to disclose 'widespread integration of new AI tools', whether or not the new technology corresponds with immediate job cuts. Though compliance could be a challenge, this would add a layer of forward-looking visibility. Frazier also wants to change the WARN policy to capture more layoffs and give communities and policymakers more time to respond.

Second, the US should make preparations to provide a strong response to any visible labour market disruptions, including getting its fiscal house in order. While some people believe that AI will eventually be so disruptive that it demands a version of Universal Basic Income, the near-term solution is likely to look like an improved and probably costlier version of Trade Adjustment Assistance, the programme rolled out during John F Kennedy's presidency to help workers sidelined by trade. That programme was too bureaucratic and small to blunt a development as big as the China shock. Any new effort would have to cut down on red tape and be better funded. Frazier has suggested businesses themselves be required to pay into rainy-day-type funds for worker retraining.

Third, policymakers should make it easier for workers to move for new opportunities. Many lost jobs will be replaced by new and even better ones, but we can't take for granted that the labour supply will automatically migrate to the sectors and regions with the greatest opportunity. In fact, one key takeaway from the China shock literature is that, while the labour market migrated, many individuals didn't. David Autor, David Dorn and Gordon Hanson have found that incumbent workers were 'largely frozen in the declining manufacturing sector in their original locations'. It's not clear why so many stayed. Personal ties may have motivated some, but others may have faced financial constraints – a problem exacerbated by today's housing affordability crisis. As for Trade Adjustment Assistance, it provided only a laughable relocation allowance of no more than US$1,250, a figure that should be much larger if it's going to promote labour mobility.

Finally, the US must ensure that the next generation is equipped with the skills of the future, including general AI literacy as well as domain expertise around AI and robotics engineering. The future is also likely to place a premium on the general critical thinking skills and emotional intelligence that liberal arts degrees engender. While the payoffs of higher education may become less clear, it's likely to remain essential to America's success. Policymakers can support it through thoughtfully allocated student and research grants, and immigration policies that bring the best inventors and entrepreneurs to our country.

Unfortunately, Trump has given short shrift to the AI challenge. He's spending much of his time pursuing a policy of ex post protectionism, seemingly trying to reverse the outcomes of a shock that the US inadequately prepared for a quarter century ago. Instead, if he wants to leave an economic legacy, he should take steps to ensure that AI maximally benefits Americans by mitigating the inevitable dislocations along the way. Left unaddressed, America could face another populist backlash against uneven labour market outcomes, and the Republican Party may well find itself on the wrong side of this one. BLOOMBERG


CNA
Google and DOJ to make final push in US search antitrust case
WASHINGTON: Alphabet's Google and U.S. antitrust enforcers will make their final arguments on whether the tech giant should be forced to sell its Chrome browser or adopt other measures to restore competition in online search, as the blockbuster antitrust trial concludes on Friday.

The U.S. Department of Justice and a coalition of states are pressing to make Google not only sell Chrome, but also share search data and cease multibillion-dollar payments to Apple and other smartphone makers and wireless carriers that set Google as the default search engine on new devices. The proposals aim to restore competition after a judge found last year that Google illegally dominates the online search and related advertising markets. Artificial intelligence companies could get a boost after already rattling Google's status as the go-to tool to find information online.

U.S. District Judge Amit Mehta is overseeing the trial, which began in April. He has said he aims to rule on the proposals by August. If the judge does require Google to sell off Chrome, OpenAI would be interested in buying it, Nick Turley, OpenAI's product head for ChatGPT, said at the trial. OpenAI would also benefit from access to Google's search data, which would help it make responses to user inquiries more accurate and up to date, Turley said.

Google says the proposals go far beyond what is legally justified by the court's ruling, and would give away its technology to competitors. The company has already begun loosening agreements with smartphone makers including Samsung Electronics to allow them to load rival search and AI products. The DOJ wants the judge to go farther, banning Google from making lucrative payments in exchange for installation of its search app.

Straits Times
ATxSummit 2025: Meta V-P downplays fears over AI as critics raise alarm over online risks to youth
(From left) IMDA's Alamelu Subramaniam, Adobe's Andy Parsons, Baroness Jones of Whitchurch, Meta's Simon Milner and SMU's Lim Sun Sun during a discussion at ATxSummit 2025 on May 29. PHOTO: INFOCOMM MEDIA DEVELOPMENT AUTHORITY

SINGAPORE – Meta, the parent company of Facebook and Instagram, downplayed fears over the impact of artificial intelligence (AI), urging policymakers and the public to focus on actual outcomes rather than worst-case scenarios. The comments by its Asia-Pacific public policy vice-president Simon Milner drew sharp rebuttals at the ATxSummit 2025 on May 29, where fellow panellists said the rapid spread of AI has real-world consequences such as online harms affecting youth and children.

During the panel at Capella Singapore, Mr Milner cited 2024 as the 'year of democracy', as more people in more countries went to the polls than at any other time in history. While there were widespread concerns about deepfakes and generative AI (GenAI) disrupting elections, he said no significant evidence of such interference was found – not even in major democracies like the US, India or Indonesia. 'Although enormous amounts of GenAI were deployed across platforms, the impact has not been catastrophic,' he added.

However, his views were not shared by fellow panellists discussing the topic of protecting society in an always-online world. Drawing from her work, Singapore Management University's professor of communication and technology Lim Sun Sun said many parents feel anxious and unsure about how to guide their children in navigating the rapid rise of GenAI. 'Even if the data doesn't paint a worrying picture overall, on the ground, people are struggling to understand this technology,' Prof Lim said. Teachers also face a dilemma: encouraging experimentation with AI while warning about its risks. 'It is a difficult balance,' she added.

Baroness Jones of Whitchurch (Margaret Beryl Jones), the UK's parliamentary under-secretary for the future digital economy and online safety, echoed similar concerns about online harms affecting youth and children. She pointed to an ongoing public debate in the UK about the damaging effects some online platforms have on young users. 'For example, children accessing online suicide forums and committing suicide. This is just heartbreaking, and we have some terrible stories about it,' she said. In May 2024, 17-year-old Vlad Nikolin-Caisley from Hampshire in south-east England died after allegedly being encouraged by members of an online pro-suicide group. His family believes these harmful online interactions played a significant role in his death, intensifying calls for stronger regulation of such platforms.

Baroness Jones stressed the need for tech companies to work closely with the government to minimise such harms, but acknowledged that not all companies are fully on board, as the government is 'laying high expectations in a new territory'. But Mr Milner pushed back, arguing that the UK – or more broadly, Europe – rushed to be the first region to regulate AI, which he described as a mistake. He said this approach has led to a stand-off with companies. In contrast, he praised Singapore and other Asian governments for taking a different path: fostering robust dialogue with tech firms, both publicly and privately, while asking tough questions without rushing into heavy-handed regulations.
Mr Andy Parsons, senior director of content authenticity at Adobe, highlighted the spread of child sexual abuse material (CSAM) online. It is becoming nearly impossible for the police to identify real victims if the materials were generated entirely by AI, he said. Mr Parsons warned that this not only hinders efforts to bring perpetrators to justice but also erases the real human suffering behind these crimes – a grave problem that requires urgent attention.

Prof Lim agreed, noting that the issue of CSAM has been worsened by the rapid spread of GenAI. She is currently identifying key stakeholders across the industry, government and the community who are involved in tackling the problem. She said the aim is to understand 'where else can we coordinate our efforts better so that we can combat this really dreadful scourge'.

Addressing the concerns raised by his fellow panellists, Mr Milner emphasised that Meta's top priority is developing products with features to mitigate online harms. He cited the introduction of teen-specific accounts on Instagram as a response to growing worries about young people's engagement with the platform. 'I think we should be more parent-focused in our approach to young people's safety,' he said, adding that teen accounts are not just about imposing bans. 'Parents want help, and we are here to help them.'

Baroness Jones stressed that AI safety must be approached as safety by design – embedded into platforms from the outset, rather than relying on reactive measures like taking down content afterwards. 'It should be an integral part of the system that children, in particular, are protected,' she said. But achieving that remains a major challenge. Citing reports from the UK, she highlighted that children as young as eight have encountered disturbing content online, often repeatedly surfaced to them by algorithms, which she believes are clearly reinforcing exposure to harmful material. If tech companies truly put their minds to it, they could rework the way these systems operate, she said, emphasising that keeping children safe must be the top priority.

Prof Lim also called for safety by design, stressing that online spaces should be built with the most vulnerable users in mind – whether they are children, women, the elderly or marginalised communities. She said: 'Because once you've designed for the most vulnerable, it makes the whole platform safer for everyone.'