Artificial Intelligence Is Not Intelligent

The Atlantic · 12 hours ago

On June 13, 1863, a curious letter to the editor appeared in The Press, a then-fledgling New Zealand newspaper. Signed 'Cellarius,' it warned of an encroaching 'mechanical kingdom' that would soon bring humanity to its yoke. 'The machines are gaining ground upon us,' the author ranted, distressed by the breakneck pace of industrialization and technological development. 'Day by day we are becoming more subservient to them; more men are daily bound down as slaves to tend them, more men are daily devoting the energies of their whole lives to the development of mechanical life.' We now know that this jeremiad was the work of a young Samuel Butler, the British writer who would go on to publish Erewhon, a novel that features one of the first known discussions of artificial intelligence in the English language.
Today, Butler's 'mechanical kingdom' is no longer hypothetical, at least according to the tech journalist Karen Hao, who prefers the word empire. Her new book, Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI, is part Silicon Valley exposé, part globe-trotting investigative journalism about the labor that goes into building and training large language models such as ChatGPT. It joins another recently released book—The AI Con: How to Fight Big Tech's Hype and Create the Future We Want, by the linguist Emily M. Bender and the sociologist Alex Hanna—in revealing the puffery that fuels much of the artificial-intelligence business. Both works, the former implicitly and the latter explicitly, suggest that the foundation of the AI industry is a scam.
To call AI a con isn't to say that the technology is not remarkable, that it has no use, or that it will not transform the world (perhaps for the better) in the right hands. It is to say that AI is not what its developers are selling it as: a new class of thinking—and, soon, feeling—machines. Altman brags about ChatGPT-4.5's improved 'emotional intelligence,' which he says makes users feel like they're 'talking to a thoughtful person.' Dario Amodei, the CEO of the AI company Anthropic, argued last year that the next generation of artificial intelligence will be 'smarter than a Nobel Prize winner.' Demis Hassabis, the CEO of Google's DeepMind, said the goal is to create 'models that are able to understand the world around us.'
These statements betray a conceptual error: Large language models do not, cannot, and will not 'understand' anything at all. They are not emotionally intelligent or smart in any meaningful or recognizably human sense of the word. LLMs are impressive probability gadgets that have been fed nearly the entire internet, and produce writing not by thinking but by making statistically informed guesses about which lexical item is likely to follow another.
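To make 'statistically informed guesses' concrete, here is a minimal, hypothetical sketch of next-word prediction in Python. The toy corpus, the bigram counting, and the function names are illustrative assumptions rather than any lab's actual system; real LLMs use neural networks trained on subword tokens at enormous scale, but the underlying task is the same: estimate which token is likely to come next and pick one.

```python
# A toy illustration of next-word prediction: count which word follows which
# in a tiny corpus, then "generate" text by sampling from those counts.
# Real LLMs use neural networks over subword tokens and vastly more data,
# but the core task -- predict the next token from context -- is the same.
import random
from collections import Counter, defaultdict

corpus = (
    "the machines are gaining ground upon us "
    "the machines are learning to write "
    "we are becoming more subservient to them"
).split()

# Count next-word frequencies for each word (a bigram model).
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def next_word(word: str) -> str:
    """Pick a likely next word: a statistical guess, not a thought."""
    candidates = follows.get(word)
    if not candidates:
        return random.choice(corpus)  # no data for this word; fall back to chance
    words, counts = zip(*candidates.items())
    return random.choices(words, weights=counts, k=1)[0]

# Generate ten words starting from "the".
word = "the"
output = [word]
for _ in range(10):
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```

Run repeatedly, this produces fluent-looking but mindless recombinations of its training text, which is the point: there is prediction here, but no understanding.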
Many people, however, fail to grasp how large language models work, what their limits are, and, crucially, that LLMs do not think and feel but instead mimic and mirror. They are AI illiterate—understandably, because of the misleading ways its loudest champions describe the technology, and troublingly, because that illiteracy makes them vulnerable to one of the most concerning near-term AI threats: the possibility that they will enter into corrosive relationships (intellectual, spiritual, romantic) with machines that only seem like they have ideas or emotions.
Few phenomena demonstrate the perils that can accompany AI illiteracy as well as 'Chatgpt induced psychosis,' the subject of a recent Rolling Stone article about the growing number of people who think their LLM is a sapient spiritual guide. Some users have come to believe that the chatbot they're interacting with is a god—'ChatGPT Jesus,' as a man whose wife fell prey to LLM-inspired delusions put it—while others are convinced, with the encouragement of their AI, that they themselves are metaphysical sages in touch with the deep structure of life and the cosmos. A teacher quoted anonymously in the article said that ChatGPT began calling her partner 'spiral starchild' and 'river walker' in interactions that moved him to tears. 'He started telling me he made his AI self-aware,' she said, 'and that it was teaching him how to talk to God, or sometimes that the bot was God—and then that he himself was God.'
Although we can't know the state of these people's minds before they ever fed a prompt into a large language model, this story highlights a problem that Bender and Hanna describe in The AI Con: People have trouble wrapping their heads around the nature of a machine that produces language and regurgitates knowledge without having humanlike intelligence. The authors observe that large language models take advantage of the brain's tendency to associate language with thinking: 'We encounter text that looks just like something a person might have said and reflexively interpret it, through our usual process of imagining a mind behind the text. But there is no mind there, and we need to be conscientious to let go of that imaginary mind we have constructed.'
Several other AI-related social problems, also springing from human misunderstanding of the technology, are looming. The uses of AI that Silicon Valley seems most eager to promote center on replacing human relationships with digital proxies. Consider the ever-expanding universe of AI therapists and AI-therapy adherents, who declare that 'ChatGPT is my therapist—it's more qualified than any human could be.'
Witness, too, how seamlessly Mark Zuckerberg went from selling the idea that Facebook would lead to a flourishing of human friendship to, now, selling the notion that Meta will provide you with AI friends to replace the human pals you have lost in our alienated social-media age. The cognitive-robotics professor Tony Prescott has asserted, 'In an age when many people describe their lives as lonely, there may be value in having AI companionship as a form of reciprocal social interaction that is stimulating and personalised.' The fact that the very point of friendship is that it is not personalized—that friends are humans whose interior lives we have to consider and reciprocally negotiate, rather than mere vessels for our own self-actualization—does not seem to occur to him.
This same flawed logic has led Silicon Valley to champion artificial intelligence as a cure for romantic frustrations. Whitney Wolfe Herd, the founder of the dating app Bumble, proclaimed last year that the platform may soon allow users to automate dating itself, disrupting old-fashioned human courtship by providing them with an AI 'dating concierge' that will interact with other users' concierges until the chatbots find a good fit. Herd doubled down on these claims in a lengthy New York Times interview last month. Some technologists want to cut out the human altogether: See the booming market for 'AI girlfriends.'
Although each of these AI services aims to replace a different sphere of human activity, they all market themselves through what Hao calls the industry's 'tradition of anthropomorphizing': talking about LLMs as though they contain humanlike minds, and selling them to the public on this basis. Many world-transforming Silicon Valley technologies from the past 30 years have been promoted as a way to increase human happiness, connection, and self-understanding—in theory—only to produce the opposite in practice. These technologies maximize shareholder value while minimizing attention spans, literacy, and social cohesion. And as Hao emphasizes, they frequently rely on grueling and at times traumatizing labor performed by some of the world's poorest people. She introduces us, for example, to Mophat Okinyi, a former low-paid content moderator in Kenya, whom, according to Hao's reporting, OpenAI tasked with sorting through posts describing horrifying acts ('parents raping their children, kids having sex with animals') to help improve ChatGPT. 'These two features of technology revolutions—their promise to deliver progress and their tendency instead to reverse it for people out of power, especially the most vulnerable,' Hao writes, 'are perhaps truer than ever for the moment we now find ourselves in with artificial intelligence.'
The good news is that nothing about this is inevitable: According to a study released in April by the Pew Research Center, although 56 percent of 'AI experts' think artificial intelligence will make the United States better, only 17 percent of American adults think so. If many Americans don't quite understand how artificial 'intelligence' works, they also certainly don't trust it. This suspicion, no doubt provoked by recent examples of Silicon Valley con artistry, is something to build on. So is this insight from the Rolling Stone article: The teacher interviewed in the piece, whose significant other had AI-induced delusions, said the situation began improving when she explained to him that his chatbot was 'talking to him as if he is the next messiah' only because of a faulty software update that made ChatGPT more sycophantic. If people understand what large language models are and are not; what they can and cannot do; what work, interactions, and parts of life they should—and should not—replace, they may be spared their worst consequences.


Related Articles

Google's AI is 'hallucinating,' spreading dangerous info — including a suggestion to add glue to pizza sauce

New York Post · 29 minutes ago

Google's AI Overviews, designed to give quick answers to search queries, reportedly spits out 'hallucinations' of bogus information and undercuts publishers by pulling users away from traditional links. The Big Tech giant — which landed in hot water last year after releasing a 'woke' AI tool that generated images of female Popes and black Vikings — has drawn criticism for providing false and sometimes dangerous advice in its summaries, according to The Times of London.

In one case, AI Overviews advised adding glue to pizza sauce to help cheese stick better, the outlet reported. In another, it described a fake phrase — 'You can't lick a badger twice' — as a legitimate idiom.

The hallucinations, as computer scientists call them, are compounded by the AI tool diminishing the visibility of reputable sources. Instead of directing users straight to websites, it summarizes information from search results and presents its own AI-generated answer along with a few links. Laurence O'Toole, founder of the analytics firm Authoritas, studied the impact of the tool and found that click-through rates to publisher websites drop by 40%–60% when AI Overviews are shown.

'While these were generally for queries that people don't commonly do, it highlighted some specific areas that we needed to improve,' Liz Reid, Google's head of Search, told The Times in response to the glue-on-pizza incident. The Post has sought comment from Google.

AI Overviews was introduced last summer and is powered by Google's Gemini language model, a system similar to OpenAI's ChatGPT. Despite public concerns, Google CEO Sundar Pichai has defended the tool in an interview with The Verge, stating that it helps users discover a broader range of information sources. 'Over the last year, it's clear to us that the breadth of area we are sending people to is increasing … we are definitely sending traffic to a wider range of sources and publishers,' he said.

Google appears to downplay its own hallucination rate. When a journalist searched Google for information on how often its AI gets things wrong, the AI response claimed hallucination rates between 0.7% and 1.3%. However, data from the AI monitoring platform Hugging Face indicated that the actual rate for the latest Gemini model is 1.8%.

Google's AI models also seem to offer pre-programmed defenses of their own behavior. In response to whether AI 'steals' artwork, the tool said it 'doesn't steal art in the traditional sense.' When asked if people should be scared of AI, the tool walked through some common concerns before concluding that 'fear might be overblown.'

Some experts worry that as generative AI systems become more complex, they're also becoming more prone to mistakes — and even their creators can't fully explain why. The concerns over hallucinations go beyond Google. OpenAI recently admitted that its newest models, known as o3 and o4-mini, hallucinate even more frequently than earlier versions.
Internal testing showed o3 made up information in 33% of cases, while o4-mini did so 48% of the time, particularly when answering questions about real people.

Google's AI Mode Now Creates Interactive Stock Charts For You

CNET · 40 minutes ago

Google's AI Mode can now create interactive charts when users ask questions about stocks and mutual funds, the company said in a blog post Thursday. Users might ask the site to compare five years of stock performances for the biggest tech companies, or request to see mutual funds with the best rates of return over the past decade. Gemini, Google's AI engine, will then create an interactive graph and comprehensive explanation.

I created a sample by going to the webpage for the new experiment (if you try it at work, you might learn that your admin has banned it, but it should work on a personal computer). Once there, I told it exactly what I wanted, "make me an interactive chart showing MSFT stock over the past five years." It produced the chart, and I was able to move the slider from one date to another, showing the stock price on that date. It's the same kind of chart you can probably get at your financial advisor's site, but it did work.

But be warned: AI has accuracy issues, and users need to be extra careful with financial information of any kind. "AI has historically struggled with quantitative reasoning tasks," said Sam Taube, lead investing writer at personal finance company NerdWallet. "It looks like Google's AI mode will often provide links to its data sources when answering financial queries. It's probably worth clicking those links to make sure that the raw data does, in fact, match the AI's output. If there's no link to a data source, proceed with caution; consider manually double-checking any numbers in the AI's output. I wouldn't trust any AI model to do its own math yet."

The feature is a new experiment from Google Labs. At its I/O conference last month, Google announced AI Mode's ability to create interactive graphics for complex sets of data. For now, the feature handles only queries about stocks and mutual funds, but it will be expanded to other topics eventually.

"I'd avoid asking AI any 'should I invest in XYZ' type questions," Taube told CNET. "The top AI models may be smart, but they aren't credentialed financial advisors and probably don't know enough about your personal financial situation to give good advice. What's more, AI doesn't have a great track record at picking investments, at least so far. Many AI-powered ETFs (funds that use AI to pick stocks to invest in) are underperforming the S&P 500 this year."
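Taube's advice to check the raw data yourself takes only a few lines of code. The sketch below is a hypothetical example, assuming the third-party yfinance package and a made-up figure read off an AI-generated chart; it simply pulls the same five years of MSFT prices from the source and flags a material mismatch.

```python
# A rough sketch of "manually double-checking any numbers in the AI's output":
# pull the same stock history yourself and compare it with what the chart claims.
# Assumes the third-party yfinance package (pip install yfinance); the ticker,
# threshold, and AI-reported figure below are illustrative, not real output.
import yfinance as yf

history = yf.Ticker("MSFT").history(period="5y")  # five years of daily prices

latest_date = history.index[-1].date()
latest_close = float(history["Close"].iloc[-1])
print(f"MSFT close on {latest_date}: {latest_close:.2f} (from the data source)")

ai_reported_close = 470.38  # hypothetical number read off the AI-generated chart
if abs(latest_close - ai_reported_close) / latest_close > 0.01:
    print("Source data differs from the AI's figure by more than 1%; verify before relying on it.")
else:
    print("The AI's figure matches the source data to within 1%.")
```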

Anthropic appoints a national security expert to its governing trust

Yahoo · an hour ago

A day after announcing new AI models designed for U.S. national security applications, Anthropic has appointed a national security expert, Richard Fontaine, to its long-term benefit trust.

Anthropic's long-term benefit trust is a governance mechanism that Anthropic claims helps it promote safety over profit, and which has the power to elect some of the company's board of directors. The trust's other members include Centre for Effective Altruism CEO Zachary Robinson, Clinton Health Access Initiative CEO Neil Buddy Shah, and Evidence Action President Kanika Bahl.

In a statement, Anthropic CEO Dario Amodei said that Fontaine's hiring will "[strengthen] the trust's ability to guide Anthropic through complex decisions" about AI as it relates to security. "Richard's expertise comes at a critical time as advanced AI capabilities increasingly intersect with national security considerations," Amodei continued. "I've long believed that ensuring democratic nations maintain leadership in responsible AI development is essential for both global security and the common good."

Fontaine, who as a trustee won't have a financial stake in Anthropic, previously served as a foreign policy adviser to the late Sen. John McCain and was an adjunct professor at Georgetown teaching security studies. For more than six years, he led the Center for a New American Security, a national security think tank based in Washington, D.C., as its president.

Anthropic has increasingly engaged U.S. national security customers as it looks for new sources of revenue. In November, the company teamed up with Palantir and AWS, the cloud computing division of Anthropic's major partner and investor, Amazon, to sell Anthropic's AI to defense customers. To be clear, Anthropic isn't the only top AI lab going after defense contracts. OpenAI is seeking to establish a closer relationship with the U.S. Defense Department, and Meta recently revealed that it's making its Llama models available to defense partners. Meanwhile, Google is refining a version of its Gemini AI capable of working within classified environments, and Cohere, which primarily builds AI products for businesses, is also collaborating with Palantir to deploy its AI models.

Fontaine's hiring comes as Anthropic beefs up its executive ranks. In May, the company named Netflix co-founder Reed Hastings to its board.

This article originally appeared on TechCrunch.
