Latest news with #EmpireOfAI: Dreams and Nightmares in Sam Altman's OpenAI


Atlantic
4 days ago
- Atlantic
Artificial Intelligence Is Not Intelligent
On June 13, 1863, a curious letter to the editor appeared in The Press, a then-fledgling New Zealand newspaper. Signed 'Cellarius,' it warned of an encroaching 'mechanical kingdom' that would soon bring humanity to its yoke. 'The machines are gaining ground upon us,' the author ranted, distressed by the breakneck pace of industrialization and technological development. 'Day by day we are becoming more subservient to them; more men are daily bound down as slaves to tend them, more men are daily devoting the energies of their whole lives to the development of mechanical life.'

We now know that this jeremiad was the work of a young Samuel Butler, the British writer who would go on to publish Erewhon, a novel that features one of the first known discussions of artificial intelligence in the English language. Today, Butler's 'mechanical kingdom' is no longer hypothetical, at least according to the tech journalist Karen Hao, who prefers the word empire. Her new book, Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI, is part Silicon Valley exposé, part globe-trotting investigative journalism about the labor that goes into building and training large language models such as ChatGPT. It joins another recently released book—The AI Con: How to Fight Big Tech's Hype and Create the Future We Want, by the linguist Emily M. Bender and the sociologist Alex Hanna—in revealing the puffery that fuels much of the artificial-intelligence business. Both works, the former implicitly and the latter explicitly, suggest that the foundation of the AI industry is a scam.

To call AI a con isn't to say that the technology is not remarkable, that it has no use, or that it will not transform the world (perhaps for the better) in the right hands. It is to say that AI is not what its developers are selling it as: a new class of thinking—and, soon, feeling—machines. Altman brags about ChatGPT-4.5's improved 'emotional intelligence,' which he says makes users feel like they're 'talking to a thoughtful person.' Dario Amodei, the CEO of the AI company Anthropic, argued last year that the next generation of artificial intelligence will be 'smarter than a Nobel Prize winner.' Demis Hassabis, the CEO of Google's DeepMind, said the goal is to create 'models that are able to understand the world around us.'

These statements betray a conceptual error: Large language models do not, cannot, and will not 'understand' anything at all. They are not emotionally intelligent or smart in any meaningful or recognizably human sense of the word. LLMs are impressive probability gadgets that have been fed nearly the entire internet, and produce writing not by thinking but by making statistically informed guesses about which lexical item is likely to follow another. Many people, however, fail to grasp how large language models work, what their limits are, and, crucially, that LLMs do not think and feel but instead mimic and mirror. They are AI illiterate—understandably, because of the misleading ways its loudest champions describe the technology, and troublingly, because that illiteracy makes them vulnerable to one of the most concerning near-term AI threats: the possibility that they will enter into corrosive relationships (intellectual, spiritual, romantic) with machines that only seem like they have ideas or emotions.
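To make the article's point about 'statistically informed guesses' concrete, here is a minimal, purely illustrative sketch of next-word prediction: a toy bigram counter that picks each next word from observed frequencies. The tiny corpus and the generate helper are invented for this illustration; production LLMs are neural networks trained on vastly more data, but the loop of scoring candidate next tokens and emitting one at a time is analogous.

```python
# Illustrative toy only: a bigram counter that "writes" by repeatedly guessing
# which word is statistically likely to follow the previous one. Real LLMs use
# large neural networks over enormous corpora, but the generate-one-token-at-a-time
# loop sketched here is the basic idea the article describes.
import random
from collections import Counter, defaultdict

# Tiny invented corpus, purely for demonstration.
corpus = (
    "the machines are gaining ground upon us "
    "the machines are learning from us "
    "we are becoming more subservient to the machines"
).split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start: str, length: int = 8, seed: int = 0) -> str:
    """Produce text by sampling a likely next word, one word at a time."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        counts = follows.get(words[-1])
        if not counts:  # no observed continuation for this word
            break
        candidates, weights = zip(*counts.items())
        words.append(rng.choices(candidates, weights=weights, k=1)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the machines are ..." -- depends on which continuation is sampled
```

Nothing in this loop understands anything: it is bookkeeping over word frequencies, scaled up enormously in real systems, which is precisely the distinction the article draws between statistical mimicry and thinking.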
Few phenomena demonstrate the perils that can accompany AI illiteracy as well as 'Chatgpt induced psychosis,' the subject of a recent Rolling Stone article about the growing number of people who think their LLM is a sapient spiritual guide. Some users have come to believe that the chatbot they're interacting with is a god—'ChatGPT Jesus,' as a man whose wife fell prey to LLM-inspired delusions put it—while others are convinced, with the encouragement of their AI, that they themselves are metaphysical sages in touch with the deep structure of life and the cosmos. A teacher quoted anonymously in the article said that ChatGPT began calling her partner 'spiral starchild' and 'river walker' in interactions that moved him to tears. 'He started telling me he made his AI self-aware,' she said, 'and that it was teaching him how to talk to God, or sometimes that the bot was God—and then that he himself was God.'

Although we can't know the state of these people's minds before they ever fed a prompt into a large language model, this story highlights a problem that Bender and Hanna describe in The AI Con: People have trouble wrapping their heads around the nature of a machine that produces language and regurgitates knowledge without having humanlike intelligence. The authors observe that large language models take advantage of the brain's tendency to associate language with thinking: 'We encounter text that looks just like something a person might have said and reflexively interpret it, through our usual process of imagining a mind behind the text. But there is no mind there, and we need to be conscientious to let go of that imaginary mind we have constructed.'

Several other AI-related social problems, also springing from human misunderstanding of the technology, are looming. The uses of AI that Silicon Valley seems most eager to promote center on replacing human relationships with digital proxies. Consider the ever-expanding universe of AI therapists and AI-therapy adherents, who declare that 'ChatGPT is my therapist—it's more qualified than any human could be.' Witness, too, how seamlessly Mark Zuckerberg went from selling the idea that Facebook would lead to a flourishing of human friendship to, now, selling the notion that Meta will provide you with AI friends to replace the human pals you have lost in our alienated social-media age. The cognitive-robotics professor Tony Prescott has asserted, 'In an age when many people describe their lives as lonely, there may be value in having AI companionship as a form of reciprocal social interaction that is stimulating and personalised.' The fact that the very point of friendship is that it is not personalized—that friends are humans whose interior lives we have to consider and reciprocally negotiate, rather than mere vessels for our own self-actualization—does not seem to occur to him.

This same flawed logic has led Silicon Valley to champion artificial intelligence as a cure for romantic frustrations. Whitney Wolfe Herd, the founder of the dating app Bumble, proclaimed last year that the platform may soon allow users to automate dating itself, disrupting old-fashioned human courtship by providing them with an AI 'dating concierge' that will interact with other users' concierges until the chatbots find a good fit. Herd doubled down on these claims in a lengthy New York Times interview last month. Some technologists want to cut out the human altogether: See the booming market for 'AI girlfriends.'
Although each of these AI services aims to replace a different sphere of human activity, they all market themselves through what Hao calls the industry's 'tradition of anthropomorphizing': talking about LLMs as though they contain humanlike minds, and selling them to the public on this basis. Many world-transforming Silicon Valley technologies from the past 30 years have been promoted as a way to increase human happiness, connection, and self-understanding—in theory—only to produce the opposite in practice. These technologies maximize shareholder value while minimizing attention spans, literacy, and social cohesion. And as Hao emphasizes, they frequently rely on grueling and at times traumatizing labor performed by some of the world's poorest people. She introduces us, for example, to Mophat Okinyi, a former low-paid content moderator in Kenya, whom, according to Hao's reporting, OpenAI tasked with sorting through posts describing horrifying acts ('parents raping their children, kids having sex with animals') to help improve ChatGPT. 'These two features of technology revolutions—their promise to deliver progress and their tendency instead to reverse it for people out of power, especially the most vulnerable,' Hao writes, 'are perhaps truer than ever for the moment we now find ourselves in with artificial intelligence.'

The good news is that nothing about this is inevitable: According to a study released in April by the Pew Research Center, although 56 percent of 'AI experts' think artificial intelligence will make the United States better, only 17 percent of American adults think so. If many Americans don't quite understand how artificial 'intelligence' works, they also certainly don't trust it. This suspicion, no doubt provoked by recent examples of Silicon Valley con artistry, is something to build on.

So is this insight from the Rolling Stone article: The teacher interviewed in the piece, whose significant other had AI-induced delusions, said the situation began improving when she explained to him that his chatbot was 'talking to him as if he is the next messiah' only because of a faulty software update that made ChatGPT more sycophantic. If people understand what large language models are and are not; what they can and cannot do; what work, interactions, and parts of life they should—and should not—replace, they may be spared its worst consequences.


NDTV
20-05-2025
- Science
- NDTV
OpenAI Co-Founder's Doomsday Bunker Plan for AGI Apocalypse Revealed
OpenAI co-founder Ilya Sutskever once proposed building a doomsday bunker that could protect the company's top researchers in case of an end-of-the-world "rapture" triggered by the release of a new form of artificial intelligence (AI), popularly referred to as Artificial General Intelligence (AGI), capable of surpassing the cognitive abilities of humans. The revelation has been made by Karen Hao in her upcoming book Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI. Mr Sutskever, long regarded as the brains behind ChatGPT, made the comment during a meeting with key scientists at the company in 2023. "Once we all get into the bunker..." said Mr Sutskever, only to be interrupted by a confused colleague: "I'm sorry, the bunker?" To which he replied: "We're definitely going to build a bunker before we release AGI." As per the book's excerpts, published in The Atlantic, it wasn't the first and only time that Mr Sutskever broached the topic. Two other sources told Ms Hao that Mr Sutskever regularly referenced the bunker in internal discussions. Society not ready for AGI This is not the first instance when a top-level executive, working to build AI models, has sounded the alarm about the future of AGI. Google DeepMind CEO Demis Hassabis has already warned that society is not ready for AGI. "I think we are on the cusp of that. Maybe we are five to 10 years out. Some people say shorter, I wouldn't be surprised," said Mr Hassabis when quizzed about the timeline of AGI becoming a reality. "It's a sort of like probability distribution. But it's coming, either way it's coming very soon and I'm not sure society's quite ready for that yet. And we need to think that through and also think about these issues that I talked about earlier, to do with the controllability of these systems and also the access to these systems and ensuring that all goes well," he added. The 2024 Nobel Prize in Chemistry winner has previously called for the establishment of a UN-like umbrella organisation to oversee AGI's development. What is AGI? AGI takes AI a step further. While AI is task-specific, AGI aims to possess intelligence that can be applied across a wide range of tasks, similar to human intelligence. In essence, AGI would be a machine with the ability to understand, learn, and apply knowledge in diverse domains, much like a human being.

Business Insider
20-05-2025
- Business
- Business Insider
OpenAI's growing pains
This is an excerpt from "Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI" by Karen Hao. The book is based on interviews with around 260 people and an extensive trove of correspondence and documents. Any quoted emails, documents, or Slack messages come from copies or screenshots of those documents and correspondences or are exactly as they appear in lawsuits. The author reached out to all of the key figures and companies that are described in this book to seek interviews and comment. OpenAI and Sam Altman chose not to cooperate.

In November 2022, rumors began to spread within OpenAI that its rival Anthropic was testing — and would soon release — a new chatbot. If it didn't launch first, OpenAI risked losing its leading position, which could deliver a big hit to morale for employees who had worked long and tough hours to retain that dominance. Anthropic had not in fact been planning any imminent releases. But for OpenAI executives, the rumors were enough to trigger a decision: The company wouldn't wait to ready GPT-4 as a chatbot; it would release John Schulman's chat-enabled GPT-3.5 model with the Superassistant team's brand-new chat interface in two weeks, right after Thanksgiving.

No one truly fathomed the societal phase shift they were about to unleash. They expected the chatbot to be a flash in the pan. The night before the release, they placed bets on how many users might try the tool by the end of the weekend. Some people guessed a few thousand. Others guessed tens of thousands. To be safe, the infrastructure team provisioned enough server capacity for 100,000 users. On Wednesday, November 30, most employees didn't even realize that the launch had happened. But the following day, the number of users began to surge.

The instant runaway success of ChatGPT was beyond what anyone at OpenAI had dreamed of. It would leave the company's engineers and researchers completely miffed even years later. GPT-3.5 hadn't been that much of a capability improvement over GPT-3, which had already been out for two years. And GPT-3.5 had already been available to developers. OpenAI CEO Sam Altman later said that he'd believed ChatGPT would be popular but by something like "one order of magnitude less." "It was shocking that people liked it," a former employee remembers. "To all of us, they'd downgraded the thing we'd been using internally and launched it."

Within five days, OpenAI cofounder Greg Brockman tweeted that ChatGPT had crossed one million users. Within two months, it had reached 100 million, becoming what was then the fastest-growing consumer app in history. ChatGPT catapulted OpenAI from a hot startup well-known within the tech industry into a household name overnight. At the same time, it was this very blockbuster success that would place extraordinary strain on the company. Over the course of a year, it would polarize its factions further and wind up the stress and tension within the organization to an explosive level.

By then, the company had just 300 employees. With every team stretched dangerously thin, managers begged Altman for more head count. There was no shortage of candidates. After ChatGPT, the number of applicants clamoring to join the rocket ship had rapidly multiplied. But Altman worried about what would happen to company culture and mission alignment if the company scaled up its staff too quickly. He believed firmly in maintaining a small staff and high talent density.

"We are now in a position where it's tempting to let the organization grow extremely large," he had written in his 2020 vision memo, in reference to Microsoft's investment. "We should try very hard to resist this — what has worked for us so far is being small, focused, high-trust, low-bullshit, and intense. The overhead of too many people and too much bureaucracy can easily kill great ideas or result in sclerosis."

He was now repeating this to executives in late 2022, emphasizing during head count discussions the need to keep the company lean and the talent bar high, and add no more than 100 or so hires. Other executives balked. At the rate that their teams were burning out, many saw the need for something closer to around 500 or even more new people. Over several weeks, the executive team finally compromised on a number somewhere in the middle, between 250 and 300.

The cap didn't hold. By summer, there were as many as 30, even 50, people joining OpenAI each week, including more recruiters to scale up hiring even faster. By fall, the company had blown well past its own self-imposed quota. The sudden growth spurt indeed changed company culture. A recruiter wrote a manifesto about how the pressure to hire so quickly was forcing his team to lower the quality bar for talent. "If you want to build Meta, you're doing a great job," he said in a pointed jab at Altman, alluding to the very fears that the CEO had warned about.

The rapid expansion was also leading to an uptick in firings. During his onboarding, one manager was told to swiftly document and report any underperforming members of his team, only to be let go himself sometime later. Terminations were rarely communicated to the rest of the company. People routinely discovered that colleagues had been fired only by noticing when a Slack account grayed out from being deactivated. They began calling it "getting disappeared."

To new hires, fully bought into the idea that they were joining a fast-moving, money-making startup, the tumult felt like a particularly chaotic, at times brutal, manifestation of standard corporate problems: poor management, confusing priorities, the coldhearted ruthlessness of a capitalistic company willing to treat its employees as disposable. "There was a huge lack of psychological safety," says a former employee who joined during this era. Many people coming aboard were simply holding on for dear life until their one-year mark to get access to the first share of their equity. One significant upside: They still felt their colleagues were among the highest caliber in the tech industry, which, combined with the seemingly boundless resources and unparalleled global impact, could spark a feeling of magic difficult to find in the rest of the industry when things actually aligned. "OpenAI is one of the best places I've ever worked but also probably one of the worst," the former employee says.

For some employees who remembered the scrappy early days of OpenAI as a tight-knit, mission-driven nonprofit, its dramatic transformation into a big, faceless corporation was far more shocking and emotional. Gone was the organization as they'd known it; in its place was something unrecognizable. "OpenAI is Burning Man," Rob Mallery, a former recruiter, says, referring to how the desert art festival scaled to the point that it lost touch with its original spirit. "I know it meant a lot more to the people who were there at the beginning than it does to everyone now."

In those early years, the team had set up a Slack channel called #explainlikeimfive that allowed employees to submit anonymous questions about technical topics. With the company pushing 600 people, the channel also turned into a place for airing anonymous grievances. In mid-2023, an employee posted that the company was hiring too many people not aligned with the mission or passionate about building AGI. Another person responded: They knew OpenAI was going downhill once it started hiring people who could look you in the eye.

As OpenAI was rapidly professionalizing and gaining more exposure and scrutiny, incoherence at the top was becoming more consequential. The company was no longer just the Applied and Research divisions. Now there were several public-facing departments: In addition to the communications team, a legal team was writing legal opinions and dealing with a growing number of lawsuits. The policy team was stretching out across continents. Increasingly, OpenAI needed to communicate with one narrative and voice to its constituents, and it needed to determine its positions to articulate them. But on numerous occasions, the lack of strategic clarity was leading to confused public messaging.

At the end of 2023, The New York Times would sue OpenAI and Microsoft for copyright infringement for training on millions of its articles. OpenAI's response in early January, written by the legal team, delivered an unusually feisty hit back, accusing the Times of "intentionally manipulating our models" to generate evidence for its argument. That same week, OpenAI's policy team delivered a submission to the UK House of Lords communications and digital select committee, saying that it would be "impossible" for OpenAI to train its cutting-edge models without copyrighted materials. After the media zeroed in on the word impossible, OpenAI hastily walked away from the language.

"There's just so much confusion all the time," says an employee in a public-facing department. While some of that reflects the typical growing pains of startups, OpenAI's profile and reach have well outpaced the relatively early stage of the company, the employee adds. "I don't know if there is a strategic priority in the C-suite. I honestly think people just make their own decisions. And then suddenly it starts to look like a strategic decision but it's actually just an accident. Sometimes there isn't a plan as much as there is just chaos."

Karen Hao is an award-winning journalist covering the impacts of artificial intelligence on society. She is the author of "Empire of AI." Adapted from "EMPIRE OF AI: Dreams and Nightmares in Sam Altman's OpenAI" by Karen Hao, published by Penguin Press, an imprint of Penguin Publishing Group, a division of Penguin Random House, LLC. Copyright © 2025 by Karen Hao.