
Karen Hao on how the AI boom became a new imperial frontier

Reuters

12-07-2025

When journalist Karen Hao first profiled OpenAI in 2020, it was a little-known startup. Five years and one very popular chatbot later, the company has transformed into a dominant force in the fast-expanding AI sector — one Hao likens to a 'modern-day colonial world order' in her new book, 'Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI.' Hao tells Reuters this isn't a comparison she made lightly. Drawing on years of reporting in Silicon Valley and further afield to countries where generative AI's impact is perhaps most acutely felt — from Kenya, where OpenAI reportedly outsourced workers to annotate data for as little as $2 per hour, to Chile, where AI data centers threaten the country's precious water resources — she makes the case that, like empires of old, AI firms are building their wealth off of resource extraction and labor exploitation. This critique stands in stark contrast to the vision promoted by industry leaders like Altman (who declined to participate in Hao's book), who portray AI as a tool for human advancement — from boosting productivity to improving healthcare. Empires, Hao contends, cloaked their conquests in the language of progress too. The following conversation has been edited for length and clarity.

Reuters: Can you tell us how you came to the AI beat?

Karen Hao: I studied mechanical engineering at MIT, and I originally thought I was going to work in the tech industry. But I quickly realized once I went to Silicon Valley that it was not necessarily the place I wanted to stay, because the incentive structures made it such that it was really hard to develop technology in the public interest. Ultimately, the things I was interested in — like building technology that facilitates a more sustainable and equitable future — were not profitable endeavors. So I went into journalism to cover the issues that I cared about and ultimately started covering tech and AI.

Reuters: That work has culminated in your new book 'Empire of AI.' What story were you hoping to tell?

Hao: Once I started covering AI, I realized that it was a microcosm of all of the things that I wanted to explore: how technology affects society, how people interface with it, the incentives (and) misaligned incentives within Silicon Valley. I was very lucky in getting to observe AI and also OpenAI before everyone had their ChatGPT moment, and I wanted to add more context to that moment that everyone experienced: to show that this technology comes from a specific place and a specific group of people, and to help readers understand its trajectory and how it's going to impact us in the future. And, in fact, the human choices that have shaped ChatGPT and generative AI today (are) something that we should be alarmed by, and we collectively have a role to play in starting to shape this technology.

Reuters: You've mentioned drawing inspiration from the Netflix drama 'The Crown' for the structure of your book. How did it influence your storytelling approach?

Hao: The title 'Empire of AI' refers to OpenAI and this argument that (AI represents) a new form of empire, and the reason I make this argument is because there are many features of empires of old that empires of AI now check off. They lay claim to resources that are not their own, including the data of millions and billions of people who put their data online without actually understanding that it could be taken to train AI models. They exploit a lot of labor around the world — meaning they contract workers who they pay very little to do data annotation and content moderation for these AI models. And they do it under the civilizing mission, this idea that they're bringing benefit to all of humanity. It took me a really long time to figure out how to structure a book that goes back and forth between all these different communities and characters and contexts. I ended up thinking a lot about 'The Crown' because every episode, no matter who it's about, is ultimately profiling this global system of power.

Reuters: Does that make CEO Sam Altman the monarch in your story?

Hao: People will either see (Altman) as the reason why OpenAI is so successful or as the massive threat to the current paradigm of AI development. But in the same way that when Queen Elizabeth II passed away people suddenly were like, 'Oh, right, this is still just the royal family and now we have another monarch,' it's not actually about the individual. It's about the fact that this global hierarchy, this vestige of an old empire, is still in place. Sam Altman is like Queen Elizabeth (in the sense that) whether he's good or bad, or has this personality or that personality, is not as important as the fact that he sits at the top of this hierarchy — even if he were swapped out, he would be swapped out for someone who still inherits this global power hierarchy.

Reuters: In the book, you depict OpenAI's transition from a culture of transparency to secrecy. Was there a particular moment that symbolized that shift?

Hao: I was the first journalist to profile OpenAI and embedded within the company in 2019, and the reason why I wanted to profile them at the time was because there was a series of moments in 2018 and 2019 that signaled that some dramatic shift was underway at the organization. OpenAI was co-founded as a nonprofit at the end of 2015 by Elon Musk and Sam Altman and a cast of other people. But in 2018, Musk leaves; OpenAI starts withholding some research and announces to the world that it's withholding this research for the benefit of humanity. It restructures and nests a for-profit within the nonprofit, and Sam Altman becomes CEO. Those were the four things that made me wonder what was going on at this organization, which had used its nonprofit status to differentiate itself from the rest of the crop of companies within Silicon Valley working on AI research. Right before I got to the offices, they had another announcement that solidified that some transformation was afoot: Microsoft was going to partner with OpenAI and give the company a billion dollars. All of those things culminated in me then realizing that what they professed publicly was actually not what was happening.

Reuters: You emphasize the human stories behind AI development. Can you share an example that highlights the real-world consequences of its rise?

Hao: One of the things that people don't really realize is that AI is not magic; it actually requires an extremely large amount of human labor and human judgment to create these technologies. These AI companies will go to Global South countries to contract workers for very low wages, where they will either annotate data that needs to go into training these models, or perform content moderation, or converse with the models and then upvote and downvote their answers to slowly teach them to say more helpful things. I went to Kenya to speak with workers that OpenAI had contracted to build a content moderation filter for their models. These workers were completely traumatized and ended up with PTSD for years after this project, and it didn't just affect them as individuals; it affected their communities and the people that depended on them. (Editorial note: OpenAI declined to comment, referring Reuters to an April 4 post by Altman on X.)

Reuters: Your reporting has highlighted the environmental impact of AI. How do you see the industry's growth balancing with sustainability efforts?

Hao: The size of the data centers and supercomputers that we're talking about has become unfathomable to the average person. There are data centers being built that will be 1,000 to 2,000 megawatts, which is around one-and-a-half to two-and-a-half times the energy demand of San Francisco. OpenAI has even drafted plans for supercomputers that would be 5,000 megawatts, which would be the average demand of the entire city of New York. Based on the current pace of computational infrastructure expansion, the amount of energy that we will need to add to the global grid by the end of this decade will be like slapping two to six new Californias onto it. There's also water: these data centers are often cooled with fresh water resources.

Reuters: How has your perspective on AI changed, if at all?

Hao: Writing this book made me even more concerned, because I realized the extent to which these companies have a controlling influence over everything now. Before, I was worried about the labor exploitation, the environmental impacts, the impact on the job market. But through the reporting of the book, I realized the horizontal concern that cuts across all of this: if we return to an age of empire, we no longer have democracy. Because in a world where people no longer have agency and ownership over their data, their land, their energy, their water, they no longer feel like they can self-determine their future.

Karen Hao's Empire of AI brings nuance and much-needed scepticism to the study of AI

Indian Express

12-07-2025

Most conversations that we have around Artificial Intelligence (AI) today share one commonality: the technology's society-altering capacity, its ability to propel us towards the next breakthrough, a better world, a future we rarely imagined would be possible. The founding mission of OpenAI, the company that made AI a household name through ChatGPT in 2022, is 'to ensure that artificial general intelligence — AI systems that are generally smarter than humans — benefits all of humanity'. Behind this seemingly optimistic idea, tech reporter Karen Hao argues, is the stench of empires of old — a civilising mission that promises modernity and progress while accumulating power and money through the exploitation of labour and resources. Hao has spent seven years covering AI — at MIT Technology Review, The Wall Street Journal and The Atlantic. She was the first to profile OpenAI and has extensively documented the AI supply chain — taking the conversation beyond the promise of Silicon Valley's innovation through reportage on the people behind the black boxes that are AI models. And it is these stories that find centre-stage in 'Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI', her debut book. It is a company book and, like all good business books, it gives an intimate picture of the rise of an idea and the people, strategy and money behind it. But the book stands out because it provides one way of framing the dizzying AI boom and the conversation around us. In doing so, it joins the list of non-fiction on AI that brings nuance and much-needed scepticism to the subject while remaining acutely aware of its potential. In 2024, Arvind Narayanan and Sayash Kapoor of Princeton University's Computer Science department wrote 'AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference'. The book lays out the basics of AI research, helping distinguish hype from reality.
The same year, tech journalist Parmy Olson wrote 'Supremacy: AI, ChatGPT, and the Race that Will Change the World', about the unprecedented monopoly that OpenAI and Google's AI research wing DeepMind currently have in the world. Today's dominant technique for building AI is scaling: training ever-larger models on ever more data. This approach needs a lot of computing capacity. Its physical manifestation is the massive data centres that are mushrooming everywhere, and these data centres, in turn, consume a lot of energy. OpenAI cracked this technique and doubled down on it: more data, more high-functioning and expensive Graphics Processing Units (GPUs) to make the computation happen, and more data centres to house them. This more-is-more approach, Hao writes, has 'choked' alternative forms of AI research, a field many have been trying to crack and expand since the 1950s. 'There was research before that explored minimising data for training models while achieving similar gains. Then Large Language Models and ChatGPT entered the picture. Research suddenly stopped. Two things happened: money flowed into transformers (a type of highly effective neural network) and generative AI, diverting funding from other explorations,' Hao says. With the 'enormous externalities' of environmental costs, data privacy issues and labour exploitation in AI today, it is important to 'redirect some funds to explore new scientific frontiers that offer the same benefits of advanced AI without extraordinary costs,' Hao argues. But that may be easier said than done. In her book, Hao traces how researchers who once worked outside major AI companies are now financially affiliated with them. Funding, too, comes primarily from tech companies or from academic labs associated with them. 'There's a misconception among the public and policymakers that AI research remains guided by a pure scientific drive,' Hao says, adding that 'the foundations of AI knowledge have been overtaken by profit motives.'

Empire of AI: Inside the Reckless Race for Total Domination by Karen Hao - Precise, insightful, troubling

Irish Times

12-07-2025

Empire of AI: Inside the Reckless Race for Total Domination
Author: Karen Hao
ISBN-13: 978-0241678923
Publisher: Allen Lane
Guideline Price: £25

Fewer than three years ago, almost nobody outside of Silicon Valley, excepting perhaps science fiction enthusiasts, was talking about artificial intelligence or throwing the snappy short form, AI, into household conversations. But then came ChatGPT, a chatbot quietly released for public online access by the San Francisco AI research company OpenAI in late November 2022. ChatGPT – GPT stands for Generative Pre-trained Transformer, the underlying architecture for the chatbot – was to be made available as a 'low-key research preview', and employees took bets on how many might try it out in the coming days – maybe thousands? Possibly even tens of thousands? They figured that, like OpenAI's previous release in 2021, the visual art-generating AI called Dall-E (a play on the names of the surrealist artist Dali and the Pixar film about the eponymous robot, Wall-E), it would get a swift blast of attention, then interest would wane. To prepare, OpenAI's infrastructure team decided that configuring the company servers to handle 100,000 users at once would be more than sufficient. Instead, the servers started to crash as waves of users spiked in country after country. People woke up, read about ChatGPT in their news feeds and rushed to try it out. Within just five days, ChatGPT had a million users; within two months, that number had swelled to 100 million. No one in OpenAI 'truly fathomed the societal phase shift they were about to unleash', says Karen Hao in Empire of AI, her meticulously detailed profile of the company and its controversial leader Sam Altman. Hao, an accomplished journalist long on the AI beat, says that even now, company engineers are baffled at ChatGPT's snap ascendancy.
But why should it be so inexplicable? While Dall-E also amazed, it was fundamentally a tool for making art. Although it could construct bizarre and beautiful things (while exploiting the work of actual artists it was trained on), it wasn't chatty. ChatGPT, in thrilling contrast, hovered on the edge of embodying what people largely think a futuristic computer should be. You could converse with it, have it write an essay or code a piece of software, ask for advice, even joke with it, and it responded in an amiably conversational and, most of the time, usefully productive way. Dall-E felt like a computer programme. ChatGPT teased the possibility of the kind of sentient, thoughtful artificial intelligence that we easily recognise, given that this presentation has been honed over decades of films, TV series and science fiction novels. We've been trained to expect it – and to create it. While ChatGPT is definitely not sentient, it astonished because it seemed as if it might be, and OpenAI has continued to ramp up the expectation that an AI model might soon be, if not fully sentient, then smarter than human. No surprise, really, that Hao writes that 'ChatGPT catapulted OpenAI from a hot start-up well known within the tech industry into a household name overnight'. As big as that moment was, there's so much significant backstory for the 'hot start-up' that the tale of the game-changing release of ChatGPT doesn't materialise until a third of the way into Empire of AI. With precision and insight, Hao documents the challenges and decisions faced and resolved – or often more crucially, not resolved – in the years before ChatGPT turned OpenAI into one of the most disturbingly powerful companies in the world.
Then, she takes us up to the end of 2024, as valid concerns have further ballooned over OpenAI and Altman's bossy and ruthless championing of a costly, risky, environmentally devastating and billionaire-enriching version of AI. In this convincing telling, AI is evolving into the design and control of an exclusive and dangerous club to which very few belong, but for which many, especially the world's poorest and most vulnerable, are materially exploited and economically capitalised. Hence, truly, the 'empire' of AI. OpenAI, which leads in this space, was founded in 2015 by Altman – who then ran the storied Valley start-up incubator Y Combinator – and by Elon Musk. Both (apparently) shared a deep concern that AI could prove an existential risk, but recognised it could also be a transformative, world-changing breakthrough for humanity (take your pick), and therefore should be developed cautiously and ethically within the framework of a non-profit company with a strong board. (This split between 'doomers', who see AI as an existential risk, and 'boomers', who think it so beneficial we should let development rip, still divides the AI community.) Now that the world knows Altman and Musk quite a bit better, their heart-warming regard for humanity seems improbable, and so it's turned out to be. Hao says that fissures appeared from the start between those in OpenAI prioritising safety and caution and those eager to develop and, eventually, commercialise products so powerful they perhaps heralded the pending arrival of AI that will outthink and outperform humans, called AGI or artificial general intelligence. Altman increasingly chose the 'move fast, break things' approach even as he withdrew OpenAI from outside scrutiny. Interestingly, several of OpenAI's earliest and problematical top-level hires were former employees of Stripe, the fintech firm founded by Ireland's Collison brothers.
Despite having such top industry people, OpenAI 'struggled to find a coherent strategy' and 'had no idea what it was doing'. What it did decide to do was to travel down a particular AI development path that emphasised scale, using breathtakingly expensive chips and computing power and requiring huge water-cooled data centres. Costs soared, and OpenAI needed to raise billions in funding, a serious problem for a non-profit since investors want a commercial return. Cue the restructuring of the company in 2019 into a bizarre, two-part vehicle with a largely meaningless 'capped profit' and a non-profit side, and the need for a CEO, a job that went to Altman and not Musk. Microsoft came on board as a major partner too; Bill Gates was wowed by OpenAI's latest AI model months before the release of ChatGPT. As dramatic as the ChatGPT launch turned out to be, Hao makes the strategic choice to open the book with a zoom-in on OpenAI's other big drama, the sudden firing in November 2023 of Altman by its tiny board of directors. The board said Altman had lied to them at times and was untrustworthy. After a number of twists and turns, Altman returned, the board departed, and OpenAI has since become increasingly defined as a profit-focused behemoth that has stumbled into numerous controversies while tirelessly pushing a version of AI development that maintains its staggeringly pricey leadership position. This, then, is Hao's framing device for looking at a company headed by an undoubtedly charismatic and gifted individual but one who has trailed controversy and whose documented non-transparency raises serious concerns. In tracing the company's early history, Hao sets out its many conflicts and problems, and Altman's willingness to drive development and growth in ways that veer far from its original ethical founding.
For example, at first OpenAI adhered to a principle of using only clean data for training its models – that is, vast data sets that exclude the viler pits of internet discussion, racism, conspiracy rabbit holes, pornography or child sexual abuse material (CSAM). But as OpenAI scaled up its models, it needed ever more data, any data, and rowed back, using what noted Irish-based cognitive scientist Abeba Birhane – referenced several times in the book – has exposed as 'data swamps'. That's even before you consider AI's inaccuracies, 'hallucinations' of made-up certainty, and data privacy and protection encroachments. For a time, Hao veers away from a strict OpenAI pathway to draw on her strong past travel research and reporting to reveal how AI is built off appallingly cheap labour drawn from some of the poorest parts of the world, because AI isn't all digital wizardry. It's people being paid pennies in Kenya to identify objects in video or perform gruelling content moderation to remove CSAM. It's gigantic, water-intensive data centres built in poorer communities despite years-long droughts, and environmentally damaging mining and construction. It's cultural loss, as data training sets valorise dominant languages and experiences. In the face of these data colonialism realities, using an AI chatbot to answer a frivolous question – requiring 10 times the computing energy and resources of an old-style search – is increasingly grotesque. Unfortunately, the book went to print before Hao could consider the groundbreaking impact of the new Chinese AI DeepSeek. Its lower cost, and its challenge to OpenAI and the massive-scale mantra, has rocked AI, its largely Valley-based development and global politics. It would have been fascinating to get her take. But never mind. Hao knits all her threads here into a persuasive argument that AI doesn't have to be the Valley version of AI, and OpenAI's way shouldn't be the AI default, or perhaps, pursued at all.
The truth is, no one understands how AI works, or why, or what it might do, especially if it does reach AGI. Humanity has major decisions to make, and Empire of AI is convincing on why we should not allow companies such as OpenAI and Microsoft, or people such as Altman or Musk, to make those decisions for us, or without us.

Further reading

Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass by Mary L Gray and Siddharth Suri (Harper Business, 2019). What looks like technology – AI, web services – often only works due to the task-based, uncredited labour of an invisible, poorly paid, easily exploited global 'ghost' workforce.

Supremacy: AI, ChatGPT and the Race that Will Change the World by Parmy Olson (Macmillan Business, 2024). A different angle on the startling debut of OpenAI's ChatGPT, with the focus here on the emerging race between Microsoft and Google to capitalise on generative AI and dominate the market.

The Singularity Is Near: When Humans Transcend Biology by Ray Kurzweil (Duckworth reissue, 2024). The hugely influential 2005 classic that predicts a coming 'singularity' when humans will be powerfully enhanced by AI. Kurzweil also published a follow-up last year, The Singularity Is Nearer: When We Merge with AI.

Karen explores AI's role in modern imperialism

Observer

04-07-2025

When journalist Karen Hao first profiled OpenAI in 2020, it was a little-known startup. Five years and one very popular chatbot later, the company has transformed into a dominant force in the fast-expanding AI sector — one Hao likens to a 'modern-day colonial world order' in her new book, 'Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI'. Hao tells Reuters that this isn't a comparison she made lightly. Drawing on years of reporting in Silicon Valley and further afield to countries where generative AI's impact is perhaps most acutely felt, she makes the case that, like empires of old, AI firms are building their wealth off of resource extraction and labour exploitation. This critique stands in stark contrast to the vision promoted by industry leaders like Altman, who portray AI as a tool for human advancement — from boosting productivity to improving healthcare. Empires, Hao contends, cloaked their conquests in the language of progress too.

Your work has culminated in your new book 'Empire of AI'. What story were you hoping to tell?

Once I started covering AI, I realised that it was a microcosm of all of the things that I wanted to explore: how technology affects society, how people interface with it, the incentives (and) misaligned incentives within Silicon Valley. I was very lucky in getting to observe AI and also OpenAI before everyone had their ChatGPT moment, and I wanted to add more context to that moment that everyone experienced: to show that this technology comes from a specific place and a specific group of people, and to help readers understand its trajectory and how it's going to impact us in the future.

How did the Netflix drama 'The Crown' influence your storytelling approach?

The title 'Empire of AI' refers to OpenAI and this argument that (AI represents) a new form of empire, and the reason I make this argument is because there are many features of empires of old that empires of AI now check off. They lay claim to resources that are not their own, including the data of millions and billions of people who put their data online without actually understanding that it could be taken to train AI models. They exploit a lot of labour around the world — meaning they contract workers who they pay very little to do data annotation and content moderation for these AI models. And they do it under the civilising mission, this idea that they're bringing benefit to all of humanity. It took me a really long time to figure out how to structure a book that goes back and forth between all these different communities and characters and contexts. I ended up thinking a lot about 'The Crown' because every episode, no matter who it's about, is ultimately profiling this global system of power.

Can you share an example that highlights the real-world consequences of its rise?

One of the things that people don't really realise is that AI is not magic; it actually requires an extremely large amount of human labour and human judgment to create these technologies. These AI companies will go to Global South countries to contract workers for very low wages, where they will either annotate data that needs to go into training these models, or perform content moderation, or converse with the models and then upvote and downvote their answers to slowly teach them to say more helpful things.

How do you see the industry's growth balancing with sustainability efforts?

The size of the data centres and supercomputers that we're talking about has become unfathomable to the average person. There are data centres being built that will be 1,000 to 2,000 megawatts, which is around one-and-a-half to two-and-a-half times the energy demand of San Francisco. OpenAI has even drafted plans for supercomputers that would be 5,000 megawatts, which would be the average demand of the entire city of New York.

How has your perspective on AI changed, if at all?

Writing this book made me even more concerned, because I realised the extent to which these companies have a controlling influence over everything now. Before, I was worried about the labour exploitation, the environmental impacts, the impact on the job market. But through the reporting of the book, I realised the horizontal concern that cuts across all of this: if we return to an age of empire, we no longer have democracy. Because in a world where people no longer have agency and ownership over their data, their land, their energy, their water, they no longer feel like they can self-determine their future. — Reuters
