Empire of AI: Inside the Reckless Race for Total Domination by Karen Hao - Precise, insightful, troubling


Irish Times, 12-07-2025
Empire of AI: Inside the Reckless Race for Total Domination
Author: Karen Hao
ISBN-13: 978-0241678923
Publisher: Allen Lane
Guideline Price: £25
Fewer than three years ago, almost nobody outside Silicon Valley, excepting perhaps science fiction enthusiasts, was talking about artificial intelligence or throwing the snappy short form, AI, into household conversations.
But then came ChatGPT, a chatbot quietly released for public online access by the San Francisco AI research company OpenAI in late November 2022.
ChatGPT – GPT stands for Generative Pre-trained Transformer, the underlying architecture for the chatbot – was to be made available as a 'low-key research preview', and employees took bets on how many might try it out in the coming days – maybe thousands? Possibly even tens of thousands?
They figured that, like OpenAI's previous release in 2021, the visual art-generating AI called Dall-E (a play on the name of the surrealist artist Dalí and the Pixar film about the eponymous robot, Wall-E), it would get a swift blast of attention, then interest would wane.
To prepare, OpenAI's infrastructure team configured the company's servers to handle 100,000 users at once, a capacity they assumed was, if anything, over-optimistic. Instead, the servers started to crash as waves of users spiked in country after country. People woke up, read about ChatGPT in their news feeds and rushed to try it out. Within just five days, ChatGPT had a million users; within two months, that number had swelled to 100 million.
No one in OpenAI 'truly fathomed the societal phase shift they were about to unleash', says Karen Hao in Empire of AI, her meticulously detailed profile of the company and its controversial leader Sam Altman. Hao, an accomplished journalist long on the AI beat, says that even now, company engineers are baffled at ChatGPT's snap ascendancy.
But why should it be so inexplicable? While Dall-E also amazed, it was fundamentally a tool for making art. Although it could construct bizarre and beautiful things (while exploiting the work of actual artists it was trained on), it wasn't chatty. ChatGPT, in thrilling contrast, hovered on the edge of embodying what people largely think a futuristic computer should be. You could converse with it, have it write an essay or code a piece of software, ask for advice, even joke with it, and it responded in an amiably conversational and, most of the time, usefully productive way.
Dall-E felt like a computer programme. ChatGPT teased the possibility of the kind of sentient, thoughtful artificial intelligence that we easily recognise, given that this presentation has been honed over decades of films, TV series and science fiction novels. We've been trained to expect it – and to create it. While ChatGPT is definitely not sentient, it astonished because it seemed as if it might be, and OpenAI has continued to ramp up the expectation that an AI model might soon be, if not fully sentient, then smarter than human. No surprise, really, that Hao writes that 'ChatGPT catapulted OpenAI from a hot start-up well known within the tech industry into a household name overnight'.
As big as that moment was, there's so much significant backstory for the 'hot start-up' that the tale of the game-changing release of ChatGPT doesn't materialise until a third of the way into Empire of AI.
With precision and insight, Hao documents the challenges and decisions faced and resolved – or often more crucially, not resolved – in the years before ChatGPT turned OpenAI into one of the most disturbingly powerful companies in the world. Then, she takes us up to the end of 2024, as valid concerns have further ballooned over OpenAI and Altman's bossy and ruthless championing of a costly, risky, environmentally devastating and billionaire-enriching version of AI.
In this convincing telling, AI is evolving into the design and control of an exclusive and dangerous club to which very few belong, but for which many, especially the world's poorest and most vulnerable, are materially exploited and economically capitalised. Hence, truly, the 'empire' of AI.
OpenAI, which leads in this space, was founded in 2015 by Altman – who then ran the storied Valley start-up incubator Y Combinator – and by Elon Musk. Both (apparently) shared a deep concern that AI could prove an existential risk, but recognised it could also be a transformative, world-changing breakthrough for humanity (take your pick), and therefore should be developed cautiously and ethically within the framework of a non-profit company with a strong board. (This split between 'doomers', who see AI as an existential risk, and 'boomers', who think it so beneficial we should let development rip, still divides the AI community.)
Now that the world knows Altman and Musk quite a bit better, their heart-warming regard for humanity seems improbable, and so it's turned out to be. Hao says that fissures appeared from the start between those in OpenAI prioritising safety and caution and those eager to develop and, eventually, commercialise products so powerful they perhaps heralded the pending arrival of AI that will outthink and outperform humans, called AGI or artificial general intelligence.
Altman increasingly chose the 'move fast, break things' approach even as he withdrew OpenAI from outside scrutiny. Interestingly, several of OpenAI's earliest and problematical top-level hires were former employees of Stripe, the fintech firm founded by Ireland's Collison brothers. Despite having such top industry people, OpenAI 'struggled to find a coherent strategy' and 'had no idea what it was doing'.
What it did decide to do was to travel down a particular AI development path that emphasised scale, using breathtakingly expensive chips and computing power and requiring huge water-cooled data centres. Costs soared, and OpenAI needed to raise billions in funding, a serious problem for a non-profit since investors want a commercial return.
Cue the restructuring of the company in 2019 into a bizarre, two-part vehicle with a largely meaningless 'capped-profit' arm and a non-profit side, and the need for a CEO – a job that went to Altman, not Musk.
Microsoft came on board as a major partner too; Bill Gates was wowed by OpenAI's latest AI model months before the release of ChatGPT.
As dramatic as the ChatGPT launch turned out to be, Hao makes the strategic choice to open the book with a zoom-in on OpenAI's other big drama, the sudden firing in November 2023 of Altman by its tiny board of directors. The board said Altman had lied to them at times and was untrustworthy. After a number of twists and turns, Altman returned, the board departed, and OpenAI has since become increasingly defined as a profit-focused behemoth that has stumbled into numerous controversies while tirelessly pushing a version of AI development that maintains its staggeringly pricey leadership position.
This, then, is Hao's framing device for looking at a company headed by an undoubtedly charismatic and gifted individual but one who has trailed controversy and whose documented non-transparency raises serious concerns. In tracing the company's early history, Hao sets out its many conflicts and problems, and Altman's willingness to drive development and growth in ways that veer far from its original ethical founding.
For example, at first OpenAI adhered to a principle of using only clean data for training its models – that is, vast data sets that exclude the viler pits of internet discussion, racism, conspiracy rabbit holes, pornography or child sexual abuse material (CSAM). But as OpenAI scaled up its models, it needed ever more data, any data, and rowed back, using what noted Irish-based cognitive scientist Abeba Birhane – referenced several times in the book – has exposed as 'data swamps'. That's even before you consider AI's inaccuracies, 'hallucinations' of made-up certainty, and data privacy and protection encroachments.
For a time, Hao veers away from a strict OpenAI pathway to draw on her strong past travel research and reporting to reveal how AI is built off appallingly cheap labour drawn from some of the poorest parts of the world, because AI isn't all digital wizardry. It's people being paid pennies in Kenya to identify objects in video or perform gruelling content moderation to remove CSAM. It's gigantic, water use-intensive data centres built in poorer communities despite years-long droughts, and environmentally damaging mining and construction. It's cultural loss, as data training sets valorise dominant languages and experiences.
In the face of these data colonialism realities, using an AI chatbot to answer a frivolous question – requiring 10 times the computing energy and resources of an old-style search – is increasingly grotesque.
Unfortunately, the book went to print before Hao could consider the groundbreaking impact of new Chinese AI DeepSeek. Its lower cost, and challenge to OpenAI and the massive scale mantra, has rocked AI, its largely Valley-based development and global politics. It would have been fascinating to get her take. But never mind. Hao knits all her threads here into a persuasive argument that AI doesn't have to be the Valley version of AI, and OpenAI's way shouldn't be the AI default, or perhaps, pursued at all.
The truth is, no one understands how AI works, or why, or what it might do, especially if it does reach AGI. Humanity has major decisions to make, and Empire of AI is convincing on why we should not allow companies such as OpenAI and Microsoft, or people such as Altman or Musk, to make those decisions for us, or without us.
Further reading
Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass
by Mary L Gray and Siddharth Suri (Harper Business, 2019). What looks like technology – AI, web services – often only works due to the task-based, uncredited labour of an invisible, poorly paid, easily exploited global 'ghost' workforce.
Supremacy: AI, ChatGPT and the Race that Changed the World
by Parmy Olson (Macmillan Business, 2024). A different angle on the startling debut of OpenAI's ChatGPT, with the focus here on the emerging race between Microsoft and Google to capitalise on generative AI and dominate the market.
The Singularity Is Near: When Humans Transcend Biology
by Ray Kurzweil (Duckworth reissue, 2024). The hugely influential 2005 classic that predicts a coming 'singularity' when humans will be powerfully enhanced by AI. Kurzweil also published a follow-up last year, The Singularity Is Nearer: When We Merge with AI.