Latest news with #AISnakeOil:


Indian Express
12-07-2025
- Business
Karen Hao's Empire of AI brings nuance and much-needed scepticism to the study of AI
Most conversations that we have around Artificial Intelligence (AI) today share one commonality: the technology's society-altering capacity, its ability to propel us towards the next breakthrough, a better world, a future we rarely imagined would be possible. The founding mission of OpenAI, the company that made AI a household name through ChatGPT in 2022, is 'to ensure that artificial general intelligence — AI systems that are generally smarter than humans — benefits all of humanity'. Behind this seemingly optimistic idea, tech reporter Karen Hao argues, is the stench of empires of old — a civilising mission that promises modernity and progress while accumulating power and money through the exploitation of labour and resources.

Hao has spent seven years covering AI — at the MIT Technology Review, The Washington Post and The Atlantic. She was the first to profile OpenAI and to extensively document the AI supply chain — taking the conversation beyond the promise of Silicon Valley's innovation through reportage on the people behind the black boxes that are AI models. It is these stories that find centre-stage in 'Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI', her debut book. It is a company book and, like all good business books, it gives an intimate picture of the rise of an idea and of the people, strategy and money behind it. But the book stands out because it offers one way of framing the dizzying AI boom and the conversation around us.

In doing so, it joins a list of non-fiction on AI that brings nuance and much-needed scepticism to the subject while remaining acutely aware of its potential. In 2024, Arvind Narayanan and Sayash Kapoor of Princeton University's Computer Science department wrote 'AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference', a book that lays out the basics of AI research and helps distinguish hype from reality. The same year, tech journalist Parmy Olson wrote 'Supremacy: AI, ChatGPT, and the Race that Will Change the World', about the unprecedented dominance that OpenAI and Google's AI research wing DeepMind currently enjoy.

Central to Hao's critique is the industry's dominant approach of scaling: training ever-larger models on ever more data. This approach needs enormous computing capacity, and its physical manifestation is the massive data centres mushrooming everywhere, which in turn consume vast amounts of energy. OpenAI cracked this technique and doubled down on it: more data, more high-performing and expensive Graphics Processing Units (GPUs) to run the computation, and more data centres to house them.

This more-is-more approach, Hao writes, has 'choked' alternative forms of AI research, a field that researchers have been trying to crack open and expand since the 1950s. 'There was research before that explored minimising data for training models while achieving similar gains. Then Large Language Models and ChatGPT entered the picture. Research suddenly stopped. Two things happened: money flowed into transformers (a type of highly effective neural network) and generative AI, diverting funding from other explorations,' Hao says.

Given the 'enormous externalities' of environmental costs, data privacy issues and labour exploitation in AI today, it is important to 'redirect some funds to explore new scientific frontiers that offer the same benefits of advanced AI without extraordinary costs,' Hao argues. But that may be easier said than done. In her book, Hao traces how researchers who once worked outside the major AI companies are now financially affiliated with them.
Funding, too, comes primarily from tech companies or from academic labs associated with them. 'There's a misconception among the public and policymakers that AI research remains guided by a pure scientific drive,' Hao says, adding that 'the foundations of AI knowledge have been overtaken by profit motives.'
Yahoo
18-03-2025
- Politics
AI Isn't Destabilizing Elections (opinion)
Artificial intelligence pessimists, take note: New research suggests that fears about AI tools destabilizing elections through political misinformation may be overblown.

The research was conducted by computer scientist Arvind Narayanan, director of the Princeton Center for Information Technology Policy, and Sayash Kapoor, a computer science Ph.D. candidate at Princeton. The pair are the authors of AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference. Using information compiled by the WIRED AI Elections Project, Narayanan and Kapoor analyzed 78 instances of AI-created political content that appeared last year during elections around the world. "AI does make it possible to fabricate false content. But that has not fundamentally changed the landscape of political misinformation," they write in an essay about their research.

Their analysis found that much of the AI-generated content was not intended to be deceptive. "To our surprise, there was no deceptive intent in 39 of the 78 cases in the database," they write. In more than a dozen instances, campaigns used AI tools to improve campaign materials. There were also more novel uses, such as in Venezuela, where "journalists used AI avatars to avoid government retribution when covering news adversarial to the government," or in California, where "a candidate with laryngitis lost his voice, so he transparently used AI voice cloning to read out typed messages in his voice during meet-and-greets."

Moreover, deceptive content was not necessarily dependent on AI for its production. "For each of the 39 examples of deceptive intent, where AI use was intended to make viewers believe outright false information, we estimated the cost of creating similar content without AI—for example, by hiring Photoshop experts, video editors, or voice actors," write Narayanan and Kapoor. "In each case, the cost of creating similar content without AI was modest—no more than a few hundred dollars." In one instance, they even found a video involving a hired actor that WIRED's database had misclassified as AI-generated content. This snafu, they say, highlights how "it has long been possible to create media with outright false information without using AI or other fancy tools."

Their takeaway: We should be focusing on the demand side of this equation, not the supply side. Election-related misinformation has long been an issue, and while AI might change how such content is created, it doesn't fundamentally change how it spreads or what effects it has. "Successful misinformation operations target in-group members—people who already agree with the broad intent of the message," Narayanan and Kapoor point out. "Sophisticated tools aren't needed for misinformation to be effective in this context." Out-groups, meanwhile, are unlikely to be fooled or influenced, whether such operations are AI-aided or not. "Seen in this light, AI misinformation plays a very different role from its popular depiction of swaying voters in elections," the researchers suggest.