
Will AI replace us? Yuval Noah Harari's stark warning about a future without borders
In yet another illuminating conversation, renowned author Yuval Noah Harari, known for his acclaimed works 'Sapiens' and 'Nexus', shared his unique perspective on the rapid rise of AI and how it will impact humanity. 'AI will not be one big AI. We are talking about potentially millions or billions of new AI agents with different characteristics, again, produced by different companies, different countries,' the author said in his latest conversation at the WSJ Leadership Institute.
During the conversation, one of the guests observed that, through history, organising principles like religion and the church shaped society in a unified way, whereas with AI there is no single central force: many different AIs are being built with different goals and values. What happens when there isn't one dominant AI but many competing AIs evolving quickly? What kind of world does that create?
In his response, the author said that we are dealing with potentially millions or billions of new AI agents. 'You'll have a lot of religious AIs competing with each other over which AI will be the authoritative AI rabbi for which section of Judaism. And the same in Islam, and the same in Hinduism, in Buddhism, and so forth. So you'll have competition there. And in the financial system. And we just have no idea what the outcome will be.' He said that we have thousands of years of experience with human societies, so at least we have some sense of how such things develop; when it comes to AI, we have zero experience. 'What happens in AI societies when millions of AIs compete with each other? We just don't know. Now this is not something you can simulate in the AI labs.'
Harari went on to say that if OpenAI wanted to check the safety or potential outcome of its latest AI model, it could not simulate history in the laboratory. It may be able to check for all kinds of failures in the system, but it cannot predict what happens when there are millions of copies of these AIs out in the world, developing in unknown ways. He called it the biggest social experiment in human history, one of which all of us are a part, and nobody has any idea how it will develop.
In an extension of his argument, Harari used the analogy of the ongoing immigration crisis in the US, Europe and elsewhere. According to him, people are worried about immigrants for three reasons: they will take our jobs; they come with different cultural ideas and will change our culture; and they may seek political power. 'They may have political agendas; they might try to take over the country politically. These are the three main things that people keep coming back to.' According to the author, one can think about the AI revolution as simply a wave of immigration of millions or billions of AI immigrants that will take people's jobs, that have very different cultural ideas, and that may even try to gain some kind of political power.
'And these AI immigrants or digital immigrants, they don't need visas; they don't cross a sea in some rickety boat in the middle of the night. They come at the speed of light,' he said, adding that far-right parties in Europe talk mostly about human immigrants but overlook the wave of digital immigrants that is coming to Europe. Harari feels that any country that cares about its sovereignty should care about the future of its economy and culture. 'They should be far more worried about the digital immigrants than about the human immigrants.'
When the host asked the acclaimed author what it meant to be human at the moment, Harari responded by saying, 'To be aware for the first time that we have real competition on the planet.' The author said that while we have been the most intelligent species by far for tens of thousands of years, now we are creating something that could compete with us in the near future.
'The most important thing to know about AI is that it is not a tool like all previous human inventions; it is an agent. An agent in the sense that it can make decisions independently of us, it can invent new ideas, and it can learn and change by itself. All previous human inventions, you know, whether they're printing presses or the atom bomb, they are tools that empower us,' said Harari.
The host said that there is a lot of responsibility on leaders because how they act is how the AI will be. 'You cannot expect to lie and cheat and have a benevolent AI.' In his response, Harari acknowledged that there is a big discussion around the world about AI alignment. He said that a lot of effort is focused on the idea that if we can design these AIs in a certain way, if we can teach them certain principles, they will be safe. However, he sees two problems with this approach: firstly, the very definition of AI is that it can learn and change by itself; secondly, even if you think of AI as a child that can be educated, educated children still end up surprising, and sometimes horrifying, their parents.
'The other thing is, everybody who has any knowledge of education knows that in the education of children, it matters far less what you tell them than what you do. If you tell your kids not to lie, and your kids watch you lying to other people, they will copy your behaviour, not your instructions.' Similarly, Harari explained that if the AIs being educated are given access to a world in which they watch how humans behave, and see even some of the most powerful humans, including their makers, lying, then the AIs will copy that behaviour. 'People who think that I can run this huge AI corporation, and while I'm lying, I will teach my AIs not to lie; it will not work. It will copy your behaviour,' he said.
Related Articles


Mint – 42 minutes ago
Why superintelligent AI isn't taking over anytime soon
A primary requirement for being a leader in AI these days is to be a herald of the impending arrival of our digital messiah: superintelligent AI. For Dario Amodei of Anthropic, Demis Hassabis of Google and Sam Altman of OpenAI, it isn't enough to claim that their AI is the best. All three have recently insisted that it's going to be so good, it will change the very fabric of society. Even Meta – whose chief AI scientist has been famously dismissive of this talk – wants in on the action. The company confirmed it is spending $14 billion to bring in a new leader for its AI efforts who can realize Mark Zuckerberg's dream of AI superintelligence – that is, an AI smarter than we are.

'Humanity is close to building digital superintelligence,' Altman declared in an essay this week, and this will lead to 'whole classes of jobs going away' as well as 'a new social contract.' Both will be consequences of AI-powered chatbots taking over all our white-collar jobs, while AI-powered robots assume the physical ones.

Before you get nervous about all the times you were rude to Alexa, know this: a growing cohort of researchers who build, study and use modern AI aren't buying all that talk. The title of a fresh paper from Apple says it all: 'The Illusion of Thinking.' In it, a half-dozen top researchers probed reasoning models – large language models that 'think' about problems longer, across many steps – from the leading AI labs, including OpenAI, DeepSeek and Anthropic. They found little evidence that these models are capable of reasoning anywhere close to the level their makers claim.

Generative AI can be quite useful in specific applications, and a boon to worker productivity. OpenAI claims 500 million monthly active ChatGPT users – astonishing reach and fast growth for a service released just 2½ years ago. But these critics argue there is a significant hazard in overestimating what it can do, and in making business plans, policy decisions and investments based on pronouncements that seem increasingly disconnected from the products themselves.

Apple's paper builds on previous work from many of the same engineers, as well as notable research from both academia and other big tech companies, including Salesforce. These experiments show that today's 'reasoning' AIs – hailed as the next step toward autonomous AI agents and, ultimately, superhuman intelligence – are in some cases worse at solving problems than the plain-vanilla AI chatbots that preceded them. This work also shows that whether you're using an AI chatbot or a reasoning model, all systems fail utterly at more complex tasks.

Apple's researchers found 'fundamental limitations' in the models. When taking on tasks beyond a certain level of complexity, these AIs suffered 'complete accuracy collapse.' Similarly, engineers at Salesforce AI Research concluded that their results 'underscore a significant gap between current LLM capabilities and real-world enterprise demands.' Importantly, the problems these state-of-the-art AIs couldn't handle are logic puzzles that even a precocious child could solve, with a little instruction. What's more, when you give these AIs that same kind of instruction, they can't follow it.

Apple's paper has set off a debate in tech's halls of power – Signal chats, Substack posts and X threads – pitting AI maximalists against skeptics. 'People could say it's sour grapes, that Apple is just complaining because they don't have a cutting-edge model,' says Josh Wolfe, co-founder of venture firm Lux Capital.
'But I don't think it's a criticism so much as an empirical observation.' The reasoning methods in OpenAI's models are 'already laying the foundation for agents that can use tools, make decisions, and solve harder problems,' says an OpenAI spokesman. 'We're continuing to push those capabilities forward.'

The debate over this research begins with the implication that today's AIs aren't thinking, but instead are creating a kind of spaghetti of simple rules to follow in every situation covered by their training data. Gary Marcus, a cognitive scientist who sold an AI startup to Uber in 2016, argued in an essay that Apple's paper, along with related work, exposes flaws in today's reasoning models, suggesting they're not the dawn of human-level ability but rather a dead end. 'Part of the reason the Apple study landed so strongly is that Apple did it,' he says. 'And I think they did it at a moment in time when people have finally started to understand this for themselves.'

In areas other than coding and mathematics, the latest models aren't getting better at the rate that they once did. And the newest reasoning models actually hallucinate more than their predecessors. 'The broad idea that reasoning and intelligence come with greater scale of models is probably false,' says Jorge Ortiz, an associate professor of engineering at Rutgers, whose lab uses reasoning models and other cutting-edge AI to sense real-world environments. Today's models have inherent limitations that make them bad at following explicit instructions – the opposite of what you'd expect from a computer, he adds. It's as if the industry is creating engines of free association. They're skilled at confabulation, but we're asking them to take on the roles of consistent, rule-following engineers or accountants.

That said, even those who are critical of today's AIs hasten to add that the march toward more-capable AI continues. Exposing current limitations could point the way to overcoming them, says Ortiz. For example, new training methods – giving step-by-step feedback on models' performance, adding more resources when they encounter harder problems – could help AI work through bigger problems and make better use of conventional software. From a business perspective, whether or not current systems can reason, they're going to generate value for users, says Wolfe. 'Models keep getting better, and new approaches to AI are being developed all the time, so I wouldn't be surprised if these limitations are overcome in practice in the near future,' says Ethan Mollick, a professor at the Wharton School of the University of Pennsylvania, who has studied the practical uses of AI.

Meanwhile, the true believers are undeterred. Just a decade from now, Altman wrote in his essay, 'maybe we will go from solving high-energy physics one year to beginning space colonization the next year.' Those willing to 'plug in' to AI with direct brain-computer interfaces will see their lives profoundly altered, he adds. This kind of rhetoric accelerates AI adoption in every corner of our society. AI is now being used by DOGE to restructure our government, leveraged by militaries to become more lethal, and entrusted with the education of our children, often with unknown consequences. Which means that one of the biggest dangers of AI is that we overestimate its abilities, trust it more than we should – even as it's shown itself to have antisocial tendencies such as 'opportunistic blackmail' – and rely on it more than is wise.
In so doing, we make ourselves vulnerable to its propensity to fail when it matters most. 'Although you can use AI to generate a lot of ideas, they still require quite a bit of auditing,' says Ortiz. 'So for example, if you want to do your taxes, you'd want to stick with something more like TurboTax than ChatGPT.'

Economic Times – 8 hours ago
Will AI take your job?
Amidst AI's rise and concerns about job displacement, students and professionals are increasingly turning to meditation for clarity and intuition. As AI automates tasks, the focus shifts towards cultivating uniquely human qualities like inner balance and potential. This pursuit of human capabilities becomes essential in navigating the evolving technological landscape.

When Sam Altman, the CEO of OpenAI, warned in 2023 that AI could 'cause significant harm to the world', it wasn't just lawmakers who took notice. Students, engineers and fresh graduates flooded Google with searches like 'Will AI take over my job?' and 'Is coding still relevant?'

Meditation sessions at IITs, B-schools and startup hubs are garnering interest as tools for clarity and balance in a tech-saturated world, and as Gen Z wishes to develop what AI lacks. Steve Jobs once said, 'Intuition is more powerful than intellect.' Intuition, sharpened through stillness, can give us the cutting edge. Meditation is becoming our career armour.

IT giants enable employees to work with AI, but some leaders recommend a parallel shift, teaching employees to stay focused and centred. Inner balance might be tomorrow's most important skill. AI may automate tasks, but it can't automate you. Each of us is a unique, carefully designed program of nature, whose innate potential and abilities extend far beyond the capabilities of mere machines. The real revolution? Discovering true human potential.

In the movie 'Her', a man falls in love with an AI voice. It is an indication of where we are heading. We must pause and choose how to equip ourselves in the AI era.


Times of India – 8 hours ago
Apple Paper questions path to AGI, sparks division in GenAI group
New Delhi: A recent research paper from Apple focusing on the limitations of large reasoning models in artificial intelligence has left the generative AI community divided, sparking significant debate over whether the current path AI companies are taking towards artificial general intelligence is the right one.

What did Apple find?
The paper, titled The Illusion of Thinking and published earlier this week, demonstrates that even the most sophisticated large reasoning models do not genuinely think or reason in a human-like way. Instead, they excel at pattern recognition and mimicry, generating responses that only appear intelligent but lack true comprehension or conceptual understanding. The study used controlled puzzle environments, such as the popular Tower of Hanoi puzzle, to systematically test the reasoning abilities of large reasoning models such as OpenAI's o3 Mini, DeepSeek's R1, Anthropic's Claude 3.7 Sonnet and Google's Gemini Flash across varying complexities. The findings show that while large reasoning and language models may handle simple or moderately complex tasks, they suffer total failure when faced with high-complexity problems, despite having sufficient computational resources.

Widespread support for Apple's findings
Gary Marcus, a cognitive scientist and a known sceptic of the claims surrounding large language models, views Apple's work as providing compelling empirical evidence that today's models primarily repeat patterns learned during training on vast datasets, without genuine understanding or true reasoning capabilities. "If you can't use a billion-dollar AI system to solve a problem that Herb Simon (one of the actual godfathers of AI, current hype aside) solved with AI in 1957, and that first semester AI students solve routinely, the chances that models like Claude or o3 are going to reach AGI seem truly remote," Marcus wrote in his blog. Marcus' arguments echo earlier comments by Meta's chief AI scientist Yann LeCun, who has argued that current AI systems are mainly sophisticated pattern-recognition tools rather than true thinkers.

On the other hand...
The release of Apple's paper ignited a polarised debate across the broader AI community, with many panning the design of the study rather than its findings. A published critique of the paper by researchers from Anthropic and San Francisco-based Open Philanthropy said the study has issues in its experimental design, in that it overlooks output limits. In an alternate demonstration, the researchers tested the models on the same problems but allowed them to use code, resulting in high accuracy across all the tested models. The criticism that the study ignores output limits and the models' ability to solve the puzzles through code has also been raised by other AI commentators and researchers, including Matthew Berman, a popular AI commentator and researcher. "SOTA models failed The Tower of Hanoi puzzle at a complexity threshold of >8 discs when using natural language alone to solve it. However, ask it to write code to solve it, and it flawlessly does up to seemingly unlimited complexity," Berman wrote in a post on X (formerly Twitter). (A minimal sketch of the kind of program Berman describes appears at the end of this article.)

Industry impact
The study highlights Apple's more cautious approach to AI compared with rivals like Google and Samsung, who have aggressively integrated AI into their products. Apple's research helps explain its hesitancy to fully commit to AI, contrasting with the industry's prevailing narrative of rapid progress.
Many questioned the timing of the study's release, which coincided with Apple's annual WWDC event, where the company announces its next software updates. Chatter across online forums suggested the study was more about managing expectations in light of Apple's own struggles with AI. That said, practitioners and business users argue that the findings do not change the immediate utility of AI tools for everyday applications.
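For readers curious what "writing code to solve it" looks like in practice, below is a minimal sketch in Python of the classic recursive Tower of Hanoi solution. It is only an illustration of the kind of program Berman describes, not code taken from the Apple paper or produced by any of the models tested, and the function name and small demo at the end are chosen here purely for clarity.

# Illustrative sketch: the classic recursive Tower of Hanoi solver.
# A model that is allowed to answer with code can emit something like this,
# which enumerates every move for any number of discs, whereas listing the
# same moves one by one in natural language is where the tested models
# reportedly broke down past roughly eight discs.

def hanoi(n, source="A", target="C", spare="B", moves=None):
    """Return the list of (disc, from_peg, to_peg) moves for n discs."""
    if moves is None:
        moves = []
    if n == 0:
        return moves
    hanoi(n - 1, source, spare, target, moves)   # clear the smaller discs out of the way
    moves.append((n, source, target))            # move the largest remaining disc
    hanoi(n - 1, spare, target, source, moves)   # re-stack the smaller discs on top of it
    return moves

if __name__ == "__main__":
    for discs in (3, 8, 12):
        print(discs, "discs ->", len(hanoi(discs)), "moves")  # 2**n - 1 moves each time

Even at 12 discs the script lists all 4,095 moves instantly; the move count grows as 2^n - 1, which is why spelling the sequence out step by step in natural language becomes unmanageable long before the code does.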