
Sam Altman fears World War III more than a rogue AI apocalypse, reveals his safety plan amid ‘people dropping bombs again’
In a world increasingly overshadowed by geopolitical tremors, even the architects of our digital future are preparing for a very analog threat—war. Sam Altman, CEO of OpenAI and the public face behind ChatGPT, has admitted that his home is fortified with a reinforced underground basement, hinting at anxieties far beyond the realm of artificial intelligence.
Altman made the revelation during a recent appearance on comedian Theo Von's podcast This Past Weekend, where a casual question about bunkers turned into a surprisingly candid confession. 'I have, like, underground concrete, heavy reinforcement basements,' Altman said, though he was quick to clarify, 'but I don't have something I would call a bunker.'
Von, not missing a beat, quipped, 'That's a bunker, dude.' Altman responded with a grin and a rhetorical question: 'What is the difference between a basement and a bunker?' Von's answer: 'A place you can hide when it all goes off or whatever.'
Interestingly, Altman emphasized that his concerns aren't rooted in AI-driven doom scenarios—a frequent trope in media and sci-fi—but rather in the very real geopolitical volatility unfolding across the globe. 'I have been thinking, actually, that I should really do a good version of one of those [bunkers],' he added. 'Not because of AI but just because, like, people are dropping bombs in the world again.'
His statement echoes a broader unease shared by many in Silicon Valley. Reports of billionaires quietly preparing for societal breakdown have become increasingly common. From remote New Zealand hideaways to Hawaiian shelters, elite tech figures are investing in contingency plans. Meta's Mark Zuckerberg, for instance, reportedly has a 'little shelter' on his island property, though, like Altman, he has avoided the full 'bunker' label.
Altman's caution doesn't stand alone even within OpenAI. In Empire of AI, a new exposé by journalist Karen Hao, OpenAI co-founder Ilya Sutskever is quoted referencing a company bunker—seriously or not, it's unclear. During a 2023 meeting, Sutskever reportedly said: 'Once we all get into the bunker…' prompting confusion among scientists. When questioned, he clarified, 'We're definitely going to build a bunker before we release AGI.' AGI—Artificial General Intelligence—is the next frontier in AI development, often described as surpassing human cognitive capabilities. Though it's a concept rooted in speculation, the concern around its release and implications remains real for those shaping the field.
Despite being one of the most influential voices in the AI revolution, Altman's reinforced basement tells a more primal story—of a man aware that for all the machines he builds, the human world remains deeply unstable.
NATO has scrambled jets in recent months, global conflict zones are multiplying, and headlines increasingly resemble those of a Cold War redux. In such a climate, Altman's 'not-a-bunker' might say more about our times than about him.
Related Articles


Time of India
23 minutes ago
Even OpenAI's chairman struggles to keep up with AI: Bret Taylor calls the once-in-a-lifetime boom ‘insane’
If you've been struggling to keep pace with the whirlwind that is Artificial Intelligence, you're in good company. Bret Taylor, Chairman of OpenAI, the organization at the epicenter of the AI revolution, admits he too is barely able to stay afloat amid the relentless stream of developments. In a candid conversation hosted by South Park Commons with Aditya Agarwal, Taylor said, 'I am the chairman of OpenAI. I run a fairly successful applied AI company, and I have trouble keeping up with everything going on.' His words offer a rare moment of vulnerability in a world that often presents AI experts as unflappable. What makes his admission particularly striking is his vantage point. Taylor is not just on the frontline — he's in the command tower. From overseeing OpenAI's advancements to observing the competition's rapid rise, his plate is full. And yet, even he finds it dizzying. 'I'm probably most well situated in the world almost to do so… So it just feels insane to me right now,' he said. Taylor sees this turbulent moment as historic — and oddly poetic. 'I think it's a privilege... I hope you're enjoying being in this moment because... I think our society will be very different 10 years from now,' he said, reflecting on how rare it is to consciously live through such a transformative era. 'I pinch myself every day.' Indeed, the AI domain is experiencing something akin to a gold rush — except instead of panning rivers, companies are mining data and releasing new models almost weekly. OpenAI, once the undisputed leader, is now facing heated competition.
Google's Gemini, Elon Musk's Grok, and emerging Chinese open-source platforms like DeepSeek and Kimi have challenged its dominance with increasingly capable models. On the product side, innovation is relentless. ChatGPT has become the fifth most visited website globally, but it's far from alone. New AI tools tackling niche tasks are sprouting up daily. OpenAI reportedly even attempted to acquire Windsurf, a rising AI startup — a sign of how closely it watches the market. Despite this pace, Taylor offers a reassuring message: humans aren't being pushed out of the equation just yet. Speaking to Business Insider, he argued that formal computer science education remains more relevant than ever. 'Studying computer science is a different answer than learning to code, but I would say I still think it's extremely valuable,' he said. He emphasized that such degrees instill systems thinking — a way of understanding how components interact in complex systems, which remains vital for innovation. He pointed out how topics like Big O notation, cache misses, and randomized algorithms teach the kind of structured logic that no AI model can fully replicate. Echoing Taylor's view is none other than Microsoft co-founder Bill Gates. In conversations on The Tonight Show, Gates predicted that programming will 'remain a human job for at least a century.' His reason? Writing software isn't about typing code; it's about pattern recognition, judgment, and making creative decisions. Tools like GitHub Copilot and ChatGPT may streamline debugging and accelerate development, but Gates insists, 'They are power chisels, not replacement carpenters.' AI may help you shape the material, but the blueprint still comes from the human mind.


Economic Times
an hour ago
Anthropic CEO throws shade at Mark Zuckerberg's billion-dollar AI talent hunt with dartboard dig: ‘You can't buy purpose with a paycheck’
Dario Amodei, Anthropic CEO, has criticised Meta's AI recruitment. He questions Meta's high compensation offers. Amodei says Anthropic prioritizes culture and fair pay. In the escalating turf war for top AI talent, Anthropic CEO Dario Amodei has delivered a pointed, and slightly humorous, critique of Meta's aggressive recruitment tactics. Speaking on the Big Technology Podcast, Amodei painted a vivid picture: "If Mark Zuckerberg throws a dart at a dartboard and it hits your name, that doesn't mean you should be paid ten times more than the guy next to you who's just as skilled." His remarks come amid widespread reports of Meta launching an all-out offensive to poach AI engineers from rivals like OpenAI, Apple, Google, and Anthropic itself. Yet Amodei claims his startup has remained largely untouched. 'Some [employees] wouldn't even talk to Meta,' he said, asserting that their culture and mission are more attractive than any compensation package Meta can offer. Meta has reportedly been dangling massive offers, with some packages surpassing $200 million for a single hire, according to Business Insider and WIRED. Amodei, however, says Anthropic refuses to match such sums, insisting on fair and consistent pay across the board. "I recently posted in our company Slack that we will not compromise our compensation principles or fairness if someone gets a big offer," he shared. In his view, rewarding one employee disproportionately just because they were on Meta's radar would be unjust to their equally capable colleagues. Despite this stance, Meta has managed to lure away at least one former Anthropic engineer—Joel Pobar—but Amodei suggests their broader impact has been limited. Meta's latest AI moonshot, the Superintelligence Lab, has ignited a fierce scramble for elite minds. OpenAI's Chief Research Officer Mark Chen likened it to a break-in after losing several staffers overnight.
Meanwhile, OpenAI CEO Sam Altman accused Meta of deploying 'giant offers' to lure talent, with some signing bonuses rumored to top $100 million. Zuckerberg is unapologetic about the ambition. In an internal memo seen by CNBC, he claimed, 'Developing superintelligence is coming into sight,' declaring his goal to bring personal AI to every individual, not just enterprise clients. While Meta may have the resources, Amodei questions whether mission-driven AI work can be bought. 'Zuckerberg is trying to buy something that can't be bought,' he said during the podcast, underscoring Anthropic's long-term focus on safe and ethical AI development. This sentiment resonates with other industry leaders too. OpenAI continues to frame itself as a purpose-first organization, while Meta's flashier, big-money moves risk creating tension even within its own teams. As CNBC reported, some insiders at Meta worry that a talent-heavy, cash-fueled approach could lead to ego clashes and fractured projects. In the current AI landscape, where demand far outpaces supply, the value of a skilled AI researcher is rivaling that of a professional athlete. Yet, for companies like Anthropic and OpenAI, the real challenge isn't just retaining talent—it's maintaining a sense of purpose amid the frenzy.


NDTV
an hour ago
'Got Humbled': Vibe Coder Caught Using AI By Boss Gets Schooled
For a long time, writing code meant that software engineers sat long hours in front of a computer, typing out lines of instructions in a programming language. But in recent times, the rise of artificial intelligence (AI) has allowed anyone to 'vibe code', meaning the technology churns out the code after a user feeds it what they want. Now, an intern working at two places who relied on this approach has revealed how the vibe coding tactic backfired for them. As per the now-viral post, the user said they were using Cursor/GPT to ship the product quickly whilst working at two companies. "I'm currently interning at 2 companies SRE at one, and SDE at a very early-stage startup (like 20 employees). At the startup, it's just me and the CTO in tech. They're funded ($5M), but super early," wrote the user in the r/developersIndia subreddit. While all was going well, the CTO of one of the companies started asking them in-depth questions about their code, and this is where things went pear-shaped. "The CTO started asking deep dive questions about the code. Stuff like, 'Why did you structure it this way?' or 'Explain what this function does internally.' The code was mostly AI-generated, and I honestly couldn't explain parts of it properly." "He straight up told me: 'I don't mind if you use AI, but you have to know what your code is doing.' Then he started explaining my code to me. Bruh. I was cooked." The OP said the entire experience was 'super humbling' as he had been vibe coding without really understanding the "deeper stuff like architecture, modularisation, and writing clean, production-level code". 'How did you even...' As the post went viral, garnering hundreds of upvotes, some social media users agreed with the CTO's remarks, while others questioned how the OP had landed the internship without knowing what the code meant. "I am working as QA, and you can't replace experience. You will have to learn over time. But asking questions is also a good approach. Why and how," said one user, while another added: "Get to know your application's core system design. Decide your architecture which can scale in production later. Now use this as a knowledge base in Cursor/ChatGPT." A third commented: "If you can't say what that code is doing by looking at it, then how did you even get 2 internships?" A fourth said: "Best way to learn how to write clean code is reading open source project code. Hands down it's the best way to learn, plus have a curious mind." Notably, the term vibe coding has been popularised by Andrej Karpathy, who has worked with companies like Tesla and OpenAI.