The AGI-pilled and the damned
In his free time, Henry, a young AI safety researcher, is preparing for the possibility of failure by building DIY bioshelters to protect himself and his family from an AI apocalypse.
Speaking from a video call in his office, Henry tells me it's remarkably easy to build a bioshelter capable of protecting against lethal pathogens created by or with the aid of advanced AI. First, you buy an off-the-shelf positively pressurized tent, the sort typically used as grow rooms for plants. Then you stack multiple professional-grade HEPA filters in front of the air intake. Finally, you stuff it with as much shelf-stable food, water, and supplies as you can fit.
Henry's bioshelter will be "quite cheap," he tells me, "probably less than $10,000 including the three years worth of food I'm going to put in there." He asked that I use a pseudonym because of the social stigma associated with being a "prepper" — particularly if his fears do not come to pass and humanity prevails.
Henry is far from alone in putting his money where his mouth is regarding his deepest fears and hopes for AI. For a certain class of Silicon Valley denizens, AI is not just the next buzzy technological wave; it is poised to fundamentally transform our society, and very soon. For them, there is little time to babble about the possible futures of artificial general intelligence, or AGI, in Slack channels and at dinner parties. The time for radical change is now. Rationalists, adherents of a Silicon Valley-aligned philosophy centered on improving human rationality and morality, have grown increasingly concerned about the perceived risk from AI — while on the other side of the aisle, startup boosters' predictions for the tech are growing ever more ebullient.
"A lot of us are just going to look back on these next two years as the time when we could have done something."
Some believe we're at the dawn of an age of superabundance — in which almost all intellectual labor can be automated — unlocking an unprecedented wave of human flourishing. They're embracing a lifestyle shift they call "smart-to-hot." Others are bracing for economic catastrophe and making major investments and career moves accordingly. And yet others think AI will inevitably wrest itself free of human control and gain the ability to kill all organic life. They're spending their retirement savings, having "weird orgies," and building survival bunkers.
"A lot of us are just going to look back on these next two years as the time when we could have done something," Henry says. "Lots of people will look back on this and be like, 'Why didn't I quit my job and try to do something that really mattered when I had a chance to?'"
A biomedical research data scientist living in Los Angeles, Srinivasan had historically been attracted to a certain kind of intelligent guy, prioritizing smarts over conventional attractiveness, she tells me. Now, she says, because generative AI is doing the intellectual labor of more and more people, raw intelligence has become less important to her than charisma, social charm, and hotness. Or as she recently quipped in a semi-viral tweet, "If you're smart, pivot to being cool/hot."
Many of the people I spoke to for this story believe a variation of this: that because AI will soon subsume much of intellectual life, social life will become much more integral to human society, and being physically attractive will become all the more essential to flourishing within it. Brains are over, beauty and brawn are in.
"I've sort of always loved fitness," says Soren Larson, a tech entrepreneur in Florida, "and I rather think that being hot and personable and funny are poised to be rare features when AI can do all the sort of intellectual things."
Jason Liu, an AI consultant, tells me he's "already made that pivot." Several years ago, a debilitating repetitive strain injury in his hands brought his career as a software engineer to a standstill. He retooled his life, diving into leisure pursuits like jiu-jitsu and ceramics, and fashioned a second career as a consultant, optimizing for delegation and free time to socialize rather than hustle. "I personally did not want to be valued for my intelligence," he says. "I was like, this intelligence is what physically hurt me, and caused me to lose my job."
When we spoke by phone, he was out strolling the streets of Paris, as part of an extended international jaunt. "Really leaning into leisure is kind of how I think about AGI," he says.
Other people I meet with are reshaping their social lives today not because of their hopes for AI, but because of their fears.
"If we are all going to be destroyed by an atomic bomb," C.S. Lewis wrote in 1948, "let that bomb when it comes find us doing sensible and human things — praying, working, teaching, reading, listening to music, bathing the children, playing tennis, chatting to our friends over a pint and a game of darts — not huddled together like frightened sheep and thinking about bombs." In my conversations with people concerned about AI's impact, a few explicitly cited this Lewis quote, and many expressed a similar sentiment, in how they're trying to treasure the time they have now.
"It's really freeing in some ways," Aella, a fetish researcher and sex worker in the San Francisco Bay Area with a cult following on X, tells me of her concerns about AI development. "I like throwing weird orgies, and I'm like — well, we're going to die. What's a weirder, more intense, crazier orgy we can do? Just do it now."
As we sit out on the AstroTurf lawn at Lighthaven — a kitschy old hotel in Berkeley converted into an intellectual campus for the Rationalist movement — she talks about her fears of how AI may destroy humanity. "I can't face it all at once," she says. "I can catch glimpses of the big thing out of the corner of my eye and then I grieve it when I can, but I don't have the emotional capacity to really absorb it." As a result, she lives much more in the moment. She's gradually spending down her savings. She exercises less. She's tried "hard drugs" she would otherwise avoid. She's taking more sleeping pills, despite concerns about dementia risk. She's freezing a bunch of her eggs; "I'm just trying to get as many as I can, for fun."
Over in San Francisco's Dolores Park, Vishal Maini, a venture capital investor, tells me something similar — though perhaps a little less extreme. "I think it makes sense to just adopt a little bit of a bucket-list-mentality around this," he says. "Do what's important to you in the time that we have."
As we drink herbal tea, Maini talks me through his mental model for the coming years. He isn't sure if we're approaching a future where human capability is radically enhanced by AI, or a darker future in which humanity is "deprecated" by the technology. Amid this uncertainty, he advocates for "paleo-futurism": consciously prioritizing human interaction in a world replete with hyper-engaging, endlessly personalized digital content. "As we enter the era of AI companions, and increasingly dopamine-rich digital experiences, we have this choice," he says. "Do you go into the metaverse all the way, or do you stay in the real world?"
For Holly Elmore, concerns about AI have impacted her life more intimately: they contributed to her decision to get divorced. At a coffee shop in San Francisco's Mission District, she tells me she and her husband were both deeply attuned to the risks of unconstrained AI development, but had different approaches to reining it in.
Elmore, the executive director of anti-AI protest group Pause AI, believed steadfast mass organization against the big labs like OpenAI was the only viable way forward, which she says her ex-husband, Ronny Fernandez, was "unsupportive" of. "We had a lot of problems and I should have probably never been in that marriage, but it just became very clear that I wasn't going to be able to keep doing Pause AI if we stayed together," she says. "I had a very strong moral conviction on that and it did organize the priorities in my life very well. And honestly, I love living that way."
"I do think that trying to use shaming and promoting in-group out-group thinking to achieve policy goals has a pretty bad track record," Fernandez, who is the manager of Lighthaven, writes over email. "Those disagreements led to resentments on both of our ends which contributed to our eventually getting divorced." While he believes that casting the AI scaling labs as "political enemies" will likely backfire, he stresses that "there is a significant chance that smarter than human AI will literally kill approximately everyone, or lead to even worse outcomes, within a few decades."
For others, their dreams and worst fears about AI have transformed their approach to money. Sometime in 2020, Daniel Kokotajlo, then 28 years old, stopped saving for retirement.
The AI researcher was growing concerned about the existential threat AI might pose to humanity. He worked at OpenAI from 2022 until he quit in 2024 over concerns about how it was handling AI safety — an issue he continues to work on. Earlier this year, he published AI 2027, a widely read online essay exploring how rapid advancements in AI may lead to several "loss-of-control" situations, from a world war over the AI arms race in the late 2020s to the extinction of human life by 2035, via an AI releasing a lethal chemical spray across civilization.
Amid these threats, he reasons, why bother saving for decades, when even the next few years look increasingly uncertain? "I have a decent amount of money, especially because of the equity, but I'm planning to spend it down," he tells me at a coffee shop in Berkeley. "I'm planning to have less every year." He says he knows of numerous other AI researchers doing the same.
On a recent episode of Dwarkesh Patel's popular tech podcast, Trenton Bricken, a researcher at OpenAI rival Anthropic, shared that he, too, has quit putting money away for retirement, because he believes AGI is fast approaching. "It's hard for me to imagine a world in which I have all this money that's just sitting in this account and waiting until I'm 60 and things look so different then," he said.
Others in the tech industry are taking a very different approach to their money. Among some of those most bullish on AI's capabilities, and most bearish about what those capabilities mean for human workers, there's a pervasive fear that there are only a few years left to earn as much as possible "before the music stops," when human intellectual labor becomes largely obsolete.
"We have just a handful of years to try to make it financially," says a crypto writer in the Midwest who goes by the pseudonym Redphone professionally. "And if you don't, your bloodline could be relegated to this sort of peasant class under these technological overlords who control AI."
Haroon Choudery, a former data integrity analyst at Facebook who now runs an AI startup called Autoblocks, has a similar concern. He emigrated from Pakistan to the United States when he was 5; his father was a cabbie, while his mother didn't work outside the home. He views the next few years as his last chance to make generational wealth for himself and his children. "Things are going to feel a lot more scarce from an upward mobility perspective, so people will generally freeze in their socioeconomic statuses," he tells me.
Massey Branscomb, an executive at AI hedge fund AlphaFund, puts this concept to me in even blunter terms: "If you are not positioning yourself as a key member of these critical companies," by which he means top-flight AI labs like OpenAI and Anthropic, "and you're kind of living — the term is ironically a 'wagie' — you're living a wagie life, then you could be on the chopping block and then it's going to be harder. These jobs are not going to come back."
Others are less sure AI will soon topple the global economy. As an assistant professor of philosophy at Vanderbilt University, David Thorstad could be considered a wagie. But he tells me he's not too worried about it. While he has increased the amount he's saving because of uncertainty around AI, he urges caution about any grand predictions. "I think that there are lots of communities," he says, "particularly in the Bay Area where groups of very smart, like-minded people live together, work together, read similar forums and podcasts, and when they get very immersed in a particular kind of an extreme worldview about AI, it tends to be very hard to break out."
And then there are the people who aren't just preparing for an AI-driven financial apocalypse; they're preparing for an AI-driven apocalypse apocalypse.
Ulrik Horn had always been interested in "societal problems," an interest that led him to work in renewable energy after graduating from the University of Pennsylvania in 2008. But in recent years, the Stockholm-based entrepreneur has become concerned with a different kind of problem: biosecurity. Horn worries about "mirror life," an emerging area of biological research that involves creating mirror-image forms of naturally occurring life. Specifically, he's worried that AI may help accelerate research in the field — and may lead to devastating biological weapons. We're five to 10 years out from AI developing this capability, he believes.
After raising philanthropic funding to research protections against biothreats, he founded Fonix — a startup building off-the-shelf bioshelters with high-grade air filters. For $39,000, you can buy a shelter you can erect at home if and when the scat hits the fan. He has received a handful of pre-orders, he said, with shipping expected in 2026.
Horn isn't the only one viewing the perceived threat of AI as a business opportunity. Ross Gruetzemacher, an assistant professor of business analytics at Wichita State University, is launching a "resiliency" consulting firm to help businesses and individuals prepare for significant shocks from AI and other existential risks. He has also bought land in Wyoming, where he plans to build his own secure facility. James Norris, an entrepreneur and longtime worrier about a variety of threats to humanity, recently moved into what he describes as a "survival sanctuary" in an undisclosed location in Southeast Asia, and is offering consulting services and help setting up sanctuaries for others. Norris has also sworn off having children, he tells me, because of the havoc he believes AI will wreak on the world.
Despite his personal fears, Kokotajlo, the ex-OpenAI researcher, is deeply skeptical of any attempt to aggressively prepare for a bad AI outcome today. "I think more likely it's either we're all dead, or we're all fine," he says. "I think if I spent a few weeks I could make a bug-out bag and make a bioshelter or whatever, and then in some sliver of possible futures it would save my family. But it is just more important for me to do my actual job than to do that."
A few weeks after I first chatted with Henry, the young AI safety researcher, I check in via email. He's had a change of heart, and is no longer trying to build a DIY bioshelter. He's determined that he wasn't thinking big enough.
Instead, he's now trying to buy land in California, where he can build more permanent defense structures to protect more of his friends and family. "The main scenario I think about is the one where misaligned superintelligence AI takes over," he says. He wants to be prepared for a near-future in which an all-powerful AI wages war against humans, but the "AI still has a little bit of empathy." Once the AI wins that war, he concludes, "maybe they'll take care of the survivors and they'll put humans in some kind of human zoo. And I'd much rather live in a human zoo than be killed by bioweapons."
