‘We're Definitely Going to Build a Bunker Before We Release AGI'


Yahoo · May 15, 2025

In the summer of 2023, Ilya Sutskever, a co-founder and the chief scientist of OpenAI, was meeting with a group of new researchers at the company. By all traditional metrics, Sutskever should have felt invincible: He was the brain behind the large language models that helped build ChatGPT, then the fastest-growing app in history; his company's valuation had skyrocketed; and OpenAI was the unrivaled leader of the industry believed to power the future of Silicon Valley. But the chief scientist seemed to be at war with himself.
Sutskever had long believed that artificial general intelligence, or AGI, was inevitable—now, as things accelerated in the generative-AI industry, he believed AGI's arrival was imminent, according to Geoff Hinton, an AI pioneer who was his Ph.D. adviser and mentor, and another person familiar with Sutskever's thinking. (Many of the sources in this piece requested anonymity in order to speak freely about OpenAI without fear of reprisal.) To people around him, Sutskever seemed consumed by thoughts of this impending civilizational transformation. What would the world look like when a supreme AGI emerged and surpassed humanity? And what responsibility did OpenAI have to ensure an end state of extraordinary prosperity, not extraordinary suffering?
By then, Sutskever, who had previously dedicated most of his time to advancing AI capabilities, had started to focus half of his time on AI safety. He appeared to people around him as both boomer and doomer: more excited and afraid than ever before of what was to come. That day, during the meeting with the new researchers, he laid out a plan.
'Once we all get into the bunker—' he began, according to a researcher who was present.
'I'm sorry,' the researcher interrupted, 'the bunker?'
'We're definitely going to build a bunker before we release AGI,' Sutskever replied. Such a powerful technology would surely become an object of intense desire for governments globally. The core scientists working on the technology would need to be protected. 'Of course,' he added, 'it's going to be optional whether you want to get into the bunker.'
Two other sources I spoke with confirmed that Sutskever commonly mentioned such a bunker. 'There is a group of people—Ilya being one of them—who believe that building AGI will bring about a rapture,' the researcher told me. 'Literally, a rapture.' (Sutskever declined to comment.)
Sutskever's fears about an all-powerful AI may seem extreme, but they are not altogether uncommon, nor were they particularly out of step with OpenAI's general posture at the time. In May 2023, the company's CEO, Sam Altman, co-signed an open letter describing the technology as a potential extinction risk—a narrative that has arguably helped OpenAI center itself and steer regulatory conversations. Yet the concerns about a coming apocalypse would also have to be balanced against OpenAI's growing business: ChatGPT was a hit, and Altman wanted more.
When OpenAI was founded, the idea was to develop AGI for the benefit of humanity. To that end, the co-founders—who included Altman and Elon Musk—set the organization up as a nonprofit and pledged to share research with other institutions. Democratic participation in the technology's development was a key principle, they agreed, hence the company's name. But by the time I started covering the company in 2019, these ideals were eroding. OpenAI's executives had realized that the path they wanted to take would demand extraordinary amounts of money. Both Musk and Altman tried to take over as CEO. Altman won out. Musk left the organization in early 2018 and took his money with him. To plug the hole, Altman reformulated OpenAI's legal structure, creating a new 'capped-profit' arm within the nonprofit to raise more capital.
Since then, I've tracked OpenAI's evolution through interviews with more than 90 current and former employees, including executives and contractors. The company declined my repeated interview requests and questions over the course of working on my book about it, which this story is adapted from; it did not reply when I reached out one more time before the article was published. (OpenAI also has a corporate partnership with The Atlantic.)
OpenAI's dueling cultures—the ambition to safely develop AGI, and the desire to grow a massive user base through new product launches—would explode toward the end of 2023. Gravely concerned about the direction Altman was taking the company, Sutskever would approach his fellow board members, along with his colleague Mira Murati, then OpenAI's chief technology officer; the board would subsequently conclude that it needed to push the CEO out. What happened next—with Altman's ouster and then reinstatement—rocked the tech industry. Yet since then, OpenAI and Sam Altman have become more central to world affairs. Last week, the company unveiled an 'OpenAI for Countries' initiative that would allow OpenAI to play a key role in developing AI infrastructure outside of the United States. And Altman has become an ally to the Trump administration, appearing, for example, at an event with Saudi officials this week and onstage with the president in January to announce a $500 billion AI-computing-infrastructure project.
Altman's brief ouster—and his ability to return and consolidate power—is now crucial history to understand the company's position at this pivotal moment for the future of AI development. Details have been missing from previous reporting on this incident, including information that sheds light on Sutskever and Murati's thinking and the response from the rank and file. Here, they are presented for the first time, according to accounts from more than a dozen people who were either directly involved or close to the people directly involved, as well as their contemporaneous notes, plus screenshots of Slack messages, emails, audio recordings, and other corroborating evidence.
The altruistic OpenAI is gone, if it ever existed. What future is the company building now?
Before ChatGPT, sources told me, Altman seemed generally energized. Now he often appeared exhausted. Propelled into megastardom, he was dealing with intensified scrutiny and an overwhelming travel schedule. Meanwhile, Google, Meta, Anthropic, Perplexity, and many others were all developing their own generative-AI products to compete with OpenAI's chatbot.
Many of Altman's closest executives had long observed a particular pattern in his behavior: If two teams disagreed, he often agreed in private with each of their perspectives, which created confusion and bred mistrust among colleagues. Now Altman was also frequently bad-mouthing staffers behind their backs while pushing them to deploy products faster and faster. Team leads mirroring his behavior began to pit staff against one another. Sources told me that Greg Brockman, another of OpenAI's co-founders and its president, added to the problems when he popped into projects and derailed long-standing plans with last-minute changes.
The environment within OpenAI was changing. Previously, Sutskever had tried to unite workers behind a common cause. Among employees, he had been known as a deep thinker and even something of a mystic, regularly speaking in spiritual terms. He wore shirts with animals on them to the office and painted them as well—a cuddly cat, cuddly alpacas, a cuddly fire-breathing dragon. One of his amateur paintings hung in the office, a trio of flowers blossoming in the shape of OpenAI's logo, a symbol of what he always urged employees to build: 'A plurality of humanity-loving AGIs.'
But by the middle of 2023—around the time he began speaking more regularly about the idea of a bunker—Sutskever was no longer just preoccupied by the possible cataclysmic shifts of AGI and superintelligence, according to sources familiar with his thinking. He was consumed by another anxiety: the erosion of his faith that OpenAI could even keep up its technical advancements to reach AGI, or bear that responsibility with Altman as its leader. Sutskever felt Altman's pattern of behavior was undermining the two pillars of OpenAI's mission, the sources said: It was slowing down research progress and eroding any chance at making sound AI-safety decisions.
Meanwhile, Murati was trying to manage the mess. She had always played translator and bridge to Altman. If he had adjustments to the company's strategic direction, she was the implementer. If a team needed to push back against his decisions, she was their champion. When people grew frustrated with their inability to get a straight answer out of Altman, they sought her help. 'She was the one getting stuff done,' a former colleague of hers told me. (Murati declined to comment.)
During the development of GPT-4, Altman and Brockman's dynamic had nearly led key people to quit, sources told me. Altman was also seemingly trying to circumvent safety processes for expediency. At one point, sources close to the situation said, he had told Murati that OpenAI's legal team had cleared the latest model, GPT-4 Turbo, to skip review by the company's Deployment Safety Board, or DSB—a committee of Microsoft and OpenAI representatives who evaluated whether OpenAI's most powerful models were ready for release. But when Murati checked in with Jason Kwon, who oversaw the legal team, Kwon had no idea how Altman had gotten that impression.
In the summer, Murati attempted to give Altman detailed feedback on these issues, according to multiple sources. It didn't work. The CEO iced her out, and it took weeks to thaw the relationship.
By fall, Sutskever and Murati both drew the same conclusion. They separately approached the three board members who were not OpenAI employees—Helen Toner, a director at Georgetown University's Center for Security and Emerging Technology; the roboticist Tasha McCauley; and one of Quora's co-founders and its CEO, Adam D'Angelo—and raised concerns about Altman's leadership. 'I don't think Sam is the guy who should have the finger on the button for AGI,' Sutskever said in one such meeting, according to notes I reviewed. 'I don't feel comfortable about Sam leading us to AGI,' Murati said in another, according to sources familiar with the conversation.
That Sutskever and Murati both felt this way had a huge effect on Toner, McCauley, and D'Angelo. For close to a year, they, too, had been processing their own grave concerns about Altman, according to sources familiar with their thinking. Among their many doubts, the three directors had discovered through a series of chance encounters that he had not been forthcoming with them about a range of issues, from a breach in the DSB's protocols to the legal structure of the OpenAI Startup Fund, a dealmaking vehicle that was meant to belong to the company but that Altman instead owned himself.
If two of Altman's most senior deputies were sounding the alarm on his leadership, the board had a serious problem. Sutskever and Murati were not the first to raise these kinds of issues, either. In total, the three directors had heard similar feedback over the years from at least five other people within one to two levels of Altman, the sources said. By the end of October, Toner, McCauley, and D'Angelo began to meet nearly daily on video calls, agreeing that Sutskever's and Murati's feedback about Altman, and Sutskever's suggestion to fire him, warranted serious deliberation.
As they did so, Sutskever sent them long dossiers of documents and screenshots that he and Murati had gathered in tandem with examples of Altman's behaviors. The screenshots showed at least two more senior leaders noting Altman's tendency to skirt around or ignore processes, whether they'd been instituted for AI-safety reasons or to smooth company operations. This included, the directors learned, Altman's apparent attempt to skip DSB review for GPT-4 Turbo.
By Saturday, November 11, the independent directors had made their decision. As Sutskever suggested, they would remove Altman and install Murati as interim CEO. On November 17, 2023, at about noon Pacific time, Sutskever fired Altman on a Google Meet with the three independent board members. Sutskever then told Brockman on another Google Meet that Brockman would no longer be on the board but would retain his role at the company. A public announcement went out immediately.
For a brief moment, OpenAI's future was an open question. It might have taken a path away from aggressive commercialization and Altman. But this is not what happened.
After what had seemed like a few hours of calm and stability, during which Murati had a productive conversation with Microsoft (at the time OpenAI's largest financial backer), she suddenly called the board members with a new problem. Altman and Brockman were telling everyone that Altman's removal had been a coup by Sutskever, she said.
It hadn't helped that, during a company all-hands to address employee questions, Sutskever had been completely ineffectual in his communication.
'Was there a specific incident that led to this?' Murati had read aloud from a list of employee questions, according to a recording I obtained of the meeting.
'Many of the questions in the document will be about the details,' Sutskever responded. 'What, when, how, who, exactly. I wish I could go into the details. But I can't.'
'Are we worried about the hostile takeover via coercive influence of the existing board members?' Sutskever read from another employee later.
'Hostile takeover?' Sutskever repeated, a new edge in his voice. 'The OpenAI nonprofit board has acted entirely in accordance to its objective. It is not a hostile takeover. Not at all. I disagree with this question.'
Shortly thereafter, the remaining board, including Sutskever, confronted enraged leadership over a video call. Kwon, the chief strategy officer, and Anna Makanju, the vice president of global affairs, were leading the charge in rejecting the board's characterization of Altman's behavior as 'not consistently candid,' according to sources present at the meeting. They demanded evidence to support the board's decision, which the members felt they couldn't provide without outing Murati, according to sources familiar with their thinking.
In rapid succession that day, Brockman quit in protest, followed by three other senior researchers. Through the evening, employees only got angrier, fueled by compounding problems: among them, a lack of clarity from the board about their reasons for firing Altman; a potential loss of a tender offer, which had given some the option to sell what could amount to millions of dollars' worth of their equity; and a growing fear that the instability at the company could lead to its unraveling, which would squander so much promise and hard work.
Faced with the possibility of OpenAI falling apart, Sutskever's resolve immediately started to crack. OpenAI was his baby, his life; its dissolution would destroy him. He began to plead with his fellow board members to reconsider their position on Altman.
Meanwhile, Murati's interim position was being challenged. The conflagration within the company was also spreading to a growing circle of investors. Murati was now unwilling to explicitly throw her weight behind the board's decision to fire Altman. Though her feedback had helped instigate it, she herself had not participated in the deliberations.
By Monday morning, the board had lost. Murati and Sutskever flipped sides. Altman would come back; there was no other way to save OpenAI.
I was already working on a book about OpenAI at the time, and in the weeks that followed the board crisis, friends, family, and media would ask me dozens of times: What did all this mean, if anything? To me, the drama highlighted one of the most urgent questions of our generation: How do we govern artificial intelligence? With AI on track to rewire a great many other crucial functions in society, that question is really asking: How do we ensure that we'll make our future better, not worse?
The events of November 2023 illustrated in the clearest terms just how much a power struggle among a tiny handful of Silicon Valley elites is currently shaping the future of this technology. And the scorecard of this centralized approach to AI development is deeply troubling. OpenAI today has become everything that it said it would not be. It has turned into a nonprofit in name only, aggressively commercializing products such as ChatGPT and seeking historic valuations. It has grown ever more secretive, not only cutting off access to its own research but shifting norms across the industry to no longer share meaningful technical details about AI models. In the pursuit of an amorphous vision of progress, its aggressive push on the limits of scale has rewritten the rules for a new era of AI development. Now the tech giants are racing to out-scale one another, spending sums so astronomical that even they have scrambled to redistribute and consolidate their resources. What was once unprecedented has become the norm.
As a result, these AI companies have never been richer. In March, OpenAI raised $40 billion, the largest private tech-funding round on record, and hit a $300 billion valuation. Anthropic is valued at more than $60 billion. Near the end of last year, the six largest tech giants had together seen their market caps increase by more than $8 trillion since ChatGPT's release. At the same time, more and more doubts have arisen about the true economic value of generative AI, including a growing body of studies showing that the technology is not translating into productivity gains for most workers, while also eroding their critical thinking.
In a November Bloomberg article reviewing the generative-AI industry, the staff writers Parmy Olson and Carolyn Silverman summarized it succinctly. The data, they wrote, 'raises an uncomfortable prospect: that this supposedly revolutionary technology might never deliver on its promise of broad economic transformation, but instead just concentrate more wealth at the top.'
Meanwhile, it's not just a lack of productivity gains that much of the rest of the world is facing. The exploding human and material costs are settling onto wide swaths of society, especially the most vulnerable. The people I met around the world, whether workers and rural residents in the global North or impoverished communities in the global South, are all suffering new degrees of precarity. Workers in Kenya earned abysmal wages to filter violence and hate speech out of OpenAI's technologies, including ChatGPT. Artists are being replaced by the very AI models that were built from their work without their consent or compensation. The journalism industry is atrophying as generative-AI technologies spawn heightened volumes of misinformation. Before our eyes, we're seeing an ancient story repeat itself: Like empires of old, the new empires of AI are amassing extraordinary riches across space and time at great expense to everyone else.
To quell the rising concerns about generative AI's present-day performance, Altman has trumpeted the future benefits of AGI ever louder. In a September 2024 blog post, he declared that the 'Intelligence Age,' characterized by 'massive prosperity,' would soon be upon us. At this point, AGI is largely rhetorical—a fantastical, all-purpose excuse for OpenAI to continue pushing for ever more wealth and power. Under the guise of a civilizing mission, the empire of AI is accelerating its global expansion and entrenching its power.
As for Sutskever and Murati, both parted ways with OpenAI after what employees now call 'The Blip,' joining a long string of leaders who have left the organization after clashing with Altman. Like many of the others who failed to reshape OpenAI, the two did what has become the next-most-popular option: They each set up their own shops, to compete for the future of this technology.
This essay has been adapted from Karen Hao's forthcoming book, Empire of AI.
Illustration by Akshita Chandra / The Atlantic. Sources: Nathan Howard / Bloomberg / Getty; Jack Guez / AFP / Getty; Jon Kopaloff / Getty; Manuel Augusto Moreno / Getty; Yuichiro Chino / Getty.
Article originally published at The Atlantic

