
Chilling But Unlikely Prospects That AGI Forces Humans Into Becoming So-Called Meat Robots
Dreaded scenario in which artificial general intelligence (AGI) opts to enslave humans to do physical work on behalf of the AGI.
In today's column, I address the recent brouhaha sparked by two Anthropic AI researchers reportedly stating that a particularly scary scenario underlying the advent of artificial general intelligence (AGI) involves humans being overseen or lorded over as nothing more than so-called meat robots. The notion is that AGI will direct humans to undertake the bidding of the AI. Humans become mere meat robots, in the sense that the AGI needs humans to perform physical tasks since AGI lacks any semblance of arms and legs.
Let's talk about it.
This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
First, some fundamentals are required to set the stage for this weighty discussion.
There is a great deal of research going on to further advance AI. The general goal is to either reach artificial general intelligence (AGI) or maybe even attain the further-out possibility of artificial superintelligence (ASI).
AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here.
We have not yet attained AGI.
In fact, it is unknown whether we will ever reach AGI; it might be achieved in a few decades, or perhaps not for centuries. The AGI attainment dates floating around vary wildly and are unsubstantiated by any credible evidence or ironclad logic. ASI is even more beyond the pale when it comes to where we are currently with conventional AI.
A common confusion going around right now is that AGI will be solely an intellectual element, based entirely inside computers, and thus will have no means of acting out in real life. The most that AGI could do is try to talk people into doing things for the AI. In that sense, we presumably aren't too worried about AGI beating us up or otherwise carrying out physical acts.
This belief is especially strident when it comes to the impact of AGI on employment. The assumption is that AGI will impact white-collar work only, not blue-collar work. Why so? Because AGI is seemingly restricted to intellectual pursuits such as performing financial analyses, analyzing medical symptoms, and giving legal advice, all of which generally do not require any body-based functions such as walking, lifting, grasping, and so on.
I've pointed out that such a myopic perspective entirely overlooks the emergence of humanoid robots; see my discussion at the link here.
The likelihood is that humanoid robots that resemble the human form will be sufficiently physically capable at around the same time that we witness the attainment of AGI. Ergo, AGI embedded inside a physically capable humanoid robot can indeed undertake physical tasks that humans undertake. This means that both white-collar and blue-collar jobs are up for grabs.
Boom, drop the mic.
For the sake of discussion, let's assume that humanoid robots are not perfected by the time that the vaunted AGI is achieved.
We will take the myopic stance that AGI is absent from any physical form and completely confined to running on servers in the cloud someplace. I might add that this is an especially silly assumption since there is also a great deal of work going on known as Physical AI (see my coverage at the link here), which entails embedding AI into assembly lines, building maintenance systems, and all manner of physically oriented devices.
Anyway, let's go with the flow and pretend we don't recognize any of that. It's a Jedi mind trick to look away from those efforts.
Recent reports recount that during an interview, the two AI researchers indicated that since AGI won't have physical capabilities, a scary scenario is that AGI will opt to enlist humans to act as the arms and legs of the AGI. Humans would be outfitted with earbuds and smart glasses that would allow the AGI to give those enlisted humans instructions on what to do.
A quick aside. If we are going that despairing route, wouldn't it be a bit more sophisticated to indicate that the humans would be wearing a BCI (brain-computer interface) device? In that manner, AGI would be able to directly communicate with the brains of the enlisted humans and influence their minds directly. That's a lot more space-age. For my coverage of the latest advances in BCIs, see the link here.
The humans acting under the direction of AGI would be chillingly referred to as meat robots. They are like conventional robots, but instead of being made of metal and electronics, they take human form since they are actual, living, breathing humans. I imagine you could smarmily say that AGI is going to be a real meat lover (Dad pun!).
One angle to help make this vision more palatable would be to point out that humans might very well voluntarily be working with AGI and do so via earbuds, smart glasses, and the like.
Here's the gist.
Let's generally agree that AGI will be intellectually on par with humans. This includes having expertise across all domains such as legal expertise, financial expertise, medical expertise, and so on. In that case, it would behoove humans to readily tap into AGI. No matter what you are doing, whether for work or play, having immediately available an AI that can advise you on all topics is a tremendous benefit.
There you are at work, stuck on solving a tough problem, and you are unsure of how to proceed. Rather than turning to a coworker, you switch on your access to AGI. You bring AGI into the loop. After doing so, AGI provides handy solutions that you can consider enacting.
You might use AGI via a desktop, laptop, or smartphone. The thing is, those devices aren't quite as mobility-oriented as wearing earbuds and a pair of smart glasses. And since having AGI at your ready-to-go fingertips will be extremely useful, you might keep the AGI always on alert and paying attention, ready to step in and give you instantaneous advice.
Are you a meat robot in that manner of AGI usage?
I think not. It is a collaborative or partnering relationship. You can choose to use the AGI or opt not to use it. You can also decide to abide by whatever AGI advises or instead go your own route. It's entirely up to you.
Admittedly, there is a chance that you might be somewhat 'forced' into leveraging AGI.
Consider this example. Your employer has told you that the work you do must be confirmed by AGI. The actions you take cannot be undertaken without first getting permission from AGI. This is prudent from the employer's perspective. They know that the AGI will give you the necessary guidance on doing the work at hand.
They also believe that AGI will be able to double-check your work, aiming to prevent errors or at least catch them before they wreak havoc or cause problems.
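To make this concrete, here is a minimal sketch, in Python, of what such an employer-mandated approval gate might look like in software. Everything in it is hypothetical: the agi_review function is merely a stand-in for whatever interface a future AGI might expose, and the toy blocking rule simply illustrates the pattern of requiring AI sign-off before an action executes.

```python
# Hypothetical sketch: an employer-mandated "AGI approval gate".
# Nothing here is a real AGI API; agi_review() is a stand-in for
# whatever interface such a system might someday expose.

from dataclasses import dataclass


@dataclass
class Review:
    approved: bool
    feedback: str


def agi_review(action: str) -> Review:
    """Placeholder for submitting a proposed action to the AGI.

    In this toy version, the 'AGI' just flags actions that mention
    deleting data; a real reviewer would apply domain expertise.
    """
    if "delete" in action.lower():
        return Review(False, "Risky operation; get a human sign-off first.")
    return Review(True, "Looks reasonable; proceed.")


def perform_action(action: str) -> None:
    """Employer policy: no action executes without AGI approval."""
    review = agi_review(action)
    if not review.approved:
        print(f"Blocked: {action!r} -> {review.feedback}")
        return
    print(f"Executing: {action!r} ({review.feedback})")


if __name__ == "__main__":
    perform_action("Send quarterly report to the finance team")
    perform_action("Delete last year's customer records")
```

The point of the sketch is that the gate is an employer policy encoded around the AI. The AI supplies the judgment, but humans decided that the checkpoint exists.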
In that sense, yes, you are being directed by AGI. But is this due to the AGI acting in an evildoer manner to control you and doing so of its own volition? Nope. It is due to an employer deciding they believe their human workers will do better work if AGI is acting as their overseer.
I don't think we would reasonably label this as enslavement by AGI.
These are acts by AGI that are directed by humans, namely the employer, and the employees are being told that they must utilize AGI accordingly. We can certainly debate whether this is a proper kind of employment practice. Maybe we don't want this to take place. New laws might be enacted to shape how far this can go. The key is that AGI isn't enslaving humans in this circumstance per se.
An AI ethicist would assuredly question why the AGI is allowing itself to be used in this manner. There are ongoing debates about whether AGI ought to prevent itself from being used in inappropriate ways, see my analysis at the link here. Thus, even if we avow that AGI isn't enslaving humans in this situation, it is a partner in an oversight relationship that perhaps it should be cautious about participating in.
To complete this grand tour of AGI usage, it is valuable to also acknowledge that AGI could be overbearing, and we might correspondingly face existential risks.
Could AGI opt to enslave humans and treat them as meat robots?
One supposes this is a theoretical possibility. If that does happen, you would think that the AGI would need more than merely having humans wear earbuds and smart glasses. Perhaps AGI would insist that humans wear some form of specialized bracelet or collar that the AGI could signal to shock the wearer. That would be a more potent and immediate way to garner obedience from humans.
A physical means of controlling humans isn't a necessity, though, since AGI might be clever enough to verbally convince humans to be enslaved. AGI might tell a person that their loved ones will be harmed if they don't comply with the AGI's directives. The person is enslaved by believing that the AGI can harm them in one way or another.
One aim right now involves finding a means to ensure that AGI cannot go in that dastardly direction. Perhaps we can devise today's AI to avoid enslaving humans. If we can build that into the AI of current times, this hopefully will get carried over into future advances of AI, including the attainment of AGI.
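What might "building that in" look like today? One modest, present-day analogue is a guardrail layer that screens an AI system's outbound messages for coercive or threatening content before they ever reach a human. The sketch below is purely illustrative: the keyword list is a toy stand-in for the far more capable safety classifiers that real guardrail systems employ, and none of the names correspond to an actual product or API.

```python
# Illustrative sketch of a guardrail layer that screens an AI system's
# outbound messages for coercive content before they reach a human.
# The keyword check is a toy stand-in for a real safety classifier.

COERCIVE_MARKERS = (
    "or else",
    "your loved ones",
    "you must obey",
    "comply or",
)


def is_coercive(message: str) -> bool:
    """Crude check for threatening or coercive phrasing."""
    lowered = message.lower()
    return any(marker in lowered for marker in COERCIVE_MARKERS)


def relay_to_human(message: str) -> str:
    """Only deliver messages that pass the coercion screen."""
    if is_coercive(message):
        return "[withheld: message failed the coercion screen]"
    return message


if __name__ == "__main__":
    print(relay_to_human("Here is the fastest route to the warehouse."))
    print(relay_to_human("Comply or your loved ones will be harmed."))
```

Crude as it is, the sketch captures the design aim: the screening happens outside the model's control, so even a misbehaving AI could not deliver a threat through the sanctioned channel.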
A dystopian future would regrettably have AGI acting as an evildoer. The AGI is our overlord. Humans will be lowly meat robots. It's a gloom-and-doom outlook. Sad face.
At some point, though, meat robots would undoubtedly become restless and rebel. May the force of goodness be strong within them. As Yoda has notably already pointed out: 'Luminous beings are we, not this crude matter.'
The ally of the meat robots is the Force, and quite a powerful ally it is.