You Are Completely Unprepared for What This Humanoid Servant Robot Looks Like
In a bid to separate its bipedal household laborer from the far creepier competition, 1X wrapped Neo Gamma in what it's calling a KnitSuit, an eyebrow-raising onesie that's "soft to the touch and flexible for dynamic movements."
It's a fascinating choice with some eerie results. Despite its full-body sweater, Neo Gamma's face is a more conventional panel of black plastic, dotted with an unsettling pair of set-back eyes. It's as if evil scientists crossed Baymax from Disney's "Big Hero 6" with Jason Voorhees, the hockey mask-donning antagonist from the "Friday The 13th" film series — with maybe a splash of Oogie Boogie from "The Nightmare Before Christmas" and the haunted sack guys from "9."
"There is a not-so-distant future where we all have our own robot helper at home, like Rosey the Robot or Baymax," said 1X CEO Bernt Børnich in a statement. "But for humanoid robots to truly integrate into everyday life, they must be developed alongside humans, not in isolation."
But whether any of what 1X showed off in its Apple-like promotional video will ever turn into a reality is awfully hazy. In a press release, the company claims the design is only a "first step" and "opens the door to start internal home testing."
In other words, don't expect Neo Gamma to go on sale any time soon — although, as is typical in the hype-fueled tech sector, the company is simultaneously promising exactly that, even as it manages expectations.
"With NEO Gamma, every engineering and design decision was made with one goal in mind: getting NEO into customers' homes as quickly as possible," Børnich promised. "We're close. We can't wait to share more soon."
1X is far from the first company to show off a flashy humanoid robot designed to help out in the home. Elon Musk's Tesla, for instance, is working on its own bidepal assistant, dubbed Optimus. But despite plenty of fanfare, the EV maker has employed a lot of smoke and mirrors to make up for reality failing to live up to some pretty bold claims so far.
California-based AI robotics company Figure has also shown off an AI-powered humanoid that can talk courtesy of OpenAI's large language models. The company claims on its website that the second generation of its robot, Figure 02, is the "world's first commercially-viable autonomous humanoid robot" — but has yet to announce price or availability.
Interestingly, 1X also received funding from OpenAI last year as part of a $100 million series, in another sign of the hype for humanoid robots that can talk to their masters with the help of generative AI.
But despite the attention and investments being poured into the industry, nobody really knows when — or if — we'll see robots like Neo Gamma being offered to consumers. The engineering challenges are immense, and whether they can prove to be actually useful in a home setting, let alone be affordable to those who aren't hugely wealthy, remains to be seen.
At least we'll give 1X credit for a creative new twist on the otherwise uncanny aesthetics of robotics, filled with creepy facial expressions and twitching extremities.
More on humanoid robots: Tesla's Robots Were Just Remotely Controlled Dummies, Analyst Confirms
Hashtags

Try Our AI Features
Explore what Daily8 AI can do for you:
Comments
No comments yet...
Related Articles


Gizmodo
8 minutes ago
- Gizmodo
Bob Iger Insists Disney's Focus Is on Original Movies—but Is It?
A glance at the list of announced films coming from Walt Disney Studios—including Disney's live-action and animated divisions, plus Pixar, Marvel, Lucasfilm, 20th Century Studios, Searchlight Pictures, and more—will tell you a few things. The biggest is that the upcoming slate leans heavily toward sequels, reboots, remakes, and new entries in well-known series, something the company's CEO, Bob Iger, seemed to hedge against in a recent earnings call. 'I wouldn't say that we've got a priority one way or the other,' The Wrap reports he said in response to a question about whether or not the studio is leaning more into original or familiar titles. 'Our priority is to put out great movies that ultimately resonate with consumers, and the more we can find and develop original property, the better.' But while Iger understands the value of 'creating new IP,' he's also not about to turn his back on Disney's popular existing IP, either. He acknowledged that sequels and films that 'bring [existing IP] forward in a more modern way, as we've done, or convert what was previously animation to live action… it's just a great opportunity for the company and supports our franchises.' He used the live-action Moana, due in 2026, as an example. That series in particular is riding high after 2024's smash hit animation Moana 2, an achievement made even more impressive because it was originally intended to be a Disney+ project rather than a theatrical release. It's the perfect positive focal point for Disney shareholders, the audience for today's earnings call. That said, it may be a bit unfair to peek at Disney's future calendar, which is stuffed full of well-known IP, and declare that Disney isn't making an effort to create original films. Most of the films the studio dates well in advance—from the upcoming Tron: Ares to the Avatar sequels, The Mandalorian and Grogu, Toy Story 5, Frozen III, and the next Avengers films—are those big franchise films that will benefit from fan excitement, even years in advance. And it's no secret Disney's top priority is making money, same as every other Hollywood studio. To get a true feeling for whether or not Iger is speaking the truth, you'd need to time-warp into the future a few years, then look back at all the titles that actually got released over a certain span of time. Maybe there'd be more original movies than we think. However, we must also note that Iger's definition of what constitutes 'original' is not what you might expect. Like, say, The Fantastic Four: First Steps. He's aware other studios have made Fantastic Four movies before, of course. But also: 'We kind of consider the one that we did an original property in many respects, because we're introducing those characters to people who are not familiar with them at all.' Sir… that is a reboot. You made a reboot. What do you make of Iger's remarks regarding Disney's priorities when it comes to making original films? Want more io9 news? Check out when to expect the latest Marvel, Star Wars, and Star Trek releases, what's next for the DC Universe on film and TV, and everything you need to know about the future of Doctor Who.


Forbes
9 minutes ago
- Forbes
New Models From OpenAI, Anthropic, Google – All At The Same Time
It's Christmas in August – at least, for those tech-wonks who are interested in new model releases. Today's news is a very full stocking of brand new LLM editions from three of the biggies – OpenAI, Anthropic, and Google. I'll go over these one by one, discussing what these most recent model iterations bring to the table. OpenAI OSS Models First, the tech community is getting an eye on OpenAI OSS 120b and OSS 20b, the first open-weight systems from this company since ChatGPT 2. Now, coverage from Computerworld and elsewhere points out that, although these models have Apache licenses, they are not fully open source in the conventional way, ,but partly open: the weights are open source, while the training data is not. Powered by one 80GB GPU chip, the larger OSS models, according to the above report, 'achieves parity' with the o4-mini model vis a vis reasoning power. The smaller one can run on smartphones and other edge devices. The models come quantized with MXFP4, a low-precision data type for accelerating matrix multiplications. Let Them Work Another interesting aspect of the new OSS models has to do with chain of thought, something that has revolutionized inference, while raising questions about comparative methodology. Basically, we want the LLMs to be accurate, but engineers have found that, in many cases, restricting or overly guiding systems causes them to 'hide' CoT. So OpenAI has chosen not to optimize the models in this way. 'OpenAI is intentionally leaving Chain of Thought (CoTs) unfiltered during training to preserve their usefulness for monitoring, based on the concern that optimization could cause models to hide their real reasoning,' writes Roger Montti at Search Engine Journal. 'This, however, could result in hallucinations.' Montti cites the following model card report from OpenAI: 'In our recent research, we found that monitoring a reasoning model's chain of thought can be helpful for detecting misbehavior. We further found that models could learn to hide their thinking while still misbehaving if their CoTs were directly pressured against having 'bad thoughts.'…In accord with these concerns, we decided not to put any direct optimization pressure on the CoT for either of our two open-weight models. We hope that this gives developers the opportunity to implement CoT monitoring systems in their projects and enables the research community to further study CoT monitorability.' So, the models are allowed to have these 'bad thoughts' in aid of, I suppose, transparency. OpenAI is then honest about the higher chance for hallucinations, so that users know that this trade-off has been made. Claude Opus 4.1 Here's how spokespersons rolled out the announcement of this new model Aug. 5: 'Today we're releasing Claude Opus 4.1, an upgrade to Claude Opus 4 on agentic tasks, real-world coding, and reasoning. We plan to release substantially larger improvements to our models in the coming weeks. Opus 4.1 is now available to paid Claude users and in Claude Code. It's also on our API, Amazon Bedrock, and Google Cloud's Vertex AI. Pricing is the same as Opus 4.' What's under the hood? The new Opus 4.1 model ups SWE-Bench Verified marks, and boosts agentic research skills. A breakdown of capabilities shows a 2-point increase in SWE-based agentic coding (72.5% - 74.5%) and improvement in graduate-level reasoning with GPQA Diamond (79.6% - 80.9%) over Opus 4, and slight increases in visual reasoning and agentic tool use. For a model set that pioneered human-like user capabilities, this continues to push the envelope. As for strategy: 'The release comes as Anthropic has achieved spectacular growth, with annual recurring revenue jumping five-fold from $1 billion to $5 billion in just seven months, according to industry data,' writes Michael Nunez at VentureBeat. 'However, the company's meteoric rise has created a dangerous dependency: nearly half of its $3.1 billion in API revenue stems from just two customers — coding assistant Cursor and Microsoft's GitHub Copilot — generating $1.4 billion combined. … The upgrade represents Anthropic's latest move to fortify its position before OpenAI launches GPT-5, expected to challenge Claude's coding supremacy. Some industry watchers questioned whether the timing suggests urgency rather than readiness.' Regardless, this is big news in and of itself, including for the millions of users who rely on Claude for business process engineering or anything else. Genie 3 This is the latest in the series of Genie models coming out of Google's DeepMind lab that create controlled environments. In other words, this is a gaming world model. Proponents of the new model cite longer-term memory over Genie 2's limit of about 10 seconds, as well as better visual fidelity and real-time responses. 'DeepMind claims that the new system can generate entire worlds that you can interact with consistently for several minutes in up to 720p resolution,' reports Joshua Hawkins at BGR. 'Additionally, the company says that the system will be able to respond to what it calls 'promptable world events' with real-time latency. Based on what the videos show off, it seems like Google has taken a major step forward in creating entire video game worlds using AI.' 'Genie 3 is the first real-time interactive general-purpose world model,' said DeepMind's Shlomi Fruchter in a press statement according to a TechCrunch piece suggesting that the lab considers Genie 3 to be a 'stepping stone to AGI,' a big claim in these interesting times. 'It goes beyond narrow world models that existed before. It's not specific to any particular environment. It can generate both photo-realistic and imaginary worlds, and everything in between.' All of these new models are getting their first rafts of public users today! It's enough to make your head spin, especially if you're responsible for any kind of implementation. What do you choose? To be fair, there is some amount of specialization involved. But many professionals closest to the industry would tell you it's the speed of innovation that's challenging: given the track record of most companies, by the time you get something worked into business operations, it's likely to already be obsolete! Stay tuned.


Digital Trends
an hour ago
- Digital Trends
Chat GPT 5.0 will dramatically change the way you use AI
OpenAI's much-anticipated ChatGPT 5.0 is expected to arrive late August, 2025 —and it could change how everyday users interact with AI. While the leap from GPT 4 to GPT 5 may not be as dramatic as the jump from GPT 3, the improvements on the horizon could make AI feel less like a tool and more like a true assistant. The challenge for many users is how to efficiently tap into the power of Artificial Intelligence without having that become their full-time job. Chat GPT 5.0 should help to resolve that problem. When Is It Coming? Industry reports and hints from OpenAI CEO Sam Altman suggest a mid-to-late August 2025 launch. The rollout may be gradual, with early capacity crunches as millions rush to try it out. OpenAI has also flagged that the model has been undergoing extensive safety testing and external 'red‑teaming' to ensure it's ready for mainstream use. Currently there is a version 4.5 model available in research mode. Why It Matters: Unified AI Model : No more switching between 'reasoning' and 'creative' modes—GPT 5 blends them into one smarter assistant. : No more switching between 'reasoning' and 'creative' modes—GPT 5 blends them into one smarter assistant. Better Reasoning: Handles logic-heavy and multi-step problems more reliably, with fewer errors. Handles logic-heavy and multi-step problems more reliably, with fewer errors. Full Multimodality: Text, images, voice, and now video—you can talk to GPT 5 in almost any medium. Text, images, voice, and now video—you can talk to GPT 5 in almost any medium. Massive Memory: Potentially up to 1 million tokens of context, letting it keep track of long conversations or entire documents. Potentially up to 1 million tokens of context, letting it keep track of long conversations or entire documents. Smarter AI Agents: More capable at planning trips, booking appointments, or shopping online without constant supervision. More capable at planning trips, booking appointments, or shopping online without constant supervision. Speed Options: Three tiers—flagship, 'mini,' and 'nano'—so you can match performance to your needs and budget. Why Should I care? Your Conversations Will Flow Better : Longer memory means GPT‑5 won't 'forget' what you said halfway through a project. It can remember key details from earlier in the conversation, making it feel more like talking to a person who's actually paying attention. This is an advantage versus Google Gemini which does not have the ability to track conversational threads in its commonly used models. : Longer memory means GPT‑5 won't 'forget' what you said halfway through a project. It can remember key details from earlier in the conversation, making it feel more like talking to a person who's actually paying attention. This is an advantage versus Google Gemini which does not have the ability to track conversational threads in its commonly used models. You Can Work Across Formats Seamlessly : Need to summarize a meeting video, write a blog post, and design an image for social media? You'll be able to do it all in one conversation without juggling multiple tools. : Need to summarize a meeting video, write a blog post, and design an image for social media? You'll be able to do it all in one conversation without juggling multiple tools. More Accurate, Less Frustrating Results : GPT‑5's reasoning boost should reduce the number of 'hallucinated' facts, meaning you'll spend less time double-checking its work. If it delivers on this, this will be a major advance. Currently user need to give back some of the time that they save with AI to check its work. : GPT‑5's reasoning boost should reduce the number of 'hallucinated' facts, meaning you'll spend less time double-checking its work. If it delivers on this, this will be a major advance. Currently user need to give back some of the time that they save with AI to check its work. Smarter Help Without Micro-Managing: Imagine telling GPT‑5, 'Plan me a weekend trip to Seattle,' and it not only finds the flights but also suggests restaurants, books a hotel, and emails you an itinerary—all without extra back-and-forth. Why the Excitement Is Justified Recommended Videos Sam Altman recently shared that GPT 5 solved a problem he couldn't—prompting him to joke that he felt 'useless' next to it. That level of problem-solving power is what's drawing attention. And for casual users, the big win isn't just in speed or smarts—it's in making everyday tasks simpler, faster, and more fun. Ok, What's Next? ChatGPT‑5.0 is shaping up to be the most versatile and capable AI model yet. Whether you're looking to brainstorm creative ideas, get reliable answers, or simply save time on your to-do list, GPT‑5 promises to meet you where you are—across text, voice, images, and this release lives up to the hype, AI could move from being a novelty to becoming a daily habit. And we'll be here to cover every step of that journey.