
Amazon-backed Skild AI unveils general-purpose AI model for multi-purpose robots
The company said its model includes built-in power limits to prevent robots from applying unsafe force.
Skild trains its model on simulated episodes and human-action videos, then fine-tunes it using data from every robot running the system. Co-founders Deepak Pathak and Abhinav Gupta told Reuters in an exclusive interview that the approach helps tackle a data scarcity problem unique to robotics.
"Unlike language or vision, there is no data for robotics on the internet. So you cannot just go and apply these generative AI techniques," Pathak, who serves as CEO, told Reuters.
Robots deployed by customers feed data back into Skild Brain to sharpen its skills, creating a shared "brain," said Gupta, who previously founded Meta Platforms' robotics lab in Pittsburgh.
Skild's clients include LG CNS, the IT solutions arm of LG Group, and other unnamed partners in logistics and other industrial applications.
Unlike software, which can scale quickly, robotics requires physical deployment, which takes time. Even so, Skild's approach allows robots to add new capabilities across different industries quickly, said Raviraj Jain, a partner at Lightspeed Venture Partners, one of the startup's investors.
The two-year-old startup, which has hired staff from Tesla, Nvidia and Meta, raised $300 million in a Series A funding round last year that valued it at $1.5 billion. Its investors include Menlo Ventures, Khosla Ventures, Sequoia Capital and Amazon founder Jeff Bezos.

Related Articles


Indian Express
an hour ago
Alexa got an AI brain transplant. How smart is it now?
For the last few years, I've been waiting for Alexa's AI glow-up.

I've been a loyal user of Alexa, the voice assistant that powers Amazon's home devices and smart speakers, for more than a decade. I have five Alexa-enabled speakers scattered throughout my house, and while I don't use them for anything complicated — playing music, setting timers and getting the weather forecast are basically it — they're good at what they do.

But since 2023, when ChatGPT added an AI voice mode that could answer questions in a fluid, conversational way, it has been obvious that Alexa would need a brain transplant — a new AI system built around the same large language models, or LLMs, that power ChatGPT and other products. LLM-based systems are smarter and more versatile than older systems. They can handle more complex requests, making them an obvious pick for a next-generation voice assistant.

Amazon agrees. For the last few years, the company has been working feverishly to upgrade the AI inside Alexa. It has been a slog. Replacing the AI technology inside a voice assistant isn't as easy as swapping in a new model, and the Alexa remodel was reportedly delayed by internal struggles and technical challenges along the way. LLMs also aren't a perfect match for this kind of product, which not only needs to work with tons of preexisting services and millions of Alexa-enabled devices but also needs to reliably perform basic tasks.

But finally, the new Alexa — known as Alexa+ — is here. It's a big, ambitious remodel that is trying to marry the conversational skills of generative AI chatbots with the daily tasks that the old Alexa did well. Alexa+, which has been available to testers through an early-access program for a few months, is now being rolled out more widely. I got it recently after I bought a compatible device (the Echo Show 8, which has an 8-inch screen) and enrolled in the upgraded version.
(Prime members will get Alexa+ at no cost, while non-Prime members will have to pay $19.99 per month.)

The New York Times recently announced a licensing deal with Amazon, which will allow Amazon to use Times content in its AI systems, including Alexa+. The Times is also suing OpenAI, the maker of ChatGPT, and Microsoft for alleged copyright violations related to the training of AI systems.

I have good news and bad news for my fellow Alexa-heads. The good news is that the new Alexa+ is, in fact, more fun to talk to than the old one, with more realistic synthetic voices and a more humanlike cadence. (There are eight voices to choose from; I used the default setting, an upbeat female voice.) And I liked some of Alexa+'s new capabilities, such as booking a table at a restaurant and generating long stories and reading them to my 3-year-old.

The new Alexa is also better at handling multistep requests. 'Set three kitchen timers for 15, 25 and 45 minutes' and 'write a one-day itinerary for a trip to San Diego and send it to my email' were two prompts that worked for me. And Alexa+ doesn't require you to say its wake word every time you talk to it, so you can go back and forth or ask it follow-up questions, which is a nice change.

The bad news is that despite its new capabilities, Alexa+ is too buggy and unreliable for me to recommend. In my testing, it not only lagged behind ChatGPT's voice mode and other AI voice assistants I've tried but also was noticeably worse than the original Alexa at some basic tasks. When I asked Alexa+ to cancel an alarm the other morning — a request I had made to the old Alexa hundreds of times with no issues — it simply ignored me. When I emailed a research paper to alexa@ in order to hear Alexa+ summarize it while I washed the dishes, I got an error message saying the document couldn't be found. Alexa+ also hallucinated some facts and made some inexplicable errors.
When I asked it to look up Wirecutter's recommended box grater and add it to my Amazon cart, it responded that 'according to Wirecutter, the best box grater is the OXO Good Grips Box Grater.' Wirecutter's actual box grater pick is the Cuisipro 4-Sided Box Grater. Luckily, I caught the mistake before ordering. When I asked Alexa+ to walk me through installing a new AI model on my laptop, it got tripped up and started repeating, 'Oh, no, my wires got crossed.'

And I didn't have access to some of the new Alexa+ features Amazon advertised, such as a 'routine' feature that triggers several different actions when a user enters a room. (I wanted to have Alexa+ greet me in the mornings with a motivational speech and a high-volume rendition of 'Eye of the Tiger,' but the presence-sensing feature hasn't been turned on yet, according to an Amazon spokesperson.)

Daniel Rausch, the Amazon vice president who oversees Alexa and Echo, told me in a recent podcast interview that many of these flaws would be fixed soon, as Alexa+ rolled out more widely and more of its features came online. 'We've got some edges to sand,' he said.

Alexa, Give Me 500 Words on the History of Kitchen Timers

Rausch said the biggest challenge in building generative AI models into Alexa was that they were fundamentally different types of systems. The old Alexa, he said, was built on a complicated web of rule-based, deterministic algorithms. Setting timers, playing songs on Spotify, turning off the lamp in your living room — all of these features required calling up different tools and connecting with different interfaces, and they all had to be programmed one by one.

Adding generative AI to Alexa forced Amazon to rebuild many of these processes, Rausch said. Large language models, he said, are 'stochastic,' meaning they operate on probabilities rather than a strict set of rules. That made Alexa more creative but less reliable. It also made the voice assistant slow.
Rausch recalled an early internal demo in which Alexa+ took more than 30 seconds to play a song — an 'excruciating' delay, he said, that led the team to rethink its approach. 'These models are slow to respond when they're following a deep set of instructions,' he said. 'We're asking them to do something quite hard.'

Another challenge to overcome, Rausch said, was generative AI's wordiness. Initially, when engineers hooked Alexa up to large language models, the system would sometimes produce long, verbose answers or introduce needless complexity. Alexa+ might respond to a user's request for a 10-minute kitchen timer with a 500-word essay about the history of kitchen timers.

The solution, Rausch said, was to spend several years combining more than 70 AI models — some Amazon's proprietary models and some from outside providers, like Anthropic's Claude — into a single, voice-based interface, with an orchestration system that routes a user's request to the model that is best suited to handle it. 'The magic, when it is working really well, is to get those new ways of speaking to Alexa to interface with those predictable outcomes or behaviors,' he said.

There are other barriers, too. One of them, Rausch said, is that many longtime users have learned how to 'speak Alexa,' phrasing their daily requests in familiar commands that they know the system will understand. 'We all sort of came up with our way of setting a timer to get the pasta done on time,' he said. But Alexa+ processes language in a more fluid way. Users can talk to it as they would talk to a human — no robot pidgin required — and that may necessitate some retraining.

I assume that many of the flaws will be ironed out and that most users will acclimate to the new way of speaking to Alexa+. I'm also inclined to cut Amazon some slack, since building LLM-based technology into a reliable voice assistant seems like a thorny technical problem, and it's not like anyone else has solved it.
(Apple, which has been struggling to give Siri an AI upgrade for years, certainly hasn't.) I also don't think the limitations of Alexa+ suggest that generative AI models are inherently unreliable or that they'll never work as personal voice assistants. Ultimately, I think it's just really hard to combine generative AI with older, legacy systems — a lesson many companies, both inside and outside tech, are learning the hard way right now — and it's going to take some time to work out all the kinks.

For now, I'm going to downgrade my devices to the older, less intelligent version of Alexa and leave the beta testing to others. With AI, as with humans, sometimes raw intelligence matters less than how you use it.
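The orchestration pattern Rausch describes — keeping latency-critical commands on fast rule-based handlers while routing open-ended requests to a language model — can be sketched in miniature. This is purely illustrative: the function names, regex patterns, and routing logic below are invented for this sketch and are not Amazon's implementation.

```python
# Illustrative sketch of a voice-assistant orchestrator: match an utterance
# against deterministic command patterns first; fall back to an LLM otherwise.
import re
from typing import Callable

# Deterministic handlers for latency-critical commands, keyed by regex.
DETERMINISTIC_ROUTES: list[tuple[re.Pattern, Callable[[re.Match], str]]] = [
    (re.compile(r"set (?:a )?timer for (\d+) minutes?"),
     lambda m: f"Timer set for {m.group(1)} minutes."),
    (re.compile(r"cancel (?:the )?alarm"),
     lambda m: "Alarm cancelled."),
]

def llm_fallback(utterance: str) -> str:
    # Stand-in for a call out to a large language model; a real orchestrator
    # would also choose among many models based on the request type.
    return f"[LLM response to: {utterance!r}]"

def route(utterance: str) -> str:
    text = utterance.lower().strip()
    for pattern, handler in DETERMINISTIC_ROUTES:
        match = pattern.search(text)
        if match:
            return handler(match)   # fast, predictable rule-based path
    return llm_fallback(text)       # slower, open-ended generative path
```

The design mirrors the trade-off in the article: the rule-based path stays fast and reliable for the commands users repeat daily, while anything outside the known patterns goes to the stochastic model.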


Indian Express
an hour ago
US licenses Nvidia to export chips to China, official says
The Commerce Department has started issuing licenses to Nvidia to export its H20 chips to China, a U.S. official told Reuters on Friday, removing a significant hurdle to the AI bellwether's access to a key market.

The U.S. last month reversed an April ban on the sale of the H20 chip to China. The company had tailored the microprocessor specially to the Chinese market to comply with the Biden-era AI chip export controls. The curbs will slice $8 billion off sales from its July quarter, the chipmaker has warned.

Nvidia CEO Jensen Huang met with Trump on Wednesday, two sources familiar with the matter told Reuters. A spokesperson for Nvidia declined comment. A White House spokesman did not immediately respond to a request for comment.

The company said in July it was filing applications with the U.S. government to resume sales to China of the H20 graphics processing unit, and had been assured it would get the licenses soon. It is unclear how many licenses may have been issued, which companies Nvidia is allowed to ship the H20s to, and the value of the shipments allowed.

Nvidia disclosed in April that it expected a $5.5 billion charge related to the restrictions. In May, Nvidia said the actual first-quarter charge due to the H20 restrictions was $1 billion less than expected because it was able to reuse some materials. The Financial Times first reported Friday's developments.

Nvidia said last month that its products have no 'backdoors' that would allow remote access or control after China raised concerns over potential security risks in the H20 chip. Exports of Nvidia's other advanced AI chips, barring the H20, to China are still restricted.

Successive U.S. administrations have curbed exports of advanced chips to China, looking to stymie Beijing's AI and defense development. While this has impacted U.S. firms' ability to fully address booming demand from China, one of the world's largest semiconductor markets, the country remains an important revenue driver for American chipmakers. Huang has said the company's leadership position could slip without sales to China, where developers were being courted by Huawei Technologies with chips produced in China. In May, Nvidia said the H20 had brought in $4.6 billion in sales in the first quarter and that China accounted for 12.5% of overall revenue during the period.

