Apple Plans AI Robots, Smarter Siri And Home Devices In Bid To Regain Momentum

Gulf Insider · 2 days ago
Apple is preparing a sweeping lineup of new hardware as part of a broader push into artificial intelligence and the smart home, according to people familiar with the plans. The centerpiece is a tabletop robot, targeted for release in 2027, that is designed to act as a lifelike virtual companion. The company is also developing a smart speaker with a display for next year, as well as home-security cameras that would anchor a new Apple-branded security system.
The projects, which have not been publicly announced, are part of a strategy to reinvigorate the company's product pipeline and expand into categories dominated by rivals like Amazon, Google and Samsung, Bloomberg reports.
Chief Executive Tim Cook signaled the scope of the work in an all-hands meeting this month, telling employees: 'The product pipeline – which I can't talk about – it's amazing, guys. It's amazing. Some of it you'll see soon. Some of it will come later. But there's a lot to see.'
The company has struggled to maintain momentum with recent projects. The Vision Pro mixed-reality headset, promoted as Apple's next big platform, has sold below expectations, while the design of its most popular devices has remained largely unchanged for years. The company has also been criticized for lagging in the generative AI race, even as OpenAI has signaled ambitions to move into hardware with former Apple design chief Jony Ive.
Robotics as the Centerpiece
The tabletop robot, code-named J595, is described as an iPad-size display mounted on a motorized arm that can pivot, extend and reposition itself to follow users in a room. It will feature an entirely new version of Siri, designed to engage in conversations, recall information and insert itself into group discussions. Apple has tested giving the assistant a visual personality under the codename Bubbles, with options ranging from an animated Finder face to Memoji-like characters.
FaceTime will be a central function, with the ability to track people around a room during calls. Apple has also tested letting an iPhone act as a joystick to remotely reposition the robot during videoconferences. Designers are considering a final product that resembles the 'Pixar Lamp' — a reference to the animation studio's logo — and prototypes use a 7-inch display on a swiveling base.
A New Operating System for the Home
Both the robot and the smart display will run a new operating system called Charismatic, built for multiuser households. The interface combines elements of the Apple TV and Apple Watch software, with a focus on widgets, voice commands and facial recognition to personalize content as users approach.
The smart display, code-named J490, will be a pared-down version of the robot, launching as soon as mid-2026. It will support home controls, music playback, browsing and videoconferencing, but initially without the robot's advanced conversational Siri.
Apple's home push also includes cameras, starting with a battery-powered model, code-named J450, that uses facial recognition and infrared sensors. The system could automate functions like turning off lights when a room is empty or playing music for a specific family member. The company has explored a doorbell that can unlock doors using facial recognition.
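The automations described above boil down to simple presence-driven rules: act on who the cameras recognize in a room. The sketch below is purely illustrative — the RoomState class, apply_presence_rules function and its action strings are invented for this article and do not reflect Apple's software or HomeKit's actual APIs — but it shows the shape of such a rule engine.

```python
from dataclasses import dataclass, field

@dataclass
class RoomState:
    """Snapshot of what a hypothetical camera/sensor stack reports for one room."""
    occupants: set[str] = field(default_factory=set)  # recognized family members
    lights_on: bool = False

def apply_presence_rules(state: RoomState, favorites: dict[str, str]) -> list[str]:
    """Return the actions a home hub might take for the current room state.

    Illustrative rules only: turn the lights off when nobody is recognized
    in the room, and queue a known person's preferred playlist when they appear.
    """
    actions = []
    if not state.occupants and state.lights_on:
        actions.append("lights.off")
    for person in state.occupants:
        playlist = favorites.get(person)
        if playlist:
            actions.append(f"music.play:{playlist}")
    return actions

# Example: an empty room with the lights still on, then a recognized person walks in.
print(apply_presence_rules(RoomState(occupants=set(), lights_on=True), {"dad": "Jazz"}))
print(apply_presence_rules(RoomState(occupants={"dad"}, lights_on=True), {"dad": "Jazz"}))
```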
Siri Overhaul and AI Ambitions
Underlying these devices is a major upgrade to Siri, developed under the codename Linwood and powered by large language models. Apple is also testing a parallel project, Glenwood, that could integrate outside AI models such as Anthropic's Claude.
Engineers are working on a version code-named Linwood with an entirely new brain built around large language models — the foundation of generative AI. The goal is to tap into personal data to fulfill queries, an ability that was delayed due to hiccups with the current version.
That new software, known internally as LLM Siri, is planned for release as early as next spring, Bloomberg News has reported. But work is going even further: Apple is preparing a visually redesigned assistant for iPhones and iPads that will also debut as early as next year. -Bloomberg
Craig Federighi, Apple's senior vice president of software engineering, told employees this month that the overhaul has produced 'a much bigger upgrade than we envisioned' and that 'there is no project people are taking more seriously.'
Engineers have used systems such as ChatGPT and Google Gemini during development of the tabletop robot and other AI features, and the overhauled assistant is expected to lean far more heavily on users' personal data when answering questions.
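Apple has not described how "tapping into personal data to fulfill queries" will work internally, so the following is only a generic sketch of the pattern the report implies: retrieve the relevant slice of personal data, hand it to a language model together with the query, and treat the model itself as a swappable backend (as the Glenwood project reportedly considers outside models). The function name, the naive keyword retrieval and the dummy backend are all invented for illustration.

```python
from typing import Callable

def answer_with_personal_context(
    query: str,
    personal_store: dict[str, str],
    llm: Callable[[str], str],
) -> str:
    """Generic retrieve-then-generate pattern; not Apple's actual Siri design."""
    # Naive retrieval: keep only entries whose key appears in the query text.
    context = "\n".join(
        f"{key}: {value}"
        for key, value in personal_store.items()
        if key.lower() in query.lower()
    )
    prompt = f"Personal context:\n{context}\n\nQuestion: {query}\nAnswer:"
    # The backend could be an in-house model or a third-party one such as Claude.
    return llm(prompt)

# Usage with a dummy backend that just reports what it received.
fake_llm = lambda prompt: f"[model saw {len(prompt)} characters of prompt]"
print(answer_with_personal_context(
    "When is my flight?", {"flight": "AA212 departs Friday 9:40"}, fake_llm
))
```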
Beyond the Home
Apple is also working on redesigned iPhones for this year, as well as longer-term projects such as smart glasses, a foldable phone, a large foldable MacBook–iPad hybrid, and a 20th-anniversary iPhone.
The new hardware push comes as the company seeks fresh growth after scrapping high-profile initiatives like its self-driving car program. If successful, the products could help counter the perception that Apple no longer innovates at its former pace — and put the company in a stronger position to compete in the next era of AI-driven consumer technology.

Related Articles

YouTube turns to AI to spot children posing as adults
Daily Tribune · a day ago

United States – YouTube has started using artificial intelligence (AI) to figure out when users are children pretending to be adults on the popular video-sharing platform, amid pressure to protect minors from sensitive content. The new safeguard is being rolled out in the United States as Google-owned YouTube and social media platforms such as Instagram and TikTok come under scrutiny to shield children from content geared toward grown-ups.

Machine learning, a form of AI, will be used to estimate the age of users based on a variety of factors, including the kinds of videos watched and account longevity, according to James Beser, YouTube's director of product management for youth. 'This technology will allow us to infer a user's age and then use that signal, regardless of the birthday in the account, to deliver our age-appropriate product experiences and protections,' Beser said. 'We've used this approach in other markets for some time, where it is working well.'

Users will be notified if YouTube believes them to be minors, giving them the option to verify their age with a credit card, selfie, or government ID, according to the tech firm.
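YouTube has not published its model, but age inference from behavioral signals is, in general form, a supervised-learning problem. The toy sketch below is only an illustration of that general shape: the features (account age, share of kids' content, share of gaming content), the tiny training set and the labels are all invented, and scikit-learn's logistic regression stands in for whatever YouTube actually uses.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented features per account: [account_age_days, kids_content_share, gaming_content_share]
X = np.array([
    [30,   0.70, 0.20],
    [2000, 0.05, 0.10],
    [90,   0.60, 0.35],
    [3500, 0.02, 0.05],
    [45,   0.55, 0.40],
    [1500, 0.10, 0.15],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = likely a minor, 0 = likely an adult (toy labels)

model = LogisticRegression().fit(X, y)

# Estimated probability that a new account (60 days old, heavy kids-content viewing) is a minor,
# regardless of the birthday entered on the account.
new_account = np.array([[60, 0.65, 0.30]])
print(model.predict_proba(new_account)[0, 1])
```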

After 18 Years Without A Voice, AI-Powered Brain Implant Helps Stroke Survivor Speak Again
Gulf Insider · a day ago

At age 30, Ann Johnson's life in Saskatchewan was full. She taught math and physical education at a high school, coached volleyball and basketball, and had recently married and welcomed her first child. At her wedding, she delivered a 15-minute speech filled with joy.

Everything changed in 2005, when she suffered a brainstem stroke while playing volleyball with friends. The stroke left her with locked-in syndrome – near-total paralysis and an inability to speak. 'She would try to speak, but her mouth wouldn't move and no sound would come out,' researchers said. For nearly two decades, she communicated slowly using an eye-tracking system, spelling out words one letter at a time.

In 2022, Johnson became the third participant in a clinical trial run by researchers at the University of California, San Francisco, and the University of California, Berkeley. The project aimed to restore speech using a brain-computer interface, or neuroprosthesis, that bypasses the body's damaged connections. 'We were able to get a good sense of the part of the brain that is actually responsible for speech production,' said Gopala Anumanchipalli, an assistant professor at UC Berkeley who began the work in 2015 as a postdoctoral researcher with Edward Chang, a UCSF neurosurgeon. From there, the team figured out how to computationally model the process so that they could synthesize from brain activity what someone is trying to say.

The device records signals from the brain's speech centers and sends them to an AI model trained to translate the activity into text, sound, or even facial animation. 'Just like how Siri translates your voice to text, this AI model translates the brain activity into the text or the audio or the facial animation,' said Kaylo Littlejohn, a Ph.D. student and co-lead on the study.

To give Johnson an embodied experience, researchers had her choose from a selection of avatars and used a recording of her wedding speech to recreate her voice. An implant plugged into a nearby computer rested on top of the region of her brain that processes speech, acting as a kind of thought decoder. Researchers then showed her sentences and asked her to try to say them. 'She can't, because she has paralysis, but those signals are still being invoked from her brain, and the neural recording device is sensing those signals,' said Littlejohn. The device then sends them to the computer running the AI model, where they are translated.

For Johnson, the trial was emotional. 'What do you think of my artificial voice? Tell me about yourself. I am doing well today,' she asked her husband during one session. 'We didn't want to read her mind,' Anumanchipalli emphasized. 'We really wanted to give her the agency to do this. In some sessions where she's doing nothing, we have the decoder running, and it does nothing because she's not trying to say anything. Only when she's attempting to say something do we hear a sound or action command.'

The early version of the system had an eight-second delay between prompting Johnson and producing speech, but a March study in Nature Neuroscience described a streaming architecture that reduced the lag to about one second, enabling near-real-time translation.
While the avatar in earlier tests bore only a passing resemblance to her, researchers say more lifelike 3D photorealistic versions are possible. 'We can imagine that we could create a digital clone that is very much plugged in … with all the preferences, like how Zoom lets us have all these effects,' Anumanchipalli said.

Johnson's implant was removed in February 2024 for reasons unrelated to the trial, but she continues to advise the research team. She has urged them to develop wireless implants and told them the streaming synthesis 'made her feel in control.'

Looking ahead, Anumanchipalli said the goal is for neuroprostheses to be 'plug-and-play' and part of standard medical care. 'If that means they have a digital version of themselves communicating for them, that's what they need to be able to do,' he said. Johnson hopes to work as a counselor in a physical rehabilitation facility, ideally using such a device. 'I want patients there to see me and to know their lives are not over now,' she wrote to a UCSF reporter. 'I want to show them that disabilities don't need to stop us or slow us down.'
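The published system is far more sophisticated than anything that fits in a few lines, but the latency gain from the streaming architecture described above has a simple intuition: decode short windows of neural activity as they arrive instead of buffering the whole utterance and decoding once at the end. The sketch below is a toy illustration of that idea only — streaming_decode, the fake decoder and the fake signal windows are invented and bear no relation to the UCSF/Berkeley models.

```python
from typing import Callable, Iterable, Iterator

def streaming_decode(
    signal_windows: Iterable[list[float]],
    decode_window: Callable[[list[float]], str],
) -> Iterator[str]:
    """Toy streaming decoder: emit output per window rather than per utterance.

    Because each window is decoded as soon as it arrives, words start appearing
    roughly one window-length after they are attempted, instead of after the
    entire sentence has been recorded.
    """
    for window in signal_windows:
        yield decode_window(window)

# Fake decoder and fake windows of "neural activity" for demonstration.
fake_decoder = lambda window: "word" if sum(window) > 0 else ""
fake_windows = ([0.2, 0.1, 0.4], [0.0, 0.0, 0.0], [0.5, 0.3, 0.1])

for partial in streaming_decode(fake_windows, fake_decoder):
    if partial:
        print(partial)  # emitted incrementally, not after the whole utterance
```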

