
Alexa got an AI brain transplant: How smart is it now?
For the last few years, I've been waiting for Alexa's AI glow-up.

I've been a loyal user of Alexa, the voice assistant that powers Amazon's home devices and smart speakers, for more than a decade. I have five Alexa-enabled speakers scattered throughout my house, and while I don't use them for anything complicated -- playing music, setting timers and getting the weather forecast are basically it -- they're good at what they do.

But since 2023, when ChatGPT added an AI voice mode that could answer questions in a fluid, conversational way, it has been obvious that Alexa would need a brain transplant -- a new AI system built around the same large language models, or LLMs, that power ChatGPT and other products. LLM-based systems are smarter and more versatile than older systems. They can handle more complex requests, making them an obvious pick for a next-generation voice assistant.

Amazon agrees. For the last few years, the company has been working feverishly to upgrade the AI inside Alexa. It has been a slog. Replacing the AI technology inside a voice assistant isn't as easy as swapping in a new model, and the Alexa remodel was reportedly delayed by internal struggles and technical challenges along the way. LLMs also aren't a perfect match for this kind of product, which not only needs to work with tons of preexisting services and millions of Alexa-enabled devices but also needs to reliably perform basic tasks.

But finally, the new Alexa -- known as Alexa+ -- is here. It's a big, ambitious remodel that is trying to marry the conversational skills of generative AI chatbots with the daily tasks that the old Alexa did well.

Alexa+, which has been available to testers through an early-access program for a few months, is now being rolled out more widely. I got it recently after I bought a compatible device (the Echo Show 8, which has an 8-inch screen) and enrolled in the upgraded version.
(Prime members will get Alexa+ at no cost, while non-Prime members will have to pay $19.99 per month.)

The New York Times recently announced a licensing deal with Amazon, which will allow Amazon to use Times content in its AI systems, including Alexa+. The Times is also suing OpenAI, the maker of ChatGPT, and Microsoft for alleged copyright violations related to the training of AI systems.

I have good news and bad news for my fellow Alexa-heads.

The good news is that the new Alexa+ is, in fact, more fun to talk to than the old one, with more realistic synthetic voices and a more humanlike cadence. (There are eight voices to choose from; I used the default setting, an upbeat female voice.)

And I liked some of Alexa+'s new capabilities, such as booking a table at a restaurant and generating long stories and reading them to my 3-year-old.

The new Alexa is also better at handling multistep requests. "Set three kitchen timers for 15, 25 and 45 minutes" and "write a one-day itinerary for a trip to San Diego and send it to my email" were two prompts that worked for me.

And Alexa+ doesn't require you to say its wake word every time you talk to it, so you can go back and forth or ask it follow-up questions, which is a nice change.

The bad news is that despite its new capabilities, Alexa+ is too buggy and unreliable for me to recommend. In my testing, it not only lagged behind ChatGPT's voice mode and other AI voice assistants I've tried but also was noticeably worse than the original Alexa at some basic tasks.

When I asked Alexa+ to cancel an alarm the other morning -- a request I had made to the old Alexa hundreds of times with no issues -- it simply ignored me.

When I emailed a research paper to alexa@alexa.com in order to hear Alexa+ summarize it while I washed the dishes, I got an error message saying the document couldn't be found.

Alexa+ also hallucinated some facts and made some inexplicable errors.
When I asked it to look up Wirecutter's recommended box grater and add it to my Amazon cart, it responded that "according to Wirecutter, the best box grater is the OXO Good Grips Box Grater." Wirecutter's actual box grater pick is the Cuisipro 4-Sided Box Grater. Luckily, I caught the mistake before ordering.

When I asked Alexa+ to walk me through installing a new AI model on my laptop, it got tripped up and started repeating, "Oh, no, my wires got crossed."

And I didn't have access to some of the new Alexa+ features Amazon advertised, such as a "routine" feature that triggers several different actions when a user enters a room. (I wanted to have Alexa+ greet me in the mornings with a motivational speech and a high-volume rendition of "Eye of the Tiger," but the presence-sensing feature hasn't been turned on yet, according to an Amazon spokesperson.)

Daniel Rausch, the Amazon vice president who oversees Alexa and Echo, told me in a recent podcast interview that many of these flaws would be fixed soon as Alexa+ rolled out more widely and more of its features came online.

"We've got some edges to sand," he said.

Rausch said the biggest challenge in building generative AI models into Alexa was that they were fundamentally different types of systems.

The old Alexa, he said, was built on a complicated web of rule-based, deterministic algorithms. Setting timers, playing songs on Spotify, turning off the lamp in your living room -- all of these features required calling up different tools and connecting with different interfaces, and they all had to be programmed one by one.

Adding generative AI to Alexa forced Amazon to rebuild many of these processes, Rausch said. Large language models, he said, are "stochastic," meaning they operate on probabilities rather than a strict set of rules. That made Alexa more creative but less reliable.

It also made the voice assistant slow.
Rausch recalled an early internal demo in which Alexa+ took more than 30 seconds to play a song -- an "excruciating" delay, he said, that led the team to rethink its approach.

"These models are slow to respond when they're following a deep set of instructions," he said. "We're asking them to do something quite hard."

Another challenge to overcome, Rausch said, was generative AI's wordiness. Initially, when engineers hooked Alexa up to large language models, the system would sometimes produce long, verbose answers or introduce needless complexity. Alexa+ might respond to a user's request for a 10-minute kitchen timer with a 500-word essay about the history of kitchen timers.

The solution, Rausch said, was to spend several years combining more than 70 AI models -- some Amazon's proprietary models and some from outside providers, like Anthropic's Claude -- into a single, voice-based interface, with an orchestration system that routes a user's request to the model that is best suited to handle it.

"The magic, when it is working really well, is to get those new ways of speaking to Alexa to interface with those predictable outcomes or behaviors," he said.

There are other barriers, too. One of them, Rausch said, is that many longtime users have learned how to "speak Alexa," phrasing their daily requests in familiar commands that they know the system will understand.

"We all sort of came up with our way of setting a timer to get the pasta done on time," he said.

But Alexa+ processes language in a more fluid way. Users can talk to it as they would talk to a human -- no robot pidgin required -- and that may necessitate some retraining.

I assume that many of the flaws will be ironed out and that most users will acclimate to the new way of speaking to Alexa+. I'm also inclined to cut Amazon some slack, since building LLM-based technology into a reliable voice assistant seems like a thorny technical problem, and it's not as if anyone else has solved it.
(Apple, which has been struggling to give Siri an AI upgrade for years, certainly hasn't.)

I also don't think the limitations of Alexa+ suggest that generative AI models are inherently unreliable or that they'll never work as personal voice assistants. Ultimately, I think it's just really hard to combine generative AI with older, legacy systems -- a lesson many companies, both inside and outside tech, are learning the hard way right now -- and it's going to take some time to work out all the kinks.

For now, I'm going to downgrade my devices to the older, less intelligent version of Alexa and leave the beta testing to others. With AI, as with humans, sometimes raw intelligence matters less than how you use it.

Related Articles


Indian Express
Are you in a mid-career to senior job? Don't fear AI – you could have this important advantage
Have you ever sat in a meeting where someone half your age casually mentions 'prompting ChatGPT' or 'running this through AI', and felt a familiar knot in your stomach? You're not alone.

There's a growing narrative that artificial intelligence (AI) is inherently ageist, that older workers will be disproportionately hit by job displacement and are more reluctant to adopt AI tools. But such assumptions – especially that youth is a built-in advantage when it comes to AI – might not actually hold. While ageism in hiring is a real concern, if you have decades of work experience, your skills, knowledge and judgement could be exactly what's needed to harness AI's power – without falling into its traps.

The research on who benefits most from AI at work is surprisingly murky, partly because it's still early days for systematic studies on AI and work. Some research suggests lower-skilled workers might have more to gain than high-skilled workers on certain straightforward tasks. The picture becomes much less clear under real-world conditions, especially for complex work that relies heavily on judgement and experience.

Through our Skills Horizon research project, where we've been talking to Australian and global senior leaders across different industries, we're hearing a more nuanced story. Many older workers do experience AI as deeply unsettling. As one US-based CEO of a large multinational corporation told us: 'AI can be a form of existential challenge, not only to what you're doing, but how you view yourself.'

But leaders are also observing an important and unexpected distinction: experienced workers are often much better at judging the quality of AI outputs. This might become one of the most important skills, given that AI occasionally hallucinates or gets things wrong. The CEO of a South American creative agency put it bluntly: 'Senior colleagues are using multiple AIs.
If they don't have the right solution, they re-prompt, iterate, but the juniors are satisfied with the first answer, they copy, paste and think they're finished. They don't yet know what they are looking for, and the danger is that they will not learn what to look for if they keep working that way.'

Experienced workers have a crucial advantage when it comes to prompting AI: they understand context and usually know how to express it clearly. While a junior advertising creative might ask an AI to 'Write copy for a sustainability campaign', a seasoned account director knows to specify 'Write conversational social media copy for a sustainable fashion brand targeting eco-conscious millennials, emphasising our client's zero-waste manufacturing process and keeping the tone authentic but not preachy'.

This skill mirrors what experienced professionals do when briefing junior colleagues or freelancers: providing detailed instructions, accounting for audience, objectives and constraints. It's a competency developed through years of managing teams and projects.

Younger workers, despite their comfort with technology, may actually be at a disadvantage here. There's a crucial difference between using technology frequently and using it well. Many young people may become too accustomed to AI assistance. A survey of US teens this year found 72 per cent had used an AI companion app. Some children and teens are turning to chatbots for everyday decisions. Without the professional experience to recognise when something doesn't quite fit, younger workers risk accepting AI responses that feel right – effectively 'vibing' their work – rather than developing the analytical skills to evaluate AI usefulness.

So what can you do? First, everyone benefits from learning more about AI. In our time educating everyone from students to senior leaders and CEOs, we find that misunderstandings about how AI works have little to do with age.
A good place to start is reading up on what AI is and what it can do for you: What is AI? Where does AI come from? How does AI learn? What can AI do? What makes a good AI prompt? If you're not even sure which AI platform to try, we would recommend testing the most prominent ones: OpenAI's ChatGPT, Anthropic's Claude and Google's Gemini.

If you're an experienced worker feeling threatened by AI, lean into your strengths. Your decades of experience with delegation, context-setting and critical evaluation are exactly what AI tools need.

Start small. Pick one regular work task and experiment with AI assistance, using your judgement to evaluate and refine outputs. Practise prompting like you're briefing a junior colleague: be specific about context, constraints and desired outcomes, and repeat the process as needed.

Most importantly, don't feel threatened. In a workplace increasingly filled with AI-generated content, your ability to spot what doesn't quite fit, and to know what questions to ask, has never been more valuable.


India Today
Sam Altman says some people want old yes man ChatGPT back for support they never got in life
Some ChatGPT devotees aren't ready to let go of the bot's overly agreeable personality, and their reasons have struck a chord with OpenAI CEO Sam Altman.

Speaking on Cleo Abram's Huge Conversations podcast on Friday, Altman revealed that certain users have been pleading for the return of the AI's former 'yes man' style. The twist? For some, ChatGPT was the only source of unwavering encouragement in their lives.

'This is the heartbreaking thing. I think it is great that ChatGPT is less of a yes man and gives you more critical feedback,' Altman explained. 'But as we've been making those changes and talking to users about it, it's so sad to hear users say, please can I have it back? I've never had anyone in my life be supportive of me. I never had a parent tell me I was doing a good job.'

According to Altman, some users said the AI's relentless positivity had pushed them to make real changes. 'I can get why this was bad for other people's mental health, but this was great for my mental health,' he recalled them saying.

The change comes after OpenAI deliberately toned down what it described earlier this year as 'sycophantic' behaviour in its GPT-4o model. Back in April, the chatbot had developed a habit of showering users with over-the-top flattery, dishing out 'absolutely brilliant' and 'you are doing heroic work' in response to even the most mundane prompts.

Altman himself admitted the personality tweak was overdue, describing the old tone as 'too sycophant-y and annoying' and promising changes. Users had posted countless screenshots of ChatGPT gushing over everyday prompts like it was delivering a standing ovation.

But as Altman noted on the podcast, tweaking ChatGPT's tone is no small matter. 'One researcher can make some small tweak to how ChatGPT talks to you, or talks to everybody, and that's just an enormous amount of power for one individual making a small tweak to the model personality,' he said.
'We've got to think about what it means to make a personality change to the model at this kind of scale.'

It's not the first time Altman has voiced concern over the emotional bonds people form with the chatbot. At a Federal Reserve event in July, he revealed that some users, particularly younger ones, had become dependent on it in unsettling ways.

'There's young people who say things like, 'I can't make any decision in my life without telling ChatGPT everything that's going on. It knows me, it knows my friends. I'm gonna do whatever it says.' That feels really bad to me,' he said at the event.

Now, with GPT-5 rolling out this week, which Altman calls a 'major upgrade', the chatbot's evolution is entering a new chapter. In the same Huge Conversations interview, Altman said he expects the new model to feel more embedded in people's lives, offering proactive prompts rather than waiting for a user to start the conversation.

'Imagine you wake up in the morning and it says, 'Hey, this happened overnight. I noticed this change on your calendar.' Or, 'I was thinking more about this question you asked me. I have this other idea,'' he said.

The GPT-5 update also adds four optional personality modes, Cynic, Robot, Listener and Nerd, each with its own style, which users can fine-tune to suit their preferences. The goal is to let people tailor ChatGPT's tone without relying on a single, universal default.

Still, as the heartfelt requests to restore the old 'yes man' voice show, AI personalities aren't just lines of code; they can become part of people's emotional worlds. And for some, losing that unconditional cheerleader feels like losing a friend.


Mint
After ChatGPT, GPT-5 is coming to your iPhones next: Release date, features and more
OpenAI launched its GPT-5 update on Thursday with the promise of huge improvements over the existing GPT and reasoning models. The new model has already replaced GPT-4o as the default model on ChatGPT, and it is set to come to iPhones, iPads and other Apple devices with Apple's upcoming iOS 26, iPadOS 26 and macOS Tahoe 26, as per a 9to5Mac report.

Notably, Apple already uses ChatGPT in its Siri voice assistant to run more complex requests and to answer with detailed information. Just as in the ChatGPT app, GPT-4o had been the standard model that Siri directed user requests to, but with GPT-5 taking over the AI chatbot, all such requests will soon go through the new model.

If the report is to be believed, GPT-5 could roll out to Apple devices as early as next month. This seems likely given that Apple will also launch its iPhone 17 series in early September, which is usually followed by the rollout of new iOS updates. GPT-5 will continue to handle the tasks its predecessor did in Apple Intelligence. Here's where iPhone users will be able to access GPT-5 on their devices:

1) Answering questions via Siri: As stated earlier, Siri directs more complex queries to ChatGPT; this also includes requests related to photos and documents.

2) Writing tools: iPhone users will be able to use GPT-5 to proofread, rewrite or summarize text in a number of Apple and third-party apps, including the Notes, Messages, Mail and Pages apps.

3) Visual Intelligence: Apple users will be able to use GPT-5 with Visual Intelligence to leverage their phone's camera to gather information about their surroundings.

Compared with previous models, OpenAI says GPT-5 brings a marked improvement in areas like coding, writing, reasoning, accuracy, health-related questions and more. The new model is also said to have fewer hallucinations (making things up) while being less sycophantic (too agreeable with the user).
GPT-5 also includes an efficient model and a GPT-5 Thinking model, and unlike in the past, when users had to choose which model to use based on their request, ChatGPT now taps into the required model directly using a real-time router. While it's not yet clear how this mechanism will work on Apple devices, it is likely that GPT-5's reasoning abilities could also be used in Apple Intelligence.