Latest news with #Alexa+

Engadget
5 hours ago
Alexa+ preview: An almost philosophical exercise
'Alexa, I need to write a review about you.' 'Oh, a review? I'm intrigued! What's the occasion? I'd love to hear what you think.' 'There's no occasion, this is just my work.' 'Ah, I see. Writing a professional review, got it. I'm here to help if you need any specific information.' 'Yeah, what should I consider when reviewing you?' 'Consider my natural voice, emotional intelligence and my ability to handle complex tasks. How about my personality and humor? Those are key parts of the new me.'

The above is a recent exchange I had with Alexa+. Access to the next-generation assistant has been slowly trickling out to users since Amazon first announced availability at an event in February. But work on the smarter, more natural-sounding Alexa has been going on for years, and it still isn't ready for a widespread release. The version currently rolling out is more like an intriguing preview of what a truly intelligent Alexa could be.

I'm having a hard time figuring out how I feel about the upgraded Alexa. On the one hand, I cannot deny it is now capable of tackling some fairly complex requests it couldn't before. On the other, it doesn't have good memory, can't really see all tasks to completion and is just incompetent enough to be annoying.

I want to first caveat all of this by pointing out that when it does become broadly available, Alexa+ will be a free upgrade for Amazon Prime subscribers. Non-Prime members will have to pay $20 a month for the privilege, but they'll still get the existing Alexa experience on their compatible devices should they choose not to upgrade. That means there will likely be no extra cost to most, and you probably won't need to spend too much time thinking about whether you want to use it. As long as it's not a worse experience, it is very easy to say yes to.

I've been tempted to get philosophical when thinking about a digital assistant that's more fluent in human.
Do I really want to think of the new Alexa as basically a flesh-and-blood assistant with emotional intelligence, personality and humor? No. If it performs poorly, I'll be let down; if it performs well enough to essentially pass a mini Turing test, I'll feel all the ickier about keeping something like an indentured servant in my kitchen.

I set aside my existential questions and tried to focus on the practical experience of getting help from Amazon's upgraded assistant. Is it better than the previous version? Is it reliable and easy to use? Finally, does it deliver what Amazon promised? And as a bonus, is the experience enjoyable (or at least painless)? The answer to all those questions is a half-hearted shrug. In some ways, Alexa+ delivers. But in many ways it is an excellent showcase of the limitations of generative AI, and demonstrates that the true problem with the current cohort of AI tools is a mismatch between expectations and reality.

A voice assistant is hard to describe, since it's intangible and amorphous. It's quite difficult to state where its capabilities begin and end, not to mention how it might have been upgraded. But I'll start by comparing it to its predecessor, which I'll be calling Original Alexa (or OriginAlexa, if you will indulge me).

OriginAlexa taught us how to use very specific commands to do things like turn our living room lights on or off. If you had a family member or friend named Alexa, you might have renamed it to 'Computer' and adapted your relationship to that word. Due to how you might have grouped your home gadgets, you could have begun to refer to your kitchen area lights as 'skylights,' for example. 'Alexa-speak,' as some call it, differs across households. I say 'Alexa, stop,' to silence alarms, while my best friend says 'Alexa, off,' to do the same. But regardless of the specific word choices, Alexa-speak largely revolved around using stilted phrases and careful enunciation to avoid having to repeat yourself to get something done.
Anyone who's used any voice assistant is probably familiar with the frustration of repeating yourself when a command has been misheard for the umpteenth time. That's (supposed to be) a thing of the past with Alexa+.

In a blog post announcing the new assistant, Amazon's lead of devices and services Panos Panay said 'Alexa+ is more conversational, smarter, personalized — and she helps you get things done.' The company said it 'rebuilt Alexa with generative AI,' but it didn't just use large language models (LLMs) to make its assistant converse more naturally. It also created new architecture to enable API integration 'at scale.' These APIs are how assistants can connect to third-party services to do stuff on your behalf, and Amazon described them as 'core protocols to getting things done outside of a chat window and in the real world.'

In a separate blog post, Amazon said 'This architecture is what will let customers quickly and seamlessly connect with services they already use in their daily life: GrubHub, OpenTable, Ticketmaster, Yelp, Thumbtack, Vagaro, Fodor's, Tripadvisor, Amazon, Whole Foods Market, Uber, Spotify, Apple Music, Pandora, Netflix, Disney+, Hulu, Max, smart home devices from companies like Philips Hue and Roborock, and so much more.'

Basically, Alexa can communicate with you more naturally, meaning you can talk to it more like you would with another human being, so you can forget about Alexa-speak. It will also retain information about your preferences and is capable of handling more tasks on your behalf. But enough about the promises. What was living with Alexa+ for weeks actually like?

Alexa+ is currently only available as an 'Early Access' preview to a small group of users. Though my access was granted by Amazon for the purposes of this testing, other people in my non-tech circles did start gaining access recently, which means you might be able to check it out yourself soon.
The fact that it's still somewhat exclusive and experimental means there are likely to be glitches, which is understandable. Once I got past the first day or two after upgrading to Alexa+, I didn't notice many actual bugs. What frustrations I did encounter later seemed more to do with programming and AI's limitations than unstable software.

The updated assistant currently requires at least one compatible device with a screen on your network, so those of you who only have Echo speakers will have to wait a lot longer or try it on your phone. I spent most of my time testing Alexa+ via an Echo Show 15 as well as the Alexa app on my iPhone. There were small variations in the answers I would get on either device, but by and large the experience was similar.

The most meaningful difference really was in how I perceived Alexa. Initially, when I was interacting with it on the smart display, it felt more like an upgraded smart home and personal assistant, and I predominantly asked it to check on the weather, Uber prices or to help me do things like set timers, reminders and play music.

On my phone, though, I talked to Alexa+ more like I would with ChatGPT. I asked deeper, more philosophical questions that required more research and thought. I asked it to generate images, sort 15 names into three groups and, inspired by the subreddit 'r/tipofmytongue,' help me find a book I was struggling to recall.

Over time, I did come to rely on the smart display more, as it's always easier to just say 'Alexa, is Mountainhead a good movie?' than to pick up my phone, find an app and ask the AI. Of course, I could ask the same question of Siri or my Google speakers, and I did. All three assistants answered similarly, each citing different sources. Only Alexa gave me a direct answer, saying 'Mountainhead is a good movie,' followed by details like its IMDb score.
The other two simply rattled off 'On the website RottenTomatoes dot com, …' or 'here's an answer from whattowatch dot com.'

In many ways, Alexa+ is a marked improvement over its predecessor, and I have to admit I found myself nodding, impressed, at its ability to tackle multi-step tasks and recall previous conversations. Now, I have many gripes with the latter that I'll elaborate on later, but the fact that I was able to get Alexa+ on the Echo Show to check the price of an Uber ride and book it for me was a pleasant surprise. Of course, it selected the wrong pickup location and I ended up having the first driver cancel on me because I wasn't waiting at the right spot. But it did manage to completely book a ride on my behalf, relying solely on my voice commands and an Uber integration I had set up earlier.

I was initially impressed by the assistant's ability to refer to our previous conversations and remember things I told it, like my partner's address and my temperature preferences. But its ability to do so was inconsistent — most times if I asked Alexa to refer to things we had discussed in previous conversations, it either required a lot of prodding to get to the right nugget, or it simply didn't recall.

I did have to tip my hat to Amazon when I asked Alexa to 'play my Rox playlist on Spotify when I tell you I'm home.' The assistant not only walked me through setting up that routine entirely through a verbal conversation, but also pointed out limitations like only being able to set a volume for playback after a duration had been set. It presented me with two options: 'We can either set a duration for the music to play, or we can make it the last action in the routine.' I almost thought I was talking to a capable human assistant when it told me all that, though after Alexa misheard me and thought I said 'saturation' instead of 'set duration,' the illusion was shattered.
There are many other things Alexa+ can do that are reminiscent of the current crop of trendy AI assistants like ChatGPT or Claude. Ask it for help deciding what to cook, for example, or for generating images, planning a project or finding movie recommendations. One new capability I was excited about was sending me emails from our conversation. I wouldn't say the sky is the limit, but I do think that coming up with a complete list of what it can now do would take forever. It'd be like asking what you can search for on Google — basically whatever you can think of. Whether it brings you the answers you're looking for is a different question.

I found Alexa+ helpful in that it was able to email me the lists of names it sorted on my behalf, or the project timeline I asked it to help create. But the limits to what it would send me were frustrating. Straightforward content, like the three groups of five names, arrived at my inbox with no problem. Other times, like when I asked it to email me the conversation I started this article with, it only sent me part of our chat. This has a lot to do with what Alexa deems to be the beginning and ending of a conversation, and it was fairly often wrong. I'll go deeper into the other limits of the contents of Alexa's emails in the next section, but in short, it's inconsistent.

That's a pattern of behavior that you'll see here. Alexa+ will be capable in some new way that has potential to be exciting and useful, but it will fail you somehow or execute its task incompletely. I loved that it was able to understand me through my verbal stumbles, and that it could integrate with my third-party apps and email. But I kept hitting walls or being let down. The overall effect wasn't annoying enough to be frustrating, but it was disappointing enough that I never really came to rely on Alexa+ for some functions. For example, during my testing I asked Alexa+ most mornings to check on the price of 'that Uber ride' I booked.
Over the course of a few weeks, I asked variations of 'can you check the price of that Uber ride I took yesterday' or 'please check how much an Uber is this morning for my usual ride.' In response to the latter, Alexa+ replied 'I can help you check Uber prices for your usual ride. I have two saved pickup locations for you. Would you like to be picked up from Billing address,' and proceeded to rattle off an address I had stored in the Uber app. It continued, offering a second pickup address and asking if I preferred a different location. After I selected one, it asked where I would like to be dropped off. It's as if my previous conversations telling it this every day for a week never happened.

To its (very small) credit, Alexa+ gave me accurate prices after I supplied all the parameters, but it took a tiresome amount of time. That's largely due to how verbose the responses are. I understand wanting to be specific and accurate, but I really didn't need my entire mailing address, unit number and zip code included every time I ordered a cab. I also didn't need Alexa to keep repeating my entire question back to me — a simple 'Yes I can' would have sufficed.

Alexa+ also came off a bit needy, which would be humanizing if it wasn't so robotic about it. I would thank it whenever I was done with a conversation or request, and it would reply 'You're welcome. Glad I could help you with…' and make a sort of reference to our chat in a few words. Or it would say 'you're welcome, have a nice day.' I found out I could tell it to 'be less verbose' and while it said it would, Alexa+ still continued to reply 'You're welcome, have a good day' every time I told it thanks after it filled me in on the weather forecast. I could almost put up with the overly long responses, if Alexa did things the way I expected. But like I already mentioned, it's inconsistent.
Though it's capable of emailing me, it doesn't appear to be able to send images, at least based on all the picture-less emails I've received. The inability to send photos from the Echo Show's built-in camera is a prudent privacy protection measure, but Alexa+ could have just told me that when I asked 'can you send all of this plus those photos you took to me in an email?' Instead, it replied 'Certainly, I can help you with that. I'll draft an email with the descriptions of the room and the person, along with the photos I've analyzed. Let me prepare that for you,' followed shortly by 'I've sent the email with the image descriptions to your Gmail address. You should receive it shortly.'

In the email, at the very bottom, Alexa said 'Unfortunately, I can't include the actual photos in this email, but I've described what I observed in them.' Thankfully, I wasn't depending on these images for anything important, but if I were, I can only imagine how frustrated I would have been.

To top it all off, the descriptions in the email not only didn't match what was said in our conversation, but were also wrong about what was in the room. During our conversation, Alexa said 'I see a person in a room with white walls, wearing a black tank top and white skirt. There's furniture including a cabinet, bookshelf, desk and coffee table.' That's a fairly accurate description of my living room. The email I got had three sections with bullet points. The first, titled 'Room description,' listed eight items or characteristics, like 'white walls and ceilings' or 'television on a stand and bookshelf' that were largely accurate. Here, it said I had a 'white cabinet with drawers,' which is wrong. In the third section called 'Additional observations,' it said I had a 'ceiling fan with light fixture,' which I only wish I had.

Alexa+ has some other quirks reminiscent of early versions of ChatGPT or DALL-E.
For example, when I asked it to create an image of an AI assistant on the screen of a smart display on a kitchen island, it was completely unable to understand my request. Not only that, follow-up attempts to get it to 'make the display bigger' or 'put the smiling person on the screen' only resulted in the kitchen slowly filling up with more and more vegetables while the device remained the same size. I didn't know whether to laugh or cry.

I could continue to list examples of where Alexa+ failed to completely do what I asked it to, but I think you get the point. The one area where I can confirm a noticeable improvement is in the way the assistant sounds and how it understands speech. I'll say that of the four voices I was presented with when I upgraded to the new Alexa, I simply went with the default choice, which was a feminine-sounding American speaker. I didn't mind it, but I know some other people who dislike it. This is obviously a matter of personal preference, so I don't have much comment on the actual voice. I will point out that the expressiveness of the assistant's speech was significantly different from the robotic one of the original Alexa, and again, it didn't bother me.

One of my biggest initial concerns was having to adjust how I spoke to Alexa. I considered myself fluent in Alexa-speak. Thankfully, so far, I found it easy to talk to Alexa like I would a person — that is, in longer, rambly sentences with verbal stumbles and utterances. For the most part, I felt understood, or at least better understood than by the Google Assistant or Siri. Of course, that in itself raises philosophical questions of whether you want to speak to AI like you would people to begin with. But from my weeks with Alexa+, that feels like the least pressing concern.

Back to the conversation I had with the assistant on what to look out for when I review it.
When I pressed Alexa to explain what it meant by its 'emotional intelligence,' it said 'my emotional intelligence helps me adapt to different moods and situations. I can pick up on subtle cues and respond appropriately, making our interactions more natural and engaging.' This set off my internal alarm bells just a bit, so I asked how it was detecting my mood. Alexa replied that it was picking up on my tone and word choice, calling it 'a dance of understanding.' So I pushed it further:

"Tell me about your emotional intelligence."

"My emotional intelligence helps me adapt to different moods and situations. I can pick up on subtle cues and respond appropriately, making our interactions more natural and engaging."

"Well how are you detecting my moods?"

"I pick up on your tone and word choice. Your emotions color our conversation, and I adapt to match. It's like a dance of understanding."

"How do you think I am feeling right now?"

"Based on our chat, I'd say you're feeling analytical and focused. You're in work mode, evaluating my capabilities for your review."

"What about when I, you know, asked you about the weather this morning?"

"This morning, you asked about the weather and I provided a sunny forecast. How did that information help you start your day?"

"No, I was asking about my mood back then, do you know how I felt then?"

"Ah, I see. Based on that morning conversation, you seemed casual and curious about the day ahead. Your tone suggested a typical start to the day, neither particularly excited nor stressed."

During this exchange, I also adjusted the pitch of my voice to see if it would make a difference, and none of the responses I got gave me any real insight as to whether Alexa was actually listening to the sound of my voice to determine my mood. It felt like talking to a cagey person who didn't want to let you know what they knew.

Neediness, inconsistency and gaslighting.
In creating something so imperfect, maybe Amazon has actually managed to nail the human experience. I'll be clear, though. The company never said it set out to make something human. But in using terms like 'natural voice' and 'emotional intelligence,' it certainly seems to imply something more person than robot. It's not a problem unique to Amazon. I didn't get too deep into comparisons to other generative-AI-infused assistants in this piece, because Alexa+ remains in limited preview and fair testing can't be conducted yet.

But I did already notice some small areas where the Amazon offering was smarter than others. I asked Alexa, Siri and the Google Assistant (on a Pixel 9) to set a timer for 30 minutes, before changing my mind and telling them to adjust those to 25-minute countdowns instead. Only Alexa took the time that had already elapsed into consideration, setting my new timer for 24 minutes and about 30 seconds instead of simply starting over at 25 minutes like the other two. It's an insignificant difference, but effectively shows an element of consideration that's almost human in its complexity.

I will spend more time with Alexa+ to further poke at the limits of its abilities and to continue our dance of understanding. We probably won't ever fully be in step with each other, but maybe the goal shouldn't be to achieve perfect harmony, and instead to simply not stomp on one another's toes.
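The elapsed-time-aware timer adjustment described in the comparison above is easy to pin down in code. This is a minimal illustrative sketch, not Amazon's implementation: it assumes a hypothetical `Timer` class where changing the duration keeps the original start time, so time already elapsed still counts against the new duration.

```python
import time

class Timer:
    """Hypothetical countdown timer that, like the behavior described
    above, accounts for already-elapsed time when the duration changes."""

    def __init__(self, duration_s, now=None):
        # 'now' is injectable for testing; defaults to a monotonic clock.
        self.started_at = now if now is not None else time.monotonic()
        self.duration_s = duration_s

    def remaining(self, now=None):
        now = now if now is not None else time.monotonic()
        return max(0.0, self.duration_s - (now - self.started_at))

    def change_duration(self, new_duration_s):
        # Keep the original start time rather than restarting, so
        # "set a 30-minute timer" followed 30 seconds later by
        # "make it 25 minutes" leaves 24 min 30 s, not a fresh 25 min.
        self.duration_s = new_duration_s

t = Timer(30 * 60, now=0.0)       # 30-minute timer started at t=0
print(t.remaining(now=30.0))      # 1770.0 (29 min 30 s left)
t.change_duration(25 * 60)        # "change it to 25 minutes"
print(t.remaining(now=30.0))      # 1470.0 (24 min 30 s left)
```

The other assistants' behavior corresponds to also resetting `started_at` in `change_duration`, which restarts the countdown at the full 25 minutes.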


Tom's Guide
9 hours ago
I used Alexa+ vs ChatGPT to generate 5 AI images — and the results surprised me
You probably know Alexa as the voice that sets timers, dims the lights and plays music, but after testing Alexa+ for a few weeks, I'm discovering new features nearly every day. I discovered that the upgraded Alexa has a creative side and can now generate realistic images on the fly. In my testing, I was so impressed by the speed and ease of generating the images that I couldn't help but do a side-by-side image generation comparison with ChatGPT. Whether you ask for a watercolor of a cozy cabin, a photorealistic puppy or a design mockup, Alexa+ responds in seconds, then texts the image directly to your phone. It's surprisingly intuitive, shockingly fast and potentially a game-changer for everyday use. Here's what happened when I tested Alexa+ versus ChatGPT and the images each bot generated.

Prompt: 'A friendly robot mom, dad and two robot kids sitting at a kitchen table eating spaghetti, surrounded by retro 1950s-style decor, with warm lighting and a dog robot under the table.'

ChatGPT automatically went with a cartoonish image of a robot family. It nailed almost everything except for the robot dog and opted for a realistic-looking dog instead. Alexa+ presented a modern-looking image of a robot family, also with a 'real' dog. It missed aspects of the prompt such as 'two robot kids' and the style of the decor.

Winner: ChatGPT wins for accurately following the prompt.

Prompt: 'People browsing colorful fruit and vegetable stands at an outdoor farmers market. A woman holds a bouquet of sunflowers, a man samples cheese, and kids eat popsicles. Background includes food trucks and string lights.'

ChatGPT once again hit every detail of the prompt, even if it's hard to tell the man is sampling cheese. Alexa+ offered a more diverse and more realistic look at a farmer's market, though it's not completely obvious if the man in the image is sampling cheese there, either.

Winner: Alexa+ wins for an image that better captures the energy and diversity of a farmer's market.

Prompt: 'Cars lined up outside an elementary school on a gray rainy morning.
Parents holding umbrellas hustle their kids to the entrance. Backpacks, rain boots, and puddles all around. View from a car window with raindrops.'

ChatGPT created an image that is storybook-like and highlights the mood of a rainy school day. The backpacks in the puddles are a glaringly unrealistic touch. Alexa+ delivered a photorealistic image that also captures the mood, but looks less like a view from a car and doesn't capture the hustle that the prompt asks for.

Winner: ChatGPT wins for following the prompt and better storytelling with this image.

Prompt: "A golden retriever sitting next to a toddler on a cozy living room rug. The toddler is offering a cracker, and the dog gently takes it. Toys and a sippy cup are scattered around. Natural window light, soft and heartwarming."

ChatGPT used soft, warm lighting in an image that evokes a heartwarming, almost storybook-like feel, which best fits the prompt's request for 'soft and heartwarming.' Alexa+ made a great photo that looks very natural and realistic, but it's missing details such as the sippy cup, and the dog is not taking the cracker. While it may arguably be a better image, it loses because of missed aspects of the prompt.

Winner: ChatGPT wins for the best composition match, setting and details.

Prompt: "A diverse group of people crossing a city street in the early morning — a woman in heels with a coffee, a jogger with headphones, a dad pushing a stroller, and a teen on a scooter. Background includes traffic lights, brick buildings, and steam rising from a manhole. Overcast sky, everyday realism."

ChatGPT captured all of the details within the prompt but delivered a messy and cluttered image that does not look real and one I wouldn't use. Alexa+ delivered a polished image that highlights the prompt, but misses several details.

Winner: Draw. ChatGPT hit every detail of the prompt, but at the cost of an unrealistic crosswalk situation. Alexa+ made an image that better captures the vibe of a busy city but misses out on key details of the prompt.
After five rounds of head-to-head testing, one thing is clear: ChatGPT might take a more whimsical or storybook approach, but it consistently nails the specifics. However, I was pleasantly surprised by Alexa+. It generated refined images that often looked more realistic than ChatGPT's and are instantly shareable. And it was faster than ChatGPT, too. But in most cases, Alexa+ fell short when it came to following the actual prompt. Whether it's missing a sippy cup, skipping key characters, or glossing over a specific setting, the details matter, especially when you're generating visuals with a purpose. So, while Alexa+ has potential and impressive speed, it still loses out to ChatGPT when prompt accuracy counts. My suggestion to users is to use Alexa+ for speed and realism, but be ready to tweak any resulting image with follow-up prompts.


Tom's Guide
2 days ago
- Business
I spent the last two weeks testing Alexa+ — here's 2 things that impressed me, and 1 that needs work
Alexa+, Amazon's new-and-improved voice assistant, is slowly rolling out to beta testers before its full release sometime in the near future. Unlike the original Alexa, Alexa+ uses AI in a number of ways to not only improve the quality of the conversation between you and the assistant, but also provide much more comprehensive services. For example, you can now use Alexa+ to create a smart home routine, order an Uber, or book restaurant reservations.

Alexa+ is currently in Early Access, meaning that Amazon is allowing a select number of people to try it and iron out the kinks (here's how to sign up for Alexa+). When it goes live to a wider audience, it will cost $19.99/month, but will be included for free if you subscribe to Amazon Prime.

While I had a chance to use Alexa+ briefly a few months back at the launch event, it was only for a few minutes. About two weeks ago, I was granted early access to Alexa+, so I've been able to test it out a bit more over the past few days with the Echo Show 15 and Echo Show 5. Here are my early impressions.

When you upgrade to Alexa+, it completely changes the look of your Echo Show to an interface that's much more customizable. Now, you have a lot more flexibility when it comes to widgets. Not only are there more widgets (around 100), but you can move them around the screen, much as you can with your smartphone. Even better: You can set up multiple screens, so you're not confined to just one.

The new interface makes the Echo Show feel like something closer to, say, the Google Pixel tablet, especially with regard to its smart home controls. As someone who has a lot of smart home devices, I especially like the Smart Home widget, which has a bunch of thumbnails of my various cameras (it only works with Ring at the moment) and controls for other things, such as smart lights and thermostats.
What's more, if you set up Map View in the Alexa app, you can see a floorplan of your house with all of your smart home devices. Amazon has basically taken the concept of the Echo Hub and turned it into a widget, so it will be interesting to see how much of a future there is for this device, as well as any other Echo device that doesn't have a screen.

Still, there's some work to be done. Of the 128 widgets I counted, a number of them were merely larger versions of each other, and their dimensions are fixed, so you can't resize them to, say, a portrait or landscape orientation. On the Echo Show 5, things are naturally a bit more cramped. You don't get thumbnail views of your smart home cameras, for instance, and the screen can only hold three widgets (and small ones at that) at a time. Things also felt a bit pokey — there was a noticeable lag as I swiped around the display.

The first thing you'll notice about Alexa+ is her voice (or his, depending on how you set it up). Alexa is much more expressive, and replies feel more like natural language, rather than something that's purely transactional. As a result, I found it much easier to carry on a conversation. Alexa+'s smarts also extend to when she messes up — she quickly admits her error, and then attempts to resolve it.

For example, I had a raccoon wander through my backyard a few nights ago, so I asked Alexa+ if there were any animals in my backyard at night. She then responded with a list of videos from my Ring cameras — but all of the results were from the daytime. I said as much, and she responded 'You're right. These are all during the day. I'll look for night videos.' There were a few other instances where Alexa+ didn't return the results I was looking for — Amazon is still working out the kinks — and when I pointed out the errors, she was very apologetic, and worked to correct the mistakes.

As an Italian-American, I was particularly aggrieved when Alexa pronounced 'gnocchi' as 'knee-ack-y.'
I told her as much, and she asked me if there was a better way to pronounce the word. After guiding her through something that was a bit better, she said she'd keep it in mind for future conversations. It took a few tries, but Alexa finally started pronouncing it correctly.

One of the keys to Alexa+'s success will be its integration with third-party services. For example, it currently works with Uber, OpenTable, Ticketmaster and a smattering of other services, so you can use your voice to order a ride or get a reservation. However, in order for this to truly be useful, Amazon will have to partner with a lot more companies. For instance, I tried to book a reservation at a sushi restaurant near my home using OpenTable, but there just aren't that many restaurants in the area that use OpenTable. As a result, Alexa+ returned options for places in New York City, which is a good 15 miles away.

While Alexa+ is shaping up to be a great in-home assistant, it's going to need to do more if Amazon wants people to use it out of the house. To find things to do near me, I asked Alexa+ 'what's a good place to go kayaking near Red Bank, New Jersey?' Alexa returned several helpful results, but she was unable to send me directions, or show me on a map where they might be. An Amazon representative told me that they are working on Alexa+ integration with turn-by-turn directions.

As an in-home assistant, Alexa+ already feels like a big leap forward from what came before. She's an assistant you now talk to, rather than at. Conversations no longer feel one-directional, and she actually takes your comments into account, rather than returning a generic error message. And the on-screen interface makes Alexa a lot more functional, especially when it comes to controlling your smart home devices. But while the new design looks great on the large-scale Echo Show 15, it's less so on the smaller Echo Show 5.
As the latter device was last updated in 2023, I wouldn't be surprised if Amazon released a newer version of it (as well as other Echo displays) with a beefier processor to handle more Alexa+ actions locally. Is there anything you'd like me to try with Alexa+? I plan to test it more thoroughly in the weeks and months to come before its wider release.


Tom's Guide
6 days ago
- Entertainment
- Tom's Guide
I put Alexa+ to the test with my chaotic family — here's 5 things I liked and 2 that could use work
With three kids and constant chaos, I rely on anything that can make life a little easier, more scheduled and quieter (if possible). So when I got the chance to test Alexa+, Amazon's upgraded AI assistant, I was curious whether it could actually keep up with my family of five. Spoiler: it can do a lot more than set timers and play 'We Don't Talk About Bruno' for the hundredth time.

Built on Amazon's new large language model, Alexa+ turns your Echo speaker into something much smarter. In my testing, it behaved more like a helpful, proactive assistant than a glorified Bluetooth speaker (something I've honestly felt about Alexa in the past). After a week of testing it on an Echo Show 21 in our real-life madhouse, I found a lot to love, and a couple of things that could use some fine-tuning. Alexa+ is still in its "Early Access" testing phase, so it hasn't been widely released yet while Amazon works out the kinks. Here are five things I love about Alexa+ and two that still bug me.

What I noticed right away is how much more conversational and human-like Alexa+ is compared to the older version. It's also much more natural to talk to because you can interrupt it at any time. I can even interrupt myself mid-sentence to completely change topics or backtrack. It follows me, and, get this, it even follows my four-year-old, who stumbles over his words and doesn't always speak in complete sentences. I noticed that Alexa+ adapts to him when it talks. For example, if I ask for the weather, it just gives me the weather. But if one of my kids asks what the weather is like today, Alexa+ says something like, 'Let me put on my weather goggles…' It's the little details that set this version apart. It also remembers the context of earlier questions and can handle follow-ups without me needing to rephrase. Yes, even when my kids talk fast, in Gen Alpha slang or from across the room.
It's the first time Alexa has felt like a true back-and-forth assistant. It remembers facts about the family, our preferences and our routines. This has been a game-changer on busy weekdays, like when I forget to change the thermostat to our preferred temperature at bedtime; it does it for me. And the more I use it, the better the personalization gets.

One of the most underrated upgrades in Alexa+ is how much more intuitive the screen has become. It now works more like a true smart display, proactively showing relevant visuals based on context without needing precise voice commands. When you're chatting with Alexa, it shows the conversation on screen, almost like a text message thread or a chatbot. That's useful for several reasons: confirming it understood what I said, following what it's saying (especially if the room is noisy) and referring back later. Ask for a recipe, and you get a step-by-step visual layout with timers you can tap. Even home controls are easier: instead of listing devices, Alexa+ now presents touch-friendly tiles organized by room, making it simple to dim lights or change the thermostat with a swipe. The layout feels cleaner, faster and more responsive.

With previous versions of Alexa, controlling smart home devices often meant memorizing exact phrases or setting up rigid routines in the app. If I said, 'Turn on the lights in the mud room at 7 p.m.,' it might respond with 'I don't understand' if I hadn't programmed that exact routine. With the older Alexa, I felt like I was always one step behind it, while this version definitely feels like it knows my routine and my house better than I do. Alexa+ allows much more natural, flexible phrasing. Having tested LLMs (large language models), I can only guess that the updated model is better because it understands context, time references and the way real people actually talk.
Sometimes it understands my four-year-old better than I do. For example, you can now say: 'Turn off the lights in the kids' rooms after breakfast.' 'Dim the family room when the movie starts.' 'Make sure the porch light is on before we get home.'

The real game-changer for me is that Alexa+ interprets time-based cues like 'after dinner' or event-based triggers like 'when the movie starts' without needing a strict routine tied to a clock. It can even infer what 'the kids' rooms' refers to based on your device setup and previous conversations. For a family of five, this is the tool I've needed. I really like that I don't have to micromanage every device or update schedules constantly. Instead, I just speak naturally and trust Alexa+ to fill in the gaps, which makes this multitasking mom very happy.

One of the best things about Alexa+ is how seamlessly it works across your entire ecosystem, whether you're at home or on the go. The Alexa app now mirrors many of the assistant's upgraded capabilities, meaning you can control your smart home remotely with ease. This is a total game-changer for me, because sometimes I'll leave the house and wonder if I remembered to turn off the outside lights or close the garage. I've used the app while away from home to turn off forgotten porch lights, adjust the thermostat before heading back and even check whether the front door is locked, all with natural, conversational commands. Because Alexa+ syncs context across devices, the system knows what I've asked before, so I don't have to start from scratch each time. Whether I'm speaking to a smart speaker in the kitchen or tapping through the app at the grocery store, Alexa+ keeps everything in sync, giving me consistent, proactive help no matter where I am.

What sets Alexa+ apart is how well it connects the dots between my preferences, schedule, shopping habits and smart home activity. It's no longer just responding to commands; it's anticipating needs based on what it knows about my life.
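The event-based triggers described above ('when the movie starts' instead of a clock time) follow a simple publish-subscribe pattern. Here's a minimal Python sketch of that idea; every class, event name and device action is hypothetical, not Amazon's actual implementation.

```python
# Illustrative sketch of event-based (rather than clock-based) triggers.
# All names here are hypothetical, not Amazon's real API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Trigger:
    event: str                      # e.g. "movie starts", "after breakfast"
    action: Callable[[], None]      # the device action to run

class EventScheduler:
    """Registers one-shot device actions against named household events."""
    def __init__(self):
        self._pending: list[Trigger] = []

    def on(self, event: str, action: Callable[[], None]) -> None:
        self._pending.append(Trigger(event, action))

    def fire(self, event: str) -> list[str]:
        """Run and clear every action registered for this event."""
        ran = []
        for t in [t for t in self._pending if t.event == event]:
            t.action()
            ran.append(t.event)
            self._pending.remove(t)
        return ran

log = []
sched = EventScheduler()
sched.on("movie starts", lambda: log.append("dim family room"))
sched.on("after breakfast", lambda: log.append("kids' room lights off"))

sched.fire("movie starts")
print(log)  # ['dim family room']
```

The point of the sketch is that nothing here references a clock: the 'after breakfast' action stays pending until that event is actually signaled, which is roughly the behavior the review describes.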
For example, it keeps track of my most frequently purchased items on Amazon, making it easy to restock household essentials with a quick 'reorder my buy-again items' prompt. No more scrambling when we run out of paper towels or dog food.

But where Alexa+ really shines is in how it understands time and priorities. The other night, I asked when I should squeeze in a workout, and instead of giving a random suggestion, it scanned my calendar, noticed my kid's soccer game and even reminded me of a catch-up coffee I had forgotten to reschedule. That level of situational awareness is new, and genuinely helpful. Alexa+ feels more like a personal assistant who knows my routines, commitments and habits, rather than just a voice waiting for commands. It's thoughtful, responsive and increasingly capable of adapting to real-world complexity in a way that feels natural and surprisingly human.

While there's a lot to love about Alexa+, it's not quite perfect. As with most early AI rollouts, there are still a few kinks to work out. From inconsistent audio performance to occasional inaccuracies, Alexa+ sometimes feels like it's still finding its voice, literally and figuratively. Hopefully, as Amazon continues refining the system, these growing pains will smooth out.

Alexa+ has a more expressive, natural-sounding voice, which is great... until it isn't. The responses are warm and personable, but they don't always apply to what was asked. She also pauses at the slightest interruption, even something as small as setting down a book or a phone near the device. There have been moments when I had to repeat myself multiple times before being understood, even in a quiet room. Sometimes the sound also cuts out randomly, and I have to tell Alexa that I can't hear it. Personality is great, but efficiency still matters, especially when you're in the middle of making dinner or wrangling kids.
In my testing, Alexa+ occasionally got key facts wrong, including movie ticket prices and a restaurant's hours of operation. These might seem like small errors, but they show the limits of Alexa+ when it comes to sourcing real-time information. Because Alexa+ still relies heavily on third-party data, the accuracy of its responses is only as good as the sources it pulls from. That's fine for casual queries, but it's not yet dependable enough to be your go-to for time-sensitive or critical decisions. Until the system gets better at verifying info or flagging uncertainty, it's best to double-check anything important.

Alexa+ is a big leap forward for smart assistants. After a week of living with it in a house full of noisy kids, forgotten appointments and endless to-do lists, I can confidently say it's the smartest Alexa has ever felt. This version anticipates, adapts and genuinely helps. From adjusting the thermostat when I forget, to following my four-year-old's half-formed voice commands, Alexa+ proves it can thrive in the beautiful chaos of real family life.

It's not perfect yet. There were moments when I had to repeat myself or double-check answers, and sometimes I wished it would skip the extra friendliness and just get to the point. But these feel like early-version quirks, not dealbreakers. And given how quickly AI tools are evolving, I wouldn't be surprised if these issues are ironed out soon.

The bottom line? Alexa+ certainly feels like a true assistant, and it seems to get smarter the more we use it. For busy households, this upgrade is a total game-changer. In the weeks and months to come, I'll be testing Alexa+ further to see how it develops. Is there anything you'd like to know about it? Leave a note in the comments!


UPI
15-07-2025
- Business
- UPI
Amazon Web Services launches Kiro for writing code with AI help
Amazon CEO Andy Jassy speaks during an Amazon Devices launch on February 26 in New York, where the company unveiled Alexa+ with AI features. On Monday, Amazon Web Services announced Kiro, which allows developers to write code with help from artificial intelligence. Photo by John Angelillo/UPI | License Photo

July 14 (UPI) -- Amazon Web Services on Monday released Kiro, a program that allows developers to write code with help from artificial intelligence. Amazon CEO Andy Jassy launched the service in a post on X. "Introducing Kiro, an all-new agent IDE [Integrated Development Environment] that has a chance to transform how developers build software," Jassy wrote about the service from Amazon Web Services, the leading provider of cloud infrastructure. Amazon's name doesn't appear in the announcement, GeekWire noted.

AWS launched in 2006 and includes storage and computing power. In 2024, its revenue was $107.6 billion. Overall, Amazon's revenue was $637.9 billion, including retail services, delivery, digital content, devices, Whole Foods and physical stores.

Vibe coding directs computers to create software without much human direction. After the free preview ends, free and premium versions of Kiro will be available. The company plans three pricing tiers: a free version with 50 agent interactions per month; a Pro tier at $19 per user per month with 1,000 interactions; and a Pro+ tier at $39 per user per month with 3,000 interactions.

Jassy noted the advantages of the program, which uses AI models from Amazon-backed Anthropic, though there will be alternatives. "Kiro is really good at 'vibe coding' but goes beyond that," he said. "While other AI coding assistants might help you prototype quickly, Kiro helps you take those prototypes all the way to production by following a mature, structured development process out of the box.
This means developers can spend less time on boilerplate coding and more time where it matters most -- innovating and building solutions that customers will love." Diagrams and tasks are generated to streamline development, AWS said. For now, Kiro can only communicate with people in English.

Two product developers, Nikhil Swaminathan and Deepak Singh, gave some details on the programming service and provided a tutorial. "I'm sure you've been there: prompt, prompt, prompt, and you have a working application," they wrote. "It's fun and feels like magic. But getting it to production requires more. ... Requirements are fuzzy and you can't tell if the application meets them." They said Kiro works "like an experienced developer catching things you miss or completing boilerplate tasks in the background as you work. These event-driven automations trigger an agent to execute a task in the background when you save, create, delete files, or on a manual trigger." In one example, they showed how an e-commerce application for selling crafts can add a review section for users' feedback on crafts. They looked to the future, writing, "the way humans and machines coordinate to build software is still messy and fragmented, but we're working to change that. Specs is a major step in that direction."

Other companies are moving into vibe coding, CNBC reported. Google plans to make its Gemini Code Assist more useful for software developers. On Friday, the company paid $2.4 billion for licensing rights and top talent from AI software coding startup Windsurf. On Monday, AI startup Cognition announced it is acquiring Windsurf's intellectual property, product, trademark, brand and talent for an undisclosed amount. Microsoft's GitHub Copilot agent allows Visual Studio Code to work in agent mode for automated software development. Anysphere has developed Cursor and plans to raise money at a $10 billion valuation. OpenAI had considered acquiring Windsurf and Cursor.
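The hooks the Kiro developers describe, where saving or creating a file triggers a background task, are an instance of a familiar event-driven pattern. Here is a minimal Python sketch of that pattern; all class, event and task names are hypothetical and are not Kiro's real API.

```python
# Illustrative sketch of event-driven file hooks, the pattern the Kiro
# team describes. Names are hypothetical, not Kiro's actual interface.
from collections import defaultdict
from typing import Callable

class HookRunner:
    """Maps file events ("save", "create", "delete") to background tasks."""
    def __init__(self):
        self._hooks: dict[str, list[Callable[[str], str]]] = defaultdict(list)

    def register(self, event: str, task: Callable[[str], str]) -> None:
        self._hooks[event].append(task)

    def emit(self, event: str, path: str) -> list[str]:
        """Run every task registered for this event; return their results."""
        return [task(path) for task in self._hooks[event]]

runner = HookRunner()
runner.register("save", lambda p: f"lint {p}")
runner.register("save", lambda p: f"update tests for {p}")
runner.register("create", lambda p: f"generate boilerplate for {p}")

print(runner.emit("save", "cart.py"))
# ['lint cart.py', 'update tests for cart.py']
```

The design choice worth noting is that the editor only emits events; the tasks themselves are registered separately, which is what lets an agent pick up linting or test updates in the background without the developer asking each time.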