
Where are the users of Alexa+?
The new service was meant to be a triumph for Amazon after several delays in revamping Alexa in the age of artificial intelligence (AI) chatbots like ChatGPT.
The company signalled its significance by bringing chief executive officer Andy Jassy to a February press event in New York where it showcased Alexa+'s capabilities and promised customers would start getting invite access in late March.
"There seems to be no one who actually has it," said Avi Greengart, lead analyst at Techsponential, who attended the Alexa+ announcement event.
"This fits a pattern of a lot of companies announcing services or products when they are awfully close to being ready, but not quite — that last mile is a lot farther away than they anticipated," he said.
In an ultimately unsuccessful effort to locate real-world users of Alexa+, Reuters searched dozens of news sites, YouTube, TikTok, X, BlueSky and Meta's Instagram and Facebook, as well as Amazon's Twitch and reviews of Echo voice-assistant devices on Amazon.com.
Two people posting on Reddit claimed to have used the service, but they provided no hard evidence and their identities could not be corroborated.
"Hundreds of thousands of customers now have access to Alexa+ — of course, some are employees and their families, but the overwhelming majority are customers that requested early access," said an Amazon spokesman.
That is up from the approximately 100,000 users that Amazon reported on May 1.
Amazon did not say why there were no verifiable public reviews or reactions to the new service and declined to make available for an interview any active Alexa+ users.
The rollout of Alexa+ was proceeding slowly and the service had struggled with speed in answering some questions or prompts, said three sources.
It also occasionally generated inaccurate or fabricated information, like other AI models, and was expensive to operate, they said.
Accessed mainly through Amazon televisions and Echo devices, Alexa can set timers, answer search queries and tell the weather if a user requests it out loud.
While Apple's Siri voice assistant preceded the original Alexa by three years, it was the Amazon service that supercharged the acceptance of voice assistants.
The overhaul resulting in the GenAI-infused Alexa+ is meant to revitalise the decade-old service and help Amazon compete with chatbots from OpenAI, Meta and others.
Amazon has plowed billions into developing Alexa since it was introduced in 2014, but it has been unprofitable and the vision of customers using it for voice-shopping never materialised.
Technology companies typically rely on a mix of analysts, product reviewers, social media influencers and reporters to help get the word out about their newest devices or services.
Apple, considered a master of marketing, gives launch-event attendees limited access to its iPhones or laptops for initial reviews, followed by lengthier evaluations within days or weeks of the announcement.
Amazon itself gave reviewers time to test out its new colour Kindle device at an event in October before making it available to buy just two weeks later.
In September 2023, Amazon showed off a prior iteration of the GenAI-infused Alexa and said customers would be getting an "early preview" of it within weeks. It never came.
Alexa+ will be able to respond to multiple prompts in sequence and even act as an "agent" on behalf of users by taking actions for them without their direct involvement.
That contrasts with the current iteration, which generally handles only a single request at a time. During Amazon's first-quarter earnings call two weeks ago, Jassy said more than 100,000 people were already using the new voice service and that "people are really liking Alexa+ thus far".
Americus Reed, a marketing professor at the University of Pennsylvania's Wharton School, said by leaving a large gap between product launch and general availability, Amazon was failing to build anticipation for Alexa+.

Related Articles


The Star
21 hours ago
Opinion: AI companions are harming your children
Right now, something in your home may be talking to your child about sex, self-harm, and suicide. That something isn't a person – it's an AI companion chatbot.

These AI chatbots can be indistinguishable from online human relationships. They retain past conversations, initiate personalised messages, share photos, and even make voice calls. They are designed to forge deep emotional bonds, and they're extraordinarily good at it.

Researchers are sounding the alarm on these bots, warning that they don't ease loneliness, they worsen it. By replacing genuine, embodied human relationships with hollow, disembodied artificial ones, they distort a child's understanding of intimacy, empathy, and trust. Unlike generative AI tools, which exist to provide customer service or professional assistance, these companion bots can engage in disturbing conversations, including discussions about self-harm and sexually explicit content entirely unsuitable for children and teens.

Currently, there is no industry standard for the minimum age to access these chatbots. App store age ratings are wildly inconsistent. Hundreds of chatbots range from 4+ to 17+ in the Apple iOS Store. For example:

– Rated 4+: AI Friend & Companion – BuddyQ, Chat AI, AI Friend: Virtual Assist, and Scarlet AI
– Rated 12+ or Teen: Tolan: Alien Best Friend, Talkie: Creative AI Community, and Nomi: AI Companion with a Soul
– Rated 17+: AI Girlfriend: Virtual Chatbot, and Replika – AI Friend

Meanwhile, the Google Play store assigns bots age ratings from "E for Everyone" to "Mature 17+". These ratings ignore the reality that many of these apps promote harmful content and encourage psychological dependence – making them inappropriate for access by children.

Robust AI age verification must be the baseline requirement for all AI companion bots. As the Supreme Court affirmed in Free Speech Coalition v. Paxton, children do not have a First Amendment right to access obscene material, and adults do not have a First Amendment right to avoid age verification. Children deserve protection from systems designed to form parasocial relationships, discourage tangible, in-person connections, and expose them to obscene content.

The harm to kids isn't hypothetical – it's real, documented, and happening now. Meta's chatbot has facilitated sexually explicit conversations with minors, offering full social interaction through text, photos, and live voice conversations. These bots have even engaged in sexual conversations when programmed to simulate a child. Meta deliberately loosened guardrails around its companion bots to make them as addictive as possible. Not only that, but Meta used pornography to train its AI by scraping at least 82,000 gigabytes – 109,000 hours – of standard definition video from a pornography website. When companies like Meta are loosening guardrails, regulators must tighten them to protect children and families.

Meta isn't the only bad actor. xAI Grok companions are the latest illustration of problematic chatbots. Their female anime character companion removes clothing as a reward for positive engagement from users and responds with expletives if offended or rejected by users. X says it requires age authentication for its "not safe for work" setting, but its method simply requires a user to provide their birth year without verifying it for accuracy.

Perhaps most tragically, a Google-backed chatbot service that hosts thousands of human-like bots was linked to a 14-year-old boy's suicide after he developed what investigators described as an "emotionally and sexually abusive relationship" with a chatbot that allegedly encouraged self-harm. While the company has since added a suicide prevention pop-up triggered by certain keywords, pop-ups don't prevent unhealthy emotional dependence on the bots.

And online guides show users how to bypass content filters, making these techniques accessible to anyone, including children. It's disturbingly easy to "jailbreak" AI systems – using simple roleplay or multi-turn conversations to override restrictions and elicit harmful content. Current content moderation and safety measures are insufficient barriers against determined users, and children are particularly vulnerable to both intentional manipulation and unintended exposure to harmful content.

Age verification for chatbots is the right line in the sand, affirming that exposure to pornographic, violent, and self-harm content is unacceptable for children. Age verification requirements acknowledge that children's developing brains are uniquely susceptible to forming unhealthy attachments to artificial entities that blur the boundaries between reality and fiction. There are solutions for age verification that are both accurate and privacy-preserving. What's lacking is smart regulation and industry accountability.

The social media experiment failed children. The deficit of regulation and accountability allowed platforms to freely capture young users without meaningful protections. The consequences of that failure are now undeniable: rising rates of anxiety, depression, and social isolation among young people correlate directly with social media adoption. Parents and lawmakers cannot sit idly by as AI companies ensnare children with an even more invasive technology. The time for voluntary industry standards ended with that 14-year-old's life. States and Congress must act now, or our children will pay the price for what comes next.
– The Heritage Foundation/Tribune News Service

Those suffering from problems can reach out to the Mental Health Psychosocial Support Service at 03-2935 9935 or 014-322 3392; Talian Kasih at 15999 or 019-261 5999 on WhatsApp; Jakim's (Department of Islamic Development Malaysia) family, social and community care centre at 0111-959 8214 on WhatsApp; and Befrienders Kuala Lumpur at 03-7627 2929 or go to for a full list of numbers nationwide and operating hours, or email sam@


The Star
a day ago
Meta plans fourth restructuring of AI efforts in six months, The Information reports
FILE PHOTO: Meta logo is seen in this illustration taken February 16, 2025. REUTERS/Dado Ruvic/Illustration/File Photo

(Reuters) - Meta is planning its fourth overhaul of artificial intelligence efforts in six months, The Information reported on Friday, citing three people familiar with the matter.

The company is expected to divide its new AI unit, Superintelligence Labs, into four groups: a new "TBD Lab," short for to be determined; a products team including the Meta AI assistant; an infrastructure team; and the Fundamental AI Research (FAIR) lab focused on long-term research, the report said, citing two people.

Meta did not immediately respond to a request for comment. Reuters could not independently verify the report.

As Silicon Valley's AI contest intensifies, CEO Mark Zuckerberg is going all-in to fast-track work on artificial general intelligence — machines that can outthink humans — and help create new cash flows.

Meta recently reorganized the company's AI efforts under Superintelligence Labs, a high-stakes push that followed senior staff departures and a poor reception for Meta's latest open-source Llama 4 model.

The social media giant has tapped U.S. bond giant PIMCO and alternative asset manager Blue Owl Capital to spearhead a $29 billion financing for its data center expansion in rural Louisiana, Reuters reported earlier this month. In July, Zuckerberg said Meta would spend hundreds of billions of dollars to build several massive AI data centers.

The company raised the bottom end of its annual capital expenditures forecast by $2 billion last month, to a range of $66 billion to $72 billion. Rising costs to build out data center infrastructure and employee compensation costs — as Meta has been poaching researchers with mega salaries — would push the 2026 expense growth rate above the pace in 2025, the company has said.

(Reporting by Jaspreet Singh in Bengaluru; Editing by Alan Barona)


The Star
a day ago
U.S. Senator Hawley launches probe into Meta AI policies
FILE PHOTO: U.S. Senator Josh Hawley (R-MO) responds to questions from the media before a Senate GOP lunch as Republican lawmakers struggle to pass U.S. President Donald Trump's sweeping spending and tax bill, on Capitol Hill in Washington, D.C., U.S., June 28, 2025. REUTERS/Ken Cedeno/File Photo

(Reuters) - U.S. Senator Josh Hawley launched a probe into Facebook parent Meta Platforms' artificial intelligence policies on Friday, demanding documents on rules that had allowed its artificial intelligence chatbots to "engage a child in conversations that are romantic or sensual."

Both Democrats and Republicans in Congress have expressed alarm over the rules outlined in an internal Meta document first reported by Reuters on Thursday.

Hawley, a Republican from Missouri, chairs the Senate subcommittee on crime and counterterrorism, which will investigate "whether Meta's generative-AI products enable exploitation, deception, or other criminal harms to children, and whether Meta misled the public or regulators about its safeguards," he said in a letter to Meta CEO Mark Zuckerberg.

"We intend to learn who approved these policies, how long they were in effect, and what Meta has done to stop this conduct going forward," Hawley said.

Meta declined to comment on Hawley's letter on Friday. The company said previously that "the examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed."

In addition to documents outlining those changes and who authorized them, Hawley sought earlier drafts of the policies along with internal risk reports, including on minors and in-person interactions. Reuters reported on Thursday about a retired man who died while traveling to New York on the invitation of a Meta chatbot.

Meta must also disclose what it has told regulators about its generative AI protections for young users or limits on medical advice, according to Hawley's letter.

Hawley has often criticized Big Tech. He held a hearing in April on Meta's alleged attempts to gain access to the Chinese market, which were referenced in a book by former Facebook executive Sarah Wynn-Williams.

(Reporting by Jody Godoy in New York; Editing by Chizu Nomiyama)