
Britain's AI-care revolution isn't flashy—but it is the future
England may seem an unlikely pioneer in an AI-care revolution. Whereas Japan is deploying robots, a quarter of care providers in England still keep paper records. The country's care system is underfunded and overstretched. Yet necessity is spurring innovation, albeit not from cash-strapped local councils (over 40% of their spending on services already goes on adult social care). Young entrepreneurs are teaching carers to use tools more common to delivery services and dating apps.
One of them is Ben Maruthappu, Cera's founder. He launched Cera after failing to find reliable care for his mother (an origin story typical of aspiring age-tech entrepreneurs). If groceries could be tracked in real time to the doorstep, he reasoned, then why not medication? Today Cera claims to have created Europe's largest home-care data set—over 200bn data points—to train AI that predicts patients' needs.
A promising use of AI is to predict falls. Falls are among the gravest risks for the elderly: hip fractures are their most common cause of accidental death. They also cost the National Health Service (NHS), which is separate from social care, around £2bn ($2.7bn) a year. With its app, which uses algorithms to predict fall risk, Cera claims to have cut falls by a fifth. A peer-reviewed study from 2022 found that its app had reduced hospitalisations by 52%. In March the NHS said it would work with Cera to roll out its AI tool across the country.
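Cera has not published how its model works. As a loose illustration of the underlying idea—turning routine carer observations into a fall-risk score—here is a toy logistic-style scorer. Every feature name, weight and threshold below is invented for the sketch, not drawn from Cera's system, which would learn its parameters from its data set of visit records.

```python
import math

# Illustrative only: neither the features nor the weights come from Cera.
# A real system would learn these from hundreds of millions of records.
WEIGHTS = {
    "missed_medication_7d": 0.9,   # missed doses in the past week
    "night_bathroom_visits": 0.5,  # average per night
    "reduced_mobility": 1.4,       # carer-reported flag (0 or 1)
    "prior_falls_90d": 1.1,        # falls recorded in the past 90 days
}
BIAS = -4.0

def fall_risk(observations: dict) -> float:
    """Return a 0-1 risk score from carer-logged observations."""
    z = BIAS + sum(WEIGHTS[k] * observations.get(k, 0) for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

def needs_review(observations: dict, threshold: float = 0.5) -> bool:
    """Flag a client for a preventative visit if risk crosses the threshold."""
    return fall_risk(observations) >= threshold
```

A client with reduced mobility, two recent falls and a missed dose would score around 0.62 under these made-up weights and be flagged for a visit; the value of the approach lies in prompting a human check before a fall, not in the score itself.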
Fifty kilometres south of Ms Westwood's patch is The Lawns, a nursing home. There, another tech adopter, Melanie Dawson, a former rugby player turned care manager, has overseen an NHS pilot using acoustic-monitoring devices. White boxes combine motion sensors and machine learning to detect unusual movements or noises in the residents' rooms, a kind of Shazam for ambient noise. Over a year-long trial, falls decreased by 66%, and staff made 61% fewer checks in person. With fewer disturbances, residents also slept better and, with less daytime-napping, ate more. When the pilot finished, the home chose to keep using them at its own cost.
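The pilot's acoustic boxes are proprietary, but the "learn what normal sounds like, then flag deviations" idea can be sketched with a simple statistical baseline. The loudness feature and the three-sigma threshold below are assumptions for illustration; real devices work on richer learned audio features, not raw volume.

```python
from statistics import mean, stdev

class RoomBaseline:
    """Learns a room's normal overnight sound level, then flags outliers.

    Illustrative sketch: real acoustic monitors use learned audio
    embeddings rather than raw loudness, but the flag-on-deviation
    logic is similar.
    """

    def __init__(self, threshold_sigmas: float = 3.0):
        self.samples: list[float] = []
        self.threshold = threshold_sigmas

    def observe(self, loudness_db: float) -> None:
        """Record a routine reading while building the baseline."""
        self.samples.append(loudness_db)

    def is_unusual(self, loudness_db: float) -> bool:
        """True if a reading sits far outside the room's normal range."""
        if len(self.samples) < 2:
            return False  # not enough history to judge
        mu, sigma = mean(self.samples), stdev(self.samples)
        if sigma == 0:
            return loudness_db != mu
        return abs(loudness_db - mu) / sigma > self.threshold
```

A room that idles around 30 decibels overnight would not trigger on a 31-decibel reading, but a 70-decibel thud would, prompting an in-person check only when something actually sounds wrong.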
At Cavendish Park, in Worcestershire, residents can play with robotic companion pets and try an interactive driving simulator. Most, though, stay in their rooms, chatting to smart speakers called Alexa (made by Amazon). Cavendish Park was the first home in Europe to build Alexas, adapted to care homes, into its infrastructure, integrating them with its alarm and communication systems. Residents can use them to ask for help, for a drink or to make an emergency call.
AI's adopters say the tech is meant to support staff, not replace them. At Cavendish Park, carers scan residents' faces using PainChek, a mobile app that detects micro-expressions linked to pain. Within seconds, the app can assess pain more accurately than a trained nurse.
Yet in other cases, AI appears to be filling gaps left by a shortage of carers. In Northamptonshire, Cera is testing robots to give routine prompts, like whether clients have eaten or taken their medication. Though fine for the most mundane tasks, they are no replacement for Ms Westwood's kind and competent team. At Cavendish Park, residents treat their chatbots as companions (one 99-year-old says she spends all her time playing "Quick-fire Quiz"). Meryem Tom, who leads the Alexa division at Amazon, insists that the Alexas are "complementary" to humans.
One risk is that care workers could become tethered to digital metrics, a grim prospect for such a human job. Consent is another concern. Many of the clients whom AI might best help have dementia, raising doubts about whether they can meaningfully opt in. Some homes already feel like prisons: the dementia wing at Cavendish Park, Chatsworth House, is locked with keypads. AI could worsen that controlling urge. With enough surveillance, warns Andrew Sixsmith, a gerontology professor at Simon Fraser University, they risk resembling Jeremy Bentham's Panopticon, a theoretical prison where a single guard can watch every cell.
Still, for those needing care, the benefits of safety and independence appear to outweigh the risks—at least for now. Brenda Adkin is a 101-year-old Cera client who recently suffered a fall. Still enjoying a sherry with her neighbour in the morning, she has no wish to go into a care home. "I like my independence," she says. AI helps the carers she loves keep her at home. Solving the care crisis will take more of this kind of innovation, not less.
For more expert analysis of the biggest stories in Britain, sign up to Blighty, our weekly subscriber-only newsletter.

Related Articles


Indian Express
4 hours ago
Alexa got an AI brain transplant. How smart is it now?
For the last few years, I've been waiting for Alexa's AI glow-up.

I've been a loyal user of Alexa, the voice assistant that powers Amazon's home devices and smart speakers, for more than a decade. I have five Alexa-enabled speakers scattered throughout my house, and while I don't use them for anything complicated — playing music, setting timers and getting the weather forecast are basically it — they're good at what they do.

But since 2023, when ChatGPT added an AI voice mode that could answer questions in a fluid, conversational way, it has been obvious that Alexa would need a brain transplant — a new AI system built around the same large language models, or LLMs, that power ChatGPT and other products. LLM-based systems are smarter and more versatile than older systems. They can handle more complex requests, making them an obvious pick for a next-generation voice assistant.

Amazon agrees. For the last few years, the company has been working feverishly to upgrade the AI inside Alexa. It has been a slog. Replacing the AI technology inside a voice assistant isn't as easy as swapping in a new model, and the Alexa remodel was reportedly delayed by internal struggles and technical challenges along the way. LLMs also aren't a perfect match for this kind of product, which not only needs to work with tons of preexisting services and millions of Alexa-enabled devices but also needs to reliably perform basic tasks.

But finally, the new Alexa — known as Alexa+ — is here. It's a big, ambitious remodel that is trying to marry the conversational skills of generative AI chatbots with the daily tasks that the old Alexa did well.

Alexa+, which has been available to testers through an early-access program for a few months, is now being rolled out more widely. I got it recently after I bought a compatible device (the Echo Show 8, which has an 8-inch screen) and enrolled in the upgraded version.
(Prime members will get Alexa+ at no cost, while non-Prime members will have to pay $19.99 per month.)

The New York Times recently announced a licensing deal with Amazon, which will allow Amazon to use Times content in its AI systems, including Alexa+. The Times is also suing OpenAI, the maker of ChatGPT, and Microsoft for alleged copyright violations related to the training of AI systems.

I have good news and bad news for my fellow Alexa-heads.

The good news is that the new Alexa+ is, in fact, more fun to talk to than the old one, with more realistic synthetic voices and a more humanlike cadence. (There are eight voices to choose from; I used the default setting, an upbeat female voice.) And I liked some of Alexa+'s new capabilities, such as booking a table at a restaurant and generating long stories and reading them to my 3-year-old.

The new Alexa is also better at handling multistep requests. 'Set three kitchen timers for 15, 25 and 45 minutes' and 'write a one-day itinerary for a trip to San Diego and send it to my email' were two prompts that worked for me. And Alexa+ doesn't require you to say its wake word every time you talk to it, so you can go back and forth or ask it follow-up questions, which is a nice change.

The bad news is that despite its new capabilities, Alexa+ is too buggy and unreliable for me to recommend. In my testing, it not only lagged behind ChatGPT's voice mode and other AI voice assistants I've tried but also was noticeably worse than the original Alexa at some basic tasks.

When I asked Alexa+ to cancel an alarm the other morning — a request I had made to the old Alexa hundreds of times with no issues — it simply ignored me. When I emailed a research paper to alexa@ in order to hear Alexa+ summarize it while I washed the dishes, I got an error message saying the document couldn't be found. Alexa+ also hallucinated some facts and made some inexplicable errors.
When I asked it to look up Wirecutter's recommended box grater and add it to my Amazon cart, it responded that 'according to Wirecutter, the best box grater is the OXO Good Grips Box Grater.' Wirecutter's actual box grater pick is the Cuisipro 4-Sided Box Grater. Luckily, I caught the mistake before ordering. When I asked Alexa+ to walk me through installing a new AI model on my laptop, it got tripped up and started repeating, 'Oh, no, my wires got crossed.'

And I didn't have access to some of the new Alexa+ features Amazon advertised, such as a 'routine' feature that triggers several different actions when a user enters a room. (I wanted to have Alexa+ greet me in the mornings with a motivational speech and a high-volume rendition of 'Eye of the Tiger,' but the presence-sensing feature hasn't been turned on yet, according to an Amazon spokesperson.)

Daniel Rausch, the Amazon vice president who oversees Alexa and Echo, told me in a recent podcast interview that many of these flaws would be fixed soon as Alexa+ rolled out more widely and more of its features came online. 'We've got some edges to sand,' he said.

Alexa, Give Me 500 Words on the History of Kitchen Timers

Rausch said the biggest challenge in building generative AI models into Alexa was that they were fundamentally different types of systems. The old Alexa, he said, was built on a complicated web of rule-based, deterministic algorithms. Setting timers, playing songs on Spotify, turning off the lamp in your living room — all of these features required calling up different tools and connecting with different interfaces, and they all had to be programmed one by one.

Adding generative AI to Alexa forced Amazon to rebuild many of these processes, Rausch said. Large language models, he said, are 'stochastic,' meaning they operate on probabilities rather than a strict set of rules. That made Alexa more creative but less reliable. It also made the voice assistant slow.
Rausch recalled an early internal demo in which Alexa+ took more than 30 seconds to play a song — an 'excruciating' delay, he said, that led the team to rethink its approach. 'These models are slow to respond when they're following a deep set of instructions,' he said. 'We're asking them to do something quite hard.'

Another challenge to overcome, Rausch said, was generative AI's wordiness. Initially, when engineers hooked Alexa up to large language models, the system would sometimes produce long, verbose answers or introduce needless complexity. Alexa+ might respond to a user's request for a 10-minute kitchen timer with a 500-word essay about the history of kitchen timers.

The solution, Rausch said, was to spend several years combining more than 70 AI models — some Amazon's proprietary models and some from outside providers, like Anthropic's Claude — into a single, voice-based interface, with an orchestration system that routes a user's request to the model that is best suited to handle it. 'The magic, when it is working really well, is to get those new ways of speaking to Alexa to interface with those predictable outcomes or behaviors,' he said.

There are other barriers, too. One of them, Rausch said, is that many longtime users have learned how to 'speak Alexa,' phrasing their daily requests in familiar commands that they know the system will understand. 'We all sort of came up with our way of setting a timer to get the pasta done on time,' he said. But Alexa+ processes language in a more fluid way. Users can talk to it as they would talk to a human — no robot pidgin required — and that may necessitate some retraining.

I assume that many of the flaws will be ironed out and that most users will acclimate to the new way of speaking to Alexa+. I'm also inclined to cut Amazon some slack, since building LLM-based technology into a reliable voice assistant seems like a thorny technical problem, and it's not like anyone else has solved it.
(Apple, which has been struggling to give Siri an AI upgrade for years, certainly hasn't.) I also don't think the limitations of Alexa+ suggest that generative AI models are inherently unreliable or that they'll never work as personal voice assistants. Ultimately, I think it's just really hard to combine generative AI with older, legacy systems — a lesson many companies, both inside and outside tech, are learning the hard way right now — and it's going to take some time to work out all the kinks. For now, I'm going to downgrade my devices to the older, less intelligent version of Alexa and leave the beta testing to others. With AI, as with humans, sometimes raw intelligence matters less than how you use it.
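Amazon has not detailed how its orchestration layer works. But the routing idea Rausch describes — deterministic requests go to fast rule-based handlers, open-ended ones fall through to a slower LLM — can be sketched in a few lines. The intent patterns and handler names below are invented for illustration, not Amazon's.

```python
import re

# Toy dispatcher illustrating the routing idea: deterministic intents go
# to fast rule-based handlers; anything else falls through to an LLM.
# The patterns and handlers are invented, not Amazon's.

def handle_timer(request: str) -> str:
    minutes = re.search(r"(\d+)\s*minute", request)
    return f"Timer set for {minutes.group(1)} minutes." if minutes else "Timer set."

def handle_music(request: str) -> str:
    return "Playing music."

def handle_with_llm(request: str) -> str:
    # Placeholder for a call out to a large language model.
    return f"[LLM] Thinking about: {request!r}"

DETERMINISTIC_ROUTES = [
    (re.compile(r"\b(timer|alarm)\b"), handle_timer),
    (re.compile(r"\b(play|music|song)\b"), handle_music),
]

def route(request: str) -> str:
    """Send a request to the first matching rule-based handler, else the LLM."""
    lowered = request.lower()
    for pattern, handler in DETERMINISTIC_ROUTES:
        if pattern.search(lowered):
            return handler(lowered)
    return handle_with_llm(request)
```

Here "set a timer for 10 minutes" never touches the slow path, while "write a one-day itinerary for San Diego" does — which is one way to keep kitchen timers fast even when the conversational brain is stochastic.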




Mint
a day ago
Samsung Galaxy S24 Ultra gets a massive ₹50,000 price cut — here's how to grab the deal
Samsung's Galaxy S24 Ultra is getting a big discount during the current sale on Amazon and Flipkart, which takes the effective price of the device to below ₹80,000, almost a ₹50,000 discount on its original selling price. While the phone may be over a year old, it continues to be among the best flagship devices one can buy, especially given Samsung's long software-update policy.

Samsung Galaxy S24 Ultra price cut: The Galaxy S24 Ultra was launched in India at ₹1,29,999 for the 12GB RAM/256GB storage variant. During the current sale, the 256GB variant is listed at ₹80,490 on Amazon, while on Flipkart the same variant costs ₹81,980. Flipkart, however, also provides a ₹4,000 instant discount on the Flipkart Axis credit card, which takes the effective price of the device below ₹80,000.

Galaxy S24 Ultra specifications: The Galaxy S24 Ultra features a 6.8-inch Quad HD+ Dynamic AMOLED display with up to 2,600 nits of peak brightness and a 120Hz refresh rate. The phone comes with Gorilla Glass Armor protection, which also provides up to a 75% reduction in reflection across a wide range of lighting conditions. It is powered by the Qualcomm Snapdragon 8 Gen 3 processor with the Adreno 750 GPU, the same SoC found in the likes of the OnePlus 12 and OnePlus 13R, paired with LPDDR5X RAM and UFS 4.0 storage.

The phone currently runs One UI 7 based on Android 15, and Samsung has announced plans to soon bring Android 16-based One UI 8 to the smartphone. Beyond those two upgrades, the S24 Ultra still has about five years of OS updates remaining, which should be good enough for most users.

On the optics front, the phone comes with a quad-camera setup: a 200MP primary shooter with OIS, a 12MP ultra-wide lens, a 10MP telephoto lens with 3x zoom, and a 50MP periscope lens with 5x zoom.
On the front is a 12MP shooter for selfies and video calls.