AI to help slash NHS hospital discharge delays, officials say

Yahoo · 11 hours ago
Technology is being piloted to help free up NHS hospital beds which are occupied by people fit to be sent home, officials have announced.
The platform uses artificial intelligence (AI) to help complete the documents needed to discharge patients, potentially saving hours of delays.
The move will allow doctors to spend less time on paperwork and more time focused on care, cutting waiting times in the process, according to Health Secretary Wes Streeting.
The technology, which is being piloted at Chelsea and Westminster NHS Trust, extracts information from medical records, including diagnoses and test results.
This helps medics to draft discharge summaries, which have to be completed before a person is sent home from hospital.
The document is then reviewed by healthcare professionals responsible for the patient and used to send them home or refer them to other services.
The tool will be hosted on the NHS Federated Data Platform (FDP), a shared software system that aims to make it easier for health and care organisations to work together and provide better services to patients.
The current manual system can sometimes leave patients waiting for hours to be discharged, as doctors may be too busy to fill in forms, according to the Department for Science, Innovation and Technology.
Mr Streeting said: 'This potentially transformational discharge tool is a prime example of how we're shifting from analogue to digital as part of our 10-year health plan.
'We're using cutting-edge technology to build an NHS fit for the future and tackle the hospital backlogs that have left too many people waiting too long.
'Doctors will spend less time on paperwork and more time with patients, getting people home to their families faster and freeing up beds for those who need them most.'
The AI tool for discharge summaries is one of a number of projects to be backed by Prime Minister Sir Keir Starmer.
In January, he said AI will be used to 'turn around' the economy and public services.
Elsewhere, the Government has announced that technology shown to halve the time probation officers spend organising notes will be given to all 12,000 probation officers later this year.
The system helps to transcribe and take notes from meetings that officers have with offenders after they leave prison.
Speaking on a visit to Chelsea and Westminster Hospital, Technology Secretary Peter Kyle said: 'This is exactly the kind of change we need, AI being used to give doctors, probation officers and other key workers more time to focus on delivering better outcomes and speeding up vital services.
'This Government inherited a public sector decimated by years of underinvestment and is crying out for reform.
'These AI exemplars show the best ways in which we're using tech to build a smarter, more efficient state.
'When we get this right across government, we're talking about unlocking £45 billion in productivity gains, delivering our plan for change and investing in growth, not bureaucracy.'

Related Articles

AI gives students more reasons to not read books. It's hurting their literacy

Fast Company · 3 minutes ago

A perfect storm is brewing for reading. AI arrived as both kids and adults were already spending less time reading books than they did in the not-so-distant past. As a linguist, I study how technology influences the ways people read, write, and think. This includes the impact of artificial intelligence, which is dramatically changing how people engage with books or other kinds of writing, whether it's assigned, used for research, or read for pleasure. I worry that AI is accelerating an ongoing shift in the value people place on reading as a human endeavor.

Everything but the book

AI's writing skills have gotten plenty of attention. But researchers and teachers are only now starting to talk about AI's ability to 'read' massive datasets before churning out summaries, analyses, or comparisons of books, essays, and articles. Need to read a novel for class? These days, you might get by with skimming through an AI-generated summary of the plot and key themes. This kind of possibility, which undermines people's motivation to read on their own, prompted me to write a book about the pros and cons of letting AI do the reading for you.

Palming off the work of summarizing or analyzing texts is hardly new. CliffsNotes dates back to the late 1950s. Centuries earlier, the Royal Society of London began producing summaries of the scientific papers that appeared in its voluminous Philosophical Transactions journal. By the mid-20th century, abstracts had become ubiquitous in scholarly articles. Potential readers could now peruse the abstract before deciding whether to tackle the piece in its entirety. The internet opened up an array of additional reading shortcuts. For instance, Blinkist is an app-based subscription service that condenses mostly nonfiction books into roughly 15-minute summaries, called 'Blinks', that are available in both audio and text. But generative AI elevates such work-arounds to new heights.
AI-driven apps like BooksAI provide the kinds of summaries and analyses that used to be crafted by humans. Other tools invite you to 'chat' with books. In neither case do you need to read the books yourself.

If you're a student asked to compare Mark Twain's The Adventures of Huckleberry Finn with J. D. Salinger's The Catcher in the Rye as coming-of-age novels, CliffsNotes only gets you so far. Sure, you can read summaries of each book, but you still must do the comparison yourself. With general large language models or specialized tools such as Google NotebookLM, AI handles both the 'reading' and the comparing, even generating smart questions to pose in class. The downside is that you lose out on a critical benefit of reading a coming-of-age novel: the personal growth that comes from vicariously experiencing the protagonist's struggles.

In the world of academic research, AI offerings like SciSpace, Elicit, and Consensus combine the power of search engines and large language models. They locate relevant articles and then summarize and synthesize them, slashing the hours needed to conduct literature reviews. On its website, Elsevier's ScienceDirect AI gloats: 'Goodbye wasted reading time. Hello relevance.' Maybe. Excluded from the process is judging for yourself what counts as relevant and making your own connections between ideas.

Reader unfriendly?

Even before generative AI went mainstream, fewer people were reading books, whether for pleasure or for class. In the U.S., the National Assessment of Educational Progress reported that the number of fourth graders who read for fun almost every day slipped from 53% in 1984 to 39% in 2022. For eighth graders? From 35% in 1984 to 14% in 2023. The U.K.'s 2024 National Literacy Trust survey revealed that only one in three 8- to 18-year-olds said they enjoyed reading in their spare time, a drop of almost 9 percentage points from just the previous year. Similar trends exist among older students.
In a 2018 survey of 600,000 15-year-olds across 79 countries, 49% reported reading only when they had to. That's up from 36% about a decade earlier. The picture for college students is no brighter. A spate of recent articles has chronicled how little reading is happening in American higher education. My work with literacy researcher Anne Mangen found that faculty are reducing the amount of reading they assign, often in response to students refusing to do it. Emblematic of the problem is a troubling observation from cultural commentator David Brooks: 'I once asked a group of students on their final day at their prestigious university what book had changed their life over the previous four years. A long, awkward silence followed. Finally a student said: "You have to understand, we don't read like that. We only sample enough of each book to get through the class."'

Adults are hardly different. According to YouGov, just 54% of Americans read at least one book in 2023. The situation in South Korea is even bleaker: only 43% of adults there said they had read at least one book in 2023, down from almost 87% in 1994. In the U.K., the Reading Agency observed declines in adult reading and hinted at one reason why. In 2024, 35% of adults identified as lapsed readers: they once read regularly, but no longer do. Of those lapsed readers, 26% indicated they had stopped reading because of time spent on social media. The phrase 'lapsed reader' might now apply to anyone who deprioritizes reading, whether it's due to lack of interest, devoting more time to social media, or letting AI do the reading for them.

All that's lost, missed, and forgotten

Why read in the first place? The justifications are endless, as are the streams of books and websites making the case. There's reading for pleasure, stress reduction, learning, and personal development. You can find correlations between reading and brain growth in children, happiness, longevity, and slowing cognitive decline.
This last issue is particularly relevant as people increasingly let AI do cognitive work on their behalf, a process known as cognitive offloading. Research has emerged showing the extent to which people engage in cognitive offloading when they use AI. The evidence reveals that the more users rely on AI to perform work for them, the less they see themselves as drawing upon their own thinking capacities. A study employing EEG measurements found different brain connectivity patterns when participants enlisted AI to help them write an essay than when they wrote it on their own.

It's too soon to know what effects AI might have on our long-term ability to think for ourselves. What's more, the research so far has largely focused on writing tasks or general use of AI tools, not on reading. But if we lose practice in reading, analyzing, and formulating our own interpretations, those skills are at risk of weakening.

Cognitive skills aren't the only thing at stake when we rely too heavily on AI to do our reading work for us. We also miss out on so much of what makes reading enjoyable: encountering a moving piece of dialogue, relishing a turn of phrase, connecting with a character. AI's lure of efficiency is tantalizing. But it risks undermining the benefits of literacy.

The AGI-pilled and the damned

Business Insider · 4 minutes ago

Henry, a boyish-looking AI researcher, believes there's about a 50/50 chance that in the next few years AI will become so powerful and sophisticated that it will pose an existential threat to all human life. For his day job, he's trying to prevent this from happening by working for a small safety-focused AI research lab in the Bay Area. He takes this mission seriously: he's sworn off romantic relationships to dedicate himself to the cause, and he donates a third of his income to AI safety nonprofits. In his free time, he's preparing for the possibility of failure by building DIY bioshelters to protect him and his family from an AI apocalypse.

Speaking on a video call from his office, Henry tells me it's remarkably easy to build a bioshelter capable of protecting against lethal pathogens created by or with the aid of advanced AI. First, you buy an off-the-shelf positively pressurized tent, the sort typically used as grow rooms for plants. Then you stack multiple professional-grade HEPA filters in front of the air intake. Finally, you stuff it with as much shelf-stable food, water, and supplies as you can fit. Henry's bioshelter will be "quite cheap," he tells me, "probably less than $10,000 including the three years' worth of food I'm going to put in there." He asked that I use a pseudonym because of the social stigma associated with being a "prepper," particularly if his fears do not come to pass and humanity prevails.

Henry is far from alone in putting his money where his mouth is regarding his deepest fears and hopes for AI. For a certain class of Silicon Valley denizens, AI is not just the next buzzy technological wave; it is poised to fundamentally transform our society, and very soon. For them, there is little time left to debate the possible futures of artificial general intelligence, or AGI, over Slack channels and dinner parties. The time for radical change is now.
Rationalists, adherents of a Silicon Valley-aligned philosophy centered on trying to improve human rationality and morality, have grown increasingly concerned about the perceived AI risk, while on the other side of the aisle, startup boosters' predictions for the tech are growing ever more ebullient.

Some believe we're at the dawn of an age of superabundance, in which almost all intellectual labor can be automated, unlocking an unprecedented wave of human flourishing. They're embracing a lifestyle shift they call "smart-to-hot." Others are bracing for economic catastrophe and making major investments and career moves accordingly. And yet others think AI will inevitably wrest itself free of human control and gain the ability to kill all organic life. They're spending their retirement savings, having "weird orgies," and building survival bunkers.

"A lot of us are just going to look back on these next two years as the time when we could have done something," Henry says. "Lots of people will look back on this and be like, 'Why didn't I quit my job and try to do something that really mattered when I had a chance to?'"

Srinivasan, a biomedical research data scientist living in Los Angeles, had historically been attracted to a certain kind of intelligent guy, prioritizing smarts over conventional attractiveness, she tells me. Now she saw that because generative AI is doing the intellectual labor of more and more people, raw intelligence has become less important to her than charisma, social engagingness, and hotness. Or as she recently quipped in a semi-viral tweet, "If you're smart, pivot to being cool/hot."
Many of the people I spoke to for this story believe a variation of this: that because AI will soon subsume much of intellectual life, social life will become much more integral to human society, and being physically attractive will become all the more essential to flourishing within it. Brains are over; beauty and brawn are in. "I've sort of always loved fitness," says Soren Larson, a tech entrepreneur in Florida, "and I rather think that being hot and personable and funny are poised to be rare features when AI can do all the sort of intellectual things."

Jason Liu, an AI consultant, tells me he's "already made that pivot." Several years ago, a debilitating repetitive strain injury in his hands brought his career as a software engineer to a standstill. He retooled his life, diving into leisure pursuits like jiu jitsu and ceramics, and fashioned a second career as a consultant, trying to optimize for delegation and free time to socialize rather than hustle. "I personally did not want to be valued for my intelligence," he says. "I was like, this intelligence is what physically hurt me, and caused me to lose my job." When we spoke by phone, he was out strolling the streets of Paris as part of an extended international jaunt. "Really leaning into leisure is kind of how I think about AGI," he says.

Other people I meet with are reshaping their social lives today not because of their hopes for AI, but because of their fears. "If we are all going to be destroyed by an atomic bomb," C.S. Lewis wrote in 1948, "let that bomb when it comes find us doing sensible and human things — praying, working, teaching, reading, listening to music, bathing the children, playing tennis, chatting to our friends over a pint and a game of darts — not huddled together like frightened sheep and thinking about bombs."
In my conversations with people concerned about AI's impact, a few explicitly cited this Lewis quote, and many expressed a similar sentiment in how they're trying to treasure the time they have now.

"It's really freeing in some ways," Aella, a fetish researcher and sex worker in the San Francisco Bay Area with a cult following on X, tells me of her concerns about AI development. "I like throwing weird orgies, and I'm like — well, we're going to die. What's a weirder, more intense, crazier orgy we can do? Just do it now."

As we sit out on the AstroTurf lawn at Lighthaven, a kitschy old hotel in Berkeley converted into an intellectual campus for the Rationalist movement, she talks about her fears of how AI may destroy humanity. "I can't face it all at once," she says. "I can catch glimpses of the big thing out of the corner of my eye and then I grieve it when I can, but I don't have the emotional capacity to really absorb it." As a result, she lives much more in the moment. She's gradually spending down her savings. She exercises less. She's tried "hard drugs" she would otherwise avoid. She's taking more sleeping pills, despite concerns about dementia risk. She's freezing a bunch of her eggs: "I'm just trying to get as many as I can, for fun."

Over in San Francisco's Dolores Park, Vishal Maini, a venture capital investor, tells me something similar, though perhaps a little less extreme. "I think it makes sense to just adopt a little bit of a bucket-list mentality around this," he says. "Do what's important to you in the time that we have." As we drink herbal tea, Maini talks me through his mental model for the coming years. He isn't sure if we're approaching a future where human capability is radically enhanced by AI, or a darker future in which humanity is "deprecated" by the technology.
Amid this uncertainty, he advocates for "paleo-futurism": consciously prioritizing human interaction in a world replete with hyper-engaging, endlessly personalized digital content. "As we enter the era of AI companions, and increasingly dopamine-rich digital experiences, we have this choice," he says. "Do you go into the metaverse all the way, or do you stay in the real world?"

For Holly Elmore, concerns about AI have affected her life more intimately: they contributed to her decision to get divorced. At a coffee shop in San Francisco's Mission District, she tells me she and her husband were both deeply attuned to the risks of unconstrained AI development but had different approaches to reining it in. Elmore, the executive director of the anti-AI protest group Pause AI, believed steadfast mass organization against big labs like OpenAI was the only viable way forward, which she says her ex-husband, Ronny Fernandez, was "unsupportive" of. "We had a lot of problems and I should have probably never been in that marriage, but it just became very clear that I wasn't going to be able to keep doing Pause AI if we stayed together," she says. "I had a very strong moral conviction on that and it did organize the priorities in my life very well. And honestly, I love living that way."

"I do think that trying to use shaming and promoting in-group out-group thinking to achieve policy goals has a pretty bad track record," Fernandez, who is the manager of Lighthaven, writes over email. "Those disagreements led to resentments on both of our ends which contributed to our eventually getting divorced." While he believes that casting the AI scaling labs as "political enemies" will likely backfire, he stresses that "there is a significant chance that smarter than human AI will literally kill approximately everyone, or lead to even worse outcomes, within a few decades."

For others, their dreams and worst fears about AI have transformed their approach to money.
Sometime in 2020, Daniel Kokotajlo, then 28 years old, stopped saving for retirement. The AI researcher was growing concerned about the existential threat AI might pose to humanity. He worked at OpenAI from 2022 until he quit in 2024 over concerns about how it was handling AI safety, an issue he continues to work on. Earlier this year, he published AI 2027, a widely read online essay exploring how rapid advancements in AI may lead to several "loss-of-control" situations, from a world war over the AI arms race in the late 2020s to the extinction of human life by 2035, via an AI releasing a lethal chemical spray across civilization.

Amid these threats, Kokotajlo reasons, why bother saving for decades when even the next few years look increasingly uncertain? "I have a decent amount of money, especially because of the equity, but I'm planning to spend it down," he tells me at a coffee shop in Berkeley. "I'm planning to have less every year." He says he knows of numerous other AI researchers doing the same. On a recent episode of Dwarkesh Patel's popular tech podcast, Trenton Bricken, a researcher at OpenAI rival Anthropic, shared that he, too, has quit putting money away for retirement because he believes AGI is fast approaching. "It's hard for me to imagine a world in which I have all this money that's just sitting in this account and waiting until I'm 60 and things look so different then," he said.

Others in the tech industry are taking a very different approach to their money. Among some of those most bullish on AI's trajectory, there's a pervasive fear that there are only a few years left to earn as much as possible "before the music stops," when human intellectual labor becomes largely obsolete. "We have just a handful of years to try to make it financially," says a crypto writer in the Midwest who goes by the pseudonym Redphone professionally.
"And if you don't, your bloodline could be relegated to this sort of peasant class under these technological overlords who control AI."

Haroon Choudery, a former data integrity analyst at Facebook who now runs an AI startup called Autoblocks, has a similar concern. He emigrated from Pakistan to the United States when he was 5; his father was a cabbie, while his mother didn't work outside the home. He views the next few years as his last chance to make generational wealth for himself and his children. "Things are going to feel a lot more scarce from an upward mobility perspective, so people will generally freeze in their socioeconomic statuses," he tells me.

Massey Branscomb, an executive at AI hedge fund AlphaFund, puts this concept to me in even blunter terms: "If you are not positioning yourself as a key member of these critical companies," by which he means top-flight AI labs like OpenAI and Anthropic, "and you're kind of living — the term is ironically a 'wagie' — you're living a wagie life, then you could be on the chopping block and then it's going to be harder. These jobs are not going to come back."

Others are less sure AI will soon topple the global economy. As an assistant professor of philosophy at Vanderbilt University, David Thorstad could be considered a wagie. But he tells me he's not too worried about it. While he has increased the amount he's saving because of uncertainty around AI, he urges caution about any grand predictions. "I think that there are lots of communities," he says, "particularly in the Bay Area where groups of very smart, like-minded people live together, work together, read similar forums and podcasts, and when they get very immersed in a particular kind of an extreme worldview about AI, it tends to be very hard to break out."

And then there are the people who aren't just preparing for an AI-driven financial apocalypse; they're preparing for an actual AI-driven apocalypse.
Ulrik Horn had always been interested in "societal problems," which led him to work in renewable energy after graduating from the University of Pennsylvania in 2008. But in recent years, the Stockholm-based entrepreneur has been concerned with a different kind of problem: biosecurity. Horn is worried about "mirror life," an emerging field of biological research that involves creating mirror-image forms of naturally occurring life. Specifically, he's worried that AI may help accelerate research in the field, and may lead to devastating biological weapons. We're five to 10 years out from AI developing this capability, he believes.

After raising philanthropic funding to research protections against biothreats, he founded Fonix, a startup building off-the-shelf bioshelters with high-grade air filters. For $39,000, you can buy a shelter you can erect at home if and when the scat hits the fan. He has received a handful of pre-orders, he said, with shipping expected in 2026.

Horn isn't the only one viewing the perceived threat of AI as a business opportunity. Ross Gruetzemacher, an assistant professor of business analytics at Wichita State University, is launching a "resiliency" consulting firm to help businesses and individuals prepare for significant shocks from AI and other existential risks. He has also bought land in Wyoming, on which he plans to build his own secure facility. James Norris, an entrepreneur and longtime worrier about a variety of threats to humanity, has recently moved into what he describes as a "survival sanctuary" in an undisclosed location in Southeast Asia, and is also offering consulting services and assistance setting up sanctuaries for others. Norris has also sworn off having children, he tells me, because of the havoc he believes AI will wreak on the world.

Despite his personal fears, Kokotajlo, the ex-OpenAI researcher, is heavily skeptical of any attempt to aggressively prepare for a bad AI outcome today.
"I think more likely it's either we're all dead, or we're all fine," he says. "I think if I spent a few weeks I could make a bug-out bag and make a bioshelter or whatever, and then in some sliver of possible futures it would save my family. But it is just more important for me to do my actual job than to do that."

A few weeks after I first chatted with Henry, the young AI safety researcher, I check in via email. He's had a change of heart and is no longer trying to build a DIY bioshelter. He's determined that he wasn't thinking big enough. Instead, he's now trying to buy land in California, where he can build more permanent defensive structures to protect more of his friends and family.

"The main scenario I think about is the one where misaligned superintelligent AI takes over," he says. He wants to be prepared for a near future in which an all-powerful AI wages war against humans but the "AI still has a little bit of empathy." Once the AI wins that war, he concludes, "maybe they'll take care of the survivors and they'll put humans in some kind of human zoo. And I'd much rather live in a human zoo than be killed by bioweapons."

Synthesia: The AI Avatar Generator Rethinking Corporate Communication

Bloomberg · 4 minutes ago

London-based Synthesia uses AI to create lifelike video avatars of real humans, which can be deployed in a range of scenarios, including customer support, sales, and staff training. It's valued at over $2 billion and serves 80 percent of Fortune 100 companies. Bloomberg's Tom Mackenzie spoke exclusively to Victor Riparbelli, Synthesia's co-founder and CEO, about the company's tech and what it means for jobs. (Source: Bloomberg)
