Man with life-threatening peanut allergy now eats peanuts every day thanks to study
Richard Lassiter, 44, says he has been admitted to hospital twice because of his severe nut allergy, but now eats four peanuts every morning as part of a trial in which he has gradually increased his exposure to them.
One of his reactions came during a holiday in Chile with his wife in 2018 and was so severe that he needed adrenaline and oxygen and had to stay in a high dependency unit overnight.
He is one of 21 people between the ages of 18 and 40 who took part in the research conducted by King's College London and Guy's and St Thomas' NHS Foundation Trust.
The study was the first conducted entirely in adults with severe allergies to test whether daily doses of peanuts, taken under strict supervision, can be safely tolerated.
Other "desensitisation" studies are focused on children, the experts say, meaning adults don't get the opportunity to counter their allergies.
The study, called The Grown Up Peanut Immunotherapy trial, saw participants slowly increase their daily dose, starting with 0.8mg of peanut flour mixed into food.
Once they could tolerate 50-100mg of peanut protein, they were switched to eating whole peanuts, peanut butter or other peanut products.
By the end of the study, which was funded by the National Institute for Health and Care Research, two thirds were able to eat the equivalent of five peanuts without reacting.
Speaking to Sky's Wilfred Frost, Mr Lassiter said: "There was definitely a sense of nerves at first. You know, you have to get your mind around the idea of eating something you've tried to avoid your whole life.
"But I think at the point that I found out about the trial, it was something I was really keen to do. I obviously had a couple of [dangerous] incidents reasonably fresh in my mind."
He said the trial had changed his life, adding that he took the nuts like "medicine".
"The idea that I take four peanuts a day now after my breakfast is well-worn routine," he said.
"I know it's something I'll do for the rest of the rest of my life.
"I'm certainly much more confident and calm when I go out to dinner with my wife, or when we go travelling. I know that that accidental exposure to peanuts isn't going to cause a serious reaction like it has done in the past."
He added that he knew it still wasn't safe to order a meal with a strong presence of nuts.
Chief investigator Stephen Till, professor of allergy at King's College London and consultant allergist at Guy's and St Thomas', told Sky News the trial provided hope for those with nut allergies, but was definitely "not something to do at home".
He said the next step would be to conduct larger trials and identify "the group of adult patients who would most likely benefit from oral immunotherapy, and see whether it can lead to long-term tolerance in this age group".
Asked if such trials could potentially benefit people with other types of allergies, Prof Till said: "Potentially, yes. The principle should be applicable to other food options, but what I would say is that different foods can behave differently in terms of the amounts that are required to cause reactions and how severe the reactions are. So to do it in other foods, you really do need to do trials for those specific foods individually."
Public health minister Ashley Dalton said: "This ground-breaking research offers hope to thousands living with peanut allergies. For too long, people have navigated daily life in fear of accidental exposure that could be life-threatening.
"I'm proud the UK is leading this vital work through NIHR funding. These results show how we're transforming lives through science, potentially changing care standards for adults with peanut allergies worldwide."