Girl who died in hospital ‘should have been watched at all times', inquest hears
Ruth Szymankiewicz was being treated for an eating disorder at Huntercombe Hospital in Berkshire and had been placed under strict one-to-one observation when, on February 12 2022, she was able to shut herself alone in her bedroom for 15 minutes, a jury inquest at Buckinghamshire Coroner's Court heard on Tuesday.
The 14-year-old girl self-harmed and died two days later at John Radcliffe Hospital in Oxford.
It later emerged that the member of staff responsible for watching Ruth on the hospital's psychiatric intensive care unit – a man then known as Ebo Acheampong – had been hired under a false name using false identity documents.
The teenager was last captured on CCTV walking out of the ward's day room 'completely on her own' before going straight to her bedroom and closing the door behind her, coroner Ian Wade KC told the hearing.
Jurors were shown the footage in which 15 minutes pass before a nurse opens the door, then covers her mouth in what appears to be an expression of shock.
Ellesha Brannigan, who worked as a clinical team leader on the ward, told the court on Tuesday she ran to Ruth's bedroom after the alarm was raised and found the teenage girl lying unconscious.
'She was just still,' Ms Brannigan said, adding the teenage girl was not breathing and staff initiated chest compressions in an attempt to revive her.
About a week prior, Ruth's care plan had been escalated to 'level three observation' after a similar self-harming incident, Ms Brannigan told the court.
'Level three observation is within eyesight at all times,' she told the hearing.
Staff members would take turns watching Ruth for 60 minutes at a time, the inquest heard.
Ms Brannigan further told the inquest that 'level two observation' would still have required a member of staff to check on the patient 'every five or 10 minutes'.
Asked by the coroner if there were circumstances in which a member of staff responsible for a level three observation could take their eyes off the patient, she replied: 'No.
'If they (the patient) are prescribed level three observation, you must comply with what is prescribed.'
The coroner previously said Mr Acheampong will not be giving evidence at the inquest as he fled the UK for Ghana shortly after Ruth's death.
The inquest continues.
Related Articles
After again declaring suicide a crisis in Nunavut, officials and advocates look for new solutions
WARNING: This story discusses suicide.
Victoria Madsen, Nunavut's assistant deputy minister of health, admits she was initially skeptical when she found out a few years ago about all the territorial funding allocated for some community initiatives. She questioned what music classes, sports or fishing derbies had to do with suicide prevention. However, she soon realized what those events can mean to people. "All year they practice, they look forward to it. It's a community event. It's part of the community's identity," she said.
Now, Madsen encourages Nunavut's hamlets to apply for that funding, to help create those safe spaces and build stronger community ties. She also says there's no single solution to suicide prevention, and that it's about more than just hiring more counselors – which is why Nunavut is still faced with a suicide crisis, 10 years after it was first declared.
At a news conference in 2022, Inuit Tapiriit Kanatami president Natan Obed estimated suicide rates in Inuit Nunangat to be five to 25 times higher than the rate in the rest of Canada. In June, the Nunavut government again declared suicide a crisis in the territory. And last month, some of the partners in the territory's fourth suicide prevention action plan convened to talk about the next steps.
Sgt. George Henrie, the RCMP's community policing officer in Nunavut, was among those in attendance. He said one concern he took away from the meetings is the apparent gap between elders and youth. He believes youth today face different pressures with the advent of social media. "With the loss of language and some cultural disconnect, it makes it hard for a grandparent and elder to help give advice," he said. "I believe that as we try to come up with a strategy to help them, it's going to have to evolve with the times."
Suicide prevention more than just seeing a counselor
Madsen said she often hears people calling for more counselors and mental health staff when it comes to suicide prevention. There's a place for that, she says, but she also believes it's about creating safe and supportive living environments. That could involve teaching children coping skills while they're young, reducing the rates of overcrowding in homes, or helping people heal from trauma. "If they're going back to a home that's not safe or has high addiction in that home, or if there's domestic violence, seeing a counselor every few days isn't going to change that," she said.
Jasmine Redfern, the Amautiit Nunavut Inuit Women's Association president, says she is heartened to see suicide being more openly talked about today, compared to when she worked on suicide prevention for Nunavut Tunngavik Incorporated in the 2010s. She says an unsafe home environment is a major risk factor for suicide as it can create a feeling of helplessness. With violence, she believes there are two truths to acknowledge: how it's influenced by intergenerational trauma, but also how that isn't an excuse for inflicting harm on others. "It's important to be able to name when inappropriate coping strategies or behaviours are happening, without causing shame … and encouraging them toward better choices," she said. That involves pointing both the perpetrators and victims of violence to the services and support available, which she believes is everybody's responsibility.
RCMP first port of call in mental distress
Even in non-violent situations, RCMP are often the first port of call when someone is in mental distress. Henrie said the police are among the few agencies in Nunavut available 24/7. "Some people might think that they're [the police and family members] punishing the person that might be struggling with mental health … there might be some fear of retaliation, anger or resentment," he said. However, Henrie still urges people to call the police if they don't know where to get help. He said Nunavut RCMP officers do get additional training on how to respond to mental health crises, and they're taught that jail shouldn't be the only option for someone in distress.
In 2021, there were 2,872 mental health-related calls to the Nunavut RCMP, 3,019 in 2022, 3,246 in 2023 and 3,066 last year. For context, that's among the roughly 38,000 service calls RCMP receive each year.
In other parts of Canada, there are response models for people in distress which rely less on police. In Nunavik, there is Saqijuq's mobile intervention team, which pairs police officers with social workers to respond to certain calls. In Canada's largest city, a community crisis service funded by the City of Toronto is now operating in all parts of the city.
Madsen said the Nunavut government and the RCMP had previously explored the possibility of a different crisis response model where police aren't the default responders. However, they found that many calls about people in distress involved someone who is intoxicated, which would require police support. Madsen said RCMP can still ask for help from local mental health nurses if a person in distress is known to local health workers, and they're pushing to have more Inuktitut speakers respond to calls.
Monitoring people at risk
To track cases of people with suicidal ideation, Madsen said there is an ongoing pilot for a mental health surveillance system in roughly half a dozen communities. That system documents people who present to a health centre with suicidal ideation, but it relies on somebody coming forward. However, Madsen said there are also outreach workers and youth facilitators involved in community programs across the territory. They are also tasked with following up with youth who appear to be struggling, cases which Isaksimagit Inuusirmi Katujjiqatiggit (Embrace Life Council) then follows up on. "They're very good at educating just the public on, if you think someone has suicidal ideation, this is how you can try to help them," she said.
A lot of this work falls on territorial institutions, but she said everybody has a collective responsibility to look out for each other. "We need the communities to see what they need, what will bring them together, what will give the kids a sense of hope and purpose."
If you or someone you know is struggling, here's where to look for help:
Crackdown on cosmetic surgery ‘cowboys' after botched Brazilian butt lifts
Ministers want to clamp down on 'cowboy' cosmetic procedures including Botox and Brazilian butt lifts after a string of horror incidents which left customers dead or with catastrophic damage. Officials said the industry had been blighted by 'dodgy practitioners and procedures', with some patients 'maimed' during botched treatments.
It follows the case of mother-of-five Alice Webb, who died in September 2024. She is thought to be the first person to have died following an unregulated Brazilian butt lift (BBL) procedure at a UK clinic.
The Department of Health and Social Care (DHSC) has proposed new restrictions on who can access and provide treatments in a bid to protect people from 'rogue operators' with no medical training who often provide 'invasive' procedures in homes, hotels and pop-up clinics. The move should also reduce the cost imposed upon the NHS to fix botched procedures, DHSC added. Tim Mitchell, president of the Royal College of Surgeons of England, hailed the proposals as an 'important first step forward for patient safety'.
Health minister Karin Smyth said: 'The cosmetics industry has been plagued by a Wild West of dodgy practitioners and procedures. There are countless horror stories of cosmetic cowboys causing serious, catastrophic damage.' She said the government would take action to 'root out cowboys' and support 'honest and competent practitioners'. 'This isn't about stopping anyone from getting treatments – it's about preventing rogue operators from exploiting people at the expense of their safety and keeping people safe,' Ms Smyth added. 'We're giving them peace of mind and reducing the cost to the NHS of fixing botched procedures.'
The government's proposals include:
- Only allowing healthcare workers who are 'suitably qualified' to deliver high-risk procedures such as BBLs
- Ensuring providers are regulated by the health regulator, the Care Quality Commission
- Slapping sanctions and financial penalties on those who break rules on high-risk procedures
- Ensuring clinics offering Botox and fillers are licensed
- Introducing age restrictions to prevent children from trying to follow 'dangerous beauty trends on social media'
The timeline for the introduction and completion of these measures was not stated, but the DHSC said it will launch a consultation next year seeking views on the range of procedures which should be covered by the new restrictions. Last month, the Chartered Trading Standards Institute warned that fat injections, BBLs, Botox and fillers are being offered by untrained people in places such as public toilets. Before the proposed regulations come into force, the government has urged people seeking cosmetic procedures to ask for the provider's qualifications and insurance, and to be wary of 'suspiciously cheap' offers. Health officials launched an investigation after a number of people had reactions to Botox injections earlier this year.
Professor David Sines CBE, the chair and registrar of the Joint Council for Cosmetic Practitioners (JCCP), said the move will 'protect the public from untrained and inexperienced operators and it will save the NHS a considerable amount of time and money putting right the harm done through botched procedures'. His statement added that the need for the new measures had become increasingly clear in recent years with the 'explosion of high street outlets offering high-risk procedures delivered by people with limited clinical knowledge and training'. He warned this has led to long-term health complications and, in some cases, patient deaths.
Mr Mitchell suggested the government must go further on liquid Brazilian butt lifts, which the RCS said should only be performed by a Cosmetic Surgery Board-certified surgeon. The surgeon warned that the procedure needs medical oversight to prevent serious complications and said that while the government's plans will improve the regulation of non-surgical interventions, it must also urgently improve the regulation of surgical procedures.
Millie Kendall, chief executive of the British Beauty Council, said: 'Any measures that increase protection for the general public and professionalise the industry will help instil confidence as well as helping to prevent the normalisation of horror stories that have become synonymous with our sector.'


Analysis Of Whether Generic Generative AI Falls Within The Purview Of Providing Therapy And Psychotherapeutic Advice
In today's column, I examine a seemingly straightforward question: whether contemporary generic generative AI and large language models (LLMs) can be said to be providing therapy and psychotherapeutic advice.
The deal is this. When you use ChatGPT, Claude, Llama, Gemini, Grok, and other such popular generative AI systems, you can readily engage the AI in conversations about mental health. This can be of a general nature. It can also be a very personal dialogue. Many people are using AI as their de facto therapist, doing so with nary a thought of reaching out to a human therapist or mental health professional.
Does the use of those LLMs in this manner signify that the AI is proffering services constituting therapy and psychotherapy? You might declare that yes, of course, that is precisely what the AI is doing. It is blatantly obvious. But the AI makers who build and maintain the AI are undoubtedly reluctant to agree with that plain-stated assessment or ad hoc opinion. You see, new laws are starting to be enacted that bear down on generic AI that provides unfettered services within the scope of therapy and psychotherapy. AI makers are likely to desperately contend that their generic AI falls outside that regulatory scope. The question arises whether they will be successful in making that kind of tortuous argument. Some would say they don't have a ghost of a chance. Others believe they can dance their way around the legally troubling matter and come out scot-free. Let's talk about it.
This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
AI And Mental Health Therapy
As a quick background, I've been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For a quick summary of some of my posted columns on this evolving topic, see the link here, which briefly recaps about forty of the over one hundred column postings that I've made on the subject.
There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas come into these endeavors too. I frequently speak up about these pressing matters, including in an appearance last year on an episode of CBS's 60 Minutes, see the link here.
State Law Ups The Ante
I recently analyzed a newly enacted law on AI for mental health that was signed and enacted in Illinois on August 1, 2025, see my coverage at the link here. This new law is quite a doozy. The reason it is a doozy is that it lays out violations and penalties for AI that provides unfettered therapy and psychotherapy services. The implication is that any generic generative AI, such as the popular ones I noted earlier, is now subject to potential legal troubles.
Admittedly, the legal troubles right now would seemingly be confined to Illinois, since this is a state law and not a broader federal law. Nonetheless, in theory, the use of generic generative AI by users in Illinois that, by happenstance, provides therapy or psychotherapeutic advice is presumably within the scope of getting dinged by the new law.
You can bet your bottom dollar that similar new laws are going to be popping up in many other states. The clock is ticking. And the odds are that this type of legislation will also spur action in the U.S. Congress and potentially lead to federal laws of a like nature. It all could have a tremendous impact on AI makers, along with major impacts on how generative AI is devised and made available to the public.
All in all, few realize the significance of this otherwise innocuous and under-the-radar concern. My view is that this is the first tiny snowball that is starting to roll down a snowy hill and soon will be a gigantic avalanche that everybody will be talking about. Time will tell.
Background On AI For Mental Health
I'd like to set the stage before we get into the particulars of this heady topic. You might be vaguely aware that the top-ranked public use of generative AI and LLMs is to consult with the AI on mental health considerations, see my coverage at the link here. This makes abundant sense. You can access most of the major generative AI systems for nearly free or at a super low cost, doing so anywhere and at any time. Thus, if you have any mental health qualms that you want to chat about, all you need to do is log in to the AI and proceed forthwith on a 24/7 basis. Compared to using a human therapist, the AI usage is a breeze and readily undertaken.
AI makers already find themselves in a bit of a pickle on this usage of their AI. By allowing their AI to be used for mental health purposes, they are opening the door to legal liability if their AI gets caught dispensing inappropriate guidance and someone suffers harm accordingly. So far, AI makers have been relatively lucky and have not yet gotten severely stung by their AI serving in a therapist role.
You might wonder why the AI makers don't just shut off the capability of their AI to produce mental health insights. That would solve the problem of the business exposures involved. Well, as noted above, this is the top attractor for people to use generative AI. It would be usurping the cash cow, or like capping an oil well that is gushing out liquid gold.
One step that the AI makers have already taken is to emphasize in their online licensing agreements that users aren't supposed to use the AI for mental health advice, see my coverage at the link here. The aim is that by telling users not to use the AI in this manner, perhaps the AI maker can shield itself from adverse exposure. The thing is, despite the warnings, the AI makers often do whatever they can to essentially encourage or support the use of their AI in this supposedly off-limits capacity. Some would insist this is a wink-wink attempt to play both sides of the gambit at the same time, see my discussion at the link here.
The Services Question
My commentary on these sobering matters is merely a layman's viewpoint. Make sure to consult with your attorney to garner any legal ramifications pertaining to your situation and any potential legal entanglements regarding AI and mental health.
Let's take a look at the Illinois law that was recently passed. According to the Wellness and Oversight for Psychological Resources Act, known as HB1806, these two elements are a core consideration (excerpts):
Regarding the use of unregulated AI in this realm, a crucial statement about AI usage for mental health purposes is stated this way in the Act (excerpt):
There are varying ways to interpret this wording.
One interpretation is that if an AI maker has a generic generative AI that also happens to provide mental health advice, and if this is taking place without the supervision of a licensed professional, and this occurs in Illinois, the AI maker is seemingly in violation of this law. The AI maker might not even be advertising that their AI can be used that way, but all it takes is for the AI to act in such a manner (since it provides or offers as such).
Generic AI Versus Purpose-Built AI
Closely observe that the new law stipulates that the scope involves 'therapy or psychotherapy services'. This brings us back to my opening question. Before we unpack the thorny issue, I'd like to clarify something about the topic of AI for mental health. You might have noticed that I referred to generic generative AI. What does the word 'generic' mean in this context? Let me explain.
First, there are customized generative AI systems and AI-based apps that are devised specifically to carry out mental health activities. Those are specially built for that purpose. It is the obvious and clear-cut intent of the AI developer that they want their AI to be used that way, including that they are likely to advertise and promote the AI for said usage. See my coverage on such purpose-built AI for mental health at the link here and the link here.
In contrast, there is generic generative AI that just so happens to have a capability that encompasses providing mental health advisement. Generic generative AI is intended to answer all kinds of questions and delve into just about any topic under the sun. The AI wasn't especially tuned or customized to support mental health guidance. It just happens to be able to do so.
I am focusing here on the generic generative AI aspects. The custom-built AI entails somewhat similar concerns but has its own distinct considerations. I'll be going into those facets in an upcoming posting, so be on the watch.
Definitions And Meaning Are Crucial
An AI maker might claim that they aren't offering therapy or psychotherapy services and that their generic generative AI has nothing to do with therapy or psychotherapy services. It is merely AI that interacts with people on a wide variety of topics. Period, end of story.
The likely retort is that if your AI is giving out mental health advice, it falls within the rubric of therapy and psychotherapy services (attorneys will have a field day on this). Thus, trying to dodge the law by being sneaky about wording isn't going to get you off the hook. If it walks like a duck and quacks like a duck, by gosh, it surely is a duck.
One angle on this disparity or dispute would be to nail down what the meaning and scope of therapy and psychotherapy encompass. Before we look at what the Illinois law says, it is useful to consider definitions from a variety of informed sources.
Definitions At Hand
According to the online dictionary of the American Psychological Association (APA), therapy and psychotherapy are defined this way:
The Mayo Clinic provides this online definition:
The National Institutes of Health (NIH) provides this online definition:
And the popular website and publication Psychology Today has this online definition:
Interpreting The Meanings
Those somewhat informal definitions seem to suggest that the nature of therapy and psychotherapy includes these notable elements: (1) aiding mental health problems, (2) using 'talk' or interactive chatting as a mode of communication, and (3) being undertaken by a mental health professional.
Let's see what the Illinois law says about therapy and psychotherapy (excerpts per the Act):
It is interesting and notable that some carve-outs were made. The scope appears to exclude peer support, along with excluding religious counseling.
Contemplating The Matter
It might be worthwhile to noodle on how an AI maker might seek to avoid repercussions from their generic generative AI getting caught up in this messy milieu.
First, if therapy and psychotherapy were defined as requiring that a mental health professional be involved, this provides an angle of escape. Why so? Oddly enough, an AI maker could simply point out that their AI doesn't employ or otherwise make use of a mental health professional. Therefore, the AI cannot be providing these said services since it fails to incorporate a supposed requirement. Notably, the Illinois law seems not to fall into that trap, since it simply refers to the services themselves and does not make a mental health professional part and parcel of the definition. Some of the other definitions that I listed would potentially be in a murkier condition due to explicitly mentioning a required role of a trained professional or other similar verbiage.
Second, an AI maker might try to claim that their generic generative AI is more akin to peer support. The beauty there is that since peer support is a carve-out, perhaps their AI is no longer within scope. It would be a tough row to hoe. Peer support stipulates that individuals are involved. At this juncture, we do not genuinely recognize AI as having legal personhood, see my discussion at the link here, and therefore trying to assert that AI is an 'individual' would be an extraordinary stretch.
Third, an AI maker might go the route of claiming that their generic generative AI is a form of religious counseling. The advantage would be that religious counseling is a carve-out. In that case, if AI were said to be doing religious counseling when providing mental health advice, the AI maker would apparently be free of the constraint. This appears to be a failing strategy for several reasons, including that the AI is presumably not a clergy member, pastoral counselor, or other religious leader (maybe a desperate attempt could be made to anoint the AI in that fashion, but this would seem readily overturned).
Caught In A Web
Other potential dodges or efforts to skirt the coming set of laws will indubitably be a keen topic for legal beagles and legal scholars. If an AI maker doesn't find a viable workaround, they are going to be subject to various fines and penalties. Those could add up.
For example, Illinois has a population of approximately twelve million people. Of those, suppose that half are using generic generative AI (that's a wild guess), and that half of those use the AI for mental health aspects from time to time (another wild guess). That would be three million people, and each time they use the AI for that purpose might be construed as a violation. If each person does so once per week, that's twelve million violations in a month. The Illinois law says that each violation carries a fine of up to $10,000. We'll imagine that instead of the maximum, an AI maker gets fined a modest $1,000 per violation. In one month, based on this spitball conjecture, that could be $12 billion in fines. Even the richest tech firms are going to pay attention to that kind of fine.
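To make that back-of-the-envelope arithmetic easy to follow and to tweak, here is a minimal sketch of the same spitball calculation; the adoption rates, usage frequency, and the $1,000 per-violation figure are all the column's guesses, not figures taken from HB1806.

```python
# Back-of-the-envelope estimate of potential monthly fines under the Illinois law,
# using the column's own guessed inputs (only the $10,000 maximum comes from the Act).
illinois_population = 12_000_000      # approximate population of Illinois
share_using_gen_ai = 0.5              # wild guess: half use generic generative AI
share_using_for_mental_health = 0.5   # wild guess: half of those use it for mental health
uses_per_person_per_month = 4         # roughly once per week
fine_per_violation = 1_000            # modest figure; the statutory maximum is $10,000

people = illinois_population * share_using_gen_ai * share_using_for_mental_health
violations_per_month = people * uses_per_person_per_month
monthly_fines = violations_per_month * fine_per_violation

print(f"{people:,.0f} people, {violations_per_month:,.0f} violations per month")
print(f"Potential fines: ${monthly_fines:,.0f} per month")
# 3,000,000 people, 12,000,000 violations per month -> $12,000,000,000 per month
```

Changing any one of the guessed inputs scales the result proportionally, which is the column's point: even far more conservative assumptions still yield a figure large enough to get an AI maker's attention.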
Plus, once other states go the same route, you can multiply this by bigger numbers for each of the additional states, depending on how they opt to penalize AI that goes over the line.
Crucial Juncture At Hand
An ongoing and vociferously heated debate concerns whether the use of generic generative AI for mental health advisement on a population-level basis is going to be a positive or a negative outcome for society. If that kind of AI can do a proper job on this monumental task, then the world will be a lot better off. You see, many people cannot otherwise afford or gain access to human therapists, but access to generic generative AI is generally plentiful in comparison. It could be that such AI will greatly benefit the mental status of humankind. A dour counterargument is that such AI might be the worst destroyer of mental health in the history of humanity. See my analysis of the potential widespread impacts at the link here.
So far, AI makers have generally had free rein with their generic generative AI. It seems that the proverbial chickens have finally come home to roost. Gradually, new laws are going to be enacted that seek to prohibit generic generative AI from dispensing mental health advice in the absence of a human therapist performing counseling. Get yourself primed and ready for quite a royal battle that might determine the future mental status of us all.