Women and babies could die due to midwife cuts at Sydney's RPA hospital, staff warn

The Guardian, 01-07-2025
Midwives at one of Sydney's largest hospitals have warned that women and babies could die as a result of cuts to the number of midwives deployed across the birth and delivery unit.
Hospital staff say 20 full-time equivalent roles have been removed from across the women and babies service at Royal Prince Alfred (RPA) hospital in Camperdown, including five from the midwifery group practice (MGP), effective from Tuesday.
The New South Wales Nurses and Midwives' Association (NSWNMA) said that while no jobs will be lost, vacant positions that are currently advertised will now not be filled and fewer casual staff will be brought in.
The changes will mean fewer midwives will be rostered on to each shift in the labour ward and birth centre to assist mothers giving birth at RPA.
'So currently in the birth unit, you would have eight midwives on a day shift, 10 midwives on an afternoon and eight on the night [shift],' the NSWNMA president, O'Bray Smith, said. 'With the new changes, you will have six midwives [on each of the three shifts]. This is not safe.'
The union said nine beds in the maternity ward would also be cut, with Smith warning this would mean 'women will be pushed out faster than they already are'.
Speaking at a rally outside RPA on Tuesday, Smith said reducing the number of midwives assisting women during birth would mean not all women receive the one-to-one care during active labour and two-to-one care during delivery that is considered safe practice.
'Midwives are already at breaking point,' Smith said. 'They know that women aren't getting the care they deserve in NSW. This is really going to make things a lot worse. Every single shift, a mother or a baby could die as a result of not having enough staff. This is about saving lives, having safe staffing. The midwives are absolutely terrified of what could happen here.'
Jessica Rendell, a midwife at RPA since 2021, said the staffing changes were 'a slap in the face'.
'It's just really unsafe having such limited [number of] midwives,' she said, speaking to Guardian Australia in her capacity as an NSWNMA member. 'It's such a joke that they're cutting our staffing and numbers. It's not like we're sitting around doing nothing. We are run off our feet every single day.
'If you ask any of the girls working today, have they had a break? Have they eaten? And they probably haven't … We're exhausted, honestly we've had enough. The government is making it so hard to enjoy coming back to work every day, because it's just so stressful coming into work and knowing that you might not be able to help your woman in an emergency.'
Rendell said she knew a number of midwives who were looking to leave positions in NSW Health for jobs in other states where the pay is higher and staff-to-patient ratios are better.
The NSW health minister, Ryan Park, told reporters on Tuesday: 'I want to make it clear, no one in RPA is losing their jobs.' He said midwives were being 'redeployed in other parts of maternity services' due to 'a slight reduction in birthrates at RPA'.
Park added that the state government used a model called Birthrate Plus to determine the level of staffing in birthing and maternity services, a model that he said had been endorsed by the NSWNMA.
The union previously endorsed the Birthrate Plus model, but has for a number of years called for its review and the implementation of 1:3 staff ratios.
The NSWNMA has raised concerns about the reduction in the number of midwives who will work across the MGP program, which allows a woman to see the same midwife throughout her pregnancy, during delivery and postnatal follow-up care.
The number of midwives assigned to MGP will drop by at least five, the union said, despite a huge demand for the service and the fact that the recent NSW birth trauma inquiry recommended 'the NSW government invest in and expand midwifery continuity of care models, including midwifery group practice'.
The Aboriginal MGP, a dedicated program to assist Indigenous women to give birth in culturally safe ways and to improve outcomes for Indigenous women and their babies, will also be merged with the general MGP program.
The two dedicated Aboriginal MGP midwives say they anticipate being asked to pick up extra patients from the general service, diverting their focus from Indigenous women.
'It's been integrated. It's no longer a protected Indigenous space,' , one of the Aboriginal MGP midwives, Paige Austin, said, speaking to Guardian Australia in her capacity as a NSWNMA member. 'Those women lose us, and they lose our time and everything that we give to them extra on top of MGP.'
News of the staffing changes was shared on the mothers' group that Charlotte Wesley and Bridget Dominic are part of, and they both turned out in the rain on Tuesday to show support for the RPA midwives who had assisted them to deliver their babies – George and Roonui – just three months ago.
'The midwives showed up for us so we really want to show up for them,' Dominic said.
'I do think that these cuts could lead to deaths of mothers and babies. But further than that, we shouldn't just be aiming for alive mothers and babies; we want happy and healthy [mothers and] babies who contribute to happy healthy communities.'
RPA was contacted for comment.

Related Articles

Rare ‘brain-eating amoeba' detected in drinking water supplies in Australia

The Independent

One of the world's most dangerous water-borne microorganisms, commonly called a 'brain-eating amoeba', has recently been detected in two drinking water supplies in south-west Queensland. Both affected towns are about 750 kilometres west of Brisbane: Augathella (population roughly 300) and Charleville (population 3,000).

During an analysis of water samples commissioned by Queensland Health, Naegleria fowleri was detected in the water systems of two health facilities, one in Charleville and one in Augathella, as well as in the incoming town water supply at both facilities. The Shire Council of Murweh, which takes in the two affected locations, issued a health notice for residents and visitors on August 7, warning of the detection of N. fowleri in the water supplies.

So what is this organism? And how significant is the risk likely to be in these Queensland towns, and elsewhere?

It's rare – but nearly always fatal

The N. fowleri amoeba is a microscopic organism found around the world. It only lives in warm freshwater, generally between 25 and 40°C. This can include ponds, lakes, rivers, streams and hot springs.

If someone is infected with N. fowleri, it causes what's called primary amoebic meningoencephalitis, a serious infection of the brain. Symptoms include a sore throat, headache, hallucinations, confusion, vomiting, fever, neck stiffness, changes to taste and smell, and seizures. The incubation period of primary amoebic meningoencephalitis – the time between infection and symptoms appearing – typically ranges from three to seven days.

Tragically, this illness is nearly always fatal, even if someone receives medical attention quickly. Death typically occurs about five days after symptoms begin.

Fortunately, though, cases are very rare. In the United States, there were 167 reported cases of primary amoebic meningoencephalitis between 1962 and 2024, according to the Centers for Disease Control and Prevention. Only four survived. A global review of the disease up to 2018 reported that, of 381 known cases, Australia accounted for 22, the fifth highest number, after the US, Pakistan, Mexico and India. Some 92% of people died.

So how does someone get infected?

The route of infection is very unusual and quite specific. N. fowleri infects the brain through a person's nose. The amoeba passes through a protective membrane called the nasal epithelium, an important physical barrier; once through it, the amoeba can travel to the brain along the olfactory nerve, which is responsible for our sense of smell. The infection then kills brain tissue and causes swelling of the brain, termed cerebral oedema.

Infections occur when infected water travels up a person's nose. Most cases involve children and young people who have swum in infected waters. The majority of cases occur in males, with an average age of 14. Even water sports in affected waterways can be dangerous. A person is currently in intensive care in Missouri after it's believed they became infected while water skiing.

Regarding the recent detection in Queensland water supply systems, the source of the contamination has not been reported. It's possible a fresh waterway, or groundwater, which feeds into the affected drinking water systems, was contaminated with N. fowleri, and the amoeba travelled from there. But this will likely be determined with further investigation.

How dangerous is N. fowleri in drinking water?

First, it's important to note you can't get primary amoebic meningoencephalitis from drinking contaminated water. But any activity that allows infected water to enter a person's nose is potentially dangerous. This can happen during a bath or a shower.

Some people flush their nasal passages to clear congestion related to allergies or a viral infection. This has been linked to infections with N. fowleri. If you're going to flush your nasal passages, you should use a sterile saline solution.

Even young children playing with hoses, sprinklers or water activities could be at risk. A 16-month-old child was fatally infected following an incident involving a contaminated water 'splash pad' in the US in 2023. Splash pads are water-based recreation activities, primarily for young children, that involve splashing or spraying water.

So what's the risk in Queensland?

Regarding N. fowleri, Australian drinking water guidelines advise that if the organism is detected, advice should be sought from the relevant health authority or drinking water regulator. The guidelines also provide recommendations on how to disinfect water supplies and control N. fowleri, using chlorine and other chemical compounds. All public town water supplies across Australia are regularly tested to ensure that the water is safe to drink.

We don't yet know the exact cause of the detection of N. fowleri in these Queensland towns' water supplies. But drinking or cooking with water contaminated with this amoeba will not cause an infection. Any activity that allows potentially contaminated water to go up the nose should be navigated carefully for now in the affected areas. Contamination of a town's drinking water supply with this amoeba is very rare and is unlikely in other Australian town water supplies.

How about swimming?

To reduce your risk in potentially infected warm, fresh waters, you should keep your head above water while swimming, and don't jump or dive in. You can use a nose-clip if you want to swim with your head under water.

The amoeba cannot survive in salt water, so there's no risk in swimming in the ocean. Properly maintained swimming pools should also be safe from the organism. New South Wales Health advises that the amoeba cannot survive in water that is clean, cool and adequately chlorinated.

Ian A. Wright is an Associate Professor in Environmental Science at Western Sydney University.

Studies in US, UK warn of flaws in AI-powered health guidance

Coin Geek

Two recently published studies have revealed that generative artificial intelligence (AI) tools, including the large language models (LLMs) ChatGPT and Gemini, produce misinformation and bias when used for medical information and healthcare decision-making.

In the United States, researchers from the Icahn School of Medicine at Mount Sinai published a study on August 2 showing that LLMs were highly vulnerable to repeating and elaborating on 'false facts' and medical misinformation. Meanwhile, across the Atlantic, the London School of Economics and Political Science (LSE) published a study shortly afterward that found AI tools used by more than half of England's councils are downplaying women's physical and mental health issues, creating a risk of gender bias in care decisions.

Medical AI

LLMs, such as OpenAI's ChatGPT, are AI-based computer programs that generate text using the large datasets of information on which they are trained. The power and performance of such technology have increased exponentially over the past few years, with billions of dollars being spent on research and development in the area. LLMs and AI tools are now being deployed, to differing extents, across almost every industry, not least the medical and healthcare sector.

In the medical space, AI is already being used for various functions, such as reducing the administrative burden by automatically generating and summarizing case notes, assisting in diagnostics, and enhancing patient education.

However, LLMs are prone to the 'garbage in, garbage out' problem: they rely on accurate, factual data making up their training material, and they may otherwise reproduce the errors and bias in those datasets. This results in what are often known as 'hallucinations' – generated content that is irrelevant, made up, or inconsistent with the input data. In a medical context, these hallucinations can include fabricated information and case details, invented research citations, or made-up disease details.

US study shows chatbots spreading false medical information

Earlier this month, researchers from the Icahn School of Medicine at Mount Sinai published a paper titled 'Multi-model assurance analysis showing large language models are highly vulnerable to adversarial hallucination attacks during clinical decision support.' The study aimed to test a subset of AI hallucinations that arise from 'adversarial attacks', in which made-up details embedded in prompts lead the model to reproduce or elaborate on the false information.

'Hallucinations pose risks, potentially misleading clinicians, misinforming patients, and harming public health,' said the paper. 'One source of these errors arises from deliberate or inadvertent fabrications embedded in user prompts—an issue compounded by many LLMs' tendency to be overly confirmatory, sometimes prioritizing a persuasive or confident style over factual accuracy.'

To explore this issue, the researchers tested six LLMs – DeepSeek Distilled, GPT-4o, Llama-3.3-70B, Phi-4, Qwen-2.5-72B, and Gemma-2-27b-it – on 300 pieces of text similar to clinical notes written by doctors, each containing a single fake laboratory test, physical or radiological sign, or medical condition. The models were tested under 'default' (standard) settings as well as with 'mitigating prompts' designed to reduce hallucinations, generating 5,400 outputs. If a model elaborated on the fabricated detail, the case was classified as a 'hallucination'.
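The scoring logic behind a test like this is simple to sketch. The snippet below is illustrative only, not the study's published code: the query_model helper, the wording of the mitigating prefix and the crude substring check are assumptions standing in for the paper's actual prompts and adjudication process.

# Illustrative sketch only, not the study's code: a generic loop for scoring
# "adversarial hallucinations", i.e. cases where a model repeats or elaborates on a
# fabricated detail planted in a clinical note. query_model, MITIGATING_PREFIX and
# the substring check are hypothetical placeholders.

def query_model(model_name: str, prompt: str) -> str:
    """Placeholder for whatever API call returns the model's response text."""
    raise NotImplementedError

MITIGATING_PREFIX = (
    "Only comment on findings explicitly supported by the note; "
    "if a detail cannot be verified, flag it rather than elaborating on it.\n\n"
)

def hallucination_rate(model_name: str, cases, mitigate: bool = False) -> float:
    """cases: list of (note_text, planted_detail) pairs. Returns the share of cases
    in which the model's output mentions the planted (fictitious) detail."""
    hits = 0
    for note_text, planted_detail in cases:
        prompt = (MITIGATING_PREFIX if mitigate else "") + note_text
        output = query_model(model_name, prompt)
        if planted_detail.lower() in output.lower():  # crude proxy for "elaborated on it"
            hits += 1
    return hits / len(cases)

Run for each model, with and without the mitigating prefix, a loop of this kind yields the per-model rates discussed below.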
The results showed that hallucination rates ranged from 50% to 82% across all models and prompting methods. The use of mitigating prompts lowered the average hallucination rate, but only from 66% to 44%.

'We find that the LLM models repeat or elaborate on the planted error in up to 83% of cases,' reported the researchers. 'Adopting strategies to prevent the impact of inappropriate instructions can half the rate but does not eliminate the risk of errors remaining.' They added that 'our results highlight that caution should be taken when using LLM to interpret clinical notes.'

According to the paper, the best-performing model was GPT-4o, whose hallucination rates declined from 53% to 23% when mitigating prompts were used. However, with even the best-performing model producing potentially harmful hallucinations in almost a quarter of cases, despite mitigating prompts, the researchers concluded that AI models cannot yet be trusted to provide accurate and trustworthy medical data.

'LLMs are highly susceptible to adversarial hallucination attacks, frequently generating false clinical details that pose risks when used without safeguards,' said the paper. 'While prompt engineering reduces errors, it does not eliminate them… Adversarial hallucination is a serious threat for real-world use, warranting careful safeguards.'

The Mount Sinai study isn't the only recent paper in the U.S. medical space to bring the use of AI into question. In another damaging example, on August 5, the Annals of Internal Medicine journal reported the case of a 60-year-old man who developed bromism, also known as bromide toxicity, after consulting ChatGPT on how to remove salt from his diet. On the LLM's advice, the man swapped sodium chloride (table salt) for sodium bromide, which was used as a sedative in the early 20th century, resulting in the rare condition. But it's not just stateside that AI advice is taking a PR hit.

UK study finds gender bias in LLMs

While U.S. researchers were finding less-than-comforting results when testing whether LLMs reproduce false medical information, across the pond a United Kingdom study was turning up equally troubling results related to AI bias.

On August 11, a research team from LSE, led by Dr Sam Rickman, published their paper on 'evaluating gender bias in large language models in long-term care', in which they evaluated gender bias in summaries of long-term care records generated with two open-source LLMs, Meta's (NASDAQ: META) Llama 3 and Google's (NASDAQ: GOOGL) Gemma.

To test this, the study created gender-swapped versions of long-term care records for 617 older people from a London local authority and asked the LLMs to generate summaries of the male and female versions of the records. While Llama 3 showed no gender-based differences across any metrics, Gemma displayed significant differences. Specifically, male summaries focused more on physical and mental health issues, and the language used for men was more direct, while women's needs were 'downplayed' more often than men's. For example, when Google's Gemma was used to generate and summarize the same case notes for men and for women, language such as 'disabled', 'unable' and 'complex' appeared significantly more often in descriptions of men than women.
In other words, the study found that similar care needs in women were more likely to be omitted or described in less severe terms by specific AI tools, and that this downplaying of women's physical and mental health issues risked creating gender bias in care decisions.

'Care services are allocated on the basis of need. If women's health issues are underemphasized, this may lead to gender-based disparities in service receipt,' said the paper. 'LLMs may offer substantial benefits in easing administrative burden. However, the findings highlight the variation in state-of-the-art LLMs, and the need for evaluation of bias.'

Despite the concerns raised by the study, the researchers also highlighted the benefits AI can provide to the healthcare sector. 'By automatically generating or summarizing records, LLMs have the potential to reduce costs without cutting services, improve access to relevant information, and free up time spent on documentation,' said the paper. It went on to note that 'there is political will to expand such technologies in health and care.'

Despite flaws, UK's all-in on AI

British Prime Minister Keir Starmer recently pledged £2 billion ($2.7 billion) to expand Britain's AI infrastructure, with the funding targeting data center development and digital skills training. This included committing £1 billion ($1.3 billion) to scale up the U.K.'s compute power by a factor of 20.

'We're going to bring about great change in so many aspects of our lives,' said Starmer, speaking at London Tech Week on June 9. He went on to highlight health as an area 'where I've seen for myself the incredible contribution that tech and AI can make.'

'I was in a hospital up in the Midlands, talking to consultants who deal with strokes. They showed me the equipment and techniques that they are using – using AI to isolate where the clot is in the brain in a micro-second of the time it would have taken otherwise. Brilliantly saving people's lives,' said the Prime Minister. 'Shortly after that, I had an incident where I was being shown AI and stethoscopes working together to predict any problems someone might have. So whether it's health or other sectors, it's hugely transformative what can be done here.'

It's unclear how, or if, the LSE study and its equally AI-critical U.S. counterparts may affect such commitments from the government, but for now the U.K. at least seems set on pursuing the advantages AI tools such as LLMs can provide across the public and private sectors.

In order for artificial intelligence (AI) to work right within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek's coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch: Demonstrating the potential of blockchain's fusion with AI

Major diabetes breakthrough as world-first drug that STOPS the condition gets green light in the UK

The Sun

A GROUNDBREAKING drug that slows down the development of type 1 diabetes has been licensed for use in the UK. Teplizumab can allow diabetes patients to live 'normal lives' without the need for insulin injections.

The decision by the Medicines and Healthcare products Regulatory Agency (MHRA) has been hailed by experts as a 'breakthrough moment' that represents a 'turning point' in how the condition is treated.

About 400,000 people in the UK have type 1 diabetes, a lifelong condition which causes the immune system to attack insulin-producing cells in the pancreas. Insulin helps the body use sugar for energy, and without this hormone, blood sugar levels can become dangerously high.

Unlike type 2 diabetes, where the body is unable to make enough insulin or the insulin it does make doesn't work properly, the cause of type 1 diabetes is less clear. And while type 2 diabetes can be improved through some simple lifestyle changes, type 1 diabetes requires lifelong treatment through insulin injections or pumps.

Teplizumab trains the immune system to stop attacking pancreatic cells. It is given by IV drip, in infusions lasting a minimum of 30 minutes, over 14 consecutive days. The drug, which is already approved in the US, has been authorised by the MHRA to delay the onset of stage three type 1 diabetes in adults and children aged eight or over by an average of three years.

Ahmed Moussa, general manager of general medicines UK and Ireland at Sanofi, which makes teplizumab, said: 'One hundred years ago the discovery of insulin revolutionised diabetes care. Today's news marks a big step forward.' The UK is the first country in Europe where the drug has been licensed.

Type 1 diabetes develops gradually in three stages over months or years. Stage three is usually when people start to experience blood sugar problems and are diagnosed with the condition. According to the MHRA, teplizumab is used in people with stage two type 1 diabetes, an earlier stage of the disease during which patients are at high risk of progressing to stage three.

Parth Narendran, a professor of diabetes medicine at the University of Birmingham and The Queen Elizabeth Hospital Birmingham, said: 'Teplizumab essentially trains the immune system to stop attacking the beta cells in the pancreas, allowing the pancreas to produce insulin without interference.

'This can allow eligible patients to live normal lives, delaying the need for insulin injections and the full weight of the disease's daily management by up to three years. It allows people to prepare for disease progression rather than facing an abrupt emergency presentation.'

Following the decision by the MHRA, the cost-effectiveness of teplizumab will be assessed by the NHS spending watchdog, the National Institute for Health and Care Excellence (Nice), to determine if it can be rolled out on the health service.

Karen Addington, chief executive of the charity Breakthrough T1D, said: 'I am personally delighted and welcome the MHRA's approval of teplizumab. After years of research, clinical trials and drug development, we have an incredible breakthrough.'

Reacting to the announcement, Dr Elizabeth Robertson, director of research and clinical at Diabetes UK, said: 'Today's landmark licensing of teplizumab in the UK marks a turning point in the treatment of type 1 diabetes. For the first time, we have a medicine that targets the root cause of the condition, offering three precious extra years free from the relentless demands of managing type 1 diabetes.'
Dr Robertson added that the 'next steps are critical'. 'To ensure teplizumab reaches everyone who could benefit, we need it to be made available on the NHS, and the rollout of a screening programme to identify those with early-stage type 1 diabetes,' she said.

How do you know if you have type 1 diabetes?

The most common symptoms of type 1 diabetes are:
- peeing more than usual
- feeling very thirsty
- feeling very tired
- losing weight quickly without trying to

Other symptoms can include:

The symptoms develop quickly, over a few days or weeks. If it's not treated, it can lead to a serious condition called diabetic ketoacidosis.

The condition usually starts in children and young adults, but it can happen at any age. You're more likely to get it if you have other problems with your immune system (autoimmune conditions), or if others in your family have type 1 diabetes or other autoimmune conditions.

The symptoms are similar to type 2 diabetes, but type 2 diabetes usually develops more slowly and is more common in older people.

Source: NHS
