
Genevieve Meehan: Safety training call after baby's nursery death
The parents of a baby who was killed at a nursery are calling for mandatory safe sleep training to be introduced in all nurseries, among a raft of other safety measures.

Nine-month-old Genevieve Meehan died from asphyxiation when she was tightly swaddled, strapped to a beanbag, and left unattended at the Tiny Toes nursery in Cheadle Hulme, Stockport, on 9 May 2022. Nursery worker Kate Roughley was later jailed for 14 years for manslaughter.

Genevieve's parents, Katie Wheeler and John Meehan, have now launched a campaign to improve safety standards in early years settings, with extra training provided where necessary.
The couple have previously described figures obtained by the BBC as "horrifying"; they show there were almost 20,000 reports of serious childcare incidents in England's nurseries in the past five years. The latest figures, for the year 2023-24, are 40% higher than five years previously.
Ms Wheeler said when police reviewed CCTV footage at the nursery following Genevieve's death they "discovered multiple examples of harm being caused to children over a short period of time".

The family is now calling for:

- Compulsory CCTV in nursery settings
- Unannounced inspections by Ofsted to be routine in early years settings
- Review of CCTV footage during Ofsted inspections
- Clear, statutory safe sleep guidance for early years settings
- Mandatory safe sleep training for all nursery staff and Ofsted inspectors
- Clear statutory guidance as to the use of sleep products in early years settings

The family are also asking people to write to their local MPs in support of the campaign.

Writing on their campaign website, Ms Wheeler said: "Like many other working parents, we enrolled Gigi at a nursery. We trusted that she would be kept safe. We never imagined that she would come to harm whilst in the care of trained professionals.

"The way in which Gigi was put down to sleep carried a high and obvious risk of death. Her death was entirely preventable.

"Gigi is not the only child to die in a nursery in the last five years. It is incomprehensible that other families are suffering the same heartache that we are and we want to ensure that no child dies or comes to harm in a place where they are meant to be safe.

"Gigi deserved to live a long and fulfilling life but instead she has suffered and died in a way that no child or person ever should," Ms Wheeler said.

"The system failed Gigi and urgent reforms are needed."
Related Articles


Daily Mail
2 hours ago
Post Office compensation chief steps down after Sir Alan Bates raised 'serious concerns' about schemes
A Post Office boss who backed compensation for Horizon IT scandal victims has left his position after Sir Alan Bates raised 'serious concerns' about the schemes. Simon Recaldin, head of the Post Office's Remediation Unit, is believed to have opted for voluntary redundancy and left his post this week.

It comes as the first part of a public inquiry report into the controversy, examining the compensation process as well as the effect on victims, is expected to be released in the coming weeks.

More than 900 sub-postmasters were prosecuted between 1999 and 2015 after faulty accounting software made it look as though money was missing from their accounts. Hundreds are still waiting for payouts despite the previous government announcing that those who have had convictions quashed are eligible for £600,000.

A Post Office spokesperson said yesterday Mr Recaldin's departure was part of an 'organisational design exercise' across the firm. Joanne Hanley, previously a managing director and global head of client servicing, data and operations at Lloyds, is understood to have taken on a large part of the former Post Office chief's responsibilities, according to The Telegraph.

It comes after Sir Alan accused the government last month of running a 'quasi kangaroo court' payout system for the scandal's victims. More recently, he said he would prefer to see the compensation schemes thrown out rather than the people working on them. 'We have got serious concerns about the transparency and the parity across the schemes,' he told The Telegraph.

Last November, Mr Recaldin, giving evidence to the inquiry, apologised after it emerged that staff managing compensation claims had also been embroiled in prosecutions relating to the scandal. Asked about ex-Post Office investigators, he said: 'So my regret – and it is a genuine regret – is that when I came in, in January 2022, that I didn't do that conflicts check, check back on my inherited team, and challenge that.'

Sir Alan, who famously won his High Court battle with the Post Office in 2019, has revealed that he was handed a 'take it or leave it' compensation offer of less than half his original claim. The 70-year-old said the first offer, made in January last year, was just one sixth of what he was asking for, and that it rose to a third in the second offer. He has now been given a 'final take it or leave it offer' which he said amounts to 49.2 per cent of his original claim.

He, alongside 500 other sub-postmasters, will now have to lodge their bids for compensation via the Group Litigation Order scheme, managed by the government. Sir Alan, who led the sub-postmasters' campaign for justice, attacked the government for reneging on assurances given when the compensation schemes were set up.

The Post Office currently manages the Horizon Shortfall Scheme, which is separate from the scheme above. It was set up for victims who have not been compensated but believe they suffered financial losses due to the IT scandal.

A Post Office spokesman said: 'As part of the Post Office's commitment to deliver a "new deal for postmasters", we have undertaken a review of our operating model to ensure we have the right structure in place.

'We have been in consultation with a number of colleagues from across the business, including the Remediation Unit. As a result of this Post Office-wide organisational design exercise, Simon Recaldin has left the business.'


Daily Mail
2 hours ago
EXCLUSIVE I've been publicly crucified for arresting a knife-wielding teenager: Policeman sacked after 10 years' unblemished service gives his side of the story about divisive video
All week, the tributes have poured in. Those whose lives were touched by PC Lorne Castle haven't hesitated to come forward. One woman's account of how her son's life was saved by his 'kindness and humanity' and willingness to 'go beyond what is expected of a police officer' is particularly moving. She wrote about how the troubled teenager lost his way in life and became known to police, who were forever having to bring him home. It was PC Castle, himself a father of three, who ended up talking her boy down from the ledge, in a metaphorical sense as well as a literal one.


Daily Mail
2 hours ago
Lawyers warned to stop using ChatGPT to argue lawsuits after AI programs 'made up fictitious cases'
Lawyers in England and Wales have been warned they could face 'severe sanctions', including potential criminal prosecution, if they present false material generated by AI in court.

The ruling, by one of Britain's most senior judges, comes on the back of a string of cases in which artificial intelligence software has produced fictitious legal cases and completely invented quotes. The first case saw AI fabricate 'inaccurate and fictitious' material in a lawsuit brought against two banks, The New York Times reported. Meanwhile, the second involved a lawyer for a man suing his local council who was unable to explain the origin of the nonexistent precedents in his legal argument.

While large language models (LLMs) like OpenAI's ChatGPT and Google's Gemini are capable of producing long, accurate-sounding texts, they are technically only focused on producing a 'statistically plausible' reply. The programs are also prone to what researchers call 'hallucinations' - outputs that are misleading or lack any factual basis.

AI Agent and Assistance platform Vectera has monitored the accuracy of AI chatbots since 2023 and found that the top programs hallucinate between 0.7 per cent and 2.2 per cent of the time - with others dramatically higher. However, those figures become astronomically higher when the chatbots are prompted to produce longer texts from scratch, with market leader OpenAI recently acknowledging that its flagship ChatGPT system hallucinates between 51 per cent and 79 per cent of the time if asked open-ended questions.

Dame Victoria Sharp, president of the King's Bench Division of the High Court, and Justice Jeremy Johnson KC authored the new ruling. In it they say: 'The referrals arise out of the actual or suspected use by lawyers of generative artificial intelligence tools to produce written legal arguments or witness statements which are not then checked, so that false information (typically a fake citation or quotation) is put before the court.

'The facts of these cases raise concerns about the competence and conduct of the individual lawyers who have been referred to this court. They raise broader areas of concern however as to the adequacy of the training, supervision and regulation of those who practice before the courts, and as to the practical steps taken by those with responsibilities in those areas to ensure that lawyers who conduct litigation understand and comply with their professional and ethical responsibilities and their duties to the court.'

The pair argued that existing guidance around AI was 'insufficient to address the misuse of artificial intelligence'. Dame Victoria wrote: 'There are serious implications for the administration of justice and public confidence in the justice system if artificial intelligence is misused.' While acknowledging that AI remained a 'powerful technology' with legitimate use cases, she nevertheless reiterated that the technology brought 'risks as well as opportunities'.

In the first case cited in the judgment, a British man sought millions in damages from two banks. The court discovered that 18 out of 45 citations included in the legal arguments featured past cases that simply did not exist.
Even in instances in which the cases did exist, the quotations were often inaccurate or did not support the legal argument being presented.

The second case, which dates to May 2023, involved a man who was turned down for emergency accommodation by his local authority and ultimately became homeless. His legal team cited five past cases, which the opposing lawyers discovered simply did not exist - tipped off by the US spellings and formulaic prose style.

Rapid improvements in AI systems mean their use is becoming a global issue in the field of law, as the judicial sector works out how to incorporate artificial intelligence into what is frequently a very traditional, rules-bound work environment.

Earlier this year a New York lawyer faced disciplinary proceedings after being caught using ChatGPT for research and citing a non-existent case in a medical malpractice lawsuit. Attorney Jae Lee was referred to the grievance panel of the 2nd U.S. Circuit Court of Appeals in February 2025 after she cited a fabricated case about a Queens doctor botching an abortion in an appeal to revive her client's lawsuit. The cited case did not exist, having been conjured up by OpenAI's ChatGPT, and the lawsuit was dismissed.

The court ordered Lee to submit a copy of the cited decision after it was not able to find the case. She responded that she was 'unable to furnish a copy of the decision.' Lee said she had included a case 'suggested' by ChatGPT but that there was 'no bad faith, willfulness, or prejudice towards the opposing party or the judicial system' in doing so. The conduct 'falls well below the basic obligations of counsel,' a three-judge panel for the Manhattan-based appeals court wrote.

In June two New York lawyers were fined $5,000 after they relied on fake research created by ChatGPT for a submission in an injury claim against Avianca airline. Judge Kevin Castel said attorneys Steven Schwartz and Peter LoDuca had acted in bad faith by using the AI bot's submissions - some of which contained 'gibberish' - even after judicial orders questioned their authenticity.