CPR questions to be added to driving theory test
The Driver and Vehicle Standards Agency (DVSA) – which oversees driving tests in Britain – said motorists are often 'first on the scene' when someone suffers a cardiac arrest.
Adding questions on CPR and defibrillators to the theory test from early next year means candidates will 'have a better understanding of the skills to use in an emergency', it added.
More than 40,000 people in the UK suffer an out-of-hospital cardiac arrest each year, with fewer than one in 10 surviving.
Research has found survival rates can be as high as 70% if CPR is given and a defibrillator used within three to five minutes of collapse.
DVSA chief driving examiner Mark Winn said: 'Part of being a safe and responsible driver is knowing what to do in an emergency – how to step in and make a real, life-saving difference.
'Learning CPR and how to use a defibrillator is a very simple skill, and adding this into the official learning resource is a great way for DVSA to support the drive to raise awareness.'
The push for the change was led by Professor Len Nokes – chair of the Save a Life Cymru scheme – whose 24-year-old daughter Claire died in 2017 from complications following a cardiac arrest.
He said: 'When Claire, my daughter, had her cardiac arrest, some knowledge of CPR might have made a difference.
'I don't want any other family to go through this experience.
'All of us in this partnership hope that by making CPR and how to use a defibrillator part of the theory test, we will be able to significantly increase the number of people who have this life-saving awareness.'
James Cant, chief executive of charity Resuscitation Council UK, said: 'By embedding these life-saving skills into such a widely-taken assessment, we can help ensure that more people, from all communities, gain the knowledge and confidence to act during a cardiac arrest.'
Learners must pass the theory driving test before booking a practical test.
Theory test candidates are required to get at least 43 out of 50 multiple-choice questions correct, covering areas such as road signs, traffic laws, vehicle safety and first aid.
They must also pass a hazard perception video test.
More than two million theory tests are taken each year, with a pass rate of about 45%.
Related Articles


Medscape
Safety Body Warns Over Post-Discharge Information Gaps
Communication failures between hospitals and community services after patient discharge can lead to serious medication-related harm, a new report has warned.

The Health Services Safety Investigations Body (HSSIB) found that gaps in patient records, poor information sharing, and insufficient post-discharge support can delay or prevent medication being taken, with potentially life-threatening consequences.

Case Study: Missed Insulin After Discharge

The investigation examined the case of a 53-year-old man whose diabetes medication was changed during a hospital stay. Although he received some education on self-administering insulin, he later said he struggled to remember the instructions. A referral for district nursing support was made via his GP, but the district nursing team was not informed. Seventeen days after discharge, he told a nurse that he had not been taking insulin. His glucose levels were dangerously high, leading to an emergency hospital readmission.

Key Failings Identified

The HSSIB identified several failings:
- Conflicting records on admission made it unclear whether the patient had been taking diabetes medication.
- No documented evidence confirmed his ability to self-administer insulin after discharge education.
- Communication about the need for district nursing support was inconsistent between hospital teams and community services, and no specific support for insulin administration was arranged.
- The patient was discharged with two insulin pens, one of which was unnecessary, causing him confusion.
- District nursing demand often exceeded capacity, limiting visit times.
- Multiple providers used incompatible electronic patient record systems, preventing the sharing of critical information.

Recommendations for Safer Discharges

The report calls for local-level learning prompts covering hospital care, discharge planning, and community follow-up to prevent similar risks. It stressed the importance of ensuring patients are confident in managing their medication before leaving hospital and improving interoperability between hospital and community electronic systems.

'A Matter of Patient Safety'

Rebecca Doyle, safety investigator at HSSIB, said that improving information flow and patient support at discharge is more than an administrative task.

'It's a matter of patient safety,' she said. 'While individual cases can be complex, this incident clearly highlighted persistent challenges with information sharing – an issue we continue to see in investigations that explore communication and the interaction of digital systems. This information sharing is critical to keep people safe at home, managing their medical conditions, and avoiding readmission to hospital.'

The report is the third and final in a series of HSSIB investigations exploring why medications intended to be given to patients were not given.

Priscilla Lynch is a freelance writer for Medscape, with over 20 years' experience covering medicine and healthcare. She has a master's in journalism and recently undertook a Health Innovation Journalism Fellowship with the International Center for Journalists.


Forbes
Could Poor AI Literacy Cause Bad Personal Decisions?
A recent article in Ars Technica revealed that a man switched from household salt (sodium chloride) to sodium bromide after using an AI tool. He ended up in an emergency room.

Nate Anderson wrote, "His distress, coupled with the odd behavior, led the doctors to run a broad set of lab tests, revealing multiple micronutrient deficiencies…. But the bigger problem was that the man appeared to be suffering from a serious case of 'bromism.'" This is an ailment related to excessive bromine.

Seeing this made me wonder whether poor critical thinking skills and low AI literacy could actually cause people to make bad or even harmful decisions. As a weather and climate scientist, I am particularly aware of the widespread misinformation and disinformation circulating. People think the Earth is flat or that scientists can steer hurricanes. National Weather Service offices are fielding calls from people with wacky theories about geoengineering, groundhogs, and so forth. My fear is that a lack of understanding of Generative AI might make things worse and even cause harm, as we saw in the case of bromism.

Even in my own circle of intelligent friends and family members, it is clear to me that some people have a very limited understanding of AI. They are familiar with Large Language Model tools like ChatGPT, Gemini, Grok, CoPilot, and others. They assume that's AI. It certainly is AI, but there is more to AI than that. I experience a version of these types of assumptions, ironically, in my own professional field. People see meteorologists on television. Because that is the most accessible type of meteorologist to them, they assume all meteorologists are on television. The majority of meteorologists do not work in the broadcast industry at all, but I digress.

Let's define AI. According to one definition, 'Artificial intelligence (AI) is an emerging technology where machines are programmed to learn, reason, and perform in ways that simulate human intelligence. Although AI technology took a dramatic leap forward, the ability of machines to automate manual tasks has been around for a long time.'

The popular AI tools like ChatGPT or Gemini are examples of Generative artificial intelligence, or GenAI. A Congressional website noted, 'Generative artificial intelligence (GenAI) refers to AI models, in particular those that use machine learning (ML) and are trained on large volumes of data, that are able to generate new content.' Other types of AI models may do things like classify data, synthesize information, or even make decisions. AI, for example, is used in automated vehicles and is even integrated into emerging generations of weather forecast models. The website went on to say, 'GenAI, when prompted (often by a user inputting text), can create various outputs, including text, images, videos, computer code, or music.'

Many people are using GenAI Large Language Models, or LLMs, daily without context, which brings me back to the salt case article in Ars Technica. Nate Anderson continued, '…. It's not clear that the man was actually told by the chatbot to do what he did. Bromide salts can be substituted for table salt—just not in the human body. They are used in various cleaning products and pool treatments, however.' Doctors replicated his search and found that bromide is mentioned, but with proper context noting that it is not suitable for all uses. AI hallucination can happen when LLMs produce factually incorrect, outlandish, unsubstantiated or bad information.

However, it seems that this case was more about context and critical thinking (or the lack thereof). As a weather expert, I have learned over the years that assumptions about how the public consumes information can be flawed. You would be surprised at how many ways '30% chance of rain' or 'tornado watch' is consumed. Context matters.

In my discipline, we have a problem with 'social mediarology.' People post single-run hurricane models and snowstorm forecasts two weeks out for clicks, likes, and shares. Most credible meteorologists understand the context of that information, but someone receiving it on TikTok or YouTube may not. Without context, critical thinking skills, or an understanding of LLMs, bad information is likely to be consumed or spread.

Kimberly Van Orman is a lecturer in the Institute for Artificial Intelligence. She told me, 'I think considering them "synthetic text generators" is really helpful. That's at the core of what they do. They have no means of distinguishing truth or falsity. They have no "ground truth".'

University of Washington linguist Emily Bender studies this topic and has consistently warned that tools like ChatGPT and all other language models are simply unverified text synthesis machines. In fact, she recently argued that the first 'L' in LLM should stand for 'limited', not 'large'.

To be clear, I am actually an advocate of proper, ethical use of AI. The climate scientist side of me keeps an eye on the energy and water consumption aspects as well, but I believe we will find a solution to that problem. Microsoft, for example, has explored underwater data centers. AI is here. That ship has sailed. However, it is important that people understand its strengths, weaknesses, opportunities and threats. People fear what they don't understand.
Yahoo
Four people taken to hospital after chlorine gas incident at Guy's Hospital
Four people have been taken to hospital after a chlorine gas incident at a hospital in London.

The incident happened at Guy's Hospital in Southwark just before 9am today (August 14). Nine people were treated by medics, with four of those taken to hospital.

The chlorine gas, which is toxic when inhaled, is believed to have been created by the mixing of chemicals inside a plant room. A staff member was injured as a result of the chemical reaction. Others who rushed to their aid have been treated for the inhalation of chlorine gas. The incident happened in a 'non-patient area', according to the hospital trust.

Firefighters checked for elevated readings of chlorine gas and the building has also been ventilated. Fire crews also supported a precautionary evacuation of the basement and ground floors of the building. Two fire engines, two fire rescue units, a command unit and specialist hazardous materials officers attended the scene. They had finished at the scene by around 11am.

A Guy's and St Thomas' spokesperson said: 'The London Fire Brigade attended a chemical incident in a non-patient area of Guy's Hospital today.

'One staff member was injured and several people, who came to the aid of the person, were treated for the inhalation of chlorine gas.'

A London Ambulance Service spokesperson said: 'We sent multiple resources to the scene, including ambulance crews, an incident response officer, and London Ambulance Service HART.

'We treated nine patients, taking four patients to hospital and discharging five patients at the scene.'