
Latest news with #OPMs

China unveils drone-mounted quantum device for submarine detection in South China Sea

The Star

25-04-2025



As US-China tensions simmer over submarine operations in the South China Sea, Chinese space scientists have unveiled a breakthrough in magnetic detection technology that could tip the balance in underwater warfare.

A drone-mounted quantum sensor system, tested successfully in offshore trials, achieved picotesla precision for tracking magnetic anomalies and mapping seabed resources while overcoming some severe practical limitations of existing devices, researchers disclosed in a peer-reviewed paper. With such sensitivity, the People's Liberation Army's (PLA) anti-submarine forces could not only pinpoint a submarine but also detect the tail waves it generates, according to previous studies.

Traditional optically pumped magnetometers (OPMs) – widely used in submarine detection – face critical 'blind zones' in low-latitude regions like the South China Sea, where Earth's magnetic field runs nearly parallel to the surface. When the sensor's optical axis aligns too closely with the magnetic field lines, signals weaken dramatically.

Enter the Coherent Population Trapping (CPT) atomic magnetometer. Leveraging quantum interference effects in rubidium atoms, the device exploits Zeeman splitting – energy level shifts caused by magnetic fields – to generate seven microwave resonance signals. These frequencies correlate linearly with magnetic field strength, enabling detection regardless of sensor orientation, according to the researchers.

With a designed sensitivity of 8 picotesla (pT) – on par with Canada's MAD-XR system used by Nato allies – the Chinese system eliminates blind zones while cutting cost and complexity. 'The MAD-XR is too sophisticated and expensive, limiting the scope of practical applications in real life,' said the team led by Wang Xuefeng, a researcher with the Quantum Engineering Research Centre of the China Aerospace Science and Technology Corporation (CASC).

The towed system developed by Wang and his colleagues, described in a paper published in the Chinese Journal of Scientific Instrument on April 16, pairs the CPT sensor with a rotary-wing drone via a 20-metre (65.6ft) cable to minimise electromagnetic interference from the aircraft. A fluxgate magnetometer corrects heading errors, while GPS and ground stations process the data with algorithms that suppress noise, compensate for diurnal geomagnetic shifts, and generate high-resolution anomaly maps.

During trials off Weihai, Shandong province, the drone surveyed a 400 by 300-metre grid with 34 crossover points. Raw data showed 2.517 nanotesla (nT) accuracy, refined to 0.849 nT after error correction – a threefold improvement. Crucially, two independent surveys achieved a 99.8 per cent correlation between their magnetic anomaly maps, with a root mean square error (RMSE) of just 1.149 nT, 'demonstrating outstanding stability in real life tests', Wang's team added.

This is not just about submarines, according to the researchers. At picotesla-level sensitivity, the system can map oil reservoirs, archaeological wrecks and tectonic shifts. Yet defence applications loom large. Unlike the MAD-XR – which uses multiple probes to avoid blind spots, at high expense – the single-probe Chinese system costs just a fraction as much while outperforming it in low-latitude waters, according to the researchers.
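The linear relationship the researchers describe follows from the first-order Zeeman effect: in rubidium-87, adjacent Zeeman-split CPT resonances are separated by roughly 7 Hz for every nanotesla of field, so the spacing between resonance lines reads the field out directly. The short sketch below illustrates that conversion using textbook constants only; it is an illustration of the underlying physics under the assumption of a rubidium-87 sensor and first-order Zeeman shifts, and none of the values or function names come from the paper itself.

```python
# Illustrative only: convert the spacing between adjacent CPT (coherent
# population trapping) resonances into magnetic field strength, using the
# textbook first-order Zeeman formula for the 87Rb ground state.
# Constants are standard physics values; nothing here is taken from the paper.

MU_B = 9.274_009_994e-24   # Bohr magneton, J/T
H    = 6.626_070_15e-34    # Planck constant, J*s

# For 87Rb (g_F = +/- 1/2), adjacent CPT resonances (m1 + m2 differing by 1)
# are separated by (mu_B / 2h) * B, i.e. about 7.0 Hz per nanotesla.
HZ_PER_NT = MU_B / (2 * H) * 1e-9   # ~6.998 Hz/nT

def field_from_cpt_spacing(spacing_hz: float) -> float:
    """Return |B| in nanotesla from the measured spacing (Hz) between
    adjacent CPT resonance lines (first-order Zeeman approximation)."""
    return spacing_hz / HZ_PER_NT

if __name__ == "__main__":
    # In a ~50,000 nT field (typical of low-latitude regions), adjacent
    # resonances sit roughly 350 kHz apart.
    print(f"{field_from_cpt_spacing(350_000):.0f} nT")      # ~50,000 nT
    # A 1 pT change shifts each spacing by only ~7 mHz, which is why
    # picotesla-level readout demands very low noise.
    print(f"{HZ_PER_NT * 1e-3:.3e} Hz per pT")
```

At these scales the drone's own electronics are a far larger magnetic source than anything on the seabed, which is the reason the sensor is towed 20 metres below the aircraft rather than mounted on it.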
Battlefield readiness, however, would require more testing under extreme conditions, which are absent from the published trials. The MAD-XR, by contrast, has been proven by years of operational data from the US, Japan and a few other countries, according to openly available information.

CASC is China's largest aerospace and defence contractor. The Beijing Institute of Aerospace Control Devices also took part in the project.

Scientists in China and some other countries are developing other types of high-performance submarine detectors. The Spin-Exchange Relaxation-Free (SERF) detector, for instance, can reportedly boost sensitivity a thousandfold, into the femtotesla range.
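The trial figures quoted above – crossover accuracy, repeat-survey correlation and RMSE – are standard quality metrics for aeromagnetic surveys. The sketch below shows how such statistics are typically computed; it is not the authors' processing pipeline, and every array, value and name in it is synthetic and assumed for illustration.

```python
# Illustrative only: standard survey statistics on synthetic data.
import numpy as np

def crossover_accuracy(diffs_nT: np.ndarray) -> float:
    """RMS of the differences between the two line readings at each
    crossover point (commonly reported as survey 'accuracy')."""
    return float(np.sqrt(np.mean(np.square(diffs_nT))))

def map_agreement(map_a: np.ndarray, map_b: np.ndarray):
    """Pearson correlation and RMSE between two gridded anomaly maps (nT)."""
    a, b = map_a.ravel(), map_b.ravel()
    corr = float(np.corrcoef(a, b)[0, 1])
    rmse = float(np.sqrt(np.mean((a - b) ** 2)))
    return corr, rmse

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # 34 synthetic crossover differences with ~0.85 nT scatter.
    diffs = rng.normal(0.0, 0.85, size=34)
    print(f"crossover accuracy: {crossover_accuracy(diffs):.3f} nT")

    # Two synthetic repeat surveys of the same anomaly field.
    truth = rng.normal(0.0, 30.0, size=(40, 30))             # anomaly grid, nT
    survey1 = truth + rng.normal(0.0, 1.0, size=truth.shape)  # first pass
    survey2 = truth + rng.normal(0.0, 1.0, size=truth.shape)  # repeat pass
    corr, rmse = map_agreement(survey1, survey2)
    print(f"repeat-survey correlation: {corr:.3f}, RMSE: {rmse:.3f} nT")
```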

AI health warning as researchers say algorithms could discriminate against patients

The Independent

12-04-2025



Artificial intelligence in healthcare has left experts urging caution that a focus on predictive accuracy over treatment efficacy could lead to patient harm. Researchers in the Netherlands warn that while AI-driven outcome prediction models (OPMs) are promising, they risk creating "self-fulfilling prophecies" due to biases in historical data.

OPMs use patient-specific information, including health history and lifestyle factors, to assist doctors in evaluating treatment options. AI's ability to process this data in real time offers significant advantages for clinical decision-making. However, the researchers' mathematical models demonstrate a potential downside: if trained on data reflecting historical disparities in treatment or demographics, AI could perpetuate these inequalities, leading to suboptimal patient outcomes.

The study highlights the crucial role of human oversight in AI-driven healthcare. The researchers emphasise the "inherent importance" of applying "human reasoning" to AI's decisions, ensuring that algorithmic predictions are critically evaluated and do not inadvertently reinforce existing biases. The team created mathematical scenarios to test how AI may harm patient health and suggest that these models "can lead to harm".

'Many expect that by predicting patient-specific outcomes, these models have the potential to inform treatment decisions and they are frequently lauded as instruments for personalised, data-driven healthcare,' the researchers said. 'We show, however, that using prediction models for decision-making can lead to harm, even when the predictions exhibit good discrimination after deployment.

'These models are harmful self-fulfilling prophecies: their deployment harms a group of patients, but the worse outcome of these patients does not diminish the discrimination of the model.'

The article, published in the data-science journal Patterns, also suggests that AI model development 'needs to shift its primary focus away from predictive performance and instead toward changes in treatment policy and patient outcome'.

Reacting to the risks outlined in the study, Dr Catherine Menon, a principal lecturer at the University of Hertfordshire's department of computer science, said: 'This happens when AI models have been trained on historical data, where the data does not necessarily account for such factors as historical under-treatment of some medical conditions or demographics.

'These models will accurately predict poor outcomes for patients in these demographics. This creates a 'self-fulfilling prophecy' if doctors decide not to treat these patients due to the associated treatment risks and the fact that the AI predicts a poor outcome for them.

'Even worse, this perpetuates the same historic error: under-treating these patients means that they will continue to have poorer outcomes. Use of these AI models therefore risks worsening outcomes for patients who have typically been historically discriminated against in medical settings due to factors such as race, gender or educational background.

'This demonstrates the inherent importance of evaluating AI decisions in context and applying human reasoning and assessment to AI judgments.'

AI is currently used across the NHS in England to help clinicians read X-rays and CT scans in order to free up staff time, as well as to speed up the diagnosis of strokes. In January, Prime Minister Sir Keir Starmer pledged that the UK will be an 'AI superpower' and said the technology could be used to tackle NHS waiting lists.

Ian Simpson, a professor of biomedical informatics at the University of Edinburgh, noted that AI OPMs 'are not that widely used at the moment in the NHS'. 'Here they tend to be used in parallel with existing clinical management policies and often either for assisting diagnostics and/or speeding up processes like image segmentation,' he said.

Ewen Harrison, a professor of surgery and data science and co-director of the centre for medical informatics at the University of Edinburgh, said: 'While these tools promise more accurate and personalised care, this study highlights one of a number of concerning downsides: predictions themselves can unintentionally harm patients by influencing treatment decisions.

'Say a hospital introduces a new AI tool to estimate who is likely to have a poor recovery after knee replacement surgery. The tool uses characteristics such as age, body weight, existing health problems and physical fitness. Initially, doctors intend to use this tool to decide which patients would benefit from intensive rehabilitation therapy. However, due to limited availability and cost, it is decided instead to reserve intensive rehab primarily for patients predicted to have the best outcomes.

'Patients labelled by the algorithm as having a 'poor predicted recovery' receive less attention, fewer physiotherapy sessions and less encouragement overall.'

He added that this leads to a slower recovery, more pain and reduced mobility in some patients. 'These are real issues affecting AI development in the UK,' Prof Harrison said.
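The feedback loop the researchers and Prof Harrison describe can be made concrete with a toy simulation. The sketch below is not the model from the Patterns paper; every probability, group label and treatment rule in it is invented for illustration. It simply reproduces the claimed mechanism: a prediction model trained on historically under-treated patients keeps those patients under-treated once its predictions steer care, while its post-deployment discrimination (AUC) still looks respectable.

```python
# Illustrative toy simulation of a "self-fulfilling prophecy" OPM.
# All numbers, groups and treatment effects below are assumptions, not data.
import numpy as np

rng = np.random.default_rng(7)
N = 4_000
group = rng.integers(0, 2, N)          # 1 = historically under-treated group
severity = rng.random(N)               # illness severity, 0 (mild) .. 1 (severe)

def outcomes(treated):
    # Good-outcome probability falls with severity and rises with treatment.
    p_good = np.where(treated, 0.90 - 0.50 * severity, 0.45 - 0.30 * severity)
    return rng.random(N) < p_good      # True = good outcome

# Historical care: the under-treated group rarely received treatment.
hist_treated = rng.random(N) < np.where(group == 0, 0.9, 0.3)
hist_good = outcomes(hist_treated)

# "OPM" stand-in: predicted risk of a poor outcome = historical poor-outcome
# rate within each (group, severity-quartile) cell.
sev_bin = np.minimum((severity * 4).astype(int), 3)
risk = np.zeros(N)
for g in (0, 1):
    for b in range(4):
        cell = (group == g) & (sev_bin == b)
        risk[cell] = 1.0 - hist_good[cell].mean()

# Deployment: scarce intensive treatment goes to patients predicted to do well.
deploy_treated = risk < np.median(risk)
deploy_good = outcomes(deploy_treated)

def auc(score, poor):
    # P(score of a poor-outcome case > score of a good-outcome case); ties = 0.5
    s_pos, s_neg = score[poor], score[~poor]
    greater = (s_pos[:, None] > s_neg[None, :]).mean()
    ties = (s_pos[:, None] == s_neg[None, :]).mean()
    return greater + 0.5 * ties

for g in (0, 1):
    m = group == g
    print(f"group {g}: poor-outcome rate, historical {1 - hist_good[m].mean():.2f}"
          f" -> after deployment {1 - deploy_good[m].mean():.2f}")
print(f"model AUC after deployment: {auc(risk, ~deploy_good):.2f}")
# The under-treated group's outcomes barely move, because the model's own
# prediction now decides who is treated, yet the post-deployment AUC still
# looks respectable - nothing in that metric flags the harm the model causes.
```

The point of the exercise is the one the paper makes: monitoring predictive performance after deployment is not enough, because the metric can stay healthy while the treatment policy built on top of it does damage.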

AI could lead to patient harm, researchers suggest

Yahoo

11-04-2025



Artificial intelligence (AI) could lead to patient harm if model development focuses more on accurately predicting outcomes than on improving treatment, researchers have suggested. Experts warned the technology could create 'self-fulfilling prophecies' when trained on historic data that does not account for demographics or the under-treatment of certain medical conditions. They added that the findings highlight the 'inherent importance' of applying 'human reasoning' to AI decisions.

Academics in the Netherlands looked at outcome prediction models (OPMs), which use a patient's individual features, such as health history and lifestyle information, to help medics weigh up the benefits and risks of treatment. AI can perform these tasks in real time to further support clinical decision-making. The team then created mathematical scenarios to test how AI may harm patient health and suggest that these models 'can lead to harm'.

'Many expect that by predicting patient-specific outcomes, these models have the potential to inform treatment decisions and they are frequently lauded as instruments for personalised, data-driven healthcare,' the researchers said. 'We show, however, that using prediction models for decision-making can lead to harm, even when the predictions exhibit good discrimination after deployment.

'These models are harmful self-fulfilling prophecies: their deployment harms a group of patients, but the worse outcome of these patients does not diminish the discrimination of the model.'

The article, published in the data-science journal Patterns, also suggests that AI model development 'needs to shift its primary focus away from predictive performance and instead toward changes in treatment policy and patient outcome'.

Reacting to the risks outlined in the study, Dr Catherine Menon, a principal lecturer at the University of Hertfordshire's department of computer science, said: 'This happens when AI models have been trained on historical data, where the data does not necessarily account for such factors as historical under-treatment of some medical conditions or demographics.

'These models will accurately predict poor outcomes for patients in these demographics. This creates a 'self-fulfilling prophecy' if doctors decide not to treat these patients due to the associated treatment risks and the fact that the AI predicts a poor outcome for them.

'Even worse, this perpetuates the same historic error: under-treating these patients means that they will continue to have poorer outcomes. Use of these AI models therefore risks worsening outcomes for patients who have typically been historically discriminated against in medical settings due to factors such as race, gender or educational background.

'This demonstrates the inherent importance of evaluating AI decisions in context and applying human reasoning and assessment to AI judgments.'

AI is currently used across the NHS in England to help clinicians read X-rays and CT scans in order to free up staff time, as well as to speed up the diagnosis of strokes. In January, Prime Minister Sir Keir Starmer pledged that the UK will be an 'AI superpower' and said the technology could be used to tackle NHS waiting lists.

Ian Simpson, a professor of biomedical informatics at the University of Edinburgh, noted that AI OPMs 'are not that widely used at the moment in the NHS'. 'Here they tend to be used in parallel with existing clinical management policies and often either for assisting diagnostics and/or speeding up processes like image segmentation,' he said.

Ewen Harrison, a professor of surgery and data science and co-director of the centre for medical informatics at the University of Edinburgh, said: 'While these tools promise more accurate and personalised care, this study highlights one of a number of concerning downsides: predictions themselves can unintentionally harm patients by influencing treatment decisions.

'Say a hospital introduces a new AI tool to estimate who is likely to have a poor recovery after knee replacement surgery. The tool uses characteristics such as age, body weight, existing health problems and physical fitness. Initially, doctors intend to use this tool to decide which patients would benefit from intensive rehabilitation therapy. However, due to limited availability and cost, it is decided instead to reserve intensive rehab primarily for patients predicted to have the best outcomes.

'Patients labelled by the algorithm as having a 'poor predicted recovery' receive less attention, fewer physiotherapy sessions and less encouragement overall.'

He added that this leads to a slower recovery, more pain and reduced mobility in some patients. 'These are real issues affecting AI development in the UK,' Prof Harrison said.


