Royal Devon consultant wins stroke research award

BBC News · 17-02-2025

A consultant in Devon has been given an outstanding achievement award for his work in improving stroke treatments over the past 25 years, a health trust says.

Prof Martin James had been working with colleagues across 100 UK hospitals to run a £2m research study to enhance stroke care, the Royal Devon University Healthcare NHS Foundation Trust said.

He was also working on integrating AI and machine learning into stroke care, it added.

The award at the UK Stroke Forum recognised his 25 years of pioneering work in stroke research, health bosses said.
'Excited' about AI
Prof James, who is involved with a £1.7m AI research programme funded by the National Institute for Health and Care Research, said he was "honoured" to receive the award.

He said: "Stroke has a profound impact on patients and families, and on society, so it is immensely rewarding to see how research can drive advancements in stroke treatment and improve outcomes for patients.

"I am particularly excited about the potential of AI to make stroke care more personalised and effective, ensuring that every stroke patient gets the best available treatment as quickly as possible."

Related Articles

How Deepseek AI is Transforming Medicine and Climate Science

Geeky Gadgets · 22 minutes ago

What if the next cure for a disease, the key to reversing climate change, or the discovery of an innovative material was hidden in plain sight – buried within mountains of data too vast for human minds to process? Enter Deepseek AI, an innovative platform that is transforming the way science is conducted. Unlike traditional research tools, this advanced system doesn't just analyze data; it uncovers patterns, correlations, and insights that were previously invisible. By combining unparalleled speed and precision, Deepseek is not just accelerating scientific discovery – it's redefining what's possible in fields like medicine, physics, and environmental science.

In this overview, Anastasi In Tech explores how Deepseek is reshaping the landscape of scientific research. From its ability to streamline complex processes to its self-improving algorithms that grow smarter with every dataset, this platform offers a glimpse into the future of innovation. You'll discover how it's already driving breakthroughs in genomics, climate modeling, and material engineering, and why its adaptability makes it an indispensable tool across disciplines. As you read on, consider this: what could humanity achieve if we had the power to solve problems once thought insurmountable?

The Core Strengths of Deepseek

Deepseek AI's power lies in its ability to process and analyze massive datasets with unparalleled speed and precision. By using advanced problem-solving algorithms, it identifies patterns, correlations, and insights that might otherwise remain hidden. This capability is particularly valuable in fields where data complexity can hinder progress, such as genomics, climate science, and material engineering.

For instance, in genomics, Deepseek can analyze genetic data to identify mutations linked to diseases, paving the way for targeted therapies. In climate modeling, it simplifies the analysis of intricate environmental data, allowing more accurate predictions of climate trends. By breaking down complex problems into manageable components, Deepseek AI ensures that researchers like you can focus on interpreting results and applying them to real-world challenges.

What sets Deepseek AI apart is its ability to learn and adapt. As it processes more data, it refines its algorithms, becoming increasingly effective over time. This self-improving nature ensures that it remains an innovative tool, capable of meeting the evolving demands of scientific research.

Applications Across Diverse Scientific Fields

The versatility of Deepseek AI makes it a valuable asset across a wide spectrum of scientific disciplines. Its ability to analyze complex datasets and generate actionable insights has already proven beneficial in areas such as medicine, physics, and environmental science.

  • Medicine: Deepseek AI accelerates drug discovery by simulating molecular interactions, predicting treatment outcomes, and identifying potential therapeutic targets. This reduces the time and cost associated with developing new medications.
  • Physics: It enhances simulations of quantum systems, allowing researchers to explore phenomena that were previously too complex to model accurately.
  • Environmental Science: By analyzing climate data, Deepseek AI supports sustainability efforts, helping to forecast environmental trends and assess the impact of human activities on ecosystems.

These applications highlight the system's adaptability and its potential to drive progress in both established and emerging fields. Whether you are working on curing diseases, understanding the universe, or protecting the planet, Deepseek AI provides the tools needed to achieve your goals.

Video: HUGE Deepseek AI Breakthrough – watch this video on YouTube.

Enhancing Innovation and Efficiency

One of the most significant advantages of Deepseek is its ability to enhance both innovation and efficiency in scientific research. By automating repetitive tasks, such as data cleaning and preliminary analysis, it allows you to dedicate more time to the creative and strategic aspects of your work. This shift not only accelerates the research process but also increases the likelihood of uncovering novel insights.

For example, researchers in renewable energy have used Deepseek AI to develop advanced materials for solar panels, significantly improving their efficiency. Similarly, in material sciences, the system has assisted the discovery of compounds with unique properties, opening up new possibilities for industrial applications.

By delivering deeper insights and reducing the time required for analysis, Deepseek AI enables you to push the boundaries of what is scientifically possible. Its ability to integrate data from multiple sources further enhances its utility, allowing interdisciplinary collaboration and fostering a more holistic approach to research.

Redefining the Future of Scientific Research

The emergence of Deepseek AI signals a pivotal moment in the evolution of scientific research. As artificial intelligence continues to advance, tools like Deepseek AI are set to become standard in laboratories worldwide. For researchers, this means access to more reliable, efficient, and innovative methods for addressing the most pressing scientific questions.

Deepseek's potential to foster interdisciplinary collaboration is particularly noteworthy. By bridging gaps between fields, it enables researchers to combine expertise and tackle complex problems from multiple perspectives. This collaborative approach could lead to breakthroughs that redefine entire areas of study, from healthcare to environmental sustainability.

As science evolves, the role of AI systems like Deepseek will only grow. By integrating advanced data analysis with innovative problem-solving capabilities, it equips you with the tools needed to overcome challenges that once seemed insurmountable. Whether your focus is on medicine, physics, or environmental science, Deepseek AI is poised to play a central role in shaping the future of research and innovation.

Media Credit: Anastasi In Tech

‘End of homo sapiens?’ Horrifying warning from top Scottish AI expert

The Herald Scotland · 9 hours ago

The break with the past will be so revolutionary, so all-encompassing, that it could alter the very nature of what it means to be human. Talking to Professor Richard Susskind leaves your head spinning. You end the discussion with a sense that humanity is either cursed or blessed to live through this moment. Whatever the future holds, for good or ill, we're about to see the old world slide away and a new world come roaring into being in our lifetimes.

Scottish author, speaker, and independent adviser to international professional firms and national governments Richard Susskind OBE

The Glaswegian technologist is one of Britain's leading public intellectuals, and an adviser to governments and big business around the world on the impact AI has on society, commerce and humanity. He is currently special envoy for justice and AI to the Commonwealth, and was president of the Society for Computers and Law, as well as technology adviser to the Lord Chief Justice of England and Wales. He holds professorships at Oxford and Strathclyde universities. His new book, How To Think About AI, is an indispensable guide to understanding what artificial intelligence can – or will – do to us, our societies, our nations, and the world.

Susskind's key point is that we have two choices: we either save the world 'with AI', or save the world 'from AI'. If harnessed correctly AI could usher in a near utopia. However, if allowed to run out of control, and kept in the hands of a few super-rich tech barons, then dystopia might be too soft a term for what the future holds.

The clock is ticking. AI is moving at an incredible pace and unless citizens get to grips with what's coming we won't be equipped to demand that our politicians make the right choices. That, says Susskind, is why he's written this book: so ordinary people have the facts, figures and arguments required at their fingertips.

The Herald on Sunday caught up with Susskind at his Hertfordshire home. Casually dressed and brimming with enthusiasm for his subject, he explained that the biggest issue we need to get our heads around is the arrival of artificial general intelligence (AGI). In broad terms, that's AI which can understand and learn just like humans and perform any task we can.

Profound

Currently, the most commonly used AI is generative AI, the sort of technology found in ChatGPT, where machines perform functions like creating text and images, pulling research together, or automating customer service, based on the data it has been fed. The gulf between the two forms of AI is huge.

Susskind believes that AGI may be a reality within the next five to 10 years. Its arrival would upend the world unless we're prepared. AGI would mean 'machines could match human performance across all cognitive tasks. They'll be able to do anything we can do with our remarkable brains. Historically, that was regarded as science fiction'. But not any more.

'The more I thought about this, the more I thought how ill-prepared we are as humanity for what would be the single most significant development in our history: that machines outperform us. The implications are so profound for you, me, our kids, government, warfare, everything.'

Among technology experts and developers, the idea that 'we should plan for AGI between 2030/35 is now pretty mainstream-thinking'. He says: 'I want to urge people to ask this question: what if AGI?'

Over the short term – 'the next two or three years' – AI will mostly lead to 'automation', where we 'take out humans and plug in machines. That's why there's so much commercial interest in AI because if you're in government or business you can see how to increase productivity and efficiency'.

The companies Susskind advises are all already using AI to 'summarise documents, draft emails, record meetings, produce action plans, create PowerPoints. There's no doubt they're enjoying efficiency gains'. Clearly, even this relatively straightforward form of AI causes human redundancies.

However, the long-term future won't simply be about 'automation, it will be one of innovation and elimination'. Innovation is the upside. 'AI could allow us to do things that previously weren't possible.' Think of AI analysing all the healthcare data in Britain, identifying everyone at risk of cancer, and then creating the drugs required to keep them alive – all without the need for human doctors or scientists.

However, that lack of human involvement leads inevitably to elimination. AGI, Susskind says, 'will bring about a state of affairs where we'll no longer have the need for human service'. For example, 'the future of healthcare', he suggests, could be one of 'non-invasive therapies, where AI finds ways of fixing people without cutting them open, and preventative medicine finding ways of anticipating people's medical difficulties'. AI would, in this case, eliminate the need for surgeons. In medical terms it 'puts the fence at the top of the cliff rather than the ambulance at the bottom'. In other words, the very notion of healthcare would have to be rethought.

The same is true for 'education, justice, the climate and so on', says Susskind. 'The common assumption is that technology will computerise what's already going on. I'm saying it's not going to end up that way. It's going to end up eliminating and fundamentally transforming the way we do much in life.'

Susskind cautions us not to think of AI 'on the same spectrum as social media. It's a different phenomenon altogether'. The earthquake social media caused would be a mere rumble compared to the possible future ushered in by AI.

Transform

AGI will make us question the very nature of work, and of humanity's purpose on Earth. For instance, if AGI does transform medicine so radically that GPs and even surgeons become redundant, how would we respond to that? Susskind asks whether the intellectual 'starting point should be asking what this means for the recipients, the patients' rather than what it means for medics.

'It's not the purpose of law to give lawyers a living any more than it's the purpose of ill health to give doctors a living. Let's be blunt – if we can find cheaper, quicker, better, less intrusive, more convenient ways of solving problems, the market isn't going to show any loyalty to old ways of working.'

He says that, if AI finds a cure for cancer: 'I don't think people would say, 'well hang on, what about the oncologists'.' Candles and wheels still exist today, but we don't see candlemakers or wheelwrights on the high street, Susskind notes. With AI, 'we should expect the same across the professions'.

Societies will have to decide in which cases – if any – humans should remain in the driving seat, despite machines being able to accomplish tasks better. Susskind suggests the jury system may be one such field exempt from AI as we would still want to be judged by our peers, even though machines 'might be better at determining the facts of a dispute'.

We need to start considering 'what humanity would look like without the notion of work'. Susskind believes it's a very 'middle-class' view of the world to feel that work gives meaning to life. Millions of people endure work as drudgery and would be glad 'if they could flick a switch' and end labouring for low wages to make some CEO rich. However, a world without work becomes a dystopia of poverty if those who are surplus to requirements can't earn money.

A future in which AGI replaces vast swathes of humanity in the workplace is filled with 'socio-economic and political risks'. Technology is now 'concentrated in the hands of a very small number of non-state actors'. These giant corporations aren't subject to international law, or much regulation.

Susskind says we need to imagine a world where only 5% of today's workforce is still employed, yet where 'economic productivity is wildly enhanced by AI. The big question is: how would we ensure that the wealth created is distributed from those who own the technology to those who have been disenfranchised through technology'.

Susskind stresses he's 'not remotely Marxist', but these very issues were 'raised by Marx in his objection to capitalism'. Some thinkers believe the arrival of AGI will usher in an era of 'techno-communism' where nobody works, and we can all be writers and artists, or play sports or do whatever fulfils us while AI looks after the world and provides us with all our needs.

But as Susskind says, the tech giants are 'under no obligation to redistribute', adding: 'I keep saying: we haven't thought through the fundamental political and social questions arising. We don't want to be discussing this at the last minute. We want to be discussing it now.'

Disruption

SUSSKIND refers to how anthropologists divide much of human history into a series of stages: 'the era of orality' when knowledge was shared through speech; 'the era of script' when we used laboriously handwritten books; and the 'era of print'. He adds: 'With each transformation you see fundamental changes in societal structure, particularly law, our economic models, and the rules we create.'

The era of AI will cause far more disruption 'so we should expect fundamental change to our political, economic, social and legal institutions'. Trying to adapt our current systems to what's coming is futile. It would be like Victorian engineers using Sumerian clay tablets written in cuneiform to design railways. 'If you ask the question 'what if AGI', you realise we need to fundamentally rethink society.'

To make matters worse, traditionally it's accepted that 'the law lags 10 years behind technology'. Yet in 10 years, we could have AGI. 'We need to snap into action.'

Consider just one facet of human life that's very important but seldom discussed: the company audit. It's statutory and vital, says Susskind, as it ensures all is above board and finances are reported honestly. But in the era of AI, auditors already seem redundant. They take samples of company transactions to compile their annual reports. AI could look at all data, and not annually but constantly. What happens to all those auditors? What happens to the giant companies which today do the audits – firms like Deloitte, PwC and KPMG?

'I'm trying to raise a flare,' says Susskind, 'and say we're institutionally ill-equipped to deal with this emerging technology. The mindset of most leaders is to say 'this is amazing, we can become more efficient'.

'What I'm saying is once you start to think about AGI you realise this isn't an efficiency play, this is going to completely shatter our conception of the labour market. It's going to require us to generate rules and regulations in days rather than years.

'My dilemma is that on one hand I see AI helping us solve some of humanity's greatest challenges like healthcare, education, the climate, and access to justice, but on the other hand I see this mountain range of risks and vulnerabilities.

'That's why I'm saying that balancing the benefits and threats – saving humanity both from AI and with AI – is the defining challenge of our era.'

He adds: 'As humans we must be able to hold two thoughts in our heads: that these systems are both potentially marvellous and potentially harmful. I can see both the horrors and the life-changing potential.

'We need to call up an army of our very best economists, lawyers and technologists to work on this. It's our Apollo programme.'

However, Susskind adds: 'The question is: is it too late? Are we sufficiently advanced in AI, without regulatory constraints, that we're destined to be sharing the planet with a greatly superior capability?'

No political leader is discussing 'technological unemployment and what this means for humanity. When I raise these issues, what I get from most leaders is 'I see what you're saying, Richard, but we've got enough to worry about'. It's the next person's problem, not theirs'. Political and business leaders think too short-term, he says.

Risk

BUSINESS and governments should, evidently, explore AI's financial benefits, but in tandem they should also be future-proofing society against the risk. Indeed, companies must ask themselves 'do we have a sustainable business in five to 10 years'. Again, this means we must get to grips with who ultimately owns AI.

Unlike most previous breakthrough technologies, like nuclear power or telecommunications, which were 'initiated and funded within the state, AI originated in the private sector'. Susskind adds: 'That's novel. There has been greater investment in AI than in any single enabling technology in history.'

Trillions of dollars are going into research. That phenomenal investment alone should squash any notion that AI is all hype and won't progress any further than it has today, he believes. Rather, we need to think about the AI systems that 'aren't yet invented' and the power they might unleash.

'My gut tells me that, by 2030, our lives will have been transformed by technologies that haven't yet been invented. What will be left for humans if machines are performing at such high levels? Can we have meaningful lives without work?'

Essentially, the coming of AGI will force us to ask 'what are humans really for'. In a world without work, religious people may find meaning in life through worship, but what of others? And if we managed to somehow create a world where few of us work yet the financial spoils of AI were relatively fairly shared, then wouldn't we have effectively built a new slave society, a modern Ancient Rome?

If we did create a society built on AI 'slavery', then what might that do to us? Would we mistreat AIs? We're clearly on a path where robotics and AI collide, so would we abuse or mistreat AI automatons? Technology often veers into the darkest recesses of human sexuality and crime. So what might this do to us morally?

'The mind boggles,' Susskind adds. 'We can imagine within 10 years little robots that are our companions, therapists, research assistants, joint authors, our pals. How should we feel about that?' Humans show love to their pets. Would we love AIs? 'We haven't yet begun to think what human-machine relations will be.'

Susskind dismisses claims that AI is all hype and no risk as 'disingenuous and probably dangerous. It's like an asteroid [hitting Earth]. Surely we need to plan for it. You and I may reconvene 10 years from now and say we were worrying unnecessarily but I don't think that's the conversation we'll be having'.

Claims from academics that talking about the risks of AI is 'catastrophising' are simply 'technological myopia', he says. Such views look at AI from the perspective of its technological 'limitations today', not its 'future potential. It underestimates what's going to happen. Look at the scale of the investment, the trajectory of the breakthroughs'.

Race

FROM the 1950s onwards, computer breakthroughs came every five to 10 years, 'now it's every six to 12 months. Look at the market appetite in government and business'. The computing resources used to train AI are 'doubling every six months, that means in 10 years we'll see a one million fold increase'.

Sir Keir Starmer wants to turn Britain into an AI state and 'just let it rip'. America, China and Russia are in an AI arms race. 'There are tens of thousands of start-ups worldwide.'

We're now entering a period of 'accelerating returns', where AI 'will develop the next generation of AI', the technology getting better and better, faster and faster. 'Once these systems begin to self-develop, self-improve, self-propagate we're in a different universe. When I hear people say it's all overblown so pay AI no attention, I think it's irresponsible.'

Even now there are some aspects of AI that we simply cannot fully explain. AI could today, Susskind says, listen to the conversation he and I are having and then write a limerick about it. 'There's no way to explain how it did that. The interesting analogy is that in some ways we humans can't explain our own thoughts. But the point is that we don't really have scientific models yet to explain these incredibly high-performing systems.'

AI can 'give a better summary of a book than most humans. To say that's simply computational statistics – just ones and zeros – is like saying humans are just molecules. It's not a helpful explanation'.

It's routinely said that AI cannot be empathetic. However, it could learn to come up with a simulation of empathy which completely convinces humans. That's an AI psychotherapist. AI may not be creative like a human novelist but it could come up with 'a new configuration of words which are meaningful and impactful' to humans. That's an AI artist. 'We can imagine robots running faster than Usain Bolt.'

Susskind's daughter sent him some music recently. He listened to it, and liked it. Then she told him it was AI-generated. 'In the future, AI just might be wildly better than us.' He wonders if, in years to come, we'll seek out human creations or interactions the way we today might prefer to spend money on handmade furniture rather than mass-produced goods. 'We might feel the same about literature, art and music, but I'm not sure our grandchildren will.'

The coming of AI will shake humanity's sense of self to the foundations. No longer would we be the dominant intelligence on Earth. 'It will have a fundamental psychological effect on us and our perception of our position in the scheme of things. The idea of sharing the planet with entities more capable than us is deeply challenging.'

Susskind speculates that if we get on top of AI early enough we can confine it to a 'zombie' status, where it has 'no consciousness, will, or awareness' but is just 'phenomenally capable but non-sentient'. He adds: 'Our perception of ourselves would be less diminished if we're simply sharing the planet with high-performing zombies.' But if AI becomes self-aware and conscious 'we'd move down a division'. Even if AI just gives the impression of consciousness it would still leave 'this huge question mark hanging over us'.

Scottish author, speaker, and independent adviser to international professional firms and national governments Richard Susskind OBE has a new book out

Explosion

THE possibility exists that 'if we invent machines as intelligent as us then that will be our last invention'. The machine will become the inventor. This could lead to 'an intelligence explosion, where you go from AGI to a super-intelligence that's unfathomably more capable than us. When would this recursive self-improvement stop? That deeply concerns me'.

This super-intelligence hypothesis – 'the AI-evolution hypothesis' – raises profound cosmological questions. AI which continually self-improves at an astonishing rate could find ways to invent space travel and then 'spread out across the cosmos, in due course replacing us'. We'd be but a footnote: the creature which invented the most powerful 'mind' in the universe. 'I find such ideas fascinating and terrifying,' Susskind adds.

An alternative hypothesis is the singularity, 'which says that organic biological humans and digital machines will converge, so the next generation will be digitally-enhanced humans'. The problem with that theory is this: 'If the machines are so much more capable than us, the contribution humans make to this merger will fade over time. That might eliminate us.'

Susskind's 'preoccupation' is that we will advance to AGI and 'that will lead to the super-intelligence hypothesis'. Among technologists, debate now rages over whether we should embrace the idea of super-advanced AI colonising the universe as 'our legacy', or if 'our obligation should be to preserve humanity'.

Susskind comes down on the side of preserving humanity. 'I just think of my family, my friends and the joy humans have, and I want this for more people. My hope is that AGI can improve the wellbeing, health and happiness of humanity rather than populate the cosmos.' He's bewildered why society managed to have deep, intelligent debates in recent years about matters like genetic engineering but has failed to have a 'public conversation' about AI.

If we built a system designed to save the world 'with AI' then, Susskind believes, we could genuinely 'eliminate disease and ill health. That's deliverable'. Each child could have a personalised tutor. Pupils would 'have Aristotle in the afternoon, then art lessons with Michelangelo'. With climate change, AI could 'develop and perfect new sources of power, ways of disposing of carbon – systems far more promising than we mere humans can put together'. AI could 'increase economic productivity' to a point that allowed us to effectively eliminate poverty. But, again, 'that requires the redistribution of the wealth gained by these systems away from the current providers and across the rest of humanity'.

Threat

THEN, of course, there are the consequences of failing to save the world 'from AI'. There are many 'existential and catastrophic threats: the weaponisation of this technology; the unintentional by-products; that it begins to perform in ways that are deeply damaging and not foreseeable. A powerful autonomous system over which we have little control presents major threats to humanity.

'The socio-economic threat is the biggest: what this does to the labour force, our conception of work, the idea that we have these phenomenally unaccountable powerful organisations which own these systems. Then there's the risk these systems just get things wrong.

'All the classic challenges that we've had since the dawn of civilisation come into sharp focus: how do we organise ourselves politically, what is a just distribution of resources, what is a happy, meaningful life?'

Susskind says: 'If we develop AGI – and this does remain an 'if' but not an unlikely 'if' – then in my view this would represent the most significant discontinuity in the history of humanity and society. A greater leap than fire, agriculture, print or industry, partly because AI will match or outperform our most prized and distinguishing feature – our intellect, our brains, our minds.'

He added that 'this revolution could well signal the end of pure homo sapiens, whether through the realisation of transhumanism – if/when we become digitally enhanced, perhaps as the next stage in our evolution – or as some cosmologists believe, we become extinct, in the very long run replaced by the unfathomably capable systems that we have invented.

'That is why I think the question 'what if AGI' is the most pressing and momentous question of our time. The future of humanity could be at stake.'

Mind-bending metaphysical questions are raised by the advances of AI. The technology can now create highly convincing virtual reality worlds. So AGI could eventually create worlds indistinguishable from reality. 'It genuinely leads to the Matrix question,' Susskind adds. If a future AI super-intelligence could create a convincing virtual world, then that means today 'we can't be sure we're not in a virtual world'. In other words, we might already be a computer simulation in a digital universe created by AI.

With technology, it is usually 'people from the dark side' who become early adopters, at a time 'unconstrained by rules, regulations, ethics and qualms'. That's why governments should consider taking power away from corporations and developing state-controlled AI systems. 'That seems to me a very serious policy option,' he adds. It would be one way of ensuring a fairer distribution of AI profits.

Until a few years ago, Susskind was 'irreducibly optimistic about technology'. Today, he's both 'optimistic and pessimistic. AI could be channelled for massive human benefit, but the real risks are so profound that to not be fearful is irrational. That's my call to arms. The first thing we must do is understand what's going on'.

Susskind adds: 'I advise governments. I'm closely connected to governments. I speak to lots of ministers all around the world.' But all he hears from those in power is 'how can we use ChatGPT, rather than any thinking about how, in 10 years, we're going to be in the biggest social crisis we've ever faced'.

'That's why I'm on this mission.'

One of NHS's biggest AI projects is halted after fears it used health data of 57 MILLION people without proper permissions

Daily Mail · 13 hours ago

NHS England has paused a ground-breaking AI project designed to predict an individual's risk of health conditions after concerns were raised that data from 57 million people was being used without the right permissions.

Foresight, which uses Meta's open-source AI model, Llama 2, was being tested by researchers at University College London and King's College London as part of a national pilot scheme exploring how AI could be used to tailor healthcare plans for patients based on their medical history.

But the brakes were applied to the pioneering scheme after experts warned even anonymised records could contain enough information to identify individuals, The Observer reported.

A joint IT committee between the British Medical Association (BMA) and the Royal College of General Practitioners (RCGP) also said they had not been made aware that data collected for research into Covid was now being used to train the AI model.

The bodies have also accused the research consortium, led by Health Data Research UK, of failing to consult an advisory body of doctors before feeding the health data of tens of millions of patients into Foresight. Both the BMA and RCGP have asked NHS England to refer itself to the Information Commissioner over the matter.

Professor Kamila Hawthorne, chair of the RCGP, said the issue was one of 'fostering patient trust' that their data was not being used 'beyond what they've given permission for'.

She said: 'As data controllers, GPs take the management of their patients' medical data very seriously, and we want to be sure data isn't being used beyond its scope, in this case to train an AI programme.

'We have raised our concerns with NHS England, through the Joint GP IT Committee, and the committee has called for a pause on data processing in this way while further investigation takes place, and for NHS England to refer itself to the Information Commissioner.

'Patients need to be able to trust their personal medical data is not being used beyond what they've given permission for, and that GPs and the NHS will protect their right to data privacy.

'If we can't foster this patient trust, then any advancements made in AI – which has potential to benefit patient care and alleviate GP workload – will be undermined.

'We hope to hear more from NHS England in due course, providing definitive and transparent answers to inform our next steps.'

Katie Bramall, BMA England GP committee chair, said: 'For GPs, our focus is always on maintaining our patients' trust in how their confidential data is handled.

'We were not aware that GP data, collected for Covid-19 research, was being used to train an AI model, Foresight.

'As such, we are unclear as to whether the correct processes were followed to ensure that data was shared in line with patients' expectations and established governance processes.

'We have raised our concerns with NHS England through the joint GP IT committee and appreciate their verbal commitment to improve on these processes going forward.

'The committee has asked NHS England to refer itself to the Information Commissioner so the full circumstances can be understood, and to pause ongoing processing of data in this model, as a precaution, while the facts can be established.

'Patients shouldn't have to worry that what they tell their GP will get fed to AI models without the full range of safeguards in place to dictate how that data is shared.'

An NHS spokesperson confirmed that development of the Foresight model had been paused for the time being.
