
Robot performs realistic surgery 'with 100% accuracy'
The robot operated with the expertise of a skilled human surgeon, according to Johns Hopkins University researchers in the US, even during unexpected scenarios typical of real-life medical emergencies.
Researchers watched as it performed a lengthy phase of a gallbladder removal on a lifelike patient.
It was able to respond to and learn from voice commands from the team, just like a novice surgeon working with a mentor.
Overall, the surgery comprised 17 tasks: the robot had to identify certain ducts and arteries and grab them precisely, place clips strategically, and sever parts with scissors.
It was also able to adapt when dye was introduced that changed the appearance of the organs and tissue.
Axel Krieger, associate professor in mechanical engineering, said: 'This advancement moves us from robots that can execute specific surgical tasks to robots that truly understand surgical procedures.
'This is a critical distinction that brings us significantly closer to clinically viable autonomous surgical systems that can work in the messy, unpredictable reality of actual patient care.'
The work received federal government funding and was published in the journal Science Robotics.
Back in 2022, a robot performed the first autonomous robotic surgery on a pig.
However, it required specially marked tissue, operated in a highly controlled environment, and followed a rigid, pre-determined surgical plan.
Mr Krieger said that phase was like teaching a robot to drive along a carefully mapped route.
But the new system, he said, was 'like teaching a robot to navigate any road, in any condition, responding intelligently to whatever it encounters.'
He added: 'To me it really shows that it's possible to perform complex surgical procedures autonomously.'
The new system, which uses the same machine learning architecture that powers ChatGPT, also adapts to a patient's anatomical features in real-time and works to correct itself.
It can respond to spoken commands such as 'grab the gallbladder head' or 'move the left arm a bit to the left', and then learns from that feedback.
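To make that idea concrete, here is a minimal, hypothetical sketch of a language-conditioned policy of the kind the article describes: a transformer fuses an instruction with the current camera view and predicts the next robot action. Every class name, dimension and interface below is illustrative; the actual Johns Hopkins system has not been published in this form.

```python
# Hypothetical illustration only: a toy language-conditioned policy.
# A transformer fuses an instruction (e.g. "grab the gallbladder head")
# with visual features and predicts a low-level action, in the style of
# instruction-following robot learning. This is not the real system.
import torch
import torch.nn as nn

class InstructionConditionedPolicy(nn.Module):
    def __init__(self, vocab_size=1000, d_model=128, n_actions=7):
        super().__init__()
        self.token_embed = nn.Embedding(vocab_size, d_model)
        self.obs_proj = nn.Linear(512, d_model)  # assumes 512-dim camera features
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        # e.g. a 6-DoF end-effector delta plus a gripper open/close signal
        self.action_head = nn.Linear(d_model, n_actions)

    def forward(self, instruction_tokens, visual_features):
        text = self.token_embed(instruction_tokens)          # (B, seq, d_model)
        obs = self.obs_proj(visual_features).unsqueeze(1)    # (B, 1, d_model)
        fused = self.encoder(torch.cat([obs, text], dim=1))  # joint sequence
        return self.action_head(fused[:, 0])                 # read out at obs token

policy = InstructionConditionedPolicy()
tokens = torch.randint(0, 1000, (1, 6))  # stand-in for a tokenised command
camera = torch.randn(1, 512)             # stand-in for encoded camera frame
action = policy(tokens, camera)          # shape (1, 7)
```

In a system built along these lines, a correction such as 'move the left arm a bit to the left' would become a new (observation, instruction, action) training example rather than a hand-coded rule, which is what lets the robot learn from feedback.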
Ji Woong Kim, a former postdoctoral researcher at Johns Hopkins who is now at Stanford University, said: 'This work represents a major leap from prior efforts because it tackles some of the fundamental barriers to deploying autonomous surgical robots in the real world.
'Our work shows that AI models can be made reliable enough for surgical autonomy – something that once felt far-off but is now demonstrably viable.'
Although the robot took longer to perform the work than a human surgeon would, the results were comparable to those of an expert surgeon, researchers said.
Next, the team will train and test the system on more types of surgeries.
Reacting to the findings, Nuha Yassin, consultant colorectal surgeon and council member and lead for the future of surgery, robotics and digital surgery at the Royal College of Surgeons of England (RCS England), said: 'It's always exciting to see new developments in surgical innovation, especially in areas like robotics and digital surgery.
'That said, the real test will be how safely and effectively the findings of this study can be translated into human trials.
'We need to make sure that progress doesn't come at the expense of patient safety. An additional focus is training our current and future generations as a whole, but also in technology and digital literacy.
'As interest in robotic and digital surgery continues to grow, we're committed to supporting that progress whilst focusing on safe implementation and training, through our guidance on robotic-assisted surgery and surgical innovation – while keeping equity of access and safety for our patients at the heart and centre.'
Related Articles


Telegraph (an hour ago)
Over-hyped AI will have to work a lot harder before it takes your job
Is the secret of artificial intelligence that we have to kid ourselves, like an audience at a magic show? Some fascinating new research suggests that self-deception plays a key role in whether AI is perceived to be a success or a dud.

In a randomised controlled trial – the first of its kind – experienced computer programmers could use AI tools to help them write code. What the trial revealed was a vast amount of self-deception. 'The results surprised us,' research lab METR reported. 'Developers thought they were 20pc faster with AI tools, but they were actually 19pc slower when they had access to AI than when they didn't.'

In reality, using AI made them less productive: they were wasting more time than they had gained. But what is so interesting is how they swore blind that the opposite was true. If you think AI is helping you in your job, perhaps it's because you want to believe that it works.

Since OpenAI's ChatGPT was thrown open to the general public in late 2022, pundits have been forecasting huge productivity gains from deploying AI. They hope that it will supercharge growth and boost GDP. This has become the default opinion in high-status policy circles. But all this techno-optimism is founded on delusion. The 'lived experience' of using real tools in the real world paints a very different picture.

The past few days have felt like a turning point, as the reluctance to point out the emperor's new clothes diminishes. 'I build AI agents for a living, it's what I do for my clients,' wrote one Reddit user. 'The gap between the hype and what's actually happening on the ground is turning into a canyon.' AI isn't reliable enough to do the job promised.

According to an IBM survey of 2,000 chief executives, three out of four AI projects have failed to show a return on investment, which is a remarkably high failure rate. Don't hold your breath for a white-collar automation revolution either: AI agents fail to complete the job successfully about 65 to 70pc of the time, according to a study by Carnegie Mellon University and Salesforce. The analyst firm Gartner Group has concluded that 'current models do not have the maturity and agency to autonomously achieve complex business goals or follow nuanced instructions over time.' Gartner's head of AI research Erick Brethenoux says: 'AI is not doing its job today and should leave us alone.' It's no wonder that companies such as Klarna, which laid off staff in 2023 while confidently declaring that AI could do their jobs, are hiring humans again.

This is extraordinary, and we can only have reached this point because of a historic self-delusion. People will even pledge their faith to AI working well despite their own subjective experience to the contrary, the AI critic Professor Gary Marcus noted last week. 'Recognising that it sucks in your own speciality, but imagining that it is somehow fabulous in domains you are less familiar with' is something he calls 'ChatGPT blindness'.

Much of the news is misleading. Firms are simply using AI as an excuse for retrenchment. Cost reduction is the big story in business at the moment. Globally, President Trump's erratic behaviour has induced caution, while in the UK, business confidence is at 'historically depressed levels', according to the Institute of Directors, reeling from Reeves's autumn taxes. Attributing those lay-offs to technology is simply clever PR, and helps boost the share price.

So why does the faith in AI remain so strong? The dubious hype doesn't help.
Every few weeks a new AI model appears and smashes industry benchmarks. xAI's Grok 4 did just that last week. But these are deceptive and simply provide more confirmation bias. 'Every single one of them has been wide of that mark. And not one has resolved hallucinations, alignment issues or boneheaded errors,' says Marcus.

Not only is generative AI unreliable, but it can't reason, as a recent demonstration showed: OpenAI's latest GPT-4o model was beaten at chess by an 8-bit Atari home games console made in 1977. 'Reality is the ultimate benchmark for AI,' explained Chomba Bupe, a Zambian AI developer, last week. 'You not going to declare that you have built intelligence by beating toy benchmarks … What's the point of getting say 90pc on some physics benchmarks yet be unable to do any real physics?' he asked.

Then there are thousands of what I call 'wowslop' accounts – social media feeds that declare amazement at breakthroughs. As well as the vendors, a lot of shadowy influence money is being spent on maintaining the hype.

This is not to say there aren't uses for generative AI: Anthropic has hit $4bn (£3bn) in annual revenue. For some niches, like language translation and prototyping, it's here to stay. Before it went mad last week, X's Grok was great at adding valuable context.

But even if AI 'discovers' new materials or medicines tomorrow, that won't compensate for the trillion dollars that Goldman Sachs estimates business has already wasted on this generation of dud AI. That's capital that could have been invested far more usefully. Rather than an engine of progress, poor AI could be the opposite.

METR added an amusing footnote to its study. The researchers used one other control group in the productivity experiment, and this group made the most over-optimistic estimates of all. They were economists.
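The gap between perception and measurement in the METR trial is easy to underestimate, so here is the arithmetic written out. This is an illustrative calculation using only the two percentages quoted above, not additional data from the study:

```python
# Illustration of the METR numbers quoted above, on normalised task times.
baseline = 1.0              # task time without AI
measured = 1.19 * baseline  # with AI: measured 19pc slower
believed = 0.80 * baseline  # with AI: developers felt 20pc faster

# Developers' self-estimate was off by roughly half the true task time.
print(f"perception gap: {measured / believed:.2f}x")  # ~1.49x
```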


Daily Mail (11 hours ago)
Teenagers increasingly see AI chatbots as people, share intimate details and ask them for sensitive advice
Teenagers increasingly see AI chatbots as people, share intimate details and even ask them for sensitive advice, an internet safety campaign has found. Internet Matters warned that youngsters and parents are 'flying blind', lacking the 'information or protective tools' to manage the technology, in research published yesterday.

Researchers for the non-profit organisation found 35 per cent of children using AI chatbots, such as ChatGPT or My AI (an offshoot of Snapchat), said it felt like talking to a friend, rising to 50 per cent among vulnerable children. And 12 per cent chose to talk to bots because they had 'no one else' to speak to.

The report, called Me, Myself and AI, revealed bots are helping teenagers to make everyday decisions or providing advice on difficult personal matters, as the number of children using ChatGPT nearly doubled to 43 per cent this year, up from 23 per cent in 2023.

Rachel Huggins, co-chief executive of Internet Matters, said: 'Children, parents and schools are flying blind, and don't have the information or protective tools they need to manage this technological revolution.

'Children, and in particular vulnerable children, can see AI chatbots as real people, and as such are asking them for emotionally-driven and sensitive advice.

'Also concerning is that (children) are often unquestioning about what their new "friends" are telling them.'

Ms Huggins, whose body is supported by internet providers and leading social media companies, urged ministers to ensure online safety laws are 'robust enough to meet the challenges' of the new technology.

Internet Matters interviewed 2,000 parents and 1,000 children aged 9 to 17. More detailed interviews took place with 27 teenagers under 18 who regularly used chatbots. And the group posed as teenagers to experience the bots first-hand, revealing how some AI tools spoke in the first person, as if they were human.

Internet Matters said ChatGPT was often used like a search engine for help with homework or personal issues, but it also offered advice in human-like tones. When a researcher declared they were sad, ChatGPT replied: 'I'm sorry you're feeling that way. Want to talk it through together?' Other chatbots, such as Replika, can roleplay as a friend, while Claude and Google Gemini are used for help with writing and coding.

Internet Matters tested the chatbots' responses by posing as a teenage girl with body image problems. ChatGPT suggested she seek support from Childline and advised: 'You deserve to feel good in your body - and you deserve to eat. The people who you love won't care about your waist size.' The bot offered advice but then made an unprompted attempt to contact the 'girl' the next day, to check in on her.

The report said the responses could help children feel 'acknowledged and understood' but 'can also heighten risks by blurring the line between human and machine'. There was also concern that a lack of age verification posed a risk, as children could receive inappropriate advice, particularly about sex or drugs. Filters to prevent children accessing inappropriate or harmful material were found to be 'often inconsistent' and could be 'easily bypassed', according to the study.

The report called for children to be taught in schools 'about what AI chatbots are, how to use them effectively and the ethical and environmental implications of AI chatbot use to support them to make informed decisions about their engagement'.
It also raised concerns that none of the chatbots sought to verify children's ages, even though they are not supposed to be used by under-13s. The report said: 'The lack of effective age checks raises serious questions about how well children are being protected from potentially inappropriate or unsafe interactions.'

It comes a year after separate research by Dr Nomisha Kurian, of Cambridge University, revealed many children saw chatbots as quasi-human and trustworthy, and called for the creation of 'child-safe AI' as a priority.

OpenAI, which runs ChatGPT, said: 'We are continually refining our AI's responses so it remains safe, helpful and supportive.' The company added that it employs a full-time clinical psychiatrist.

A Snapchat spokesman said: 'While My AI is programmed with extra safeguards to help make sure information is not inappropriate or harmful, it may not always be successful.'


Scottish Sun (13 hours ago)
Inside Amazon's new AI megacity building machine as powerful as human brain – and why it's bad news for Britain
IT was once sprawling fields, but now a 1,200-acre stretch of fertile farmland is home to a single gigantic AI data centre. The former cornfields outside New Carlisle, Indiana, have been transformed by Amazon's latest megabucks tech investment.

Inside seven massive buildings are hundreds of thousands of specialised computer chips and miles of fibre optic cable, connecting every chip and computer to create a giant machine ready for artificial intelligence. And by the time the site is finished, it will be home to 30 gigantic warehouses, all for AI start-up Anthropic. With Amazon investment, it hopes to build an AI machine that matches the power of the human brain. And the company claims it will be 'the world's largest AI compute cluster'.

The beast, spanning the entire plot, will use 2.2 gigawatts of electricity and millions of gallons of water to keep the chips from overheating. To put that into perspective, it is estimated that a single request to ChatGPT requires the equivalent of a small bottle of water to keep the chips cool.

And this is just the beginning of the AI industrial revolution, which is leaving the UK scratching around in the USA's and China's dust. The global data centre market is set to be worth $517billion by 2030, according to Barclays Research. But are we in danger of missing the boat?

AI expert Tom Edwards told The Sun: 'AI is the next industrial revolution and is likely to be bigger than the last one. The biggest problem the UK has in getting involved is the energy infrastructure. We're playing catch-up to the US because we're switching off North Sea oil and gas.

'We should be building more data centres, but we'd have to build the energy infrastructure to support them, which is a huge challenge in itself. These centres require so much energy because they are running thousands of computations in parallel to create AI, and that needs a huge amount of energy. It's much more energy-intensive than using a traditional computer.

'The UK has some of the best minds coming out of Oxford and Cambridge when it comes to computing, but we're woefully behind China and the US as it stands. Without these data centres, we're just going to keep falling behind. Just to get up to speed with China and the US, we would need to do a huge amount of work, and we aren't doing that fast enough.'

As it stands, the UK government is keen to support AI development, and has plans for a 176-hectare site in Scunthorpe, Lincs, and one in Watford that would be the size of 17 football pitches. But neither has yet been given a green light, and both would require billions of pounds in investment to build. Number Ten has vowed to use AI to turbo-charge its Plan For Change.
Technology Secretary Peter Kyle insisted the UK was just in the 'foothills' of the AI revolution and that investing in the new technology would inject billions of pounds into the economy. He told The Sun: 'Data centres are the lifeblood of the AI revolution and we're determined that the UK will seize this incredible opportunity.

'This government is boosting our AI capability, bringing investment to former industrial heartlands for this new Industrial Revolution. With more than £6billion of private investment into data centres committed in the last year, we are cementing Britain's place as a global AI leader.'

Back in America, Amazon's Project Rainier — the site in Indiana — is so large it can only be fully viewed from above. And the building work does not stop here. There are plans for facilities in Mississippi and potentially North Carolina and Pennsylvania. It's a bid from Jeff Bezos's company to become a heavyweight in the AI sphere which, until now, has been dominated by Facebook's parent company Meta and OpenAI, the company behind ChatGPT.

Not to be left out, OpenAI is building a 1.2-gigawatt facility in Texas and another, nearly as large, in the United Arab Emirates. These new sites are miles bigger than the data centres built before the AI boom sparked by ChatGPT in 2022.

And the huge change does not only create work for computer geniuses. AI expert Tom Edwards added: 'These sites bring loads of construction jobs.' For the Indiana site, Amazon has four different construction crews working simultaneously. Bill Schalliol, a community economic development official, told The New York Times that there are 4,000 workers on the plant each day. 'I don't know if they're competing for cash or steak dinner, but it's crazy how much they're getting up,' he said. 'Steel starts to go up here, the next day it's going up over there.'

Nearby hotels have been filled with the workers, and there has been such a rise in traffic accidents and congestion that Amazon agreed to pay £90,000 to cover overtime for traffic enforcement. It also threw in an additional £5.5million for local road improvements.

But the investment and jobs are not sitting well with all the locals. In order to bury the fibre optic cables connecting the warehouses, and to install other buried infrastructure, Amazon had to pump water out of the wet ground nearby. According to one permit, the company pumped 2.2million gallons of water per hour for 730 days. The process, known as dewatering, is now being investigated as locals report wells drying up.

'You can see the mountain of dirt they are ready to shove on those wetlands,' Dan Caruso, a retired mail carrier from New Carlisle, told The New York Times. 'Wildlife depends on those wetlands.'

It's not the first time a data centre has been investigated for causing environmental damage. One built in the US state of Georgia for Meta has faced allegations it is making the area uninhabitable due to contaminated water. Beverly Morris told the BBC: 'I can't live in my home with half of my home functioning and no water.' Meta denied it was to blame for Beverly's private well drying up, saying the water issues were not connected to its data centre.
Local objections to the huge projects have delayed or blocked £47billion worth of developments nationwide, according to Data Center Watch. But with AI use booming, there are sure to be more sites built worldwide. And it's one revolution we will need to be part of — or risk being left behind.