Gut bacteria changes could be ‘warning sign for worsening Parkinson's symptoms'


Leader Live · a day ago

Changes to bacteria in the gut could be a warning sign for worsening Parkinson's symptoms, researchers say. With the help of artificial intelligence (AI), scientists have linked these shifts to cognitive decline – problems with memory and learning – in people with the disease.
They suggest that toxins released by the bacteria could one day be used as 'markers' to help medics pinpoint the Parkinson's patients at a higher risk of dementia, as well as to help develop targeted treatments for the disease.
Parkinson's disease is a progressive condition that affects the brain, with symptoms including involuntary shaking, known as tremors, as well as psychological problems such as depression, loss of balance, trouble sleeping and memory issues.
According to the Alzheimer's Society, around a third of people with Parkinson's eventually develop dementia.
Dr Saeed Shoaie, group leader of the quantitative systems biology (QTS) lab at King's College London, said: 'The human gut and oral bacterial communities are increasingly linked to neurodegenerative diseases.
'Disruptions in the gut-brain axis could trigger inflammation and immune responses that contribute to neuronal damage.
'A common gum disease bacterium like Porphyromonas gingivalis has been discovered as a potential driver of Alzheimer's.'
For the study, led by experts at King's College London and published in the journal Gut Microbes, scientists analysed 228 spit and stool samples.
The samples came from 41 patients with Parkinson's and mild cognitive impairment, which causes problems with thinking and memory; 47 patients with Parkinson's and dementia; and 26 healthy controls.
The team found that the gut of people with mild cognitive impairment contained more harmful bacteria, many of which are likely to have come from the mouth.
These bacteria release toxins that can damage gut tissue, promote inflammation and potentially affect the brain, according to the researchers.
For a more in-depth analysis, the team used AI to pinpoint the bacterial species and functions not usually picked up by traditional testing, allowing them to link toxins specifically to cognitive decline.
Dr Shoaie added: 'The emerging evidence underscores the potential importance of maintaining oral and gut health in mitigating or slowing neurodegenerative processes.
'As people with Parkinson's become increasingly reliant on carers, routine practices such as oral hygiene and nutritional intake may be neglected.
'Our findings suggest that promoting a healthy microbiome through consistent oral care, a balanced diet and potentially targeted probiotic interventions could support improved disease management in Parkinson's.'
Dr Frederick Clasen, research associate at the QTS lab, said: 'We don't yet know if the bacteria are causing the cognitive decline or if changes in the body due to Parkinson's allow these bacteria to grow.
'But our findings suggest they may play an active role in worsening symptoms.'
Dr Clasen suggests these markers could one day be used to develop targeted treatments.
'These toxins could be used as biological markers to identify patients at higher risk of dementia in Parkinson's,' he said.
'In the future, they might also be targets for new treatments that protect the brain by changing the gut environment.'


Related Articles

Cutting-edge AI models ‘collapse' in face of complex problems, Apple study finds

The Guardian · 16 hours ago

Apple researchers have found 'fundamental limitations' in cutting-edge artificial intelligence models, in a paper raising doubts about the technology industry's race to develop ever more powerful systems.

Apple claimed in a paper published at the weekend that large reasoning models (LRMs) – an advanced form of AI – faced a 'complete accuracy collapse' when presented with highly complex problems. It found that standard AI models outperformed LRMs in low-complexity tasks, while both types of model suffered 'complete collapse' with high-complexity tasks. Large reasoning models attempt to solve complex queries by generating detailed thinking processes that break down the problem into smaller steps.

The study, which tested the models' ability to solve puzzles, added that as LRMs neared performance collapse they began 'reducing their reasoning effort'. The Apple researchers said they found this 'particularly concerning'.

Gary Marcus, a US academic who has become a prominent voice of caution on the capabilities of AI models, described the Apple paper as 'pretty devastating'. Marcus added that the findings raised questions about the race to artificial general intelligence (AGI), a theoretical stage of AI at which a system is able to match a human at carrying out any intellectual task. Referring to the large language models (LLMs) that underpin tools such as ChatGPT, Marcus wrote: 'Anybody who thinks LLMs are a direct route to the sort [of] AGI that could fundamentally transform society for the good is kidding themselves.'

The paper also found that reasoning models wasted computing power by finding the right solution for simpler problems early in their 'thinking'. As problems became slightly more complex, models first explored incorrect solutions and arrived at the correct ones later. For higher-complexity problems, however, the models would enter 'collapse', failing to generate any correct solutions; in one case, even when provided with an algorithm that would solve the problem, the models failed.

The paper said: 'Upon approaching a critical threshold – which closely corresponds to their accuracy collapse point – models counterintuitively begin to reduce their reasoning effort despite increasing problem difficulty.' The Apple experts said this indicated a 'fundamental scaling limitation in the thinking capabilities of current reasoning models'.

The paper set the LRMs puzzle challenges, such as solving the Tower of Hanoi and River Crossing puzzles, and the researchers acknowledged that the focus on puzzles represented a limitation in their work.

The paper concluded that the current approach to AI may have reached limitations. It tested models including OpenAI's o3, Google's Gemini Thinking, Anthropic's Claude 3.7 Sonnet-Thinking and DeepSeek-R1. Anthropic, Google and DeepSeek have been contacted for comment; OpenAI, the company behind ChatGPT, declined to comment.

Referring to 'generalizable reasoning' – or an AI model's ability to apply a narrow conclusion more broadly – the paper said: 'These insights challenge prevailing assumptions about LRM capabilities and suggest that current approaches may be encountering fundamental barriers to generalizable reasoning.'
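
For context, the Tower of Hanoi puzzle mentioned above has a textbook recursive solution. The short Python sketch below is illustrative only – it is not taken from the Apple paper – but it shows the kind of explicit algorithm the researchers say the models failed to apply even when it was supplied to them.

def hanoi(n, source, target, spare, moves):
    # Move n disks from the source peg to the target peg, using spare as a buffer.
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)  # shift the top n-1 disks out of the way
    moves.append((source, target))              # move the largest remaining disk directly
    hanoi(n - 1, spare, target, source, moves)  # stack the n-1 disks back on top

moves = []
hanoi(3, 'A', 'C', 'B', moves)
print(len(moves), moves)  # 7 moves for 3 disks; the optimum is 2**n - 1 in general

Because the optimal solution grows as 2**n - 1 moves, difficulty can be scaled smoothly by adding disks, which is what makes puzzles like this convenient probes for where reasoning models break down.
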
Andrew Rogoyski, of the Institute for People-Centred AI at the University of Surrey, said the Apple paper signalled that the industry was 'still feeling its way' on AGI and could have reached a 'cul-de-sac' in its current approach. 'The finding that large reasoning models lose the plot on complex problems, while performing well on medium- and low-complexity problems, implies that we're in a potential cul-de-sac in current approaches,' he said.

AI Literature Review Tools Exposed: Which Ones Are Lying to You?

Geeky Gadgets · 16 hours ago

What if the tool you trusted to streamline your research was quietly feeding you lies? In the race to harness AI for academic productivity, literature review tools promise to save hours of tedious work. But here's the catch: not all of them tell the truth. Imagine submitting a paper only to discover that 1 in 4 of your references is fabricated. That's the reality I uncovered while testing three popular AI-powered tools – Manis, Gen Spark, and Gemini AI. The results? Eye-opening. Only one of them delivered the accuracy and reliability essential for serious research, while the others left me questioning their place in academic workflows. If you've ever wondered whether AI can truly be trusted with your literature reviews, this rundown might surprise you.

Andy Stapleton breaks down the performance of these tools based on speed, usability, and – most critically – accuracy. You'll discover which AI tool churned out a 61-page report with near-perfect references, and which one sacrificed credibility for speed. Whether you're a researcher seeking to save time or just curious about the limits of AI in academia, this comparison will help you navigate the trade-offs. By the end, you'll know which tool is worth your trust – and which might lead you astray. Because when it comes to academic integrity, the stakes are too high for guesswork.

Manis: Speed Over Accuracy

Manis demonstrated impressive speed, completing a literature review in just three minutes. It generated a 14-page document with 38 references, making it an appealing option for researchers who prioritize efficiency. However, its accuracy raised concerns. Approximately 16% of the references were either fabricated or inaccurate, posing a risk to the credibility of any research relying on its output.

Key Strengths:
• Exceptional processing speed (3 minutes).
• Organized research themes for easier navigation.
• Downloadable PDF format for immediate use.

Key Weaknesses:
• Moderate fabrication rate (16%).
• Repetition and inaccuracies in references.

Manis is a viable option for generating quick overviews, but its reliability is compromised by the need for thorough manual verification. While its speed is a clear advantage, the trade-off in accuracy limits its utility for rigorous academic research.

Gen Spark: A Balanced but Limited Option

Gen Spark offered a more balanced approach, completing the task in 5-7 minutes. It produced 19 references and demonstrated a reasonable understanding of the research prompt. However, its fabrication rate was higher at 26%, and its limited output made it less suitable for in-depth academic projects.

Key Strengths:
• Moderate processing time (5-7 minutes).
• Reasonable comprehension of research prompts.

Key Weaknesses:
• High fabrication rate (26%).
• Limited number of references (19).
• Output format is less user-friendly compared to competitors.

Gen Spark may serve as a starting point for preliminary research, but its higher error rate and limited scope make it less dependable for detailed academic work. Researchers seeking comprehensive and accurate results may find its limitations restrictive.

Gemini AI: The Benchmark for Reliability

Gemini AI emerged as the most reliable tool among the three tested.
While it required the longest processing time – 20 minutes – it delivered a 61-page document with 105 references. Only 1% of these references were problematic, and the issues were related to accessibility rather than outright fabrication. Gemini AI also stood out for its inclusion of structured data, tables, and up-to-date references, providing a level of detail unmatched by the other tools.

Key Strengths:
• Extensive output (61 pages, 105 references).
• Minimal inaccuracies (1%).
• Inclusion of tables and structured data for clarity.

Key Weaknesses:
• Longest processing time (20 minutes).
• Does not strictly adhere to peer-reviewed sources.
• Lacks integration with reference management tools.

For researchers who value accuracy and depth, Gemini AI is the most dependable choice. While its longer processing time requires patience, its detailed output and low error rate make it a standout tool for academic literature reviews.

Final Assessment

After evaluating all three tools, Gemini AI clearly stands out as the most reliable option for academic literature reviews. Its detailed output, minimal error rate, and comprehensive analysis set it apart, despite its longer processing time. Manis, with its speed and moderate accuracy, is a reasonable alternative for quick overviews, while Gen Spark falls short due to its higher fabrication rate and limited scope.

Final Rankings:
• First Place: Gemini AI for its depth, accuracy, and comprehensive output.
• Second Place: Manis for its speed and relatively low fabrication rate.
• Third Place: Gen Spark due to its higher inaccuracy and limited scope.

Practical Insights for Researchers

AI tools for literature reviews hold significant potential, but they are not without flaws. Regardless of the tool you choose, manual verification remains essential to ensure the accuracy and credibility of your references. Among the tested options, Gemini AI sets the standard for academic productivity, offering a balance of precision and thoroughness that researchers can trust. While Manis and Gen Spark have their merits, they fall short of the reliability and depth required for rigorous academic work. Researchers should weigh their priorities – whether speed, accuracy, or comprehensiveness – when selecting the right tool for their needs.
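
To make that verification point concrete: one quick automated sanity check is whether a citation's DOI actually resolves. The Python sketch below is an illustration (not from the article) and assumes the public Crossref REST API plus the third-party requests library. A resolving DOI is not proof a citation is genuine – the returned title and authors still need comparing by hand – but a DOI that fails to resolve is an immediate red flag for fabrication.

import requests  # third-party HTTP client: pip install requests

def doi_resolves(doi: str) -> bool:
    # Ask the public Crossref API whether this DOI is registered.
    # HTTP 200 means Crossref knows the DOI; 404 means it does not.
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200

print(doi_resolves("10.1038/nature14539"))       # a real, well-known DOI: expected True
print(doi_resolves("10.1234/not.a.real.paper"))  # made-up identifier: almost certainly False
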

UK to become 'AI maker not taker', says Sir Keir Starmer

Sky News · 21 hours ago

The UK will become an "AI maker not an AI taker", according to the prime minister, as he announced millions of pounds of funding to train students in AI. Sir Keir Starmer said the UK public needed to "lean in" and embrace artificial intelligence.

NVIDIA chief executive Jensen Huang, who shared the stage with the prime minister, predicted that in the next 10 years "every industry in the UK will be a tech industry" because of AI. He said the UK has one of the "richest AI communities anywhere in the world" and is the biggest country for AI investment other than the US and China. However, it currently doesn't have the infrastructure needed to become a global AI superpower, he said. "You can't do machine learning without a machine. The ability to build AI supercomputers here in the UK will naturally attract more AI startups and create a better ecosystem," said Mr Huang.

When Labour entered office, it cancelled a planned supercomputer, saying it wasn't funded. It has since announced it wants to increase UK computing power 20-fold by 2030 and build a brand-new supercomputer.

As he announced a series of new investments in training for workers and students, Sir Keir said that within the coming years the government should be able to "look every parent in the eye" and pledge that tech can create a "better future" for their children. "By the end of this Parliament we should be able to look every parent in the eye in every region in Britain and say 'look what technology can deliver for you'," he said.

During the session, he announced a new £187m "TechFirst" scheme to equip secondary school pupils with AI skills for future jobs. He also announced a scheme backed by Google and Microsoft to train 7.5 million workers in AI skills by 2030. Google, Microsoft, IBM, Nvidia, BT and Amazon have signed up to make "high-quality" training materials widely available to workers free of charge over the next five years, according to Number 10.

Sir Keir hosted a private reception at Chequers on Sunday with leading technology bosses and investors, including former Google chief executive Eric Schmidt, Faculty AI co-founder Angie Ma, Google DeepMind chief Demis Hassabis and Scale boss Alex Wang.
