Yoga, tai chi, walking and jogging best ways to tackle insomnia

The Independent, 2 days ago
Yoga, tai chi, jogging and walking could be the best forms of exercise to help tackle the sleep disorder insomnia, a study suggests.
These workouts are 'well-suited' to be recommended to patients due to their low cost and minimal side-effects, researchers said.
People with insomnia regularly have trouble falling asleep, staying asleep, or wake several times during the night.
It can cause people to have difficulty concentrating or to be tired and irritable during the day.
To explore the effectiveness of different workouts on sleep quality and insomnia, researchers in China analysed 22 trials.
The review included 1,348 patients and 13 different measures to boost sleep, including seven exercise-based approaches: yoga; tai chi; walking or jogging; aerobic plus strength exercise; strength training alone; aerobic exercise combined with therapy; and mixed aerobic exercises.
The study found that yoga, in particular, resulted in an increase in sleep time of almost two hours, and could also cut the amount of time spent awake after falling asleep by nearly an hour.
Walking or jogging could reduce insomnia severity, while tai chi could boost sleep quality.
According to researchers, yoga's focus on body awareness and controlled breathing could help with symptoms of anxiety and depression to help people get a good night's sleep.
Tai chi, an ancient Chinese martial art that involves slow, flowing movements, 'emphasises breath control and physical relaxation', they added, and could boost emotional regulation.
Elsewhere, the study suggests walking or jogging could reduce levels of the stress hormone cortisol, while boosting melatonin, the hormone that regulates sleep cycles.
Researchers said: 'The findings of this study further underscore the therapeutic potential of exercise interventions in the treatment of insomnia.
'Given the advantages of exercise modalities such as yoga, tai chi, and walking or jogging – including low cost, minimal side effects, and high accessibility – these interventions are well-suited for integration into primary care and community health programmes.'
Researchers stressed there were some 'methodological limitations' to some of the trials included in the analysis.
However, they said the study, published in BMJ Evidence Based Medicine, 'provides comprehensive comparative evidence supporting the efficacy of exercise interventions in improving sleep outcomes among individuals with insomnia'.
They also called for large-scale, high-quality trials to confirm and extend their findings.
Other non-exercise-based approaches in the trials included cognitive behavioural therapy (CBT), acupuncture, massage and lifestyle changes.
A number of trials found CBT is 'more effective and has a longer-lasting impact on insomnia than medication', researchers said.
However, they highlighted a number of 'barriers' to CBT, including a lack of trained professionals.

Related Articles

The IVF technique that created healthy babies from three sets of DNA
The Independent, 2 hours ago

Researchers at Newcastle University have demonstrated a pioneering three-person IVF technique known as mitochondrial donation therapy. This advanced therapy is designed to prevent children from inheriting severe health conditions from their mothers. The process involves transferring genetic material from a mother's fertilised egg into a donor's fertilised egg, which has had its nucleus removed, ensuring the resulting embryo has healthy mitochondria. Eight healthy babies, four boys and four girls, have been delivered using this method, successfully avoiding their parents' incurable genetic disorders.

Oxford University Press scraps China-sponsored journal after longstanding protests
The Independent, 2 hours ago

Oxford University Press will cease publishing the Chinese government-sponsored journal, Forensic Sciences Research, after 2025. The decision follows years of controversy regarding the journal's alleged ethical breaches over DNA collection from Uyghur and other ethnic minority groups in China. Critics highlighted concerns that studies published in the journal used DNA samples from minorities without their consent. OUP had previously retracted at least two papers and issued an "expression of concern" for another due to these ethical issues. Forensic Sciences Research will be published by KeAi in China from next year.

Context Rot and Overload: How Feeding AI More Data Can Decrease Accuracy
Geeky Gadgets, 3 hours ago

What happens when the very thing designed to make AI smarter, more context, starts to work against it? Large Language Models (LLMs), celebrated for their ability to process vast amounts of text, face a surprising Achilles' heel: as input lengths grow, their performance often falters. This phenomenon, sometimes referred to as 'context rot', reveals a paradox at the heart of AI: the more we feed these models, the harder it becomes for them to deliver accurate, coherent results. Imagine asking an AI to summarize a 50-page report, only to receive a response riddled with errors or omissions. The culprit isn't just the complexity of the task; it's the overwhelming flood of tokens that dilutes the model's focus and reliability.

In this feature, Chroma explores the intricate relationship between input length and LLM performance, uncovering why even the most advanced models struggle with long and complex inputs. From the role of ambiguity and distractors to the uneven prioritization of context, you'll gain a deeper understanding of the hidden challenges that affect AI's ability to reason and respond. But it's not all bad news: by adopting strategies like summarization and retrieval, users can mitigate these issues and unlock more consistent, reliable outputs. How can we strike the right balance between context and clarity?

How Long Inputs Impact LLM Performance

When tasked with processing long inputs, LLMs often experience a noticeable decline in both accuracy and efficiency. While they excel at short and straightforward tasks, their performance diminishes on extended or intricate inputs. This issue becomes particularly evident in tasks requiring reasoning or memory, where the model must integrate information from multiple sections of the input.
For instance, even seemingly simple tasks like replicating structured data or answering direct questions can become unreliable when the input includes excessive or irrelevant context. The sheer volume of tokens can overwhelm the model, leading to errors, inconsistencies, and a reduced ability to focus on the primary task. These limitations highlight the importance of tailoring inputs to the model's capabilities to maintain performance and reliability.

The Role of Ambiguity and Distractors

Ambiguity and distractors are two critical factors that exacerbate the challenges LLMs face when processing long inputs:

Ambiguity: when input questions or content lack clarity, the model may interpret multiple possible meanings. This often results in outputs that are incorrect, overly generalized, or that fail to address the intended query.

Distractors: irrelevant or misleading information embedded in the input can divert the model's attention from the core task. This reduces the accuracy and reliability of the output, as the model struggles to differentiate between critical and non-essential details.

To mitigate these issues, structure inputs carefully and eliminate unnecessary or confusing elements; doing so helps the model maintain focus and precision, ensuring more consistent and reliable results.

Inconsistent Context Processing

Another significant limitation of LLMs is their inconsistent handling of long inputs.
As the context window fills, the model may prioritize certain sections of the input while neglecting others. This uneven processing can lead to incomplete or irrelevant outputs, particularly for tasks that require a comprehensive understanding of the entire input. For example, summarizing a lengthy document or extracting key details from a complex dataset becomes increasingly error-prone as the input grows. This inconsistency highlights the need for strategies that keep the model focused on the most relevant portions of the input without losing sight of the broader context.

Strategies for Effective Context Management

Two primary approaches have proven effective in optimizing LLM performance on long inputs:

Summarization: condensing lengthy inputs into shorter, more relevant summaries reduces the load on the model. By retaining only the most critical information, summarization improves the accuracy and relevance of the outputs while minimizing the risk of errors caused by excessive or irrelevant context.

Retrieval: using vector databases to fetch only the most pertinent information for a given task can significantly enhance performance. Input data is indexed into a searchable format, allowing the model to access specific pieces of information as needed rather than processing the entire input at once.

In many cases, combining these strategies can yield even better results.
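The retrieval approach described above can be sketched with a toy in-memory index. This is a minimal illustration under invented assumptions: real systems use learned embeddings and a proper vector database, whereas here the "embedding" is just a bag-of-words count vector and the indexed documents and query are made up for the example.

```python
# Toy retrieval sketch: index text chunks as bag-of-words vectors and
# fetch the most similar chunk for a query via cosine similarity.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy 'embedding': a sparse bag-of-words count vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class TinyIndex:
    """Minimal in-memory stand-in for a vector database."""
    def __init__(self):
        self.docs: list[tuple[str, Counter]] = []

    def add(self, text: str) -> None:
        self.docs.append((text, embed(text)))

    def query(self, text: str, k: int = 1) -> list[str]:
        """Return the k chunks most similar to the query."""
        q = embed(text)
        ranked = sorted(self.docs, key=lambda d: cosine(q, d[1]), reverse=True)
        return [doc for doc, _ in ranked[:k]]

index = TinyIndex()
index.add("yoga increased total sleep time in the trials")
index.add("the journal will be published by a new company")
index.add("long inputs can dilute a model's focus")

# Only the most relevant chunk reaches the model, not the whole corpus.
print(index.query("why do long inputs hurt model focus"))
```

The design point is that the model only ever sees the top-k chunks, so the context stays short regardless of how large the indexed corpus grows.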
For example, summarization can refine the input before retrieval, ensuring that the model focuses solely on the most relevant details. This layered approach allows more efficient and accurate processing of complex or lengthy inputs.

Key Takeaways

Despite advances in token capacities and context window sizes, LLMs continue to face challenges when processing long inputs. Ambiguity, distractors, and inconsistent context handling can all hinder their performance and reliability. By employing effective context management strategies like summarization and retrieval, however, you can improve the model's ability to generate accurate and relevant outputs. Understanding these limitations and tailoring your approach to specific use cases is essential for realizing the full potential of LLMs. By structuring inputs carefully and managing context strategically, you can overcome context rot and get more consistent, reliable results from these powerful tools.

Media Credit: Chroma
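The distractor problem discussed earlier can also be sketched in code: score each context chunk for relevance to the question and drop low-scoring chunks before they reach the model. The chunks, question, and threshold below are invented for illustration, and real pipelines would use semantic similarity rather than simple word overlap.

```python
# Toy distractor filter: keep only context chunks that share enough
# vocabulary with the question, discarding likely distractors.

def overlap_score(question: str, chunk: str) -> float:
    """Fraction of the question's distinct words that appear in the chunk."""
    q_words = set(question.lower().split())
    c_words = set(chunk.lower().split())
    return len(q_words & c_words) / len(q_words) if q_words else 0.0

def filter_distractors(question: str, chunks: list[str],
                       threshold: float = 0.25) -> list[str]:
    """Drop chunks whose overlap with the question falls below the threshold."""
    return [c for c in chunks if overlap_score(question, c) >= threshold]

chunks = [
    "the meeting was moved to thursday at 3 pm",
    "lunch options include salad and soup",          # distractor
    "the meeting room is on the second floor",
]
question = "when is the meeting and where is the room"
kept = filter_distractors(question, chunks)
print(kept)  # the lunch chunk is filtered out before prompting
```

Even this crude filter shows the shape of the idea: by pruning irrelevant material up front, the model's limited attention is spent only on chunks that can actually answer the question.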
