Intermittent Hypoxia-Hyperoxia Therapy (IHHT) Launched: Breakthrough Breathing Therapy for Heart, Brain, and Healthy Aging

The Sun · 14 May 2025

KUALA LUMPUR, MALAYSIA - Media OutReach Newswire - 14 May 2025 - Ai Mediq SA today officially launched Intermittent Hypoxia-Hyperoxia Therapy (IHHT), a science-backed, non-invasive breathing therapy that strengthens the cardiovascular system at its cellular core. By training the body to adapt to fluctuating oxygen levels, IHHT enhances heart function, improves metabolic efficiency, supports cognitive health, and reduces inflammation.
IHHT cycles between low-oxygen (hypoxia) and high-oxygen (hyperoxia) breathing intervals, acting like 'interval training for your cells.' It optimizes oxygen use, lowers inflammation, and boosts resilience without the risks associated with continuous hypoxia.
Rooted in oxygen-sensing molecular mechanisms, a Nobel Prize-winning discovery, IHHT triggers regenerative cellular processes. Devices like ReOxy® and HIBERG® use Self-Regulated Treatment (SRT) technology to personalize sessions based on real-time data such as heart rate and oxygen saturation, ensuring optimal and safe results.
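To make the cycling concrete, the sketch below shows a minimal biofeedback loop of the kind an SRT-style device automates: hold a low-oxygen mixture until blood oxygen saturation (SpO2) dips to a floor, then switch to an enriched mixture until it recovers. Ai Mediq has not published its SRT algorithm, so every interface, threshold, and oxygen fraction here is a hypothetical placeholder, not a description of ReOxy® or HIBERG® behaviour, and certainly not medical guidance.

```python
import time

# Purely illustrative sketch of hypoxia-hyperoxia cycling driven by live
# pulse-oximetry. The real SRT algorithm is proprietary; the thresholds
# and oxygen fractions below are hypothetical placeholders.

HYPOXIC_O2 = 0.12     # hypothetical low inspired-oxygen fraction
HYPEROXIC_O2 = 0.35   # hypothetical enriched fraction for recovery
SPO2_FLOOR = 85       # hypothetical SpO2 (%) at which recovery begins
SPO2_CEILING = 97     # hypothetical SpO2 (%) at which the next cycle begins

def run_ihht_session(read_spo2, set_o2_fraction, cycles=5, poll_s=1.0):
    """Alternate hypoxic and hyperoxic intervals, switching on live SpO2.

    read_spo2: callable returning the current pulse-oximeter reading (%).
    set_o2_fraction: callable setting the inspired oxygen fraction.
    Both are hypothetical device interfaces supplied by the caller.
    """
    for _ in range(cycles):
        set_o2_fraction(HYPOXIC_O2)        # low-oxygen interval
        while read_spo2() > SPO2_FLOOR:    # wait for controlled desaturation
            time.sleep(poll_s)
        set_o2_fraction(HYPEROXIC_O2)      # high-oxygen recovery interval
        while read_spo2() < SPO2_CEILING:  # wait for resaturation
            time.sleep(poll_s)
```

Switching on measured saturation rather than on a fixed timer is what makes the session self-regulating: each phase lasts exactly as long as that person's physiology needs.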
Clinical Evidence
A 2023 review published in Metabolites analyzed 16 clinical studies (https://www.mdpi.com/2218-1989/13/2/181), confirming IHHT's benefits across cardiovascular, cognitive, and metabolic health:
--> Cardiac Rehabilitation: 15 sessions improved exercise tolerance, lipid profiles, and quality of life (Glazachev et al., 2017).
--> Cognitive Health: Elderly patients showed a 20% improvement in MoCA (Montreal Cognitive Assessment) scores after IHHT (Serebrovska et al., 2019).
--> Metabolic Syndrome: IHHT reduced LDL cholesterol and inflammatory markers and improved blood sugar control (Bestavashvili et al., 2021).
--> Geriatric Care: Combined therapy led to a 30% greater improvement in mobility and cognitive function (Bayer et al., 2017).
Ripple Effects of IHHT
Beyond these clinically documented benefits, IHHT is also reported to enhance cognitive function, support metabolism and longevity, improve athletic recovery, and reduce systemic inflammation.
From Clinical Evidence to Real-World Application: ReOxy® and HIBERG®
Once reserved for elite athletes and clinical trials, IHHT is now accessible to both medical professionals and health-conscious individuals:
ReOxy® is a CE-certified medical device designed for clinical settings. It delivers fully automated IHHT protocols tailored to individual patients, making it ideal for cardiology, rehabilitation, and integrative medicine practices.
HIBERG® adapts the same core technology for personal use, offering guided breathing sessions to support recovery, stress relief, sleep, and energy balance. It's a powerful wellness solution for anyone seeking performance and longevity.
Both devices are powered by SRT technology, ensuring personalized, safe, and effective treatment in every session.
About Ai Mediq SA
Headquartered in Luxembourg, Ai Mediq SA develops advanced breathing therapies backed by clinical research. Its flagship innovations, ReOxy® and HIBERG®, bring cutting-edge, non-invasive solutions into everyday health practices.


Related Articles

GIA Redefines Lab-Grown Diamond Grading Standards: Discontinuation of 4Cs Grading System Enhances Differentiation Between Natural and Lab-Grown Diamonds

The Sun · 4 hours ago

HONG KONG SAR - Media OutReach Newswire - 10 June 2025 - The Gemological Institute of America (GIA), the world's foremost authority in gemology, announced a redefinition of diamond grading on June 2, 2025. From the end of 2025, GIA will cease using the internationally recognized 4Cs grading system (Cut, Colour, Clarity, Carat) for lab-grown diamonds. Instead, GIA will implement a new descriptive system under which lab-grown diamonds submitted to GIA receive simplified descriptors, categorized broadly as either 'premium' or 'standard', or no grade at all if the quality is subpar. This transformative change marks a historic shift in the global diamond industry, not only redefining the value perception of lab-grown diamonds but also enhancing the differentiation between natural and lab-grown diamonds.
This initiative is not merely a terminology adjustment; it represents a systematic effort to separate the grading systems for lab-grown and natural diamonds. As a non-profit organization, GIA emphasizes the fundamental differences between the two, including their formation processes, physical characteristics, and market values. According to Tom Moses, GIA Executive Vice President and Chief Laboratory and Research Officer, 'More than 95% of lab-grown diamonds entering the market fall into a very narrow range of color and clarity. Because of that, it is no longer relevant for GIA to describe man-made diamonds using the nomenclature created for the continuum of color and clarity of natural diamonds.'
Reaffirming the Unique Value of Natural Diamonds
This revision of grading standards is another milestone following GIA's abandonment of the term 'Synthetic' and its move to describe lab-grown diamonds in relation to natural diamond standards. GIA created the 4Cs (cut, colour, clarity, and carat weight) as a rigorous system to help consumers understand the unique qualities of natural diamonds. With the new lab-grown diamond grading system, the core value of natural diamonds, their rarity and emotional attributes, is further emphasized. No two natural diamonds are exactly alike. Every natural diamond is unique, characterized by distinct growth patterns, inclusions, and colour formed over billions of years. These treasures of nature, formed deep within the Earth, are considered valuable collectibles due to their beauty, rarity and non-renewability. They symbolize values associated with love, commitment, and eternity, while also contributing to the social and economic welfare of diamond mining communities through responsible mining practices.
In contrast, lab-grown diamonds are man-made and mass-produced using high-pressure high-temperature (HPHT) or Chemical Vapor Deposition (CVD) processes. Their industrial nature limits their ability to embody the multiple values associated with natural diamonds, including emotional attributes, rarity, investment potential, and heritage. GIA's reform not only reaffirms the irreplaceable status of natural diamonds but also clarifies that lab-grown diamonds should not be assessed using the same criteria as natural diamonds.
Ensuring Consumer Awareness and Transparency
Over time, some lab-grown diamond sellers have used ambiguous marketing terms such as 'sustainability' and 'equivalency' that may mislead consumers about the differences between lab-grown and natural diamonds. GIA's new approach ensures consumers can make informed choices without confusion, protecting their rights to knowledge, choice, and fair trade. By discontinuing the use of the 4Cs standard for lab-grown diamonds, GIA reaffirms its commitment to scientific integrity and public transparency.
GIA's grading redefinition is poised to have a profound impact on the global jewellery industry. As the change takes effect by the end of 2025, gemological institutes worldwide are anticipated to follow suit. Through GIA's new grading standards, the boundaries between natural and lab-grown diamonds are clearly defined.

PolyU-led research reveals that sensory and motor inputs help large language models represent complex concepts

The Sun · 18 hours ago

HONG KONG SAR - Media OutReach Newswire - 9 June 2025 - Can one truly understand what 'flower' means without smelling a rose, touching a daisy or walking through a field of wildflowers? This question is at the core of a rich debate in philosophy and cognitive science. While embodied cognition theorists argue that physical, sensory experience is essential to concept formation, studies of rapidly evolving large language models (LLMs) suggest that language alone can build deep, meaningful representations of the world. By exploring the similarities between LLM and human representations, researchers at The Hong Kong Polytechnic University (PolyU) and their collaborators have shed new light on the extent to which language alone can shape the formation and learning of complex conceptual knowledge. Their findings also revealed how the use of sensory input for grounding or embodiment (connecting abstract with concrete concepts during learning) affects the ability of LLMs to understand complex concepts and form human-like representations. The study, conducted in collaboration with scholars from Ohio State University, Princeton University and City University of New York, was recently published in Nature Human Behaviour.
Led by Prof. LI Ping, Sin Wai Kin Foundation Professor in Humanities and Technology, Dean of the PolyU Faculty of Humanities and Associate Director of the PolyU-Hangzhou Technology and Innovation Research Institute, the research team selected conceptual word ratings produced by state-of-the-art LLMs, namely ChatGPT (GPT-3.5, GPT-4) and Google LLMs (PaLM and Gemini). They compared these with human-generated ratings of around 4,500 words across non-sensorimotor (e.g., valence, concreteness, imageability), sensory (e.g., visual, olfactory, auditory) and motor (e.g., foot/leg, mouth/throat) domains from the highly reliable and validated Glasgow Norms and Lancaster Norms datasets.
The research team first compared pairs of data from individual humans and individual LLM runs to measure the similarity between word ratings along each dimension in the three domains, using results from human-human pairs as the benchmark. This approach could, for instance, highlight the extent to which humans and LLMs agree that certain concepts are more concrete than others. However, such analyses might overlook how multiple dimensions jointly contribute to the overall representation of a word. For example, the word pair 'pasta' and 'roses' might receive equally high olfactory ratings, but 'pasta' is in fact more similar to 'noodles' than to 'roses' when appearance and taste are considered. The team therefore conducted representational similarity analysis, treating each word as a vector along multiple non-sensorimotor, sensory and motor dimensions, for a more complete comparison between humans and LLMs.
The representational similarity analyses revealed that word representations produced by the LLMs were most similar to human representations in the non-sensorimotor domain, less similar for words in the sensory domain, and most dissimilar for words in the motor domain. This highlights the limitations of LLMs in fully capturing humans' conceptual understanding: non-sensorimotor concepts are represented well, but LLMs fall short when representing concepts that involve sensory information, such as visual appearance and taste, or body movement. Motor concepts, which are less fully described in language and rely heavily on embodied experience, proved even more challenging for LLMs than sensory concepts such as colour, which can be learned from textual data.
In light of these findings, the researchers examined whether grounding would improve the LLMs' performance. They compared more grounded LLMs trained on both language and visual input (GPT-4, Gemini) with LLMs trained on language alone (GPT-3.5, PaLM), and found that the more grounded models incorporating visual input exhibited much higher similarity with human representations. Prof. Li Ping said, 'The availability of both LLMs trained on language alone and those trained on language and visual input, such as images and videos, provides a unique setting for research on how sensory input affects human conceptualisation. Our study exemplifies the potential benefits of multimodal learning, a human ability to simultaneously integrate information from multiple dimensions in the learning and formation of concepts and knowledge in general. Incorporating multimodal information processing in LLMs can potentially lead to a more human-like representation and more efficient human-like performance in LLMs in the future.'
Interestingly, this finding is also consistent with previous human studies indicating representational transfer. Humans acquire object-shape knowledge through both visual and tactile experience, with seeing and touching objects activating the same regions of the brain. The researchers pointed out that, as in humans, multimodal LLMs may use multiple types of input to merge or transfer representations embedded in a continuous, high-dimensional space. Prof. Li added, 'The smooth, continuous structure of embedding space in LLMs may underlie our observation that knowledge derived from one modality could transfer to other related modalities. This could explain why congenitally blind and normally sighted people can have similar representations in some areas. Current limits in LLMs are clear in this respect'.
Ultimately, the researchers envision a future in which LLMs are equipped with grounded sensory input, for example through humanoid robotics, allowing them to actively interpret the physical world and act accordingly. Prof. Li said, 'These advances may enable LLMs to fully capture embodied representations that mirror the complexity and richness of human cognition, and a rose in an LLM's representation will then be indistinguishable from that of humans.'
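For readers curious how a geometry-level comparison of this kind works mechanically, below is a minimal sketch of the representational-similarity idea described in the article. The word list and every rating value are invented for illustration; the actual study used around 4,500 words from the Glasgow and Lancaster norms and ratings generated by the LLMs themselves.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical ratings for illustration only: rows are words, columns are
# rating dimensions (e.g., valence, concreteness, visual, olfactory,
# foot/leg). Values loosely mimic 0-5 norm-style scales.
words = ["pasta", "noodles", "roses"]
human_ratings = np.array([
    [3.0, 4.5, 4.0, 4.2, 0.5],   # pasta
    [3.1, 4.4, 4.1, 3.9, 0.6],   # noodles
    [4.5, 4.6, 4.8, 4.7, 0.4],   # roses
])
llm_ratings = np.array([
    [2.8, 4.3, 3.8, 4.0, 0.7],
    [3.0, 4.2, 4.0, 3.8, 0.8],
    [4.2, 4.5, 4.6, 4.5, 0.5],
])

def similarity_matrix(ratings):
    # Treat each word as a vector over all rating dimensions and compute
    # pairwise correlations between words: the representational geometry.
    return np.corrcoef(ratings)

# Correlate the upper triangles of the human and LLM similarity matrices
# to measure how closely the two representational geometries agree.
iu = np.triu_indices(len(words), k=1)
rho, _ = spearmanr(similarity_matrix(human_ratings)[iu],
                   similarity_matrix(llm_ratings)[iu])
print(f"Human-LLM representational similarity (Spearman rho): {rho:.2f}")
```

With only three invented words this is a toy, but it captures why 'pasta' lands nearer to 'noodles' than to 'roses' once all dimensions are considered jointly, even when a single dimension such as smell rates them alike.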

