Hong Kong Baptist University and Elsevier Report Highlights Global Expansion of Chinese Medicine Research, Urges Standardised Frameworks for Broader Integration
ISTANBUL, TURKEY - Media OutReach Newswire - 18 June 2025 – The report 'Evolving Legacy: Decoding the Scientific Trajectory of Chinese Medicine', released today by Hong Kong Baptist University (HKBU) and Elsevier, reveals the growing impact of Chinese Medicine research on global healthcare and modern health challenges. In this context, 'Chinese Medicine research' encompasses not only clinical practice but also basic and translational science, AI-driven systems medicine, and the network pharmacology of herbal formulas. This first bibliometric analysis of Chinese Medicine research, covering 2014 to 2023, highlights the field's rapid growth, its interdisciplinarity, and its societal impact in Digital Health and Systems Medicine.
HKBU and Elsevier have launched a new report, 'Evolving Legacy: Decoding the Scientific Trajectory of Chinese Medicine', highlighting the growing global impact of Chinese Medicine research on diversified healthcare solutions.
Key findings from the report include:
Bridging a global integration gap
Chinese Medicine's integration into mainstream healthcare still lacks a global, transparent, evidence-based framework. To elevate the field's scientific profile, the report identifies three key strategies: developing internationally accepted clinical guidelines through expert consensus, enhancing trial reporting standards to improve research quality, and fostering interdisciplinary, cross-regional collaboration to drive innovation and systemic impact.
'"Evolving Legacy" demonstrates how Chinese Medicine research has matured into a global, evidence-informed field,' said Professor Lyu Aiping, Vice-President (Research & Development), HKBU, and a leading expert in Chinese Medicine. 'We envision a future where data science and AI illuminate what Chinese Medicine research has long intimated, providing deeper insights into health classifications and compound interventions. The next frontier is using Chinese Medicine research to uncover insights possibly overlooked in Digital Health and Systems Medicine, strengthening their scientific foundation and impact.'
'This report offers unprecedented insights into Chinese Medicine's evolving scientific landscape,' added Basak Candemir, Business Development Director, Analytical and Data Services, Elsevier. 'It illuminates the growing global impact of Chinese Medicine research and its role in diversified healthcare solutions, marking a vital step towards bridging traditional knowledge with modern validation for the benefit of global health.'
The report was developed using Elsevier's bibliometric tools, Scopus and SciVal, based on a comprehensive dataset and case studies compiled in collaboration with HKBU Chinese Medicine research experts. The report can be accessed at https://cm-evaluation.scm.hkbu.edu.hk/.
Hashtag: #HKBU
The issuer is solely responsible for the content of this announcement.