
Elsevier unveils Embase AI to transform biomedical data research
Elsevier has launched Embase AI, a generative artificial intelligence tool aimed at changing how researchers and medical professionals access and analyse biomedical data. The tool has been developed in collaboration with the scientific community and is built upon Elsevier's Embase platform, a widely used biomedical literature database. According to feedback from beta users, Embase AI can reduce the time spent on reviewing biomedical data by as much as 50%.
Natural language features
Among its central features, Embase AI allows users to conduct searches in natural language, ranging from basic to complex scientific queries. The system then provides instant summaries of the relevant data and research insights. Each answer comes with a list of linked citations to help users evaluate the evidence and meet medical regulatory expectations.
Unlike some other AI solutions that may obscure data provenance, Embase AI delivers transparency by presenting citations and ensuring that the underlying sources can be cross-checked. The database underpinning Embase AI is updated continuously and includes records such as adverse drug reaction reports, peer-reviewed journal articles, and around 500,000 clinical trial listings from ClinicalTrials.gov. This makes it suitable for a range of professional needs, including medical research, pharmacovigilance, regulatory submissions and the generation of market insights.
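To make the idea of citation-backed answers concrete, here is a minimal sketch, assuming hypothetical record types, field names and a placeholder trial ID rather than Elsevier's actual API, of how a generated summary can keep each statement tied to the source records that support it, so that provenance can be cross-checked.

```python
from dataclasses import dataclass, field

# Hypothetical source record, e.g. a journal article or a ClinicalTrials.gov listing.
@dataclass
class SourceRecord:
    record_id: str
    record_type: str   # e.g. "journal-article", "clinical-trial", "adverse-event-report"
    title: str
    url: str

# One generated statement plus the records that support it.
@dataclass
class CitedStatement:
    text: str
    citations: list[SourceRecord] = field(default_factory=list)

# A complete answer: summary statements, each carrying its own citations.
@dataclass
class Answer:
    query: str
    statements: list[CitedStatement]

    def bibliography(self) -> list[SourceRecord]:
        """Deduplicated list of every record cited anywhere in the answer."""
        seen: dict = {}
        for statement in self.statements:
            for citation in statement.citations:
                seen.setdefault(citation.record_id, citation)
        return list(seen.values())

if __name__ == "__main__":
    trial = SourceRecord("NCT00000000", "clinical-trial",
                         "Placeholder trial listing",
                         "https://clinicaltrials.gov/study/NCT00000000")
    answer = Answer(
        query="adverse events reported for drug X in adults",
        statements=[CitedStatement("Statement drawn from the indexed literature.", [trial])],
    )
    for record in answer.bibliography():
        print(record.record_id, record.title, record.url)
```

The design point is simply that no generated statement exists without an attached, resolvable record, which is what allows the underlying evidence to be checked.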
Expanded access
By enabling natural language querying, Embase AI seeks to open up biomedical data analysis to a broader group of users, including those who may lack advanced technical experience with literature reviews. Information is summarised for swift consumption while retaining the supporting references, limiting the likelihood that important findings are overlooked.
The AI solution uses a dual-stage ranking system to generate summary responses with inline citations. This approach is designed to ensure transparency and help users trust the results. A human-curated hierarchy of medical concepts and their synonyms underpins the system, contributing to the precision and transparency of its outputs. Embase AI's records are updated daily, and its architecture allows the tool to function in real time, searching the platform's full content including peer-reviewed research, clinical trials, preprints and conference abstracts.
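Dual-stage ranking and synonym expansion from a curated vocabulary are well-established retrieval techniques, and the sketch below shows one generic way they can fit together; the concept map, scoring functions and record fields are illustrative stand-ins, not a description of Embase AI's internals. A cheap, recall-oriented first pass scores every record, a second pass re-ranks the shortlist, and the top results are listed with citation markers where a production system would generate a fluent summary.

```python
from collections import Counter

# Tiny stand-in for a curated hierarchy of medical concepts and their synonyms.
CONCEPT_SYNONYMS = {
    "myocardial infarction": ["heart attack", "mi"],
    "acetylsalicylic acid": ["aspirin", "asa"],
}

def expand_query(query: str) -> set:
    """Add curated synonyms for any concept mentioned in the query."""
    q = query.lower()
    terms = set(q.split())
    for concept, synonyms in CONCEPT_SYNONYMS.items():
        names = [concept] + synonyms
        mentioned = any(n in q if " " in n else n in terms for n in names)
        if mentioned:
            for n in names:
                terms.update(n.split())
    return terms

def first_stage(records: list, terms: set, k: int = 10) -> list:
    """Recall-oriented pass: score every record by simple term overlap."""
    def overlap(record):
        words = Counter(record["abstract"].lower().split())
        return sum(words[t] for t in terms)
    return sorted(records, key=overlap, reverse=True)[:k]

def second_stage(candidates: list, terms: set, k: int = 3) -> list:
    """Precision-oriented re-rank of the shortlist (a deliberately crude proxy scorer)."""
    def score(record):
        text = (record["title"] + " " + record["abstract"]).lower()
        return sum(text.count(t) for t in terms) / (1 + len(text.split()))
    relevant = [r for r in candidates if score(r) > 0]
    return sorted(relevant, key=score, reverse=True)[:k]

def summarise(query: str, records: list) -> str:
    """List the top-ranked records with citation markers; a production system
    would generate fluent summary text here instead."""
    terms = expand_query(query)
    top = second_stage(first_stage(records, terms), terms)
    return "\n".join(f"{r['title']} [{r['id']}]" for r in top)

if __name__ == "__main__":
    corpus = [
        {"id": "rec1", "title": "Aspirin after heart attack",
         "abstract": "aspirin reduces recurrence of myocardial infarction"},
        {"id": "rec2", "title": "Unrelated dermatology study",
         "abstract": "topical treatment of eczema"},
    ]
    print(summarise("acetylsalicylic acid and myocardial infarction", corpus))
```

Running the example ranks the cardiology record first because the curated synonym map links "acetylsalicylic acid" to "aspirin" and "myocardial infarction" to "heart attack", which is the role a human-curated concept hierarchy plays in improving precision.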
Security and privacy
Elsevier has stated that Embase AI was developed in accordance with its Responsible AI Principles and Privacy Principles to ensure robust data privacy and security. The company notes that the tool's use of third-party large language models (LLMs) is private, with no user information being stored or used to train public versions of these models. All data is retained solely within Elsevier's protected environment.
"Embase AI is changing the way researchers and other users go about solving problems and helps them save valuable time searching for answers, digesting information, and avoiding the risk of missing valuable insights. Every user should have access to trusted research tools that help them advance human progress, and we remain committed to working in partnership with scientists across academia, life sciences and other innovative industries to ensure that our solutions address their needs. We know that our users seek solutions that they can trust, and we built Embase AI in a way that ensures transparency, explainability and accuracy," said Mirit Eldor, Managing Director, Life Sciences at Elsevier.
Ongoing development
Embase AI is the latest addition to Elsevier's suite of products aimed at supporting the biomedical research community by facilitating discovery, analysis, and evidence synthesis using responsibly developed AI tools underpinned by trusted content. The platform is designed to meet the needs of professionals in roles such as research and development, medical affairs, academic research, knowledge management, and medical education.

Related Articles


Techday NZ
08-07-2025
New Zealand government unveils national strategy for AI adoption
The New Zealand government has released its first national artificial intelligence (AI) strategy, setting out a roadmap to foster innovation and productivity while ensuring responsible and safe AI development. The strategy, titled "New Zealand's Strategy for Artificial Intelligence: Investing with Confidence," aims to provide clarity and confidence for businesses looking to integrate AI into their operations and signals a step-change in the country's approach to emerging technologies.
A response to international trends and local needs
Until now, New Zealand had been the only country in the OECD without a formal AI strategy, despite rapidly advancing AI uptake internationally. The new strategy is designed to help New Zealand catch up with other small, advanced economies and to address a growing gap in AI readiness among local organisations. Government analysis has found that most New Zealand businesses are still in the early stages of adopting AI, with many lacking a clear plan for integrating the technology. The government's announcement highlights the potential for AI to drive significant economic growth, with estimates suggesting the technology could add up to NZ$76 billion to the country's GDP by 2038. Recognising both the opportunities and the challenges, the strategy focuses on building local capability, promoting innovation, and managing risks responsibly.
Light-touch regulation and clear guidance
A key feature of the strategy is its commitment to light-touch regulation. The government has opted not to introduce new AI-specific laws at this stage, instead relying on existing legal frameworks around privacy, consumer protection, and human rights. The aim is to reduce barriers to AI adoption and provide clear regulatory guidance that enables innovation, while still protecting New Zealanders' rights and building public trust. To support safe and responsible use of AI, the government has released voluntary Responsible AI Guidance alongside the strategy. This guidance is intended to help organisations use, develop, and innovate with AI in a way that is ethical, transparent, and aligned with international best practice. The approach is informed by the OECD's AI Principles and emphasises transparency, accountability, and the importance of maintaining public confidence. The strategy also commits to monitoring international developments closely and participating in global efforts to develop consistent approaches to the governance of AI.
Focus on business adoption and workforce skills
Unlike some strategies that prioritise AI research and development, New Zealand's approach is focused primarily on enabling the adoption and application of AI by businesses. The government sees the greatest opportunity in supporting local firms to take up AI technologies, adapt them to New Zealand's needs, and use them to create value in key sectors such as agriculture, healthcare, and education. The strategy acknowledges several barriers to wider AI adoption, including a shortage of skilled workers, a lack of understanding about how to deploy AI effectively, and uncertainty about the regulatory environment. To address these issues, the government is supporting a range of initiatives to build AI skills within the workforce and is providing advice and support to help businesses prepare for and benefit from AI. The strategy is designed to give businesses the confidence to invest in AI, with the government promising to identify and remove any legal or practical obstacles that may hinder innovation. There is also a commitment to ensuring that AI is developed and used in a way that is inclusive and reflects the needs of Māori and other communities.
Supporting both public and private sector innovation
While the new strategy is primarily focused on the private sector, the government has signalled its intention to lead by example in the public sector as well. A separate stream of work, led by the Minister for Digitising Government, is underway to explore how AI can improve public services and support digital transformation across government agencies. By taking a coordinated and enabling approach, the government hopes to position New Zealand as a leader among smaller advanced economies in the responsible adoption of AI. The strategy sets out clear expectations for businesses and government agencies alike, encouraging investment in AI technologies that drive productivity, deliver better services, and help New Zealand compete on the global stage.
Next steps
The government will continue to monitor the rollout of the AI strategy and engage with industry, academia, and communities to ensure that New Zealand's approach remains responsive to technological and social change. The Responsible AI Guidance will be updated as required, and officials will keep a close watch on international developments to ensure New Zealand's regulatory environment remains fit for purpose. With this announcement, New Zealand signals its intention to embrace the opportunities of artificial intelligence, with a focus on responsible, inclusive, and innovation-driven adoption for the benefit of all New Zealanders.


Scoop
24-06-2025
How Academics Are Pushing Back On The For-Profit Academic Publishing Industry
According to the independent news organization the Conversation, five publishing houses control about half the global academic publishing industry's market share. Relx, the parent company of the 'biggest player in this business,' Elsevier, reaped a profit margin of almost 40 percent in 2023, 'rivalling tech giants such as Microsoft and Google,' pointed out the March 2025 article. 'Many of the most trusted and prestigious research journals are owned by commercial publishers,' the Conversation noted. 'For example, the Lancet is owned by Elsevier.'
In 2024, the editorial board for the paleoanthropology bulletin Journal of Human Evolution (JHE) collectively resigned. Besides deficient copyediting and unethical use of AI, which resulted in what the journal Science calls 'scientifically significant errors,' the board accused its publisher, Elsevier, of overcharging. High article processing charges (APCs) are common in the for-profit academic publishing industry. The 2021 paper 'Equitable Open Access Publishing: Changing the Financial Power Dynamics in Academia' notes that high APCs 'exacerbate disparities between funded and unfunded researchers.'
'Traditional academic publishers exploit scholars in several ways,' says Denis Bourguet, co-founder of Peer Community In (PCI), a nonprofit platform that offers 'peer review, recommendation, and publication of scientific articles in open access (OA) for free,' according to its website. Bourguet says common practices within the traditional academic publishing model commodify scholarly knowledge, treating it not as a public good but as a resource to extract profit.
'Researchers produce articles, conduct peer reviews, and often serve as editors, typically without pay, while publishers profit by charging high fees to both authors and readers. With this model, authors must pay substantial article processing charges to publish in open access. Yet, in some journals, since some articles remain behind paywalls, universities and libraries must pay subscriptions to give their members free access to the full content of these journals,' adds Bourguet.
PCI co-founder Thomas Guillemaud notes that costly paywalls make 'access difficult for researchers without institutional support, especially in low-income regions.' He adds that the 'pay-to-read or pay-to-publish model encourages researchers to focus on publishing in prestigious journals for career advancement, sometimes at the expense of research quality. This 'prestige economy' can distort scientific priorities and integrity. Pressures to publish in prestigious journals contribute to issues like irreproducible results, publication bias, and even scientific misconduct.'
According to a 2025 report in the Proceedings of the National Academy of Sciences, despite major advances such as antiretroviral therapy and vaccines during the pandemic, science 'faces challenges due to the incentive systems,' with for-profit publishers trying to 'capitalize on unpaid reviewers and [charging] high fees for sharing and accessing knowledge.'
PCI is one of many academic-led initiatives challenging the dominance of for-profit publishers and, as Guillemaud puts it, 'reshaping scholarly communication.' Lifecycle Journal, for instance, does not charge its authors or readers. It 'is a new transparent model of scholarly communication that aims to put publishing and evaluation in the control of the scholarly community itself,' its website states.
Similarly, SciPost, 'the home of genuine open publishing,' claims, 'We don't charge authors, we don't charge readers, we don't send bills to anybody for our services, and we certainly don't make any profit; we are an academic community service surviving on support from organizations that benefit from our activities. Said otherwise, our system is academia's antidote to APCs.' The Free Journal Network curates and promotes Diamond OA journals that charge neither authors nor readers, ensuring adherence to fair open access principles and supporting a growing ecosystem of scholar-led publications. The French nonprofit publishing platform Centre Mersenne 'endeavors to fight research output's privatization and outrageous profit-making out of the scientific commons,' according to its site. Its 'agenda is to support Diamond Open Access or Gold OA without APC (no fees required to read nor to publish).'
Diamond and Gold are two of many OA publishing models. Journals that use the Diamond Open Access model do not charge fees for readers or authors. Funding comes from academic institutions, research funders, philanthropists, governments, advertisers, and nonprofit organizations. Meanwhile, the Medical College of Wisconsin describes the Green OA model as 'the practice of placing a version of an author's manuscript into a repository, making it freely accessible for everyone… No article processing charges are paid.' The Georgia State University Library also outlines various types of OA models.
Besides adopting the OA model, academics are countering for-profit academic journals by publishing academic-led journals, putting pressure on publishers to lower their fees, renegotiating contracts, and forming consortia.
PCI embraces the Diamond OA model. Its support officer, Barbara Class, explains that its Peer Community Journal is free for authors and readers. This 'removes financial barriers imposed by article processing charges or subscription fees common in for-profit publishing. In addition, PCI publishes peer reviews and editorial decisions openly, promoting transparency and accountability in contrast to the often-opaque evaluation processes performed by for-profit journals.' Class adds, 'PCI focuses on the intrinsic value and quality of research rather than journal-based metrics.'
Guillemaud says PCI is sustained through a 'community-driven funding model based primarily on small, recurring public subsidies from universities, libraries, and research institutions. These institutions contribute annually on a pay-what-you-can basis… allowing broad participation regardless of size or budget. This stable and diversified funding base enables PCI to cover its operational costs without large private donors or charging fees to authors or readers.'
Author Bio: Damon Orion is a writer, journalist, musician, artist, and teacher in Santa Cruz, California. His work has appeared in Revolver, Guitar World, Spirituality + Health, Classic Rock, and other publications. Read more of his work at