
'AI scientist' discovers that common medication could kill cancer cells
Commonly used non-cancer drugs could help in the treatment of the disease, an 'AI scientist' has discovered.
An AI-powered 'scientist' has made a significant discovery. Working alongside human researchers, the AI model GPT-4 has suggested that combinations of cheap, safe, existing drugs could also be effective at treating cancer.
The research team, led by the University of Cambridge, used the GPT-4 large language model (LLM) to sift through large volumes of scientific literature in order to identify potential new cancer drugs. The results, published in the Journal of the Royal Society Interface, suggest that drugs for conditions such as high cholesterol and alcohol dependence could potentially kill cancer cells.
The researchers asked GPT-4 to identify potential new drug combinations that could have an impact on a type of breast cancer cell which is commonly used in medical research. They instructed the 'AI scientist' to avoid standard cancer drugs and identify medications that would attack cancer cells without harming healthy cells.
They also prompted the AI model to prioritise drugs that were affordable and approved by regulators.
Once GPT-4 had made its suggestions, human scientists tested the chosen drugs to measure their effectiveness against breast cancer cells.
It was found that three of the 12 drug combinations suggested by GPT-4 worked better than current breast cancer drugs. The AI model then learned from these tests and suggested a further four combinations, three of which also showed promising results.
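The workflow described here amounts to a simple supervised loop: prompt the model under constraints, have human scientists test the suggested combinations in the lab, then feed summarised results back for another round. Below is a minimal sketch of how such a loop might be driven with the OpenAI Python SDK; the prompt wording, model name, cell-line description, and feedback string are illustrative assumptions rather than the study's actual protocol.

```python
# Illustrative sketch of a human-supervised LLM hypothesis loop.
# Prompt text, model choice, and the feedback summary are assumptions for
# illustration only; they do not reproduce the Cambridge team's protocol.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CONSTRAINTS = (
    "Suggest combinations of approved, inexpensive, non-oncology drugs that "
    "might selectively kill a commonly studied breast cancer cell line while "
    "sparing healthy cells. Avoid standard chemotherapy agents."
)

def ask_for_combinations(previous_results: str = "") -> str:
    """One round of hypothesis generation; lab feedback is appended when present."""
    prompt = CONSTRAINTS
    if previous_results:
        prompt += "\n\nResults from the last round of lab tests:\n" + previous_results
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are a drug-repurposing research assistant."},
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content

# Round 1: the model proposes combinations; human scientists then test them in vitro.
print(ask_for_combinations())

# Round 2: a summary of the lab results is fed back so the model can refine its ideas.
lab_feedback = "Simvastatin plus disulfiram reduced cancer cell viability; two other pairs did not."
print(ask_for_combinations(previous_results=lab_feedback))
```

The essential step remains the human one: only combinations that hold up in real laboratory tests feed back into the next round of suggestions.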
Simvastatin (commonly used to lower cholesterol) and disulfiram (used to treat alcohol dependence) stood out for their effect on breast cancer cells. And while these drugs are not traditionally associated with cancer care, they could become potential cancer treatments, although they would first have to go through extensive clinical trials.
The researchers have emphasised that AI is not a replacement for scientists, but that AI working under human supervision has the potential to accelerate discovery in areas like cancer research.
Models like GPT-4 have been known to return results that aren't true.
But in scientific research, these incorrect suggestions, which are known as hallucinations, can still lead to new ideas that are worth testing.
'Supervised LLMs offer a scalable, imaginative layer of scientific exploration, and can help us as human scientists explore new paths that we hadn't thought of before,' said Professor Ross King from Cambridge's Department of Chemical Engineering and Biotechnology, who led the research.
'This is not automation replacing scientists, but a new kind of collaboration,' added co-author Dr Hector Zenil from King's College London.
'Guided by expert prompts and experimental feedback, the AI functioned like a tireless research partner—rapidly navigating an immense hypothesis space and proposing ideas that would take humans alone far longer to reach.
'This study demonstrates how AI can be woven directly into the iterative loop of scientific discovery, enabling adaptive, data-informed hypothesis generation and validation in real time.'
Related Articles


Geeky Gadgets, 7 hours ago
OpenAI Codex and Agent API Updates: Now Available for Plus Users
OpenAI has announced new updates to its Codex and Agent API, enhancing accessibility, functionality, and safety for developers. These updates include expanded access to Codex, new internet-enabled capabilities, improved agent development tools, and advancements in voice agent technology.

OpenAI's Codex, an AI-powered tool designed to generate and execute code, is now available to a broader audience. Previously limited to Enterprise, Team, and Pro users, Codex has been extended to ChatGPT Plus subscribers. This move aligns with OpenAI's mission to provide widespread access to advanced AI tools, allowing a wider range of developers to use its capabilities for diverse applications.

Key improvements to Codex include the introduction of controlled internet access during task execution. This feature allows developers to perform tasks such as installing dependencies, testing scripts that require staging servers, and executing complex workflows. To address potential risks, internet access is disabled by default and governed by strict safeguards, including:
– Domain restrictions to limit access to specific websites.
– HTTP method limitations to control the types of requests made.
– Prompt injection monitoring to detect and mitigate malicious inputs.
These measures ensure that developers can innovate securely while maintaining control over their environments, balancing functionality with safety.

Agent API and SDKs: New Tools and Real-Time Capabilities

The Agent API has undergone substantial upgrades, particularly in its development tools. The Agents SDK now supports TypeScript, achieving parity with the existing Python SDK. This addition broadens the programming options available to developers, making it easier to create AI agents with advanced features such as:
– Handoffs for seamless transitions between automated and human interactions.
– Guardrails to enforce safety and compliance.
– Tracing for monitoring agent activity.
– Human-in-the-loop approvals to allow human oversight during critical decision-making processes.
These tools streamline the development process, allowing faster and more secure deployment of AI agents across various industries (a brief illustrative sketch of such an agent appears at the end of this article).

One of the most notable updates is the introduction of the RealtimeAgent feature. This capability allows developers to build voice agents that operate in real time, either on the client or server side. RealtimeAgents come equipped with advanced functionalities, including:
– Automated tool calls to perform tasks dynamically.
– Safety guardrails to prevent misuse and ensure ethical operation.
– Seamless handling of audio input/output and interruptions for smoother interactions.
By integrating these features, the RealtimeAgent enhances the practicality and reliability of voice-based AI systems, opening up new possibilities for real-world applications such as customer service, virtual assistants, and accessibility tools.

Monitoring and Managing AI Agent Performance

To help developers optimize the performance of their AI agents, OpenAI has introduced the Traces Dashboard. This tool provides a detailed visualization of Realtime API sessions, offering insights into key metrics such as:
– Audio input/output performance.
– Tool usage during interactions.
– Interruptions and how they are handled.
By giving developers a clear view of agent performance, the Traces Dashboard helps identify and address potential issues, ensuring smoother operation and improved outcomes.
This level of transparency and control is particularly valuable for developers working on complex or high-stakes applications.

Additionally, the Speech-to-Speech model has been updated to improve its reliability in areas such as instruction following, tool calling, and handling interruptions. The latest version, `gpt-4o-realtime-preview-2025-06-03`, is now available through both the Realtime API and Chat Completions API. These updates enhance the model's ability to support seamless voice-to-voice communication, further expanding its utility in diverse scenarios, including multilingual communication and real-time translation.

Safety and Oversight: A Core Priority

Safety remains a cornerstone of OpenAI's approach to AI development. The latest updates include robust guardrails designed to prevent misuse and ensure ethical operation. Key safety measures include:
– Prompt injection monitoring to protect against malicious inputs that could compromise system integrity.
– Human-in-the-loop mechanisms to allow human operators to intervene when necessary, adding an extra layer of oversight.
– Domain and method restrictions to limit the scope of internet access and reduce potential vulnerabilities.
These safeguards reflect OpenAI's dedication to responsible AI deployment, balancing innovation with accountability. By prioritizing safety, OpenAI aims to build trust in its technologies while allowing developers to explore new possibilities with confidence.

Advancing AI Development with Practical Applications

The updates to OpenAI's Codex and Agent API represent a significant advancement in the field of AI technology. By broadening access to Codex, introducing real-time capabilities through the RealtimeAgent, and enhancing safety mechanisms, OpenAI continues to empower developers to create innovative solutions. These tools are designed to address the challenges of integrating AI into practical applications, offering developers the resources they need to build systems that are both effective and responsible.

The combination of expanded functionality, real-time interaction capabilities, and robust safety measures positions OpenAI's tools as valuable assets for developers across industries. Whether used for automating workflows, improving customer interactions, or enabling accessibility, these updates highlight the growing potential of AI-driven solutions to address real-world needs. By focusing on accessibility, functionality, and safety, OpenAI sets a benchmark for responsible AI development. These updates not only expand the potential of AI technologies but also underscore the importance of ethical considerations in their deployment. As AI continues to evolve, tools like Codex and the Agent API will play a critical role in shaping the future of technology and its applications.
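For readers who want a concrete picture of the agent features listed above, here is a minimal sketch using the existing Python Agents SDK (installable as `openai-agents`); the new TypeScript SDK mirrors the same concepts. The agent names, instructions, and the sample query are illustrative assumptions, not code from OpenAI's announcement.

```python
# Minimal, illustrative sketch of a triage agent with handoffs, using the
# Python Agents SDK. Agent names, instructions, and the query are invented
# for illustration; requires OPENAI_API_KEY in the environment.
from agents import Agent, Runner

billing_agent = Agent(
    name="Billing agent",
    instructions="Answer questions about invoices, charges, and refunds.",
)

support_agent = Agent(
    name="Support agent",
    instructions="Help with technical product issues.",
)

# The triage agent can hand the conversation off to a specialist agent.
triage_agent = Agent(
    name="Triage agent",
    instructions="Route the user to the billing or support agent as appropriate.",
    handoffs=[billing_agent, support_agent],
)

result = Runner.run_sync(triage_agent, "I was charged twice for my subscription.")
print(result.final_output)
```

Guardrails and human-in-the-loop approvals attach to agents and runs in a similarly declarative way, and agent runs are traced so they can be reviewed afterwards in the Traces dashboard.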


Reuters, 18 hours ago
OpenAI to appeal New York Times suit demand asking not to delete any user chats
June 5 (Reuters) - OpenAI said on Thursday that it is appealing a demand in The New York Times' (NYT.N) lawsuit that the ChatGPT maker retain consumer ChatGPT and API customer data indefinitely.


Reuters, a day ago
Artificial Intelligence is now an A+ law student, study finds
June 5 (Reuters) - The latest generation of generative artificial intelligence can ace most law school final exams, a new study has found. OpenAI's newest model, called o3, earned grades ranging from A+ to B on eight spring finals given by faculty at the University of Maryland Francis King Carey School of Law, researchers found in a new paper published on SSRN.

Those high grades represent a significant improvement over previous studies of earlier versions of ChatGPT, also from OpenAI, which scored B's, C's, and even one D when researchers had them take law school finals in 2022 and 2023, according to the paper. Studies conducted earlier by other researchers had also found that ChatGPT earned 'mediocre' grades on law school finals and that, although it improved the speed of legal writing, it did not improve the quality. Researchers have also found that AI can pass the bar exam. Based on the latest study, however, generative AI looks to be catching up to actual high-performing law students.

Unlike ChatGPT, which immediately generates text in response to a user's query, o3 is what is known as a reasoning model. This means that it generates tentative answers and multiple approaches to a question, internally evaluates and revises those responses, and only then produces the final text for the user.

The study's authors, seven law professors from the University of Maryland, graded o3's final answers on the same curve they use for their students. The program's answers earned an A+ in Constitutional Law, Professional Responsibility, and Property. Its answers got an A in Income Taxation and an A- in Criminal Procedure. It scored a B+ in Secured Transactions and Torts, and a B in Administrative Law. The program's answers did well on both multiple choice questions and essays, the study found.

However, there were some limitations to o3's answers. The program's relatively low grade in Administrative Law was attributable to the fact that o3 did not know about the 2024 U.S. Supreme Court opinion in Loper Bright Enterprises v. Raimondo, which overturned the Chevron doctrine that was central to administrative law. That ruling had come shortly after o3's knowledge cutoff date. The o3 program also performed worse on one final when given access to the professor's notes, an unanticipated outcome the researchers attributed to the program being 'distracted' by too much text.

OpenAI did not immediately respond to a request for comment on Thursday about the study's findings. The study's authors wrote that they are already contemplating an updated experiment to determine how much of a cheating threat AI poses, by instructing the program to make occasional spelling and grammar mistakes so that those exams will be difficult to distinguish from ones completed by real students.

Read more:
ChatGPT passes law school exams despite 'mediocre' performance
AI improves legal writing speed, not quality - study
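The distinction the study draws is between a chat model that answers immediately and a reasoning model that deliberates first. As a rough illustration of what querying such a model looks like, here is a minimal sketch using the OpenAI Python SDK; the exam-style question is invented, and it assumes the account has API access to the o3 model.

```python
# Hedged sketch: posing an exam-style question to a reasoning model ("o3")
# via the OpenAI Python SDK. The question is invented for illustration and
# is not one of the Maryland finals; requires OPENAI_API_KEY and o3 access.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="o3",  # reasoning model: it evaluates and revises internally before answering
    messages=[
        {
            "role": "user",
            "content": (
                "A landlord enters a leased apartment without notice to make repairs. "
                "Discuss the tenant's possible claims and the landlord's defenses."
            ),
        }
    ],
)
print(response.choices[0].message.content)
```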