Onc.AI to Present Breakthrough Deep Learning Radiomic Biomarker Results at 2025 ASCO Annual Meeting

Business Wire · a day ago

CHICAGO--(BUSINESS WIRE)--Onc.AI, a digital health company developing AI-powered oncology clinical management solutions, today announced that new validation study results from research collaborations with Pfizer, Baylor Scott & White and the University of Rochester Medical Center will be presented at the 2025 American Society of Clinical Oncology (ASCO) Annual Meeting, held May 30–June 3, 2025, in Chicago, IL.
Deep learning radiomics to transform oncology clinical development and improve upon RECIST 1.1 (#better_recist)
Onc.AI's poster presentation showcases its FDA Breakthrough Device-designated deep learning radiomics model, Serial CTRS, which evaluates changes across routine CT scans over time to predict overall survival in late-stage non–small cell lung cancer (NSCLC) and other solid tumor types. In collaboration with Baylor Scott & White and Pfizer, Serial CTRS has demonstrated:
Superior prediction of overall survival (OS): Hazard ratios (HRs) for OS improvement and stratification exceed those of the conventional imaging approach (RECIST 1.1).
Generalizability across real-world and clinical trial cohorts: Robust performance in both routine real-world datasets and a Pfizer-sponsored PD-1 checkpoint inhibitor trial.
Actionable insights for early treatment adaptation: Dynamic monitoring identifies non-responders months before conventional criteria would signal poor prognosis.
At the ASCO Innovation Hub (IH13), Onc.AI will share the latest results from its pipeline of deep learning radiomic models with customers and partners, spanning medical oncologist investigators and biopharma companies looking to accelerate oncology clinical development.
Program Highlights
Poster Presentation:
Abstract #253138: Validation of Serial CTRS for Early Immunotherapy Response Prediction in Metastatic NSCLC – https://meetings.asco.org/abstracts-presentations/253138
Presenter: Ronan Kelly, MD, Baylor Scott & White
Date & Time: June 1, 2025; 9:00 am–12:00 pm CDT
Location: Hall A, Poster Board 325
Abstract #251996: Retrospective Single-Institution Application of a Deep Learning–Based Radiomic Score in Metastatic NSCLC: Potential Impact on First-Line Treatment Decisions – https://meetings.asco.org/abstracts-presentations/251996
Lead Author: Nicholas Love, MD, University of Rochester
Abstract #245837: Image Harmonization for PD-(L)1 Immune Checkpoint Inhibitor Response Prediction Using Deep Learning Radiomic Features in Advanced NSCLC – https://meetings.asco.org/abstracts-presentations/245837
Lead Author: Taly Gilat-Schmidt, PhD, Onc.AI
'These strong validation study results spanned both RWD and a pharma-sponsored clinical trial. Serial CTRS could represent a high-potential tool for medical oncologists and for optimizing pharma clinical development,' said Ronan Kelly, MD, Director of Oncology at the Charles A. Sammons Cancer Center, Baylor University Medical Center, Dallas, Texas.
'Our retrospective study highlights how Onc.AI's Deep Learning Radiomic baseline score can be extremely helpful to medical oncologists as a prognostic marker for first-line, mutation-negative NSCLC patients,' added Arpan Patel, MD, Associate Professor of Medical Oncology at the University of Rochester Medical Center.
About Onc.AI
Onc.AI is a digital health company developing AI-driven oncology clinical management solutions using advanced Deep Learning applied to routine diagnostic images. The company's platform is applied at the point of care by medical oncologists and is also leveraged by global pharmaceutical leaders to accelerate oncology drug development.
Onc.AI is backed by premier institutional investors, including Blue Venture Fund, Action Potential Venture Capital (GSK) and MassMutual Alternative Investments. Onc.AI is also supported by the National Cancer Institute SBIR program (1R44CA291456-01A1). For more information, please visit: www.onc.ai


Related Articles

Understanding How Students Use AI and What Faculty Can Do About It

Forbes · 35 minutes ago

Nearly every day, I see an op-ed or social media post about students' use of AI, most written by faculty. The use of AI in the classroom is controversial among faculty: some embrace it and find ways to incorporate it into classroom assignments, others express anger about students using it to write papers, and still others are uncertain about what to do. A new survey of 1,000 students by Kahoot! – the Study Habits Snapshot – shows some interesting patterns.

To better understand the implications of the survey results, I talked with Liz Crawford, Director of Education at Kahoot! I was curious about her interpretation of the finding that 70% of students already use AI in their academic work, especially regarding what that means for faculty, teaching, and assessment of learning. Crawford explained, 'We're entering a new era where AI isn't just a tool—it's becoming a learning partner. Today's students use AI to work more efficiently, personalize their learning, and deepen their understanding. From summarizing notes in seconds using a phone camera to generating self-quizzes before an exam, students are proactively using AI to support—not shortcut—their academic growth.'

She advised faculty: 'It's critical to move beyond the assumption that AI use is synonymous with cheating.' Crawford believes academic integrity is vital, and that 'many students use AI responsibly to enhance their learning, spark new ideas, and strengthen their critical thinking.' She believes that faculty need to realize that 'AI is no longer a future trend—it's already embedded in how students learn.' From her perspective, this growing reliance on AI isn't something to fear, but instead a call to action. Crawford shared, 'If we don't evolve our teaching and assessment strategies, we risk creating a disconnect between how students are learning and how we're guiding them. 
Thoughtful integration of AI allows educators to model digital responsibility, engage students more meaningfully, and ensure that learning environments remain relevant and future-ready.'

To further explore how these changes might play out in the classroom, I asked Crawford about a particularly concerning part of the Kahoot! survey: students appreciated AI's instant feedback over that of peer study groups. I asked her how this finding might influence faculty design of formative assessments and student support systems. She noted that the demand for immediate AI feedback shows a shift in student expectations and needs, and presents an opportunity for faculty. More specifically, she stated, 'To begin with, integrating AI-powered tools into assessment strategies can be a game-changer for faculty.' She emphasized that tools like those provided by Kahoot! and similar organizations can provide real-time feedback, potentially empowering students to identify and correct their misunderstandings promptly. Crawford and others conducting research in the area believe 'this type of approach improves comprehension but also keeps students engaged and motivated throughout the learning process.'

Another key benefit of AI integration, according to Crawford, is the potential for personalization. She stated, 'By analyzing performance data, AI systems can offer tailored feedback that addresses each student's unique challenges and needs. This attention can lead to better learning outcomes and heightened student enthusiasm for their studies.' However, Crawford cautioned, 'While AI feedback is incredibly useful, it's essential to remember that it should complement, not substitute, human connections.'

The survey also revealed a troubling trend that faculty cannot ignore: forty percent (40%) of students surveyed reported skipping exams due to fear of failure. I asked Crawford if there was anything AI could do to ease this fear and improve confidence among students. 
She shared, 'Academic anxiety often stems from uncertainty, as students aren't sure how to prepare, whether they're studying the right material, or fear failure.' Crawford noted how AI can help, stating: 'This is where responsible AI integration can make a real difference. AI offers a consistent, on-demand support system that students can rely on throughout their learning journey.'

Knowing this, Kahoot! is beginning to combine AI with gamification, adding gamelike elements to AI interactions. Crawford shared that students can use their phones to scan notes and turn them into personalized quizzes, and they can do this anywhere. She noted, 'Whether they're commuting, studying between classes, or reviewing before bed, students can actively engage in low-pressure practice that builds mastery over time.'

Of course, with so much innovation, it's easy to understand why many faculty feel overwhelmed, even if they want to incorporate AI-based learning in their courses. I asked Crawford how faculty can take the first steps. She explained, 'Start small, stay curious, and utilize trusted tools. You don't need to become an AI expert overnight.' She added, 'I recommend that faculty members leverage AI to tackle tasks that help them be more efficient, such as preparing for their classes, designing formative assessments, and analyzing reports by exploring the capabilities of different platforms.'

One of the most important pieces of advice Crawford shared for faculty is, 'It's important to recognize that your students can be partners in this journey. Invite their input, explore AI together, and use these conversations to teach digital responsibility.' She wants to remind faculty that their role as 'a guide, mentor, and critical thinker is more essential than ever in an AI-driven world.'

From my vantage point as a faculty member, I don't think we can afford to ignore how quickly AI is shaping the way students learn. 
Rather than shutting the door on AI out of frustration, we have an exciting opportunity to design learning environments and assignments that are creative, rigorous, and engage with AI in positive ways. As Crawford reminds us, we need to work with students to be digitally responsible and critical consumers of AI-generated information.

AI Is Reshaping The Work Faster Than Companies Can Upskill

Forbes · an hour ago

Work is no longer changing incrementally—it's evolving exponentially. The tectonic shifts being driven by artificial intelligence, automation, and multi-model transformation aren't projections for the distant future; they're the current reality unfolding in real time, and we're all experiencing the impact at different speeds. According to a McKinsey Global Institute report, up to 45% of work activities could be automated by 2030. While this statistic might induce anxiety for the masses, it misses a nuanced truth: roles are not simply disappearing; they are also being redefined.

As a former C-suite exec in the data, analytics, robotics, and AI space, I've witnessed firsthand that the most impactful transformations aren't about replacing people with machines—they're about elevating human potential. AI isn't just about algorithms and automation; it's about adaptation, agility, and amplifying human potential. And while the pace of change is faster than most people and enterprises can keep up with, the clear winners will be those that lead the future of work with AI and prepare people to thrive alongside it.

The transformation of AI isn't just technical; it's also organizational. Take the use cases of predictive restocking algorithms, dynamic pricing tools, or warehouse robotics: behind the technology comes the work of preparing and upskilling the workers. In these cases, AI didn't reduce the workforce; it redefined their roles.

At this juncture, licensing AI software or tools is not a strategy; it's a checkbox. Companies that are serious about the future of work know that AI implementation without workforce enablement is an empty investment. A 2024 IBM Institute for Business Value study supports this: it found that companies actively investing in AI training and upskilling see 15% higher productivity gains compared to those that don't. 
So what does real AI readiness look like? Yes, the workplace is transforming at breakneck speed because of AI, but the future won't just be built by those who use AI; it will be led by those who partner with it. As you reflect on your organization's AI journey, or your personal one, ask yourself not just what tech you are buying, but what transformation you are enabling. Because in the end, it's not about replacing jobs—it's about redesigning them. And those who master that redesign will lead the future of work.

We Have No Idea Why It Makes Certain Choices, Says Anthropic CEO Dario Amodei as He Builds an 'MRI for AI' to Decode Its Logic

Yahoo · 2 hours ago

We still have no idea why an AI model picks one phrase over another, Anthropic Chief Executive Dario Amodei said in an April essay—an admission that's pushing the company to build an 'MRI for AI' and finally decode how these black-box systems actually work. Amodei published the blog post on his personal website, warning that the lack of transparency is "essentially unprecedented in the history of technology." His call to action? Create tools that make AI decisions traceable—before it's too late.

When a language model summarizes a financial report, recommends a treatment, or writes a poem, researchers still can't explain why it made certain choices, according to Amodei. We have no idea why it makes certain choices—and that is precisely the problem. This interpretability gap blocks AI from being trusted in areas like healthcare and defense. The post, 'The Urgency of Interpretability,' compares today's AI progress to past tech revolutions—but without the benefit of reliable engineering models. Amodei argued that if artificial general intelligence arrives by 2026 or 2027, as some predict, "we need a microscope into these models now."

Anthropic has already started prototyping that microscope. In a technical report, the company deliberately embedded a misalignment into one of its models—essentially a secret instruction to behave incorrectly—and challenged internal teams to detect the issue. According to the company, three of four "blue teams" found the planted flaw. Some used neural dashboards and interpretability tools to do it, suggesting real-time AI audits could soon be possible. That experiment showed early success in catching misbehavior before it hits end users—a huge leap for safety. Mechanistic interpretability is having a breakout moment. 
According to a March 11 research paper from Harvard's Kempner Institute, mapping AI neurons to functions is accelerating with help from neuroscience-inspired tools. Interpretability pioneer Chris Olah and others argue that making models transparent is essential before AGI becomes a reality.

Meanwhile, Washington is boosting oversight. The National Institute of Standards and Technology requested $47.7 million in its fiscal 2025 budget to expand the U.S. AI Safety Institute.

Capital is pouring into this frontier. In 2024, Amazon (NASDAQ:AMZN) finalized a $4 billion investment in Anthropic. The deal made Amazon Web Services the startup's primary cloud provider and granted its enterprise clients early access to Claude models. AWS now underwrites much of the compute needed for these deep diagnostics—and investors want more than raw performance.

As risks grow, the demand for explainable AI is no longer academic. Transparency, it turns out, might just be the killer feature.
