
Mandolin Raises $40M to Improve Access to Life-Saving Therapies for Diseases like Cancer and Alzheimer's Using AI Agents
SAN FRANCISCO--(BUSINESS WIRE)-- Mandolin, the leading AI automation platform for specialty drug access, announced that it has raised $40 million in funding from Greylock Partners, SignalFire, Maverick, and SV Angel, along with Jerry Yang (co-founder of Yahoo!) and Guillermo Rauch (CEO of Vercel). Founded by repeat entrepreneurs Will Yin (CEO) and Rohit Rustagi (COO), Mandolin is used by many of the nation's largest infusion providers, pharmacies, and health systems.
Recent advances in drug development have driven an explosion of specialty therapies addressing rare and chronic conditions like cancers, immune disorders, and Alzheimer's. Specialty therapies represented $250 billion in drug spend in 2024 and are expected to reach $1.5 trillion within eight years. They also account for 75% of the drugs in the FDA's approval pipeline.
Unlike traditional drugs bought at retail pharmacies, these drugs are often administered by healthcare professionals in a clinical setting. They are also processed through an arduous administrative process established by insurance companies, often delaying patients' access to life-changing treatments by weeks.
"Insurance companies make the approval process challenging for specialty medications. Infusion providers, pharmacies, and health systems spend an excessive amount of manpower on basic tasks like checking insurance coverage, submitting prior authorizations, or verifying reimbursement amounts, which can take weeks per prescription and lead to millions in bad debt," said Will Yin, CEO and co-founder of Mandolin. "Leveraging the latest advancements in AI, we saw an opportunity to build autonomous agents that can tackle these workflows for providers in minutes and more reliably."
Mandolin's founders, Will Yin and Rohit Rustagi, share a deep passion and vision for improving the healthcare system with AI. After initially pursuing academic research on conditions like Alzheimer's and cancer that have affected their own families, they saw first-hand the difficulty and delays associated with specialty drug approvals. Recognizing how broken the healthcare system was for accessing these treatments, and armed with the insight that large language models could now reason like the best-performing employees, they joined forces to start Mandolin in 2024.
Mandolin's AI platform automates the end-to-end administrative side of infused and injected drug delivery for providers. Mandolin's AI agents act just like your best employees, completing tasks like reasoning about clinical policies, calling payers, parsing faxes and handwritten notes, and making decisions across entire workflows. They integrate into existing electronic health records (EHRs), payer portals, and manufacturer hubs. By centralizing operational logic and real-time decision-making into a single platform, Mandolin dramatically reduces time-to-treatment from weeks to days, lowers back-office costs, improves billing accuracy, and unlocks visibility into drug usage and patient pathways.
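To make the kind of workflow described above concrete, here is a minimal, hypothetical sketch of how a specialty-drug access pipeline (benefits check, prior authorization, reimbursement verification) might be structured in Python. The class and function names (Referral, check_benefits, submit_prior_auth, and so on) are illustrative assumptions, not Mandolin's actual API or implementation.

# Hypothetical sketch of a specialty-drug access workflow, inspired by the
# description above. Names and logic are illustrative only, not Mandolin's API.
from dataclasses import dataclass

@dataclass
class Referral:
    patient_id: str
    drug: str
    payer: str
    clinical_notes: str  # e.g. text parsed from a fax or handwritten note

def check_benefits(referral: Referral) -> bool:
    """Stub: would query the payer for coverage of the prescribed drug."""
    return referral.payer != "UNKNOWN"

def submit_prior_auth(referral: Referral) -> str:
    """Stub: would assemble clinical evidence against the payer's policy
    and file a prior-authorization request."""
    return f"PA-{referral.patient_id}-{referral.drug}"

def verify_reimbursement(referral: Referral) -> float:
    """Stub: would confirm the expected reimbursement amount before treatment."""
    return 0.0

def process_referral(referral: Referral) -> dict:
    """Run the end-to-end access workflow for one incoming referral."""
    if not check_benefits(referral):
        return {"status": "needs_manual_review"}
    pa_id = submit_prior_auth(referral)
    expected = verify_reimbursement(referral)
    return {"status": "ready_to_schedule", "prior_auth": pa_id,
            "expected_reimbursement": expected}

if __name__ == "__main__":
    print(process_referral(Referral("p001", "infliximab", "ACME_HEALTH", "...")))

In a production system each stub would sit behind payer and EHR integrations, with a language model reasoning over the relevant clinical policy before any request is filed.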
Since the product's launch in January, the industry has been quick to respond. Mandolin is already working with many of the largest US infusion providers, pharmacies, and health systems, including Vivo Infusion, FlexCare Infusion, OI Infusion, TwelveStone Health Partners, and Amber Specialty Pharmacy. Across customers, Mandolin is deployed in over 700 clinic locations and serves over 250,000 new patients a year.
"Mandolin has been nothing short of transformational for our business. Tasks that used to take days, now happen in under an hour,' said Cannon Loughry, COO of TwelveStone. 'We've automated key workflows across billing, patient communication, and insurance verification, driving real gains in terms of better cash flow and the reduction of headcount as we scale. Mandolin acts as an AI employee integrated directly into our core systems. We just tell it what needs to get done, and it does the work.'
"Will and Rohit saw the opportunity to bring agentic AI into a system drowning in paperwork, delays, and revenue leakage," said Jerry Chen, partner at Greylock Partners. "Despite being founded a year ago, Mandolin is already proving that AI agents can unlock faster treatment for patients and far better economics for providers. We at Greylock are thrilled to partner with them on their journey."
About Mandolin
Mandolin is the leading AI automation platform for specialty drug access. The company's AI agents act just like a company's best employees, completing tasks like reasoning about clinical policies, calling payers, parsing faxes and handwritten notes, and making decisions across entire workflows. Backed by Greylock, SignalFire, Maverick, and SV Angel, Mandolin works with the largest consolidated infusion providers, specialty and home infusion pharmacies, and health systems in the country. For more information, visit: https://www.mandolin.com/

Related Articles


KAYTUS Enhances KSManage for Intelligent Management of Liquid-Cooled AI Data Centers
SINGAPORE--(BUSINESS WIRE)--KAYTUS, a leading provider of end-to-end AI and liquid cooling solutions, has announced the release of the enhanced KSManage V2.3, its advanced device management platform for AI data centers. The latest version introduces expanded monitoring and control capabilities tailored for GB200 and B200 systems, including integrated liquid cooling detection features. Leveraging intelligent automation, KSManage V2.3 enables AI data centers to operate with greater precision, efficiency, and sustainability, delivering comprehensive, refined management across IT infrastructure and maximizing overall performance.

As Generative AI technology accelerates, AI data centers have emerged as critical infrastructure for enabling innovations in artificial intelligence and big data. Next-generation devices such as NVIDIA's B200 and GB200 are being rapidly adopted to meet growing AI compute demands. However, their advanced architectures differ substantially from traditional systems, driving the need for more sophisticated management solutions. For instance, the GB200 integrates two B200 Blackwell GPUs with an Arm-based Grace CPU, creating a high-performance configuration that poses new management challenges. From hardware status monitoring to software scheduling, more precise and intelligent control mechanisms are essential to maintain operational efficiency. Moreover, the elevated computing power of these devices leads to higher energy consumption, increasing the risk of performance bottlenecks or even system outages in the event of failures. As a result, energy efficiency and real-time system monitoring have become mission-critical for ensuring the stability and sustainability of AI data center operations.

KSManage Provides Intelligent, Refined Management for AI Data Centers

KSManage builds on a wealth of experience in traditional device management and supports more than 5,000 device models. Its comprehensive management framework spans IT, network, security, and other infrastructure components. The platform enables real-time monitoring of critical server components, including CPU, memory, and storage drives. Leveraging intelligent algorithms, KSManage can predict potential faults, issue early warnings, and support preventive maintenance, helping ensure servers operate at peak performance and reducing the risk of unplanned downtime.

The upgraded KSManage delivers comprehensive monitoring of key performance indicators for GB200 and B200 devices, including GPU performance, CPU utilization, and memory bandwidth. Through 3D real-time modeling, it dynamically visualizes resource distribution and intelligently adjusts allocation based on workload demands. The platform also features automated network topology management, enabling real-time optimization of NVLink connectivity and contributing to a 90% boost in operational efficiency. During large model training, KSManage automatically allocates more computing resources to relevant tasks, optimizing the distribution of CPU, GPU, and other components. This ensures higher device utilization, improved computational efficiency, and significantly faster training times.

For intelligent fault detection, the upgraded KSManage introduces a three-tier monitoring framework spanning the component, machine, and cluster levels. At the component level, it leverages the PLDM protocol to enable precise monitoring of critical metrics such as GPU memory status.
When computational errors are detected in B200 GPUs, KSManage rapidly analyzes error logs to distinguish between hardware faults and software conflicts, achieving over 92% accuracy in fault localization and taking timely corrective actions. At the machine level, KSManage integrates both BMC out-of-band logs and OS in-band logs to support fast and reliable hardware diagnostics. At the cluster level, federated management technology enables cross-domain alarm correlation and analysis and triggers self-healing engines capable of responding to risks within seconds. The system also synchronizes with a high-precision liquid leak monitoring solution to enhance equipment safety. Collectively, these capabilities significantly reduce Mean Time to Repair (MTTR) and improve Mean Time Between Failures (MTBF), ensuring higher stability and resilience across AI data center operations.

Intelligent Management of Green, Liquid-Cooled AI Data Centers

As power density in AI data centers continues to increase, cooling has become a critical factor influencing both device performance and operational lifespan. To address this challenge, liquid cooling technology, recognized for its high thermal efficiency, has been widely adopted across next-generation AI infrastructure. The upgraded KSManage introduces a new liquid cooling detection feature that enhances both the efficiency and safety of liquid cooling operations in AI data centers. The system provides real-time monitoring of key parameters such as coolant flow rate, temperature, and pressure, ensuring stable and optimal performance of the liquid cooling infrastructure. By analyzing data from chip power consumption and cooling circuit pressure, KSManage employs a multi-objective optimization algorithm to dynamically adjust flow rates and calculate the optimal coolant distribution under varying workloads. Powered by AI-driven precision control, the platform achieves a 50% improvement in flow utilization and delivers up to 10% additional energy savings in the liquid cooling system.

In addition, KSManage enhances operational reliability by providing real-time anomaly detection in the liquid cooling system. When issues such as abnormal flow rates, pressure fluctuations, temperature control failures, or condensation are detected, the system triggers instant alerts and delivers detailed fault diagnostics, enabling maintenance teams to quickly identify and resolve problems. In the event of a critical coolant leak, KSManage coordinates with the Coolant Distribution Unit (CDU) to deliver a millisecond-level response. Upon detection, the system immediately shuts off coolant flow and initiates an automatic power-down of the CDU, ensuring maximum protection of devices and infrastructure.

For high-power devices such as the GB200 and B200, KSManage delivers fine-grained energy consumption management at the GPU level. It dynamically adjusts the Thermal Design Power (TDP) thresholds of H100/B200 GPUs, while integrating intelligent temperature regulation technologies, such as variable-frequency fluorine pumps, within the liquid cooling system. These optimizations help reduce Power Usage Effectiveness (PUE) to below 1.3. Additionally, the platform's power-environment interaction module leverages AI algorithms to predict potential cooling system failures. Through synergistic optimization of computing power and energy consumption, KSManage reduces the power usage per cabinet by 20%, effectively lowering device failure rates and improving overall energy efficiency.
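As a rough illustration of the kind of telemetry-driven safeguard described above, the following sketch checks coolant readings against simple thresholds and escalates a suspected leak to an emergency shutoff. The field names, limits, and the trigger_cdu_shutdown hook are assumptions made for this example only; they are not KSManage interfaces.

# Illustrative-only sketch of threshold-based liquid-cooling monitoring.
# Field names, limits, and the shutoff hook are assumptions, not KSManage APIs.
from dataclasses import dataclass

@dataclass
class CoolantReading:
    flow_lpm: float       # coolant flow, liters per minute
    temp_c: float         # coolant temperature, degrees Celsius
    pressure_kpa: float   # loop pressure, kilopascals
    leak_detected: bool   # signal from a leak sensor

def classify(reading: CoolantReading) -> str:
    """Map one telemetry sample to an alert level."""
    if reading.leak_detected:
        return "critical"          # coordinate with the CDU immediately
    if reading.flow_lpm < 10 or reading.pressure_kpa > 350 or reading.temp_c > 45:
        return "warning"           # alert operators with diagnostics
    return "ok"

def trigger_cdu_shutdown() -> None:
    # Hypothetical hook: stop coolant flow and power down the CDU.
    print("emergency: shutting off coolant flow and powering down the CDU")

def handle(reading: CoolantReading) -> None:
    level = classify(reading)
    if level == "critical":
        trigger_cdu_shutdown()
    elif level == "warning":
        print("alert: abnormal coolant parameters", reading)

if __name__ == "__main__":
    handle(CoolantReading(flow_lpm=8.5, temp_c=47.0, pressure_kpa=360.0, leak_detected=False))

A real platform would of course combine many more signals, predictive models, and millisecond-level actuation; the sketch only shows the basic shape of such a control loop.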
KSManage has been successfully deployed across a wide range of industries globally, including internet, finance, and telecommunications. With its intelligent, refined, and sustainable management capabilities, it has become an essential tool for overseeing device operations in AI data centers. In one notable case, an AI data center in Central Asia achieved more than a fourfold increase in operational efficiency by leveraging KSManage's intelligent diagnostic features. Device fault handling time was also reduced by 80%. Monitoring and control of the liquid cooling system and firmware optimization collectively contributed to a 20% reduction in energy consumption. Additionally, the hardware service lifespan was extended by one to two years. KSManage continues to play a critical role in ensuring the efficient, stable, and sustainable operation of AI data center infrastructure.

KAYTUS is a leading provider of end-to-end AI and liquid cooling solutions, delivering a diverse range of innovative, open, and eco-friendly products for cloud, AI, edge computing, and other emerging applications. With a customer-centric approach, KAYTUS is agile and responsive to user needs through its adaptable business model. Discover more at and follow us on LinkedIn and X.


Verax AI Unveils Verax Protect to Safeguard Companies Against Rising AI Risks
LONDON--(BUSINESS WIRE)-- Verax AI, a leader in enabling the safe and responsible adoption of AI for enterprise use, announces the global launch of Verax Protect. This cutting-edge solution, suitable even for companies in highly regulated industries, aims to help large enterprises uncover and mitigate Generative AI risks, including unintended leaks of sensitive data.

As companies race to embrace the productivity potential of Generative AI, they're also increasingly exposed to the risks associated with this technology. One of the most pressing risks is data leakage: employees including sensitive data or proprietary information in Gen AI prompts, and thereby unintentionally leaking it to external third-party tools. Over 40% of US businesses now have paid subscriptions to AI models, platforms, and tools, up from just 5% in 2023, while 30% of organizations currently using AI have already experienced AI-related security incidents. Compounding the concern, these threats are becoming increasingly costly: the global average cost of a data breach surged to an all-time high of $4.88 million in 2024, marking a 10% rise year-on-year.

Verax Protect empowers enterprises, including those in highly regulated industries such as finance, healthcare, and defense, to unlock the benefits of AI without compromising their strict data privacy and broader cybersecurity practices.

Key capabilities of Verax Protect:

Prevent sensitive data from leaking into third-party AI tools: AI tools encourage users to input as much data as possible in order to maximise their productivity benefits. This often leads to proprietary and sensitive data being shared with unvetted third-party providers (a simplified sketch of this kind of safeguard appears below).

Prevent AI tools from exposing information to users who are not authorized to access it: The increasing use of AI tools to generate internal reports and summarize sensitive company documents opens the door to oversharing data, raising the risk of other employees seeing information they're not meant to access.

Enforce organizational policies on AI: In contrast to the currently popular but largely ineffective methods of ensuring employee compliance with AI policies, such as training sessions and reminder pop-up banners, Verax Protect enables automatic enforcement of corporate AI policies, preventing both accidental and deliberate violations.

Comply with security and data protection certifications: Many compliance certifications, such as those dealing with GDPR in Europe or sector-specific laws in the U.S. like HIPAA for healthcare or GLBA for financial services, require evidence of an effort to safeguard sensitive and private data. Gen AI adoption makes such efforts more difficult to implement and even harder to demonstrate. Verax Protect helps prove that sensitive and private data is safeguarded even when AI is used.

Leo Feinberg, Co-founder and CEO of Verax AI, commented: "Generative AI is a double-edged sword. It promises unprecedented gains in productivity, but it also introduces unprecedented risks. With Verax Protect, we're enabling enterprises to stay competitive by leveraging the power of AI without compromising the security, privacy, and compliance of their most sensitive data."

Verax AI's latest product builds on the deep experience of its co-founders Leo Feinberg and Oren Gev, CTO, who have previously launched and scaled several successful technology businesses, including CloudEndure, a cloud migration and disaster recovery company later acquired by Amazon Web Services for $250 million.
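To illustrate the first capability above (keeping sensitive data out of third-party AI prompts), the sketch below masks a few simple patterns before a prompt leaves the organization. The patterns and the redact_prompt helper are hypothetical and greatly simplified relative to an enterprise product such as Verax Protect.

# Hypothetical, simplified sketch of prompt redaction before data reaches an
# external Gen AI tool. Patterns and function names are illustrative only.
import re

# Very small set of example patterns; a real policy engine would go far beyond this.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Return the prompt with detected sensitive spans masked, plus the labels found."""
    findings = []
    for label, pattern in PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt, findings

if __name__ == "__main__":
    safe, found = redact_prompt("Summarize the contract for jane.doe@example.com, card 4111 1111 1111 1111")
    print(found)   # ['EMAIL', 'CARD']
    print(safe)

A production-grade control layer would also cover policy enforcement, authorization checks on AI-generated output, and audit evidence for compliance, as described in the capability list.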
The company also offers Verax Explore and Verax Control, which help companies gain visibility into and control over their internal and external AI tools. The company's latest offering, Verax Protect, is an enterprise-grade, real-time oversight and risk mitigation tool tailored for enterprise use in the AI age. It addresses the most pressing and fast-moving AI cybersecurity threats companies face today. The solution integrates seamlessly with internal systems and offers granular controls that align with both technical security requirements and organizational policies.

About Verax AI

Verax AI is a leading provider of enterprise-grade AI trust solutions. Founded in 2023 by serial tech entrepreneurs Leo Feinberg and Oren Gev, the company enables large organizations to fully harness the productivity benefits of AI technologies while mitigating the security and trust issues associated with their adoption. Verax AI's suite of cutting-edge solutions includes Verax Protect and is designed for cross-industry use, including in highly regulated sectors such as healthcare, financial services, and defence, empowering companies to adopt AI tools with confidence.


Spendesk Becomes First Profitable Spend Management Platform, Redefining Finance with AI
LONDON--(BUSINESS WIRE)-- Spendesk, the AI-powered spend management and procurement platform, today announces a major milestone: it has achieved profitability, marking one full quarter in the black. This achievement makes Spendesk the first European spend management and procurement platform to reach profitability, a bold step forward in the industry's evolution.

Since its €2 million seed round in 2017, Spendesk has rapidly evolved from a disruptive startup to a profitable market leader. The company surpassed €1 billion in spend under management by 2021, then secured a €100 million Series C+ round in 2022 to reach unicorn status and €10 billion managed on the platform. Following the launch of Spendesk Financial Services, its regulated payment institution, and a strategic procurement acquisition, Spendesk doubled spend under management to €20 billion in 2024. Spendesk now processes tens of billions in purchases annually across more than 200,000 business users. The company's successful drive to profitability fulfills its publicly stated objective for 2025, ahead of schedule, while maintaining double-digit growth.

"Spendesk has proven that it's possible to lead the spend management category while balancing growth with profitability," said Axel Demazy, CEO of Spendesk. "When I became CEO in 2024, we focused on three priorities: deepening our procurement offering, driving new revenue with Spendesk Financial Services, and accelerating internal efficiency with AI. These priorities have enabled us to achieve profitability ahead of schedule, while delivering even greater value and innovation to our customers."

Rodolphe Ardant, founder of Spendesk, added: "We set out to transform how companies manage spending in Europe. Since 2017, we've integrated AI into our technology, and today thousands of customers rely on Spendesk's AI to validate receipts and invoices, automate spend allocation in bookkeeping, and flag potential errors. Now, thanks to our new milestone of profitability, we can invest even further in the innovation our customers expect, increasing their efficiency through agentic assistants and providing deeper insights to optimise spend."

As Spendesk enters this next chapter, it remains committed to continuing its double-digit growth by driving the next wave of transformation for finance teams. The company will invest beyond spend management, exploring AI-first opportunities in areas such as FP&A and ESG, where real-time insights and smarter decision-making will empower customers to excel.

"Profitability means more value for our customers," continued Demazy. "Our customers can be confident they are partnering with a sustainable provider, here for the long term. Profitability allows Spendesk to keep investing in new features, put AI at customers' fingertips, and help them make smarter decisions, ultimately optimising their P&L."

Spendesk operates in the €70 billion Office of the CFO software industry, expected to grow at 13 percent annually through 2028. By combining advanced technology with financial discipline, Spendesk is uniquely positioned to sustain profitable, double-digit growth as the category continues to expand.
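As a purely illustrative aside on the kind of automated checking mentioned in the founder's quote (validating receipts and flagging potential errors), the snippet below applies a few simple rules to an expense record. The rules and field names are invented for this example and say nothing about how Spendesk's AI actually works.

# Invented example of rule-based expense checks; not Spendesk's implementation.
from dataclasses import dataclass

@dataclass
class Expense:
    amount: float
    currency: str
    receipt_total: float | None   # total parsed from the receipt, if any
    category: str

def flag_issues(expense: Expense) -> list[str]:
    """Return human-readable reasons an expense may need review."""
    issues = []
    if expense.receipt_total is None:
        issues.append("missing receipt")
    elif abs(expense.amount - expense.receipt_total) > 0.01:
        issues.append("amount does not match receipt")
    if expense.amount > 5000:
        issues.append("above auto-approval threshold")
    return issues

if __name__ == "__main__":
    print(flag_issues(Expense(amount=120.0, currency="EUR", receipt_total=95.0, category="travel")))
    # ['amount does not match receipt']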
About Spendesk

Spendesk is the AI-powered spend management and procurement platform that transforms company spending. By simplifying procurement, payment cards, expense management, invoice processing, and accounting automation, Spendesk sets the new standard for spending at work. Its single, intelligent solution makes efficient spending easy for employees and gives finance leaders the full visibility and control they need across all company spend, even in multi-entity structures. Trusted by thousands of companies, Spendesk supports over 200,000 users across brands such as SoundCloud, Gousto, SumUp, and Bloom & Wild. With offices in the United Kingdom, France, Spain, and Germany, Spendesk also puts community at the heart of its mission. For more information: