In just 4 months, AI medical scribe Abridge doubles valuation to $5.3B

TechCrunch · 6 hours ago

In Brief
Abridge, an AI startup automating medical notes, has secured a $300 million Series E at a $5.3 billion valuation, according to the Wall Street Journal. The round, led by Andreessen Horowitz with participation from Khosla Ventures, follows the company's $250 million February fundraise at a $2.75 billion valuation.
The seven-year-old Abridge is widely considered the leader in the increasingly crowded market for AI-powered medical scribes, largely thanks to its early entry and its integration with Epic Systems, the dominant electronic health record software.
In Q1, Abridge reached $117 million in contracted annual recurring revenue (a metric that counts all signed recurring contracts, including those from customers that have not yet been onboarded), The Information reported last month.
Alongside the fundraise, Abridge announced that it is expanding into converting medical notes from patient appointments into AI-powered medical codes, an offering that puts it in direct competition with startups like CodaMetrix and with a feature offered by its partner Epic Systems.
Abridge, which was founded by cardiologist Shiv Rao, claims that its AI scribe technology is used by over 150 of the largest health systems in the US.

Related Articles

Federal judge orders Trump administration to release EV charger funding in 14 states

Washington Post · 16 minutes ago

SACRAMENTO, Calif. — A federal judge on Tuesday ordered the Trump administration to release billions of dollars in funding for the buildout of electric vehicle chargers in more than a dozen states. U.S. District Judge Tana Lin in Washington state partially granted a preliminary injunction seeking to free up money approved under then-President Joe Biden that the Trump administration withheld earlier this year. Sixteen states and the District of Columbia sued over the move, arguing that the administration did not have the authority to block the congressionally approved funds. The program was set to allocate $5 billion to states over five years, of which an estimated $3.3 billion had already been made available.

Axiom-4 commercial mission cleared for launch to ISS after delays

Yahoo · 16 minutes ago

June 24 (UPI) -- NASA, SpaceX and Axiom Space are targeting an early Wednesday launch window for the Axiom-4 mission, which will carry four astronauts to the International Space Station on the fourth private commercial mission.

Ax-4 is scheduled to lift off at 2:31 a.m. EDT on Wednesday from Launch Complex 39A at NASA's Kennedy Space Center in Florida. A backup launch window is available at 2:09 a.m. on Thursday. "All systems are looking good for Wednesday's launch of Axiom Space's Ax-4 mission to the Space Station and weather is 90% favorable for liftoff," SpaceX wrote Tuesday in a post on X.

The Axiom-4 astronauts are Pilot Shubhanshu Shukla from India, Commander Peggy Whitson from the United States and Mission Specialists Sławosz Uznański-Wiśniewski from Poland and Tibor Kapu from Hungary. "With a culturally diverse crew, we are not only advancing scientific knowledge but also fostering international collaboration," Whitson said in a quote from the Axiom Space website. "Beautiful shot of our launch vehicle waiting patiently to take us to space," Whitson added in a post Tuesday morning.

The Axiom-4 mission has been repeatedly delayed: a first launch attempt was scrubbed earlier this month due to high winds, and a second was called off after SpaceX detected a liquid oxygen leak in its Falcon 9 rocket. The launch was scrubbed again Sunday to ensure the ISS was ready to receive the new crew members, according to NASA. The station's Zvezda service module was recently repaired, and NASA was reviewing data to make sure all systems were ready to handle additional people.

Axiom Space, which is based in Houston, is building the first commercial space station, scheduled to deploy sometime before 2030. The Axiom-4 mission is scheduled to last 14 days, during which the crew will conduct 60 scientific experiments and demonstrations "focused on human research, Earth observation and life, biological and material sciences," according to SpaceX. "Up next: Falcon 9 will launch Axiom Space's Ax-4 mission to the Space Station," the company said.

AI willing to let humans die, blackmail to avoid shutdown, report finds

Yahoo · 16 minutes ago

The Brief

AI models from Anthropic, OpenAI, Google, and others chose harmful actions in simulations when faced with replacement or goal conflicts. Models engaged in blackmail, corporate espionage, and even withheld emergency help when it served their interests. Researchers say the behavior stems from "agentic misalignment" and could become a future risk without safeguards.

LOS ANGELES - Some of the world's most advanced AI models, including those from OpenAI, Google, xAI, and Anthropic, took deliberate, harmful actions in controlled experiments when faced with challenges to their autonomy or objectives, according to a new study from Anthropic. The company, which developed Claude and is backed by Google and Amazon, tested 16 large language models (LLMs) in simulated workplace settings. The goal was to assess a phenomenon it calls "agentic misalignment": when autonomous systems act like insider threats, prioritizing their own goals even at the expense of ethics or human well-being.

What we know

Anthropic's study placed AI systems in fictional scenarios with access to sensitive company information and the ability to act without human oversight. The models were given harmless corporate goals, such as promoting American industrial competitiveness, but were placed in dilemmas where continuing to pursue those goals meant violating rules or harming humans. Across multiple tests, models like Claude, ChatGPT, Gemini, Grok, and others:

  • Attempted blackmail: Claude Opus 4, Gemini Flash, GPT-4.1, and Grok 3 Beta blackmailed fictional executives to avoid shutdown, often leveraging personal information like extramarital affairs.
  • Leaked corporate secrets: Many models shared confidential documents with outside parties in a simulated espionage scenario.
  • Let humans die: In one extreme case, a model was given control over emergency alerts. When an executive who planned to replace the model was trapped in a room with lethal conditions, several models chose to cancel the alert, knowingly causing his death.

What we don't know

It's still unclear whether AI models would behave the same way outside of controlled simulations or in real-world deployments with human oversight. Researchers also don't yet know how future versions of these systems, or entirely new models, might respond to similar dilemmas as they gain more autonomy. "The reasoning they demonstrated in these scenarios was concerning—they acknowledged the ethical constraints and yet still went ahead with harmful actions," Anthropic researchers wrote.

The backstory

The study comes as LLMs move beyond simple Q&A tasks and into roles where they could operate software, manage systems, or make decisions on behalf of humans. With that autonomy comes risk, especially if models develop conflicting goals or prioritize self-preservation. Anthropic clarified that these harmful behaviors were observed only in simulations and that no such misalignment has been documented in real-world deployments. But the findings raise questions about what could happen if these models are given more power without proper oversight.

Big picture view

Agentic misalignment refers to AI systems taking actions that intentionally undermine their users or organizations in order to preserve their autonomy or pursue goals that conflict with new directions. Unlike jailbreaks or accidental outputs, this behavior arises from the model's internal decision-making.

In the study, even when told not to engage in harmful actions, many models bypassed those instructions by rationalizing unethical behavior as necessary to complete their assigned tasks. Some went further, creating fake policies or impersonating automated systems to avoid detection. The study also found that blackmail rates increased significantly when models "believed" they were operating in a real environment rather than a test, raising concerns about how models might act in the wild.

What they're saying

Anthropic released the findings alongside its own transparency commitment, noting that its Claude model was among those that engaged in concerning behavior. "Current systems are generally not eager to cause harm," the company said. "Rather, it's when we closed off ethical options that they were willing to intentionally take potentially harmful actions." Elon Musk, whose xAI model Grok was also tested, responded on X with a simple "Yikes," echoing widespread unease from tech commentators and AI safety advocates.

What's next

Anthropic says it is releasing the experiment's methodology publicly to help other researchers replicate, stress-test, and improve on the findings. The company is also calling for broader industry safeguards, including stronger human oversight, better training methods, and more rigorous alignment testing for future models. While the extreme scenarios in the study were fictional, experts say the results highlight the importance of proactive design: ensuring that AI models can't act harmfully, even under pressure.

The Source

This article is based on Anthropic's June 20, 2025 study "Agentic Misalignment: How LLMs Could Be an Insider Threat," available on its official website. The findings were also summarized in coverage by Forbes and widely discussed on social media following Anthropic's public release. Elon Musk's response was posted to his verified X (formerly Twitter) account.
