Snorkel AI Announces $100 Million Series D and Expanded Platform to Power Next Phase of AI with Expert Data


Business Wire · 5 days ago

REDWOOD CITY, Calif.--(BUSINESS WIRE)--Today, Snorkel AI announced general availability of two new product offerings on the Snorkel AI Data Development Platform: Snorkel Evaluate and Snorkel Expert Data-as-a-Service. These launches advance its mission to turn knowledge into specialized AI—helping teams move from prototype to production at scale by leveraging Snorkel AI's programmatic data development technology. In addition, Snorkel AI announced it has raised $100 million in Series D funding at a $1.3 billion valuation, led by Addition. This new funding will fuel continued research and innovation in evaluating and tuning specialized AI systems with expert data.
While large language models (LLMs) offer immense potential, enterprises cannot confidently deploy them 'off the shelf' for specialized business cases. According to Gartner, through 2026, organizations that fail to establish scalable AI data practices will see over 60% of AI projects abandoned. Achieving production-ready AI requires domain-specific data for fine-grained evaluation and model tuning methodologies.
'We are seeing a surge of momentum around agentic AI, but specialized enterprise agents aren't ready for production in most settings,' said Alex Ratner, Co-founder and CEO of Snorkel AI. 'Enterprises need domain-specific data and expertise to make this a reality. We're excited to deliver on this need and help AI innovators develop expert data to bring their LLM and agentic systems into production with our new offerings, which round out Snorkel's unified AI data development stack.'
Snorkel Evaluate
Snorkel AI is expanding its AI Data Development Platform with the general availability of Snorkel Evaluate, enabling users to build specialized, fine-grained evaluations of models and agents. Powered by Snorkel AI's unique programmatic approach to curating AI-ready data, this new offering allows enterprises to scale their evaluation workflows and confidently deploy AI systems to production.
Snorkel Evaluate includes programmatic tooling for benchmark dataset creation, the development of specialized evaluators, and error mode correction. These tools help users go beyond generic datasets and off-the-shelf 'LLM-as-a-judge' approaches to efficiently build actionable, domain-specific evaluations.
'To unlock Claude's full potential, we need new evaluation approaches with domain expertise and human feedback,' said Kate Jensen, Head of Revenue at Anthropic. 'Anthropic is committed to working with innovators like Snorkel AI to ensure AI systems are refined, reliable, and aligned to enterprise needs.'
Snorkel Expert Data-as-a-Service
Snorkel Expert Data-as-a-Service is a white-glove solution to deliver expert datasets for frontier AI system evaluation and tuning to enterprises. Leading LLM developers are already partnering with Snorkel AI to create datasets for advanced reasoning, agentic tool use, multi-turn user interaction, and domain-specific knowledge.
The offering combines Snorkel's network of highly trained subject matter experts with its unique programmatic technology platform for data labeling and quality control, enabling efficient delivery of specialized datasets. Snorkel Expert Data-as-a-Service also lets enterprises blend their in-house expertise and data with proprietary datasets developed by outsourced experts.
Snorkel AI's Series D Funding and Market Momentum
The rollout of these new offerings underscores Snorkel AI's commercial momentum. Today, the company also announced it has raised $100 million in Series D funding at a $1.3 billion valuation, led by Addition, with participation from Prosperity 7 Ventures, existing investors Greylock and Lightspeed, and existing strategic investors including BNY and QBE Ventures. The round brings Snorkel AI's total funding to $237 million since its founding in 2019. This fresh capital supports the company's expansion of its engineering, research, and go-to-market efforts for its unified AI Data Development Platform.
'With innovations like Snorkel Expert Data-as-a-Service and Snorkel Evaluate, Snorkel AI enables organizations to build AI models more efficiently and ensure they perform at the highest levels in specialized, real-world applications,' said Todd Arfman, Partner at Addition. 'This powerful, data-centric approach is accelerating the deployment of reliable AI at scale — and we're proud to partner with Snorkel AI as they redefine what's possible in enterprise AI.'
The latest round follows Snorkel AI's strong growth trajectory across the Fortune 500 and AI startups, and its wide AI industry recognition, including features in Fast Company's Most Innovative Companies list and Forbes' AI 50.
Resources
Join Snorkel AI and innovators from Accenture, Comcast, Stanford University, QBE, University of Wisconsin-Madison, and more on June 26 for the virtual live event.
Watch the launch video featuring Snorkel's Co-founder and CEO and a customer speaking on the real-world impact of Snorkel Expert Data-as-a-Service and Snorkel Evaluate.
Read the blog post from Snorkel's CEO, expanding on the announcement and what it means for the future of enterprise AI.
See how Snorkel Evaluate and Snorkel Expert Data-as-a-Service are used to evaluate and develop a specialized agentic AI system for an enterprise use case in this blog post.
About Snorkel AI
Snorkel AI is building the Snorkel AI Data Development Platform for evaluating and tuning specialized AI at scale. Snorkel AI's offerings, including Snorkel Evaluate and Snorkel Expert Data-as-a-Service, accelerate evaluation and tuning of specialized AI systems with expert data—helping teams move from prototype to production at scale by leveraging Snorkel AI's programmatic data development technology.
Launched out of the Stanford AI Lab, Snorkel AI's platform is used in production by Fortune 500 companies, including BNY, Wayfair, and Chubb, as well as across the U.S. federal government, including the U.S. Air Force. Visit snorkel.ai and follow on LinkedIn or @SnorkelAI on X for more information.


Related Articles

The Wiretap: Trump Says Goodbye To The AI Safety Institute

Forbes · 20 minutes ago

The Wiretap is your weekly digest of cybersecurity, internet privacy and surveillance news. To get it in your inbox, subscribe here.

The Trump administration has announced plans to reorganize the U.S. AI Safety Institute (AISI) into the new Center for AI Standards and Innovation (CAISI). Set up by the Biden administration in 2023, AISI operated within the National Institute of Standards and Technology (NIST) to research risks in widely used AI systems like OpenAI's ChatGPT or Anthropic's Claude. The move to dismantle the body had been expected for some time. In February, as JD Vance headed to France for a major AI summit, his delegation did not include anyone from the AI Safety Institute, Reuters reported at the time. The agency's inaugural director, Elizabeth Kelly, had stepped down earlier in the month.

The Commerce Department's announcement marking the change is thin on details about the reorganization, but it appears the aim is to favor innovation over red tape. 'For far too long, censorship and regulations have been used under the guise of national security. Innovators will no longer be limited by these standards. CAISI will evaluate and enhance U.S. innovation of these rapidly developing commercial AI systems while ensuring they remain secure to our national security standards,' said Secretary of Commerce Howard Lutnick.

What can be gleaned from Lutnick's paradoxical phrasing – national security-focused standards are limiting, but America needs national security-focused standards – is that it's very difficult to tell just how much the new body will differ from the old one. The announcement goes on to state that CAISI will 'assist industry to develop voluntary standards' in AI, which sums up much of what the old body did. Similarly, just as the AI Safety Institute was tasked with assessing risks in artificial intelligence, CAISI will 'lead unclassified evaluations of AI capabilities that may pose risks to national security.' CAISI will also still be a part of NIST. And, despite Lutnick's apparent disdain for standards, the Commerce press release concludes that CAISI will 'ensure U.S. dominance of international AI standards.'

That there's little obvious change between the Institute and CAISI might alleviate any immediate concerns that the U.S. is abandoning commitments to keep AI safe. Just earlier this year, a coalition of companies, nonprofits and academics called on Congress to codify the Institute's existence before the year was up. That included major players like OpenAI and Anthropic, both of which had agreements to work with the agency on research projects. What happens to those is now up in the air. The Commerce Department hadn't responded to a series of questions at the time of publication, and NIST declined to comment.

Got a tip on surveillance or cybercrime? Get me on Signal at +1 929-512-7964.

Unknown individuals have impersonated President Trump's chief of staff Susie Wiles in calls and texts to Republican lawmakers and business executives. Investigators suspect the perpetrators used artificial intelligence to clone Wiles' voice. One lawmaker was asked by the impersonator to assemble a list of individuals for potential presidential pardons, according to the Wall Street Journal. It's unclear what motives lay behind the impersonation, or how the perpetrators pulled the stunt off. Wiles had told confidantes that some of her contacts from her personal phone had been stolen by a hacker.

A Texas police officer searched Flock Safety's AI-powered surveillance camera network to track down a woman who had carried out a self-administered abortion, 404 Media reports. Because the search was conducted across different states, experts raised concerns about police using Flock to track down individuals getting abortions in states where it's legal before going back home to a state where it's illegal. The cops said they were simply worried about the woman's safety.

Nathan Vilas Laatsch, a 28-year-old IT specialist at the Defense Intelligence Agency, has been arrested and charged with leaking state secrets after becoming upset with the Trump administration. The DOJ did not specify to which country Laatsch allegedly tried to pass secrets; sources told the Washington Post it was Germany. He was caught out by undercover agents posing as interested parties, according to the DOJ.

Europol announced it had identified more than 2,000 links 'pointing to jihadist and right-wing violent extremist and terrorist propaganda targeting minors.' The agency warned that it had seen terrorists using AI to generate content like short videos and memes 'designed to resonate with younger audiences.'

A 63-year-old British man, John Miller, was charged alongside a Chinese national by the Department of Justice with conspiring to ship missiles, air defense radar, drones and unspecified 'cryptographic devices' to China. They're also charged with trying to stalk and harass an individual who was planning protests against Chinese president Xi.

Will new nuclear energy deals generate FOMO mentality in Big Tech?

Yahoo · 25 minutes ago

Constellation Energy (CEG) has inked a 20-year deal to provide tech giant Meta Platforms (META) with power from its clean nuclear energy plant starting in 2027. Mizuho Americas managing director and senior analyst of utilities Anthony Crowdell discusses the Constellation-Meta deal, as well as other energy agreements as grid demand soars to power AI data centers, and the regulation of nuclear plants after President Trump signed an executive order in May to ease restrictions on nuclear reactor development. Catch Yahoo Finance's interview with Nano Nuclear Energy founder, executive chairman, and president Jay Yu on the nuclear energy landscape. To watch more expert insights and analysis on the latest market action, check out more Market Domination here.

New Tennessee law criminalizes AI technology for child pornography creation

Yahoo · 37 minutes ago

KNOXVILLE, Tenn. (WATE) — Beginning July 1, 2025, it will be a crime in Tennessee to create or share technology specifically designed to create or facilitate the creation of AI-generated child pornography. The legislation, sponsored by Senator Ken Yager (R-Kingston), was signed into law by Governor Bill Lee on April 24.

The law makes it a felony to 'knowingly possess, distribute, or produce any software or technology specifically designed to create or facilitate the creation of AI-generated child sexual abuse material.' Possession will be a Class E felony, distribution a Class C felony, and production a Class B felony.

'When in the wrong hands, artificial intelligence has the ability to make exploitive crimes even worse,' said Tennessee Bureau of Investigation Director David Rausch. 'I applaud the General Assembly and Governor Lee for seeing the value in strengthening our state's laws to better protect Tennesseans, and I'm proud our state is leading the way on common sense measures to ensure this emerging technology doesn't become a dangerous tool for bad actors.'

Yager said the goal of the bill was to combat the rise of AI-generated child pornography while preserving legitimate AI applications. A release from his office explained that the legislation does not broadly ban AI, but targets tools built with the intent to generate child pornography. The senator worked with the TBI to create the legislation.

'This law is about keeping pace with rapidly evolving technology to protect our children from unthinkable exploitation,' said Yager. 'Bad actors are using AI to create disturbing and abusive content, and Tennessee is taking a strong stand to stop it. By criminalizing the tools used to generate AI child pornography, we're giving law enforcement what they need to pursue offenders and protect victims.'
This is not the first Tennessee law targeting AI and child pornography. In 2024, the state added AI-generated images to its anti-child pornography laws. Copyright 2025 Nexstar Media, Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.
