A biotech company sold over 500,000 AI-powered health testing kits. Two C-suite leaders share how they kept science at the center.
This article is part of "Build IT: Connectivity," a series about tech powering better business.
Viome is aiming to transform disease detection, starting with the gut.
The Washington-based biotech startup offers at-home testing kits that analyze saliva, stool, and blood samples. Using RNA analysis, scientists at Viome can evaluate how genes and gut microbes are behaving in real time.
Once the tests are done, AI is applied to the results to generate personalized food and supplement recommendations. Users might be told to avoid spinach to reduce inflammation or take specific probiotics to support digestion and immunity.
So far, the company said it has sold more than half a million testing kits. Backed by Salesforce CEO Marc Benioff and venture capital firm Khosla Ventures, Viome is now scaling its tools to detect early signs of disease, including oral and throat cancer.
As Viome expands, the stakes are high. According to Grand View Research, the global home care testing market is projected to grow more than 9% annually through 2030. As more consumers turn to medical testing kits for early disease detection and preventive care, the risks of misdiagnosis or ineffective treatment may surge if the tools aren't built with precision.
To ensure its technology is both scientifically accurate and commercially viable, Viome relies on tight, ongoing collaboration between its research, engineering, and product teams.
In a roundtable interview, Business Insider spoke with Momo Vuyisich, Viome's chief science officer, and Guru Banavar, the company's chief technology officer, to discuss how the science and technology teams work together to deliver products that are ready for market.
The following has been edited for length and clarity.
Business Insider: Viome offers a range of products, including microbiome kits and early-stage cancer detectors. How do your science and tech teams work together to keep the AI models accurate, safe, and compliant?
Momo Vuyisich: It's not just collaboration between science and tech — it's a companywide effort. On the science side, we focus on three areas: lab work, data analysis, and clinical research.
Whenever we're working on a health product, we rely on clinical research to guide development. This includes observational studies, where we learn from large groups of people, and interventional trials, where we test whether a tool works in real-world settings. For diagnostics, that means formal device trials.
In the lab, we use a method called metatranscriptomics, measuring RNA to understand what's happening in the body right now. Unlike DNA, which stays the same, RNA changes based on things like diet or environmental exposure. That allows us to detect early signs of disease like inflammation or even cancer, based on how genes are being expressed.
We measure gene activity across human cells, bacteria, and fungi, and we also identify the types of microbes present in a sample.
Guru Banavar: What makes our approach powerful is the scale and detail of the data we collect. Each customer sends us stool, blood, and saliva samples, which we use to generate tens of millions of data points showing what's happening in their gut, blood, and mouth.
Once that data hits Viome's cloud platform, my team steps in. We use AI to figure out not just what organisms are present, but what they're doing, like whether they're producing anti-inflammatory compounds or if certain biological systems are out of balance.
We work with molecular data, which is far more complex than the text data most AI tools are trained on. So, where appropriate, we use a range of machine learning methods, from generative AI to algorithms that learn from labeled examples and draw insights from patterns. The key is using the right tool for the right problem, whether we're detecting disease, recommending foods, or flagging health risks.
And because this work spans many fields, our team includes experts in biology, computing, cloud engineering, and more. Today, everything runs in the cloud, which allows us to operate at scale.
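In practice, "algorithms that learn from labeled examples" refers to supervised machine learning. The sketch below shows the general idea using the open-source scikit-learn library; the data, feature counts, and model choice are illustrative assumptions and do not describe Viome's actual systems.

```python
# Illustrative only: a supervised classifier trained on labeled "gene-activity"
# features. The data, feature names, and labels are hypothetical; this is not
# Viome's model or pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical dataset: each row is one sample, each column one expression
# measurement; labels mark whether a known condition is present.
X = rng.normal(size=(500, 200))    # 500 samples x 200 features
y = rng.integers(0, 2, size=500)   # 1 = condition present, 0 = absent

# Hold out data the model never sees during training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Evaluate on the held-out set before trusting the model's predictions.
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Held-out AUC: {auc:.2f}")
```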
At-home medical testing and preventive health are fast-moving industries. How do you make sure you're not moving too fast and overpromising on scientific outcomes?
Vuyisich: From the very beginning, we made clinical research a core part of how we operate. We didn't just start building products. We started by measuring biological markers already shown in published research to affect human health, especially those linked to micronutrients. That was our foundation.
One of our earliest major studies was on glycemic response, or how people's blood sugar changes after eating. We spent millions of dollars running large-scale studies in the US and Japan, and we used that data to build machine learning models that predicted how a person would respond to certain foods. Afterward, we validated those models before we integrated them into our app.
We've followed that same process for everything from food and nutrition recommendations to our diagnostic test for cancer. We learn from both customer data and formal research, but the bottom line is we validate before we implement.
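The "validate before we implement" workflow Vuyisich describes can be pictured with a minimal sketch: fit a model that predicts a response, measure its error on data it never trained on, and only ship if that error clears a preset bar. The data, model choice, and threshold below are hypothetical, not Viome's.

```python
# Illustrative sketch of a train-then-validate workflow for predicting a
# post-meal glucose response. All inputs and thresholds are made up.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(42)

# Hypothetical inputs: per-person features (e.g., microbiome and meal
# summaries) and the observed blood-sugar rise after a meal.
features = rng.normal(size=(1000, 50))
glucose_response = features @ rng.normal(size=50) + rng.normal(scale=5.0, size=1000)

X_train, X_val, y_train, y_val = train_test_split(
    features, glucose_response, test_size=0.25, random_state=42
)

model = Ridge(alpha=1.0).fit(X_train, y_train)

# Only "integrate into the app" if held-out error clears a preset bar.
val_error = mean_absolute_error(y_val, model.predict(X_val))
ACCEPTABLE_ERROR = 10.0  # hypothetical threshold set by the science team
print(f"Validation MAE: {val_error:.2f}")
if val_error <= ACCEPTABLE_ERROR:
    print("Model passes validation; eligible for release.")
else:
    print("Model held back for further tuning.")
```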
Banavar: On the tech side, we've built systems that help us move quickly while still being careful. We've automated a lot of the heavy lifting — like processing biological data and generating recommendations — so we're not starting from scratch every time. When a new cohort of users joins Viome, we often retrain our models to reflect new biological data and ensure relevance. Some parts of that process are automated, but the final checks and tuning are still done by hand to make sure the model meets our standards before it goes live.
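One simple way to picture the retrain-and-review gate Banavar describes is below: a retrained candidate model is promoted only if it performs at least as well as the live model on held-out data and a scientist has manually signed off. The metric, threshold, and names are hypothetical, not Viome's actual release criteria.

```python
# Illustrative promotion gate: automated metric check plus a manual review flag.
from dataclasses import dataclass

@dataclass
class ModelCandidate:
    version: str
    holdout_auc: float           # performance on data not used in retraining
    reviewed_by_scientist: bool  # the manual "final checks" step

def should_promote(current_auc: float, candidate: ModelCandidate,
                   min_gain: float = 0.0) -> bool:
    """Promote only if the candidate matches or beats the live model
    and a scientist has manually approved it."""
    improves = candidate.holdout_auc >= current_auc + min_gain
    return improves and candidate.reviewed_by_scientist

# Example: a retrained model for a new user cohort.
candidate = ModelCandidate(version="new-cohort", holdout_auc=0.88,
                           reviewed_by_scientist=True)
print(should_promote(current_auc=0.85, candidate=candidate))  # True
```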
Another important piece is user education. Our app is designed to let people engage however they want, whether they're just looking for simple guidance or want to dive deep into science. It's an important part of making sure our customer base understands and can follow our recommendations.
Have you ever had to resolve conflicts between business priorities and scientific standards?
Banavar: Yes, and it's natural in a multidisciplinary environment. We all come from different backgrounds. Biologists and machine learning engineers often describe the same process in totally different ways. Momo comes from the molecular side, and I come from the computational side. Sometimes we talk past each other and miss points that fall outside our own areas of expertise. That's why ongoing communication is so important.
There's also the tension between speed and robustness. For example, when we're building a new feature in the app, I'm OK launching a minimum viable product, MVP for short, which is a working prototype with basic functionality. But when it comes to health models, we won't release them until we've validated the science. If it takes two more weeks to fine-tune, so be it. We'll put a message in the app saying that a specific score, or a health indicator based on a user's test results, is still being worked on — and that's fine with me.
Vuyisich: It all comes down to defining what the MVP is. If it provides enough value for someone to pay for it and feel good about it, that's the threshold. But an MVP for a toy can be rough and basic. An MVP for a cancer diagnostic needs to be very mature.
We don't have a dynamic where business tells science what to do. We sit at the same table and make decisions together. If the science can't hit the original target, we reassess. Can we lower the bar slightly and still provide value? If the answer is yes, we'll launch.
A bad scenario is launching something that isn't ready, but even that teaches you something. If no one buys it, you've learned a lot. Sometimes your friends and family say it's amazing, but no one pays for it. That's a signal.
But an even worse scenario is waiting too long for perfection. That has buried more companies than anything else. If Apple had waited until the iPhone had all the features of the iPhone 16, it would've gone out of business. Instead, they launched the first iPhone. Looking back, they might be embarrassed by how limited it was. But it worked. People paid for it. That's what matters: bring it to market.
What lessons have you learned from building and scaling Viome that could help other companies trying to bring AI health products to market responsibly?
Banavar: First, there is no substitute for generating robust scientific data to support the value of health products. Second, when applying AI to health products, focus on areas and methods that can be independently validated and, ideally, are interpretable, meaning the company can explain to scientists, clinicians, and users how the AI models reached their results. Finally, it's possible, even in the health domain, to build products with an MVP mindset and implement a process for continuous improvement.
Vuyisich: Deeply understand the problem you're trying to solve and identify a robust solution. At Viome, we set out to find the root causes of chronic diseases and cancer, which required measuring tens of thousands of human biomarkers relevant to health.
Also, use a method that's accurate, affordable, and scalable. We spent over six years optimizing one lab test — metatranscriptomics — to go beyond the gold standard. This one test gives us thousands of biomarkers across multiple sample types with high accuracy.
Finally, it's all about the people. Build a leadership team that deeply understands business and science, is aligned with the mission, and puts the company ahead of personal interests. Hire motivated, self-managed employees, train them well, and continuously coach them.
Read the original article on Business Insider
