I Tried £180 Worth Of Foot Cream & The Cheapest One Was The Best


Refinery29 | 08-07-2025
All linked products are independently selected by our editors. If you purchase any of these products, we may earn a commission.
Even if you consider yourself a skincare aficionado, I'm willing to bet you haven't given your feet a second thought all year. Until now, that is. With sandal season in full swing, it makes sense that we're on the lookout for the one foot lotion that does it all: preventing cracks before they form, banishing ashiness and smoothing away flaky skin. And with a heatwave underway, it's little wonder that searches for 'best foot cream for hard skin' are spiking on Google. (Is there anything more shudder-inducing than the feeling of dry toes catching on your bedsheets?)
You might be wondering how a foot cream differs from your usual body lotion. In my experience, the latter just doesn't cut it. A proper foot cream is usually much thicker in texture and loaded with heavy-duty ingredients like ultra-moisturising urea and exfoliating salicylic acid, so you can put the scary cheese grater-style foot file down.
With that in mind, I tried £180 worth of foot creams, rating each one based on how my feet felt right after applying and after consistent use. Here are my honest thoughts.
L'Occitane Shea Butter Foot Cream, £23
I'm a huge fan of L'Occitane's Shea Butter Hand Cream, but because it's packed with moisturising butters and oils, it can leave a little residue. Happily, the foot cream version is just as effective on dry, cracked skin but sinks in quickly without any stickiness, so you can slip on socks or slippers right away. My favourite thing about it is the soothing lavender scent — it's the ultimate bedtime ritual, and I'm convinced it helps me fall asleep faster. I also know it's a staple in many luxury pedicurists' kits…
Soap & Glory Heel Genius Foot Cream, £7.99
I've used this on and off for years and still can't get over the affordable price tag. Why? It does the most. The star ingredient is urea, a keratolytic agent that breaks down the bonds between dead skin cells, helping them shed quickly. It's excellent for very dry, cracked feet with calluses. Moisturising glycerin and allantoin leave feet feeling instantly brand new and the softening results last until the next morning. If you're not into the minty finish of most foot creams, this one's lightly floral and sweet. It's my number one on this list.
Footmender All in One, £28.99
This might feel thin, but it's a serious foot lotion. It contains six active ingredients, including exfoliating lactic and glycolic acids, urea, which helps shed dead skin cells, moisturising panthenol (also known as vitamin B5) and hydrating sodium lactate. Then there's ultra-nourishing shea butter and squalane. Because of the strong actives, it tingles at first (wash your hands after using it) and smells quite potent. I found that for real results, you have to be consistent. After a week of nightly use, my dry skin disappeared, and it worked wonders on a thick, hard callus on my big toe, the result of a winter spent in chunky boots. Honestly, my feet were glowing. This is excellent for very dry soles if you have a little more to spend. It takes a well-deserved second place.
The Body Shop Peppermint Invigorating Foot Cream, £12
I've squeezed many of The Body Shop's Invigorating Foot Creams to the very last drop, even investing in a tube-squeezing key to get every last bit out — it's that good. It features a handful of deeply nourishing butters like shea and cocoa, as well as moisturising glycerin and plant oils and waxes to make rough skin feel soft again. I love applying this before bed. It's so smoothing that I challenge anyone not to rub their feet together in joy. The name suggests that it's cooling, but besides the peppermint scent, it doesn't quite live up to the 'invigorating' label, though the moisturising benefits certainly make up for the lack of minty tingle. This comes in at a respectable third place.
Koba Bottom Up Foot Cream, £35
This foot lotion reflects its higher price tag. Luxuriously thick but not greasy, it has a relaxing, herby eucalyptus scent and leaves feet feeling satisfyingly soft. It's another one that makes me want to rub my feet together gleefully in bed. Olive and shea butter give it a whipped consistency, while their fatty acids work instantly and over time to repair a dry, damaged skin barrier. There's also vitamin B5 to lock in moisture and allantoin to soothe cracked skin.
CeraVe SA Renewing Foot Cream, £10
If you want something that sinks in quickly and doesn't leave a trace of residue, consider this your new go-to. But after giving it a good go last year, I found it wasn't substantial enough for my very dry soles or calluses, even with a dose of exfoliating salicylic acid. I kept my foot file close. I much prefer the brand's Moisturising Cream for Dry to Very Dry Skin, £17.50, for feet. It's much bigger, so I don't feel guilty slathering it on, and it boasts the same skin-rejuvenating ceramides, which act like glue between cells to keep skin soft and supple.
Weleda Foot Balm, £14.50
This is great if you don't mind a bit of initial stickiness. The 837 Amazon reviews don't lie: it smells amazing and instantly smooths the look of superficial dry lines, but if you have painful cracks, I'd suggest steering clear due to the handful of essential oils, which could irritate broken skin. It's more instantly refreshing than The Body Shop's version, making it ideal for swollen summer feet. Just give it a shake or a squeeze before use, as the olive oil tends to pool at the tube's opening.
Aveda Foot Relief, £26
This is a megamix of gently exfoliating fruit enzymes plus jojoba and castor oils, so it not only lifts away dry, flaky skin but also replenishes moisture in parched feet. Because the exfoliants are quite mild, regular use is what makes it worth the higher spend. I love the addition of soothing, refreshing tea tree. When I use it in the morning, it leaves my feet feeling fresh and prepped for sandals on super hot days.
Margaret Dabbs Miracle Foot Cream, £22
Margaret Dabbs' Miracle Foot Cream is miles ahead of the brand's Intensive Hydrating Foot Lotion, which I found far too thin to make a difference to my parched soles. However, its main ingredient is petrolatum (aka petroleum jelly), so it's thick, slow to absorb and leaves a greasy residue on toes and hands. It's a before-bed-with-socks kind of product, rather than one to slather on before slipping into sandals. What really sets it apart from other foot creams, though, is its focus on foot and toenail hygiene, thanks to a generous dose of antifungal and antibacterial tea tree oil (no wonder so many pedicurists I know keep it in their kits). It also contains exfoliating salicylic acid, which does the work of a foot file without the risk of overdoing it — great if you can get past the slippery feel.

Related Articles

As a plant-based nutritionist, I swear by these healthy meals to prep my son's school lunches and dinners

Business Insider

21 minutes ago


Back-to-school can be a stressful time, but I find simplifying our food choices really helps. We make sure to keep easy, healthy snacks around the house that my son can grab while on the go.

This article is part of "The Working Parents Back-to-School Survival Guide," a series of real-life tips for navigating the school season. This as-told-to essay is based on a conversation with Dr. Dana Ellis Hunnes, a senior clinical inpatient dietitian at Ronald Reagan-UCLA Medical Center.

After a relaxing summer, getting into the rhythm of back-to-school can feel a little overwhelming at first. Especially that first week, everyone's grappling with the same stuff: supply runs, being on time, and after-school events. I try my best not to stress. There are so many things in life you can stress about, but I try to remember that my kid doesn't need me to be perfect.

My husband and I both work and have a son who is 11. One thing that helps us all manage the hectic back-to-school season a little better is simplifying our food choices, like making sure we have easy-to-prep lunches and batch cooking dinners on the weekends. That way, during the week, we're not completely stressed out trying to get dinner on the table or figuring out what to pack for lunch.

My go-to lunches for back-to-school

We eat a plant-based diet. Lunches for school typically consist of a sandwich and a couple of snacks like hummus cups, avocado sushi rolls, or trail mix. For sandwiches, I'll usually make one of the following:

  • Peanut butter and jelly
  • Hummus with some lettuce or onion
  • Avocado toast

Trail mix is usually a mixture of almonds and cashews with maybe a few chocolate chips and craisins (my son hates raisins). It really just depends on what he's in the mood for. Sometimes, we'll just buy the big bag of trail mix at Costco.
We also make sure to always have healthy food available in the house, like fruits that are easy to peel and fruit-and-nut bars that our son can throw in his backpack. We have a lot of healthy grab-and-go options like that. About half the time, he decides he wants to get the school lunch because he's bored of our food and the cafeteria has something more interesting or different. There's always a vegan option like Impossible Burger or Impossible Chicken. Yes, it's processed, but he's doing a pretty good job of making mostly good choices on his own, so I'm OK with it. You can't ask for perfection; perfection is the enemy of good.

My 3 go-to items for back-to-school

  • Bento box lunch containers with three compartments for a sandwich and two snacks, like fruit and a spoonful of hummus and carrots.
  • Large casserole dishes for batch cooking dinners on the weekend. Having reheatable leftovers saves a lot of time, effort, and stress during the week.
  • A Google or Outlook calendar with notification reminders set up for 30 to 60 minutes before an event.

My favorite dinners for batch cooking

For dinners, my husband has a really good tofu and black bean enchilada recipe. It takes a long time to make, so he'll prepare two humongous casserole dishes on the weekends, and then all we have to do throughout the week is pull a portion and reheat it. We also like to use those casserole dishes to make big batches of veggie lasagna, black bean burgers, falafel, and homemade veggie pizza (heavy on the veggies). We'll batch make soups, like chili, too, which are super simple to heat up and put on the table.

The one thing I do spend time making each weeknight is a pretty big salad. It's quick to whip up. I use lettuce and add avocado, red onion, and usually either some fruit, like beets, for a sweet salad, or olives and veggies if I want to make it savory.
The dressing is simple: olive oil with either a little balsamic vinegar for a fruity salad, or red wine vinegar or lemon juice for a savory one, topped off with some pepper. For dessert, we usually have a piece of fruit, a fruit popsicle, or occasionally vegan ice cream.

We pretty much avoid supplements for our son since he has such a varied diet. Every now and again we'll suggest he take a vitamin D or B12 supplement since he eats minimal animal products, but that's about it. On the flip side, if all your family eats are highly processed foods, because that's all you can afford or all you like, then you might want to consider a supplement.

The other thing I can't live without during back-to-school

When I was younger, I used to be able to remember every appointment and activity. As my son gets older, though, and more stuff crops up in life, I definitely need my Google Calendar to track it all. If it's not on the calendar, it's not in my mind, and it may not happen because I'll forget. I also set up notifications that remind me 30 to 60 minutes before an event, like picking my son up from school or sports, to leave enough time to get there without feeling stressed or rushed. If I'm reminded just five minutes before, all of a sudden I go into fight-or-flight mode, so I make sure I give myself at least 30 minutes' notice and have that extra window to calmly wrap up what I'm currently doing.

Ultimately, it's going to take time to get into a rhythm of back-to-school. Just remember to be graceful and forgiving to yourself. Your kid doesn't need you to be perfect; they just need you to be there, present, and able to help them as best you can.

Google's healthcare AI made up a body part — what happens when doctors don't notice?

The Verge

5 hours ago


Scenario: a radiologist is looking at your brain scan and flags an abnormality in the basal ganglia. It's an area of the brain that helps you with motor control, learning, and emotional processing. The name sounds a bit like another part of the brain, the basilar artery, which supplies blood to your brainstem, but the radiologist knows not to confuse them. A stroke or abnormality in one is typically treated very differently than in the other.

Now imagine your doctor is using an AI model to do the reading. The model says you have a problem with your 'basilar ganglia,' conflating the two names into an area of the brain that does not exist. You'd hope your doctor would catch the mistake and double-check the scan. But there's a chance they don't.

Though not in a hospital setting, the 'basilar ganglia' is a real error that was served up by Google's healthcare AI model, Med-Gemini. A 2024 research paper introducing Med-Gemini included the hallucination in a section on head CT scans, and nobody at Google caught it, in either that paper or a blog post announcing it. When Bryan Moore, a board-certified neurologist and researcher with expertise in AI, flagged the mistake, he tells The Verge, the company quietly edited the blog post to fix the error with no public acknowledgement, and the paper remained unchanged. Google calls the incident a simple misspelling of 'basal ganglia.' Some medical professionals say it's a dangerous error and an example of the limitations of healthcare AI.

Med-Gemini is a collection of AI models that can summarize health data, create radiology reports, analyze electronic health records, and more. The pre-print research paper, meant to demonstrate its value to doctors, highlighted a series of abnormalities in scans that radiologists 'missed' but AI caught. One of its examples was that Med-Gemini diagnosed an 'old left basilar ganglia infarct.' But as established, there's no such thing.
Fast-forward about a year, and Med-Gemini's trusted tester program is no longer accepting new entrants, likely meaning that the program is being tested in real-life medical scenarios on a pilot basis. It's still an early trial, but the stakes of AI errors are getting higher. Med-Gemini isn't the only model making them, and it's not clear how doctors should respond.

'What you're talking about is super dangerous,' Maulin Shah, chief medical information officer at Providence, a healthcare system serving 51 hospitals and more than 1,000 clinics, tells The Verge. He added, 'Two letters, but it's a big deal.'

In a statement, Google spokesperson Jason Freidenfelds told The Verge that the company partners with the medical community to test its models and that Google is transparent about their limitations. 'Though the system did spot a missed pathology, it used an incorrect term to describe it (basilar instead of basal). That's why we clarified in the blog post,' Freidenfelds said. He added, 'We're continually working to improve our models, rigorously examining an extensive range of performance attributes; see our training and deployment practices for a detailed view into our process.'

On May 6th, 2024, Google debuted its newest suite of healthcare AI models with fanfare. It billed 'Med-Gemini' as a 'leap forward' with 'substantial potential in medicine,' touting its real-world applications in radiology, pathology, dermatology, ophthalmology, and genomics. The models trained on medical images, like chest X-rays, CT slices, pathology slides, and more, using de-identified medical data with text labels, according to a Google blog post. The company said the AI models could 'interpret complex 3D scans, answer clinical questions, and generate state-of-the-art radiology reports,' even going as far as to say they could help predict disease risk via genomic information.

Moore saw the authors' promotions of the paper early on and took a look.
He caught the mistake and was alarmed, flagging the error to Google on LinkedIn and contacting the authors directly to let them know. The company, he saw, quietly switched out evidence of the AI model's error: it updated the debut blog post phrasing from 'basilar ganglia' to 'basal ganglia' with no other differences and no change to the paper itself. In communication viewed by The Verge, Google Health employees responded to Moore, calling the mistake a typo.

In response, Moore publicly called out Google for the quiet edit. This time the company changed the result back with a clarifying caption, writing that ''basilar' is a common mis-transcription of 'basal' that Med-Gemini has learned from the training data, though the meaning of the report is unchanged.' Google acknowledged the issue in a public LinkedIn comment, again downplaying the issue as a 'misspelling.' 'Thank you for noting this!' the company said. 'We've updated the blog post figure to show the original model output, and agree it is important to showcase how the model actually operates.'

As of this article's publication, the research paper itself still contains the error with no updates or acknowledgement.

Whether it's a typo, a hallucination, or both, errors like these raise much larger questions about the standards healthcare AI should be held to, and when it will be ready to be released into public-facing use cases.

'The problem with these typos or other hallucinations is I don't trust our humans to review them, or certainly not at every level,' Shah tells The Verge. 'These things propagate. We found in one of our analyses of a tool that somebody had written a note with an incorrect pathologic assessment — pathology was positive for cancer, they put negative (inadvertently) … But now the AI is reading all those notes and propagating it, and propagating it, and making decisions off that bad data.'
Errors with Google's healthcare models have persisted. Two months ago, Google debuted MedGemma, a newer and more advanced healthcare model that specializes in AI-based radiology results, and medical professionals found that if they phrased questions to the model differently, answers varied and could lead to inaccurate outputs.

In one example, Dr. Judy Gichoya, an associate professor in the department of radiology and informatics at Emory University School of Medicine, asked MedGemma about a problem with a patient's rib X-ray with a lot of specifics ('Here is an X-ray of a patient [age] [gender]. What do you see in the X-ray?'), and the model correctly diagnosed the issue. When the system was shown the same image with a simpler question ('What do you see in the X-ray?'), the AI said there weren't any issues at all. 'The X-ray shows a normal adult chest,' MedGemma wrote.

In another example, Gichoya asked MedGemma about an X-ray showing pneumoperitoneum, or gas under the diaphragm. The first time, the system answered correctly. But with slightly different query wording, the AI hallucinated multiple types of diagnoses.

'The question is, are we going to actually question the AI or not?' Shah says. Even if an AI system is listening to a doctor-patient conversation to generate clinical notes, or translating a doctor's own shorthand, he says, those uses carry hallucination risks that could lead to even more dangers, because medical professionals could be less likely to double-check the AI-generated text, especially since it's often accurate. 'If I write 'ASA 325 mg qd,' it should change it to 'Take an aspirin every day, 325 milligrams,' or something that a patient can understand,' Shah says. 'You do that enough times, you stop reading the patient part. So if it now hallucinates — if it thinks the ASA is the anesthesia standard assessment … you're not going to catch it.'
Shah says he's hoping the industry moves toward augmenting healthcare professionals rather than replacing clinical aspects of their work. He's also looking for real-time hallucination detection in the AI industry: for instance, one AI model checking another for hallucination risk and either not showing those parts to the end user or flagging them with a warning.

'In healthcare, 'confabulation' happens in dementia and in alcoholism where you just make stuff up that sounds really accurate — so you don't realize someone has dementia because they're making it up and it sounds right, and then you really listen and you're like, 'Wait, that's not right' — that's exactly what these things are doing,' Shah says. 'So we have these confabulation alerts in our system that we put in where we're using AI.'

Gichoya, who leads Emory's Healthcare AI Innovation and Translational Informatics lab, says she's seen newer versions of Med-Gemini hallucinate in research environments, just like most large-scale AI healthcare models. 'Their nature is that [they] tend to make up things, and it doesn't say 'I don't know,' which is a big, big problem for high-stakes domains like medicine,' Gichoya says. She added, 'People are trying to change the workflow of radiologists to come back and say, 'AI will generate the report, then you read the report,' but that report has so many hallucinations, and most of us radiologists would not be able to work like that. And so I see the bar for adoption being much higher, even if people don't realize it.'

Dr. Jonathan Chen, associate professor at the Stanford School of Medicine and the director for medical education in AI, searched for the right adjective, trying out 'treacherous,' 'dangerous,' and 'precarious,' before settling on how to describe this moment in healthcare AI. 'It's a very weird threshold moment where a lot of these things are being adopted too fast into clinical care,' he says. 'They're really not mature.'
On the 'basilar ganglia' issue, he says, 'Maybe it's a typo, maybe it's a meaningful difference — all of those are very real issues that need to be unpacked.' Some parts of the healthcare industry are desperate for help from AI tools, but the industry needs to have appropriate skepticism before adopting them, Chen says. Perhaps the biggest danger is not that these systems are sometimes wrong; it's how credible and trustworthy they sound when they tell you an obstruction in the 'basilar ganglia' is a real thing, he says.

Plenty of errors slip into human medical notes, but AI can actually exacerbate the problem, thanks to a well-documented phenomenon known as automation bias, where complacency leads people to miss errors in a system that's right most of the time. Even AI checking an AI's work is still imperfect, he says. 'When we deal with medical care, imperfect can feel intolerable.'

'You know the driverless car analogy, 'Hey, it's driven me so well so many times, I'm going to go to sleep at the wheel.' It's like, 'Whoa, whoa, wait a minute, when your or somebody else's life is on the line, maybe that's not the right way to do this,'' Chen says, adding, 'I think there's a lot of help and benefit we get, but also very obvious mistakes will happen that don't need to happen if we approach this in a more deliberate way.'

Requiring AI to work perfectly without human intervention, Chen says, could mean 'we'll never get the benefits out of it that we can use right now. On the other hand, we should hold it to as high a bar as it can achieve. And I think there's still a higher bar it can and should reach for.' Getting second opinions from multiple, real people remains vital. That said, Google's paper had more than 50 authors, and it was reviewed by medical professionals before publication.
It's not clear exactly why none of them caught the error; Google did not directly answer a question about why it slipped through.

Dr. Michael Pencina, chief data scientist at Duke Health, tells The Verge he's 'much more likely to believe' the Med-Gemini error is a hallucination than a typo, adding, 'The question is, again, what are the consequences of it?' The answer, to him, rests in the stakes of making an error, and with healthcare, those stakes are serious. 'The higher-risk the application is and the more autonomous the system is ... the higher the bar for evidence needs to be,' he says. 'And unfortunately we are at a stage in the development of AI that is still very much what I would call the Wild West.'

'In my mind, AI has to have a way higher bar of error than a human,' Providence's Shah says. 'Maybe other people are like, 'If we can get as high as a human, we're good enough.' I don't buy that for a second. Otherwise, I'll just keep my humans doing the work. With humans I know how to go and talk to them and say, 'Hey, let's look at this case together. How could we have done it differently?' What are you going to do when the AI does that?'

Nature Methods Paper Leverages PacBio Sequencing Technology to Develop the Platinum Pedigree Benchmark, a New Standard for Accurate Characterization of Variation in the Human Genome that Improves Training for AI Models

Yahoo

5 hours ago


The most comprehensive, family-based variant dataset ever published will improve variant classification using AI-based tools

MENLO PARK, Calif., Aug. 04, 2025 (GLOBE NEWSWIRE) -- PacBio (NASDAQ: PACB), a leading provider of high-quality, highly accurate sequencing platforms, today announced the results of a study published in Nature Methods describing a new, comprehensive truth set of genomic variation that characterizes both simple and complex variation. These improved benchmarks were used to retrain Google's DeepVariant, a popular AI-based variant calling tool, resulting in a 34% reduction in erroneously called variants.

This resource, the Platinum Pedigree, was built by scientists from PacBio in collaboration with researchers at the University of Washington, the University of Utah, and several other institutions. Combining inheritance-based validation with long-read sequencing, the benchmark accurately characterizes variants even in difficult, repeat-rich regions of the genome, producing the most complete view of validated genetic variation to date.

'Comprehensive benchmarking datasets that include all variant types are foundational to progress in genomics methods development and the application of AI-driven tools, as well as to our understanding of genomic variation for both research and diagnostic purposes,' said Zev Kronenberg, lead author and Senior Manager at PacBio. 'The Platinum Pedigree benchmark doesn't just include simple variants in easy-to-sequence regions; it includes variants from across the entire genome, including regions that were previously excluded from benchmarks due to their complex nature.'

The Platinum Pedigree dataset was developed using deep sequencing from three sequencing platforms across a 28-member, multi-generational family (CEPH-1463).
By tracking the inheritance of genetic variants from parents to multiple children, the study confidently catalogs over 37 Mb of genetic variation segregating within the family, from single nucleotide variants to large structural variants. The dataset introduces the first large pedigree-validated tandem repeat and structural variant truth sets. It also adds more than 200 million bases, extending the benchmark regions to 2.77 Gb, including difficult-to-map areas such as segmental duplications and low-complexity regions.

A Benchmark Built for the Dark Genome

To demonstrate the value of improved benchmarks for AI and ML methods, the researchers retrained Google's DeepVariant, a popular software tool that employs deep learning to identify genetic variants, using the Platinum Pedigree benchmark data. The updated DeepVariant model reduced errors by up to 34% genome-wide, with even larger gains in the most challenging regions of the genome.

'This benchmark pushes accuracy where it matters most,' said Michael Eberle, senior author and Vice President of Computational Biology at PacBio. 'It enables better evaluation of variant calling pipelines and accelerates the development of methods that finally reach the full genome, including some of the complex regions that are important for human health.'

A New Standard for Clinical and Population Genomics

The Platinum Pedigree benchmark is freely available and is already being used by scientists to develop new sequence analysis tools and validate clinical sequencing workflows. It also provides a roadmap for future benchmarking efforts, especially those involving more complete genomes like T2T-CHM13. The full dataset, analysis code, and pipelines are publicly available at:

About the Study

The study, 'The Platinum Pedigree: A long-read benchmark for genetic variants,' was published in Nature Methods on August 4, 2025.
It was led by scientists at PacBio, the University of Washington, and the University of Utah, with support from the NIH and the Howard Hughes Medical Institute.

About PacBio

PacBio (NASDAQ: PACB) is a premier life science technology company that designs, develops, and manufactures advanced sequencing solutions to help scientists and clinical researchers resolve genetically complex problems. Our products and technologies, which include our HiFi long-read sequencing, address solutions across a broad set of research applications, including human germline sequencing, plant and animal sciences, infectious disease and microbiology, oncology, and other emerging applications. For more information, please visit and follow @PacBio. PacBio products are provided for Research Use Only. Not for use in diagnostic procedures.

Forward-Looking Statements

This press release contains 'forward-looking statements' within the meaning of Section 21E of the Securities Exchange Act of 1934, as amended, and the U.S. Private Securities Litigation Reform Act of 1995. All statements other than statements of historical fact are forward-looking statements, including statements relating to the uses, advantages, quality or performance of, or the benefits or expected benefits of using, PacBio products or technologies, including in connection with the Platinum Pedigree dataset, its potential to enable better evaluation of variant calling pipelines and accelerate the development of methods that reach the full genome, and other future events. You should not place undue reliance on forward-looking statements because they are subject to assumptions, risks, and uncertainties that could cause actual outcomes and results to differ materially from currently anticipated results.
These risks include, but are not limited to, rapidly changing technologies and extensive competition in genomic sequencing; unanticipated increases in costs or expenses; and other risks associated with general macroeconomic conditions and geopolitical instability. Additional factors that could materially affect actual results can be found in PacBio's most recent filings with the Securities and Exchange Commission, including PacBio's most recent reports on Forms 8-K, 10-K, and 10-Q, and include those listed under the caption 'Risk Factors.' These forward-looking statements are based on current expectations and speak only as of the date hereof; except as required by law, PacBio disclaims any obligation to revise or update these forward-looking statements to reflect events or circumstances in the future, even if new information becomes available.

Contacts

Investors and Media: Todd Friedman, ir@
Media: ir@
