Stopping Norovirus: How UMass Amherst is working to prevent the spread of the dreaded illness
'Some people have called it the nearly perfect pathogen,' explains Associate Professor Matthew Moore with the Food Science Department at UMass Amherst. 'Primarily it's transmitted person to person, but it is also transmitted via foods.'
Moore says 60 percent of all foodborne illness comes from Norovirus. 'It's through food handlers. So it's people shedding the virus.'
And a small amount of the virus packs a powerful punch: Moore says as few as 18 particles of Norovirus are enough to make you feel pretty sick.
Researchers at UMass Amherst are looking for ways to fight back against the virus, including better detection in food and in the environment.
Boston 25 News observed one student using a harmless bacteria to capture Norovirus, enabling it to be detected more easily. Other research in the Food Science Department focuses on better identifying strains of the virus. As Moore explains, 'Knowing what strain of Norovirus it is is really important in identifying an outbreak. It's sort of like if you have a series of bank robberies, you want to know the name of the person and connect them, right?'
Once Norovirus is found, there's the task of getting rid of it. And that's a challenge. 'Not only is it hard to inactivate or destroy the virus, it can also persist on surfaces,' Moore says.
But UMass Amherst researchers are studying that too. 'So we have some interesting projects, not only related to finding really good disinfectants, but also understanding the consequences of not disinfecting properly. If we keep not applying these disinfectants properly, could we be selecting for or creating variants of this virus that are more resistant to those disinfectants?' Moore explained.
Moore says if your home gets hit with Norovirus, you should first clean any contaminated surfaces.
Then, disinfect with bleach, which is considered the most effective disinfectant against Norovirus.
Moore says if you're sick, don't prepare food for others.
But food preparation is out of your hands when it comes to dining at your favorite restaurant. 'You can look up health ratings and health reviews for restaurants, and those can be somewhat of an indicator of how risky it will be to go there,' Moore says. 'But it really is in the hands of the food handlers.'
And speaking of hands, Moore says to ditch the hand sanitizer. Wash your hands with soap and water to scrub out Norovirus. It's advice many people say they're following frequently these days.
Boston 25 News also reported this week on the work being done at UMass to increase food safety and reduce food recalls. You can see that story here.