
Dogs could help predict spread of Valley fever in humans: Study
California scientists have found that dogs may help predict the spread of a dangerous fungus that has surged in recent years, driven in part by climate change.
Drought conditions across the Golden State have been ramping up the dispersal of a soil-dwelling fungus called Coccidioides, which causes the flu-like disease known as coccidioidomycosis, or 'Valley fever.'
The disease, which can cause grave or even deadly complications, has risen sharply among California residents over the past two decades: Reported cases tripled from 2014 to 2018 and again from 2018 to 2022.
Valley fever was previously concentrated in parts of Arizona and California's lower San Joaquin Valley. Rather than passing from person to person, the disease develops from the direct inhalation of these fungal spores.
But Valley fever is also common in animals, particularly dogs that dig in the dirt, according to researchers from the University of California, who published a study on Thursday in the Journal of Infectious Diseases.
The scientists assessed nearly 835,000 blood antibody tests taken from dogs nationwide between 2012 and 2022 — and found that 40 percent tested positive for the disease.
'Dogs are sentinels for human infections,' lead author Jane Sykes, a professor of small animal internal medicine at the UC Davis School of Veterinary Medicine, said in a statement.
'They can help us understand not just the epidemiology of the disease but they're also models to help us understand the disease in people,' she added.
Along with colleagues at UC Berkeley, Sykes mapped positive results by location and found that the presence of Valley fever in dogs surged from just 2.4 percent of U.S. counties in 2012 to 12.4 percent in 2022.
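The county-level trend the researchers describe, with the share of U.S. counties recording at least one positive canine test rising from 2.4 percent in 2012 to 12.4 percent in 2022, boils down to a simple aggregation of test records by year and county. A minimal sketch of that calculation (the field names and toy data below are illustrative, not from the study's actual dataset):

```python
from collections import defaultdict

def pct_counties_positive(tests, total_counties):
    """Percent of tracked counties with at least one positive canine
    serology test, per year. `tests` holds (year, county, result)
    tuples; this schema is hypothetical, not the study's."""
    positives_by_year = defaultdict(set)
    for year, county, result in tests:
        if result == "positive":
            positives_by_year[year].add(county)
    return {year: round(100 * len(counties) / total_counties, 1)
            for year, counties in sorted(positives_by_year.items())}

# Toy data: 3 positive counties out of 10 tracked in 2012, 5 in 2022
toy_tests = [
    (2012, "Maricopa, AZ", "positive"),
    (2012, "Kern, CA", "positive"),
    (2012, "Pima, AZ", "positive"),
    (2012, "Cook, IL", "negative"),
    (2022, "Maricopa, AZ", "positive"),
    (2022, "Kern, CA", "positive"),
    (2022, "Pima, AZ", "positive"),
    (2022, "Washoe, NV", "positive"),
    (2022, "Salt Lake, UT", "positive"),
]
print(pct_counties_positive(toy_tests, total_counties=10))
# {2012: 30.0, 2022: 50.0}
```

The per-county grouping matters: it measures geographic spread rather than raw case counts, which is what lets the authors distinguish an expanding range from simply more testing in already endemic areas.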
'We were also finding cases in states where Valley fever is not considered endemic,' Sykes said. 'We should be closely watching those states because there could be under-recognition of the emerging fungal disease in humans.'
The geographic spread of cases, the authors explained, cannot be attributed to dogs visiting other states, since the animals travel far less frequently than humans do. Dog cases were also correlated with human Valley fever 'hot spots,' per the study.
Arizona was responsible for 91.5 percent of positive tests, followed by California at 3.7 percent; Nevada, Utah, Colorado, New Mexico and Texas at 2.6 percent combined; and Washington, Oregon and Idaho at 0.6 percent combined.
The remaining states reported far fewer positive results, with 1.3 percent combined, the researchers found.
The authors also determined that breeds that tend to dig, including terriers and other medium-to-large dogs, are more likely to contract Valley fever. Dogs also exhibit some of the same symptoms of the disease that humans experience, including a cough and lung infection.
The fungus can spread to the bones, brain and skin, may require lifelong anti-fungal injections, and can potentially result in death, the scientists warned.
By learning more about Valley fever in dogs, Sykes suggested, researchers could develop new tests or routes for treatment in humans — potentially preventing misdiagnosed or undiagnosed disease.
Related Articles


New York Post
3 hours ago
Salto the squirrelly robot could be the future of space exploration — and challenging rescue operations here on Earth: researchers
Engineers developing space exploration robots drew inspiration from the common squirrel for their latest cutting-edge bot design, which they say could also help in search and rescue missions during disasters.

Scientists at the University of California, Berkeley, created a robot capable of imitating the furry woodland creatures' ability to hop and land on a narrow target, like squirrels do when they leap from branch to branch, according to a study.

'We've been inspired by squirrels,' study co-author and Berkeley grad Justin Yim told Science News Explores on Tuesday.

'Squirrels are nature's best athletes,' added Robert Full, one of the study's senior authors and a professor of integrative biology at UC Berkeley. 'The way that they can maneuver and escape is unbelievable.'

The researchers studied how the bushy-tailed acrobats leap – and more importantly land – and applied that knowledge while building the one-legged robot they named Salto.

'Based on studies of the biomechanics of squirrel leaps and landings, they have designed a hopping robot that can stick a landing on a narrow perch,' the university said in a press release.

Salto is designed to mimic the acrobatic parkour skills of a squirrel and land with pinpoint precision. The pogo-stick-like bot's name stands for Saltatorial Agile Locomotion on Terrain Obstacles, a nod to the skill borrowed from the nut-loving rodents. 'Saltatorial' is the scientific word for animals such as kangaroos, grasshoppers and rabbits that have evolved to be natural leapers.

Researchers drew inspiration from squirrels when building Salto, a tiny one-legged robot. Sebastian Lee (top image) and Justin Yim (bottom)

Salto will be able to not only explore low-gravity objects in space, researchers said, but also help people trapped in disasters here on Earth.

'The robots we have now are OK, but how do you take it to the next level? How do you get robots to navigate a challenging environment in a disaster where you have pipes and beams and wires? Squirrels could do that, no problem. Robots can't do that,' Full said.

'For example, in a disaster scenario, where people might be trapped under rubble, robots might be really useful at finding the people in a way that is not dangerous to rescuers and might even be faster than rescuers could have done unaided,' Yim, who now works at the University of Illinois Urbana-Champaign, told Core77.

A video produced by Berkeley shows Salto in action, crouching down before it leaps from one metal dowel to another, wrapping its claw-like foot around the dowel as it sticks the landing.

Salto the squirrelly robot uses its flywheel to correct its balance. UC Berkeley

Researchers first started working on the spindly robot in 2016 and have made several tweaks over the years to improve its balance. Salto successfully leapt from one pipe to another 25 out of 30 times, but its landings could be better, according to researchers. 'There's lots of room for improvement,' Yim told Science News Explores.

Researchers said once Salto is perfected, it could explore space and rescue people here on Earth from disasters. Justin Yim, UIUC

Engineers could refine Salto's claw, he said, so it has a firmer grasp, the way a squirrel grips a tree branch with its toes. The researchers' goal is to get Salto to hop the length of a football field and land on an area as tiny as a dime.

Once perfected, the rodent-inspired robot could explore Enceladus, a moon of Saturn where the gravity is one-eightieth that of Earth, the researchers said.


USA Today
16 hours ago
Check out this interactive map of the early universe, considered the largest ever created
A team of astronomers has put together the largest, most detailed map of the universe ever created – and you can explore it now.

The interactive online map, created using data from NASA's James Webb Space Telescope, details some 800,000 galaxies across a vast cosmic distance – which in astronomy amounts to peering back in time. In fact, some of the galaxies are so far away, they appear as they existed not long after the Big Bang.

Depicting a section of the universe known as the COSMOS-Web field, the new map is far more expansive than even the iconic Hubble Ultra Deep Field, a view of 10,000 galaxies NASA released in 2004. Spanning nearly all of cosmic time, the new map has the potential to challenge existing notions of the infant universe, the astronomers who created it said in a press release. The best part? The interactive map is available for the public to use.

See interactive map of the universe

A team of international scientists who are part of the Cosmic Evolution Survey (COSMOS) program created and released the map of the universe Thursday, June 5. Compiled from more than 10,000 images of COSMOS-Web – the largest observing program of the James Webb Space Telescope's first year in orbit – the map covers about three times as much sky as the moon takes up when viewed from Earth. That makes it the largest contiguous image available from Webb, according to the Rochester Institute of Technology, whose Jeyhan Kartaltepe is a lead researcher on the project.

An intricate astral tapestry, the map gives stargazers digital views of the ancient cosmos in unprecedented detail and breadth. Scrolling and zooming in can take users some 13.5 billion years back in time, when the universe was in its infancy and stars, galaxies and black holes were still forming.

'If you had a printout of the Hubble Ultra Deep Field on a standard piece of paper, our image would be slightly larger than a 13-foot by 13-foot-wide mural, at the same depth,' Caitlin Casey, a physicist at the University of California, Santa Barbara and co-lead for the COSMOS project, said in a statement. 'It's really strikingly large.'

Explore the interactive map here.

NASA's Webb telescope gathers data for online map

Using its powerful resolution and infrared capabilities, the James Webb Space Telescope observed a region of space known as the COSMOS-Web field, which scientists have been surveying for years. The raw data from the COSMOS field observations was made publicly available once it was collected by Webb, but that didn't mean it was easily accessible. That's why the COSMOS project spent two years creating the map from Webb's raw data to make it more digestible for amateur astronomers, researchers and even the general public.

'In releasing the data to the public, the hope is that other astronomers from all over the world will use it to, among other things, further refine our understanding of how the early universe was populated and how everything evolved to the present day,' according to a statement from UC Santa Barbara.

What is the James Webb Space Telescope?

The James Webb Space Telescope, which launched in 2021, far surpasses the abilities of the Hubble Space Telescope, launched 35 years ago in 1990. Orbiting the sun rather than Earth, Webb is outfitted with a gold-coated mirror and powerful infrared instruments to observe the cosmos like no instrument before. Since reaching the cosmos, Webb has not only facilitated countless scientific breakthroughs in astrophysics, but has also produced gorgeous images of planets and other celestial objects, including star-forming regions.

In March, NASA also deployed into orbit its SPHEREx telescope to collect data on more than 450 million galaxies. Scientists say the SPHEREx observatory will be able to get a wider view of the galaxy – identifying objects of scientific interest that telescopes like Hubble and Webb can then study up close. SPHEREx became operational in May, constantly snapping images of the cosmos.

Eric Lagatta is the Space Connect reporter for the USA TODAY Network. Reach him at elagatta@


Scientific American
20 hours ago
Brain Implant Lets Man with ALS Speak and Sing with His 'Real Voice'
A man with a severe speech disability is able to speak expressively and sing using a brain implant that translates his neural activity into words almost instantly. The device conveys changes of tone when he asks questions, emphasizes the words of his choice and allows him to hum a string of notes in three pitches.

The system — known as a brain–computer interface (BCI) — used artificial intelligence (AI) to decode the participant's electrical brain activity as he attempted to speak. The device is the first to reproduce not only a person's intended words but also features of natural speech such as tone, pitch and emphasis, which help to express meaning and emotion. In a study, a synthetic voice that mimicked the participant's own spoke his words within 10 milliseconds of the neural activity that signalled his intention to speak.

The system, described today in Nature, marks a significant improvement over earlier BCI models, which streamed speech within three seconds or produced it only after users finished miming an entire sentence.

'This is the holy grail in speech BCIs,' says Christian Herff, a computational neuroscientist at Maastricht University in the Netherlands, who was not involved in the study. 'This is now real, spontaneous, continuous speech.'

Real-time decoder

The study participant, a 45-year-old man, lost his ability to speak clearly after developing amyotrophic lateral sclerosis, a form of motor neuron disease that damages the nerves controlling muscle movements, including those needed for speech. Although he could still make sounds and mouth words, his speech was slow and unclear.

Five years after his symptoms began, the participant underwent surgery to insert 256 silicon electrodes, each 1.5 mm long, in a brain region that controls movement. Study co-author Maitreyee Wairagkar, a neuroscientist at the University of California, Davis, and her colleagues trained deep-learning algorithms to capture the signals in his brain every 10 milliseconds. Their system decodes, in real time, the sounds the man attempts to produce rather than his intended words or the constituent phonemes — the subunits of speech that form spoken words.

'We don't always use words to communicate what we want. We have interjections. We have other expressive vocalizations that are not in the vocabulary,' explains Wairagkar. 'In order to do that, we have adopted this approach, which is completely unrestricted.'

The team also personalized the synthetic voice to sound like the man's own, by training AI algorithms on recordings of interviews he had done before the onset of his disease. The team asked the participant to attempt to make interjections such as 'aah', 'ooh' and 'hmm' and say made-up words. The BCI successfully produced these sounds, showing that it could generate speech without needing a fixed vocabulary.

Freedom of speech

Using the device, the participant spelt out words, responded to open-ended questions and said whatever he wanted, using some words that were not part of the decoder's training data. He told the researchers that listening to the synthetic voice produce his speech made him 'feel happy' and that it felt like his 'real voice'.

In other experiments, the BCI identified whether the participant was attempting to say a sentence as a question or as a statement. The system could also determine when he stressed different words in the same sentence and adjust the tone of his synthetic voice accordingly. 'We are bringing in all these different elements of human speech which are really important,' says Wairagkar.

Previous BCIs could produce only flat, monotone speech. 'This is a bit of a paradigm shift in the sense that it can really lead to a real-life tool,' says Silvia Marchesotti, a neuroengineer at the University of Geneva in Switzerland. The system's features 'would be crucial for adoption for daily use for the patients in the future.'