The International Space Station is too clean, scientists say

Yahoo · 28 February 2025
The International Space Station may be too sterile – and the astronauts on board could benefit from it being a little dirtier, according to a new study.
When astronauts spend time on the ISS, they often experience immune problems, skin disorders and other conditions.
That could be because the station has a much less diverse array of microbes than on Earth, a new study suggests.
Those microbes that are around tend to have been carried there by humans. As such, there might be some benefit from astronauts 'dirtying' the space station with more microbes from nature, researchers suggest.
They compared the effect to gardening, which has been well demonstrated to boost the immune systems of those who do it.
'There's a big difference between exposure to healthy soil from gardening versus stewing in our own filth, which is kind of what happens if we're in a strictly enclosed environment with no ongoing input of those healthy sources of microbes from the outside,' said Rob Knight, from UC San Diego, in a statement.
In the study, scientists worked with astronauts to swab 803 different surfaces on the space station – about 100 times more samples than were taken in previous similar surveys.
Researchers then created 3D maps that showed where the swabs were taken, what microbes they showed, and how they could be interacting with the chemicals found there.
Most of the microbes came from human skin, they found. Cleaning chemicals were also found throughout the station.
They found that the collection of microbes was much less diverse than on Earth, and most closely resembled other highly sterile environments, such as hospitals.
The work is described in a new paper, 'The International Space Station Has a Unique and Extreme Microbial and Chemical Environment Driven by Use Patterns', published in the journal Cell.

Related Articles

Scientists develop brain implant to turn thoughts into speech

UPI · an hour ago

For the first time, scientists have created a brain implant that can "hear" and vocalize words a person is only imagining in their head. The device, developed at Stanford University in California, could help people with severe paralysis communicate more easily, even if they can't move their mouth to try to speak.

"This is the first time we've managed to understand what brain activity looks like when you just think about speaking," Erin Kunz, lead author of the study, published Thursday in the journal Cell, told the Financial Times.

"For people with severe speech and motor impairments, brain-computer interfaces capable of decoding inner speech could help them communicate much more easily and more naturally," said Kunz, a postdoctoral scholar in neurosurgery.

Four people with paralysis from amyotrophic lateral sclerosis or brainstem stroke volunteered for the study. One participant could only communicate by moving his eyes up and down for "yes" and side to side for "no."

Electrode arrays from the BrainGate brain-computer interface were implanted in the brain area that controls speech, called the motor cortex. Participants were then asked to try speaking or to silently imagine certain words. The device picked up brain activity linked to phonemes, the small units that make up speech patterns, and artificial intelligence software stitched them into sentences.

Imagined speech signals were weaker than attempted speech but still accurate enough to reach up to 74% recognition in real time, the research shows.

Senior author Frank Willett, an assistant professor of neurosurgery at Stanford, told the Financial Times the results show that "future systems could restore fluent, rapid and comfortable speech via inner speech alone," with better implants and decoding software.
"For people with paralysis, attempting to speak can be slow and fatiguing and, if the paralysis is partial, it can produce distracting sounds and breath control difficulties," Willett said.

The team also addressed privacy concerns. In one surprising finding, the BCI sometimes picked up words participants weren't told to imagine -- such as numbers they were silently counting. To protect privacy, researchers created a "password" system that blocks the device from decoding unless the user unlocks it. In the study, imagining the phrase "chitty chitty bang bang" worked 98% of the time to prevent unintended decoding.

"This work gives real hope that speech BCIs can one day restore communication that is as fluent, natural and comfortable as conversational speech," Willett said.

More information: Learn more about the technology by reading the full study in the journal Cell.

Copyright © 2025 HealthDay. All rights reserved.

New Tool Guides Blood Cancer Txs For Patients 60+

Medscape · 5 hours ago

TOPLINE:

A novel Comprehensive Health Assessment Risk Model (CHARM) incorporating seven health variables effectively predicts nonrelapse mortality and survival in allogeneic hematopoietic cell transplantation recipients aged at least 60 years. The model stratified 1-year nonrelapse mortality rates from 8.1% to 23.3% across risk groups, outperforming traditional assessment methods.

METHODOLOGY:

  • A multicenter (n = 49), prospective, observational clinical trial enrolled 1105 recipients of allogeneic hematopoietic cell transplantation aged ≥ 60 years (range, 60-82 years) from centers across the US.
  • Researchers analyzed associations between 13 measurements of older adult health and nonrelapse mortality within 1 year to construct a comprehensive health assessment risk model using a multivariate Fine-Gray model and grouped penalized variable selection.
  • Analysis included two machine learning models (Cox and pseudo-value boosting) for comparison, with performance evaluated using area under the curve, bootstrap and cross-validation sampling, decision curve analysis, calibration, and Brier scores.
  • The primary outcome measure was 1-year nonrelapse mortality, defined as death without relapse or progression of the primary hematologic malignancy.

TAKEAWAY:

  • Primary-CHARM identified seven key predictors: higher comorbidity burden, C-reactive protein, weight loss, and age, along with lower albumin, patient-reported performance score, and cognitive score (hazard ratio [HR], 2.72; P < .0001).
  • Patients in low, intermediate, and high CHARM score tertiles showed 1-year nonrelapse mortality rates of 8.1% (95% CI, 5.6-11.1), 12.1% (95% CI, 9.1-15.7), and 23.3% (95% CI, 19.0-27.7), respectively.
  • Overall survival at 1 year was 71.7% (95% CI, 68.2-75.1), with CHARM scores stratifying survival to 81.2%, 73.8%, and 59.6% for low-, intermediate-, and high-risk tertiles.
  • CHARM demonstrated higher net benefit than the HCT-comorbidity index across a wide range of threshold probabilities for nonrelapse mortality.
IN PRACTICE:

'The CHARM should improve decision-making [and] selection of the best transplant strategy by weighing risks vs benefits, allow calibration of data across trials and institutions, and ensure that appropriate older patients are not excluded from curative-intent allo-HCT,' wrote the authors of the study.

SOURCE:

The study was led by Mohamed L. Sorror, PhD, Clinical Research Division, Fred Hutchinson Cancer Center in Seattle, Washington. It was published online in Blood Advances.

LIMITATIONS:

The researchers acknowledged that the cross-validation bias-corrected area under the curve of 0.591 for primary-CHARM was modest, indicating room for improvement in predictive accuracy. The study's large sample size requirement and declining nonrelapse mortality made a parallel external validation cohort impractical. Additionally, the contribution from underrepresented minority groups was modest despite broad eligibility and support for three languages. The study was conducted only in US centers, potentially limiting its global applicability.

DISCLOSURES:

The study received support from grants U10HL069294 and U24HL138660 to the Blood and Marrow Transplant Clinical Trials Network from the National Heart, Lung, and Blood Institute and the National Cancer Institute. Sorror reported receiving consultancy fees and honoraria from JAZZ Pharmaceuticals for educational talks and research funding from BlueNote. Additional disclosures are noted in the original article. This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication.

New Brain Device Is First to Read Out Inner Speech

Scientific American · 4 days ago

After a brain stem stroke left him almost entirely paralyzed in the 1990s, French journalist Jean-Dominique Bauby wrote a book about his experiences—letter by letter, blinking his left eye in response to a helper who repeatedly recited the alphabet. Today people with similar conditions often have far more communication options. Some devices, for example, track eye movements or other small muscle twitches to let users select words from a screen. And on the cutting edge of this field, neuroscientists have more recently developed brain implants that can turn neural signals directly into whole words.

These brain-computer interfaces (BCIs) largely require users to physically attempt to speak, however—and that can be a slow and tiring process. But now a new development in neural prosthetics changes that, allowing users to communicate by simply thinking what they want to say.

The new system relies on much of the same technology as the more common 'attempted speech' devices. Both use sensors implanted in a part of the brain called the motor cortex, which sends motion commands to the vocal tract. The brain activation detected by these sensors is then fed into a machine-learning model to interpret which brain signals correspond to which sounds for an individual user. It then uses those data to predict which word the user is attempting to say.

But the motor cortex doesn't only light up when we attempt to speak; it's also involved, to a lesser extent, in imagined speech. The researchers took advantage of this to develop their 'inner speech' decoding device and published the results on Thursday in Cell.
The team studied three people with amyotrophic lateral sclerosis (ALS) and one with a brain stem stroke, all of whom had previously had the sensors implanted. Using this new 'inner speech' system, the participants needed only to think a sentence they wanted to say and it would appear on a screen in real time. While previous inner speech decoders were limited to only a handful of words, the new device allowed participants to draw from a dictionary of 125,000 words.

'As researchers, our goal is to find a system that is comfortable [for the user] and ideally reaches a naturalistic ability,' says lead author Erin Kunz, a postdoctoral researcher who is developing neural prostheses at Stanford University. Previous research found that 'physically attempting to speak was tiring and that there were inherent speed limitations with it, too,' she says. Attempted speech devices such as the one used in the study require users to inhale as if they are actually saying the words. But because of impaired breathing, many users need multiple breaths to complete a single word with that method. Attempting to speak can also produce distracting noises and facial expressions that users find undesirable.

With the new technology, the study's participants could communicate at a comfortable conversational rate of about 120 to 150 words per minute, with no more effort than it took to think of what they wanted to say.

Like most BCIs that translate brain activation into speech, the new technology only works if people are able to convert the general idea of what they want to say into a plan for how to say it. Alexander Huth, who researches BCIs at the University of California, Berkeley, and wasn't involved in the new study, explains that in typical speech, 'you start with an idea of what you want to say. That idea gets translated into a plan for how to move your [vocal] articulators. That plan gets sent to the actual muscles, and then they carry it out.'
But in many cases, people with impaired speech aren't able to complete that first step. 'This technology only works in cases where the 'idea to plan' part is functional but the 'plan to movement' part is broken'—a collection of conditions called dysarthria—Huth says.

According to Kunz, the four research participants are eager about the new technology. 'Largely, [there was] a lot of excitement about potentially being able to communicate fast again,' she says—adding that one participant was particularly thrilled by his newfound potential to interrupt a conversation—something he couldn't do with the slower pace of an attempted speech device.

To ensure private thoughts remained private, the researchers implemented a code phrase: 'chitty chitty bang bang.' When internally spoken by participants, this would prompt the BCI to start or stop transcribing.

Brain-reading implants inevitably raise concerns about mental privacy. For now, Huth isn't concerned about the technology being misused or developed recklessly, speaking to the integrity of the research groups involved in neural prosthetics research. 'I think they're doing great work; they're led by doctors; they're very patient-focused. A lot of what they do is really trying to solve problems for the patients,' he says, 'even when those problems aren't necessarily things that we might think of,' such as being able to interrupt a conversation or 'making a voice that sounds more like them.'

For Kunz, this research is particularly close to home. 'My father actually had ALS and lost the ability to speak,' she says, adding that this is why she got into her field of research. 'I kind of became his own personal speech translator toward the end of his life since I was kind of the only one that could understand him. That's why I personally know the importance and the impact this sort of research can have.' The contribution and willingness of the research participants are crucial in studies like this, Kunz notes.
'The participants that we have are truly incredible individuals who volunteered to be in the study not necessarily to get a benefit to themselves but to help develop this technology for people with paralysis down the line. And I think that they deserve all the credit in the world for that.'
