Alien life might have flourished in 'vacation-style beaches' on Mars, scientists say
Researchers now believe that a huge ocean once covered the planet's north – and that it offered a far more habitable environment for life.
'We're finding places on Mars that used to look like ancient beaches and ancient river deltas,' said Benjamin Cardenas, assistant professor of geology at Penn State and co-author on the new study. 'We found evidence for wind, waves, no shortage of sand — a proper, vacation-style beach.'
The finding comes from data gathered by the Zhurong Mars rover, which China sent to the Red Planet and which landed in 2021.
The rover carried ground-penetrating radar that could probe beneath the Martian surface, using low- and high-frequency channels to image buried rock formations.
Researchers used that data to identify hidden layers of rock beneath the surface that suggest Mars once had an ocean. The radar revealed a layered structure similar to that of beaches on Earth, where sediments carried by waves into a body of water build deposits that slope toward the ocean.
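To give a sense of how such sloping layers are read from radar data: a ground-penetrating radar records the two-way travel time of each reflection, which converts to depth once a wave speed through the sediment is assumed, and comparing depths at nearby points gives the layer's apparent dip. The sketch below is purely illustrative; the wave speed, travel times, and spacing are hypothetical numbers, not values from the study.

```python
import math

# Illustrative only: convert two-way radar travel times to reflector depths,
# then estimate the apparent dip of a buried layer between two rover stops.
v = 0.13                 # assumed radar wave speed in dry sediment, m/ns (hypothetical)
t1, t2 = 120.0, 180.0    # two-way travel times in ns at two points (hypothetical)
dx = 40.0                # horizontal distance between the two points, m (hypothetical)

d1 = v * t1 / 2          # divide by 2: the pulse travels down and back
d2 = v * t2 / 2
dip = math.degrees(math.atan((d2 - d1) / dx))
print(f"depths {d1:.1f} m and {d2:.1f} m -> apparent dip of about {dip:.1f} degrees")
```

Gently and consistently dipping reflectors of this kind, repeated across many radar profiles and all sloping the same way, are the beach-like signature the researchers describe.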
'This stood out to us immediately because it suggests there were waves, which means there was a dynamic interface of air and water,' Cardenas said. 'When we look back at where the earliest life on Earth developed, it was in the interaction between oceans and land, so this is painting a picture of ancient habitable environments, capable of harboring conditions friendly toward microbial life.'
The work adds to evidence that Mars was once far wetter than it is today. Researchers believe that, in addition to hosting a large ocean, the planet stayed warm and wet for perhaps tens of millions of years, during which it may have supported alien life.
'The capabilities of the Zhurong rover have allowed us to understand the geologic history of the planet in an entirely new way,' said Michael Manga, professor of Earth and planetary science at the University of California, Berkeley, and a corresponding author on the paper.
'Its ground-penetrating radar gives us a view of the subsurface of the planet, which allows us to do geology that we could have never done before. All these incredible advancements in technology have made it possible to do basic science that is revealing a trove of new information about Mars.'

Related Articles
Eating this surprising fruit might be the key to getting a better night's sleep
Eating one avocado a day might be the key to a better night's sleep. Adults who included the fruit (and yes, it is a fruit) in their daily diet over the course of just six months reported better-quality sleep than people who ate fewer than two avocados a month, researchers said.

'Sleep is emerging as a key lifestyle factor in heart health, and this study invites us to consider how nutrition — and foods like avocado — can play a role in improving it,' said Dr. Kristina Petersen, an associate professor of nutritional sciences at Penn State University.

The sleep findings were tied to the avocado's beneficial nutrients, including tryptophan, folate, and magnesium. Magnesium is an essential mineral involved in muscle contraction and relaxation, while tryptophan and folate help produce melatonin, the hormone that regulates our sleep cycles.

The conclusions were drawn from an assessment of 969 U.S. adults with larger waistlines: 35 inches or more in women and 40 inches or more in men. The study was supported by the Avocado Nutrition Center, although the organization had no role in data collection, analysis, or interpretation. The researchers had initially set out to study heart health alone, before participants reported sleep-related benefits.

'This was a cardiovascular health trial, making the sleep benefits more credible since they emerged as unexpected secondary findings in a well-designed randomized controlled trial,' Dr. John Saito, a spokesperson for the American Academy of Sleep Medicine and a pulmonologist at Children's Hospital of Orange County, told Verywell this week.

Eating avocados daily was also associated with a better overall diet and lower cholesterol levels, the researchers reported, building on similar research from 2022. Previous studies have linked disrupted sleep to high levels of cholesterol, the waxy, fat-like substance that can lead to heart attack and stroke. Improved heart health can help people sleep better, and getting enough sleep can in turn improve heart health and cut the risk of disease, according to the Centers for Disease Control and Prevention.

The avocado's healthy fat and fiber drove the positive cardiovascular effects. The fat can help reduce cholesterol levels and lower the risk of potentially life-threatening cardiac events, while the fiber is tied to a healthy gut and a reduced risk of death from any cause. Adults should consume at least 25 grams of fiber each day, according to Harvard Medical School.

Of course, healthy fat is still fat, and avocados are packed with calories as well as nutrients: an entire large avocado can add upward of 400 calories to your daily diet, the Cleveland Clinic says. But the fruit is still considered healthy when eaten in moderation, such as added to a morning smoothie or a lunchtime salad. Half an avocado has more potassium than a banana, according to the Harvard T.H. Chan School of Public Health, and potassium helps fend off high blood pressure, which can lead to kidney disease, eye damage, coronary artery disease, and other complications if left untreated.

While the findings of the new study cannot be generalized to all populations, Petersen noted that heart health is influenced by many factors, such as fitness and genetics. 'This is an encouraging step in expanding the science around avocados and the potential benefits of consumption,' she said.
AI-powered brain implant restores speech in paralysis patient after 18 years
Eighteen years after a brainstem stroke left her with near-total paralysis, Ann Johnson heard her voice again, thanks to a brain-computer interface (BCI) that decodes speech directly from brain activity.

Johnson, then 30, was a high school teacher and coach in Saskatchewan, Canada, when the 2005 stroke caused locked-in syndrome, a rare condition in which a person remains conscious but unable to speak or move. Since then, she has communicated using an eye-tracking system at just 14 words per minute, far below the natural conversational pace of about 160 words per minute.

In 2022, she became the third participant in a clinical trial led by researchers at the University of California, Berkeley, and UC San Francisco aimed at restoring speech for people with severe paralysis. The team used a neuroprosthesis that records signals from the speech motor cortex, bypassing damaged neural pathways to produce audible words.

Turning thought into voice

The device relies on an implant placed over the brain's speech production area. When Johnson attempts to speak, the implant detects neural activity and sends the signals to a connected computer. An AI decoder then translates these signals into text, speech, or facial animation on a digital avatar.

Originally, the system used sequence-to-sequence AI models that required an entire sentence before producing output, creating an eight-second delay. In March 2025, the team reported in Nature Neuroscience that they had switched to a streaming architecture, allowing near-real-time translation with a delay of just one second.

To personalize the experience, researchers recreated Johnson's voice from a recording of her 2004 wedding speech. She also selected an avatar to match her appearance, which can mimic facial expressions such as smiling or frowning.

Engineering for everyday use

Lead researchers Gopala Anumanchipalli, an assistant professor of electrical engineering and computer sciences at UC Berkeley; Edward Chang, a neurosurgeon at UCSF; and Berkeley Ph.D. student Kaylo Littlejohn say the goal is to make neuroprostheses 'plug-and-play,' turning them from experimental systems into standard clinical tools.

Future improvements could include wireless implants, eliminating the need for direct computer connections, and photorealistic avatars for more natural interactions. The team envisions digital 'clones' that replicate not just a user's voice but also their conversational style and visual cues.

The breakthrough could help a relatively small but highly vulnerable population, including people who lose the ability to speak due to stroke, ALS, or injury, reclaim faster, more natural communication. Researchers emphasize that the system only works when the participant intentionally tries to speak, preserving user agency and privacy.

For Johnson, the trial was life-changing. 'I want patients to see me and to know their lives are not over now,' she said in a statement to UCSF. She hopes to one day work as a counselor in a rehabilitation center, using a neuroprosthesis to talk with clients.

With latency down to about a second and ongoing advances in AI modeling, the researchers believe practical, real-time speech restoration could arrive within just a few years, reshaping how technology gives voice to those who have lost their own.
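The shift from sentence-level to streaming decoding described above can be illustrated with a short sketch. This is purely hypothetical code, not the team's system: `decode_window` is a stand-in for a trained neural decoder, and the frame rate and window length are assumptions chosen to match the reported one-second latency.

```python
from collections import deque
from typing import Callable, Iterable, List

def streaming_decode(
    frames: Iterable[List[float]],                       # neural feature frames, e.g. 80 per second (assumed)
    decode_window: Callable[[List[List[float]]], List[str]],  # hypothetical trained decoder
    window_frames: int = 80,                             # ~1 s of context, matching the reported delay
) -> Iterable[str]:
    """Emit words incrementally instead of waiting for a full sentence."""
    window: deque = deque(maxlen=window_frames)
    emitted: List[str] = []
    for frame in frames:
        window.append(frame)
        hypothesis = decode_window(list(window))
        # Emit only words that extend the already-spoken prefix, so output
        # already played through the voice synthesizer is never retracted.
        if hypothesis[: len(emitted)] == emitted:
            for word in hypothesis[len(emitted):]:
                emitted.append(word)
                yield word
```

A sequence-to-sequence system would instead buffer every frame and call the decoder once at the end of the utterance, which is where the original eight-second delay came from.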
AI-powered radar can spy on phone calls from 10 feet, exposing new privacy risks
Believe it or not, your phone's tiniest vibrations can reveal your conversations — thanks to AI. A team of computer science researchers at Penn State has developed a startling new way to eavesdrop on phone calls remotely by decoding subtle vibrations emitted by a cellphone's earpiece. Using millimeter-wave radar combined with an AI speech recognition system, their setup can capture and transcribe conversations from up to 10 feet away with about 60% accuracy. This raises significant privacy concerns about the potential misuse of such emerging technologies.

The research builds on a 2022 project in which the team achieved up to 83% accuracy in recognizing 10 predefined words using a similar approach. The new work extends this capability to continuous speech transcription, though the accuracy is lower because of the complexity of decoding noisy radar data.

'When we talk on a cellphone, we tend to ignore the vibrations that come through the earpiece and cause the whole phone to vibrate,' said first author Suryoday Basak, a doctoral candidate in computer science. 'If we capture these same vibrations using remote radars and bring in machine learning to help us learn what is being said, using context clues, we can determine whole conversations. By understanding what is possible, we can help the public be aware of the potential risks.'

The team used a millimeter-wave radar sensor, the same technology employed in self-driving cars, motion detectors, and 5G wireless networks, to measure the tiny surface vibrations generated by speech played through a phone earpiece. To interpret this noisy, low-quality data, they adapted Whisper, an open-source AI speech recognition model developed for clean audio, using a low-rank adaptation machine learning technique. This method allowed them to retrain just 1 percent of Whisper's parameters specifically for radar data, improving transcription results without rebuilding the entire model from scratch.

Radar tech breakthrough

The experimental setup positioned the radar sensor about three meters (10 feet) from the phone to capture the minute vibrations. The data was then fed into the customized AI model, which produced transcriptions with around 60 percent accuracy over a vocabulary of up to 10,000 words. While this is far from perfect, the researchers noted that even partial keyword matches could have serious security implications.

'The result was transcriptions of conversations, with an expectation of some errors, which was a marked improvement from our 2022 version, which outputs only a few words,' said co-author Mahanth Gowda, associate professor of computer science and engineering. 'But even picking up partial matches for speech, such as keywords, are useful in a security context.'

The team compared their approach to lip reading, which typically captures only 30% to 40% of spoken words but can still help people infer conversations when combined with context. Similarly, the radar-AI system's output, though imperfect, can reveal sensitive information when supplemented with prior knowledge or manual correction.

Privacy risks amplified

Basak emphasized the potential privacy risks posed by this emerging technology. 'Similar to how lip readers can use limited information to interpret conversations, the output of our model combined with contextual information can allow us to infer parts of a phone conversation from a few meters away,' he said.
'The goal of our work was to explore whether these tools could potentially be used by bad actors to eavesdrop on phone conversations from a distance. Our findings suggest that this is technically feasible under certain conditions, and we hope this raises public awareness so people can be more mindful during sensitive calls.'

The U.S. National Science Foundation supported the research, and the team stressed that their experiments are intended to highlight possible vulnerabilities before malicious actors exploit them. They envision future work on protective measures to secure personal conversations from this kind of remote surveillance.

As wireless technology and AI evolve rapidly, the study serves as a crucial warning: even the faintest vibrations from your everyday devices can potentially betray your most private words.

The study has been published in the Proceedings of WiSec 2025, the 18th ACM Conference on Security and Privacy in Wireless and Mobile Networks.
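The low-rank adaptation step described in the article follows a now-standard recipe: freeze the pretrained model and train only small adapter matrices attached to its attention layers. The sketch below shows that general technique, not the authors' code; it assumes the Hugging Face transformers and peft libraries, and the checkpoint, rank, and target modules are illustrative choices.

```python
from transformers import WhisperForConditionalGeneration
from peft import LoraConfig, get_peft_model

# Load a pretrained Whisper checkpoint (trained on clean audio).
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")

# Attach low-rank adapters to the attention projections. Only these small
# adapters are trained on the new domain; the original weights stay frozen,
# so roughly 1% of the parameters are updated.
lora_config = LoraConfig(
    r=8,                                  # rank of the adapter matrices (illustrative)
    lora_alpha=16,                        # adapter scaling factor
    target_modules=["q_proj", "v_proj"],  # query/value projections in attention
    lora_dropout=0.05,
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # reports trainable vs. total parameter counts
```

Fine-tuning then proceeds as with any Whisper model, with radar-derived features standing in for clean audio spectrograms; because the frozen backbone keeps its speech knowledge, the adapters only have to learn the mapping from noisy radar input to that existing representation.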