New GPS-Like System For The Moon Could Be An Astronaut Version Of Waze
GPS-based navigation apps like Google Maps and Waze have rendered nearly every prior form of navigation obsolete. It's difficult to imagine life without these services, but imagine being an astronaut 238,900 miles from the nearest intersection. Spanish tech conglomerate GMV unveiled a GPS-like system for the Moon last week called LUPIN. The European Space Agency-backed project would provide astronauts and rovers with real-time location information, something the crews of NASA's Apollo missions never had.
Space agencies can't simply copy-paste the GPS technology we use on Earth to provide a similar service on the Moon. For one thing, there isn't a fleet of GPS satellites orbiting the Moon. NASA aims to establish a network of communication relay satellites in conjunction with the Artemis program to support a permanent human presence on the lunar surface, and LUPIN would operate using NASA's Lunar Communication Navigation System. GMV validated its system during an 11-day field testing campaign on Fuerteventura, one of the Canary Islands.
Currently, lunar navigation is done using onboard inertial navigation systems, optical cameras and lidar sensors. Vehicles on the surface have to be able to operate independently because their only communications link is with Earth. This isn't ideal because of delays caused by distance and the lack of coverage on the far side of the Moon. LUPIN would free up valuable vehicle computing power for other tasks. Optimistically, the system would be able to provide real-time updates on the lunar terrain to make agencies aware of changing conditions, like moon dust drifts and meteorite impacts.
Despite the financial and technological perils currently facing the Artemis program, NASA is still preparing to send astronauts across the lunar surface in a new rover. The new Lunar Terrain Vehicle will be far more robust than its Apollo-era counterpart. The rovers from the three companies building them will also be required to drive autonomously, which would make LUPIN a vital part of living on the Moon. No mission engineer wants to deal with a rover that rolled over after climbing a boulder that didn't appear in decades-old satellite imagery.
Read the original article on Jalopnik.
Related Articles
Yahoo, 28 minutes ago
We're offloading mental tasks to AI. It could be making us stupid
Koen Van Belle, a test automation engineer who codes for a living, had been using Copilot, an artificial intelligence large language model, for about six months when one day the internet went down. Forced to fall back on his memory and decades of experience, he struggled to remember some of the syntax he coded with. 'I couldn't remember how it works,' Van Belle, who manages a computer programming business in Belgium, told Salon in a video call. 'I became way too reliant on AI … so I had to turn it off and re-learn some skills.'

As a manager in his company, Van Belle oversees the work of a handful of interns each year. Because the company limits the use of AI, the interns had to curb their use as well, he said. But afterward, the amount and quality of their coding dropped drastically, Van Belle said. 'They are able to explain to ChatGPT what they want, it generates something and they hope it works,' Van Belle said. 'When they get into the real world and have to build a new project, they will fail.'

Since AI models like Copilot and ChatGPT came online in 2022, they have exploded in popularity, with one survey conducted in January estimating that more than half of Americans have used Copilot, ChatGPT, Gemini or Claude. Research examining how these programs affect users is limited because they are so new, but some early studies suggest they are already impacting our brains. 'In some sense, these models are like brain control interfaces or implants — they're that powerful,' said Kanaka Rajan, a computational neuroscientist and founding faculty member at the Kempner Institute for the Study of Natural and Artificial Intelligence at Harvard University. 'In some sense, they're changing the input streams to the networks that live in our brains.'
In a February study conducted by researchers from Microsoft and Carnegie Mellon University, groups of people working with data worked more efficiently with the use of generative AI tools like ChatGPT — but used less critical thinking than a comparison group of workers who didn't use these tools. In fact, the more that workers reported trusting AI's ability to perform tasks for them, the more their critical thinking was reduced. Another study, published in 2024, reported that the reduction in critical thinking stemmed from relying on AI to perform a greater proportion of the brain work necessary to perform tasks, a process called cognitive offloading.

Cognitive offloading is something we do every day when we write a shopping list, put an event on the calendar or use a calculator. To reduce our brain's workload, we can 'offload' some of its tasks to technology, which can help us perform more complex tasks. However, it has also been linked in other research to things like having a worse memory. As a review published in March concluded: 'Although laboratory studies have demonstrated that cognitive offloading has benefits for task performance, it is not without costs.' It's handy, for example, to be able to rely on your brain to remember the grocery list in case the list gets lost. So how much cognitive offloading is good for us — and how is AI accelerating those costs?

This concept is not new: The Greek philosopher Socrates was afraid that the invention of writing would make humans dumber because we wouldn't exercise our memory as much. He famously never wrote anything down, though his student, Plato, did. Some argue Socrates was right and the trend is escalating: with each major technological advancement, we increasingly rely on tools outside of ourselves to perform tasks we once accomplished in-house.
Many people no longer perform routine calculations in their head due to the invention of the calculator, and most people use a GPS instead of pulling out a physical map or going off physical landmarks to guide them to their destination. There is no doubt these inventions have made us more efficient, but the concern lies in what happens when we stop flexing the parts of the brain that are responsible for these tasks. Over time, some argue, we might lose those abilities. There is an old ethos of "use it or lose it" that may apply to cognitive tasks as well.

Despite concerns that calculators would destroy our ability to do math, research has generally shown that there is little difference in performance when calculators are used and when they are not. Some have even criticized the school system for still spending so much time teaching students foundational techniques like the multiplication tables when students can now solve those sorts of problems at the touch of a button, said Matthew Fisher, a researcher at Southern Methodist University. On the other hand, others argue that this part of the curriculum is important because it provides the foundational mathematical building blocks from which students learn other parts of math and science, he explained. As Fisher told Salon in a phone interview: 'If we just totally get rid of that mathematical foundation, our intuition for later mathematical study, as well as just for living in the world and understanding basic relationships, is going to be off.'

Other studies suggest relying on newer forms of technology does influence our brain activity. Research, for example, has found that students' brains were more active when they handwrote information rather than typing it on a keyboard, and when using a pen and paper versus a stylus and a tablet. Research also shows that 'use it or lose it' is somewhat true in the context of the skills we learn.
New neurons are produced in the hippocampus, the part of the brain responsible for learning. However, most of these new cells will die off unless the brain puts effort and focus into learning over a period of time. People can certainly learn from artificial intelligence, but the danger lies in forgoing the learning process to simply regurgitate information that it feeds us.

In 2008, after about two decades of the public internet, The Atlantic published a cover story asking "Is Google making us stupid?" Since then, and with the emergence of smartphones and social media, research has shown that too much time on the internet can lower our ability to concentrate, make us feel isolated and lower our self-esteem. One 2011 review found that people increasingly turn to the internet for difficult questions and are less able to recall the information they found on the internet when using it to answer those questions. Instead, participants had an enhanced ability to recall where they found it. 'The internet has become a primary form of external or transactive memory, where information is stored collectively outside ourselves,' the authors concluded.

In 2021, Fisher co-authored research that found people who used internet searches more had an inflated sense of their own knowledge, reporting exaggerated claims about things they read on the internet compared to a control group who learned things without it. He termed this phenomenon the 'Google effect.' 'What we seem to have a hard time doing is differentiating where our internally mastered knowledge stops and where the knowledge we can just look up but feels a lot like our knowledge begins,' Fisher said.

Many argue that AI takes this even further and cuts out a critical part of our imaginative process. In an opinion piece for Inside Higher Education, John Warner wrote that overrelying on ChatGPT for written tasks 'risks derailing the important exploration of an idea that happens when we write.'
'This is particularly true in school contexts, when the learning that happens inside the student is far more important than the finished product they produce on a given assignment,' Warner wrote.

Much of the energy dedicated to understanding how AI affects our brains has been focused on adolescents, because younger generations use these tools more and may also be more vulnerable to changes that occur because their brains are still developing. One 2023 study, for example, found junior high school students who used AI more had less of an ability to adapt to new social situations. Another 2023 paper found that students who more heavily relied on AI to answer multiple choice questions summarizing a reading excerpt scored lower than those who relied on their memory alone, said study author Qirui Ju, a researcher at Duke University. 'Writing things down is helping you to really understand the material,' Ju told Salon in a phone interview. 'But if you replace that process with AI, even if you write higher quality stuff with less typos and more coherent sentences, it replaces the learning process so that the learning quality is lower.'

To get a better idea of what is happening in people's brains when they use large language models, researchers at the Massachusetts Institute of Technology connected 32-channel electroencephalograms to three groups of college-age students who were all answering the same writing prompts: One group used ChatGPT, another used Google and the third group simply used their own brains. Although the study was small, with just 55 participants, its results suggest large language models could affect our memory, attention and creativity, said Nataliya Kos'myna, the leader of the 'Your Brain on LLM' project and a research scientist at the MIT Media Lab.
After writing the essay, 85% of the group using Google and the group using their brains could recall a quote from their writing, compared to only 20% of those who used large language models, Kos'myna said. Furthermore, 16% of people using AI said they didn't even recognize their essay as their own after completing it, compared to 0% of students in the other groups, she added. Overall, there was less brain activity and interconnectivity in the group that used ChatGPT compared to the groups that used Google or their brains only. Specifically, activity in the regions of the brain corresponding to language processing, imagination and creative writing was reduced in students using large language models compared to students in other groups, Kos'myna said.

The research team also performed another analysis in which students first used their brains for the tasks before switching to performing the same task with the large language models, and vice versa. Those who used their brains first and then tried their hand at the task with the assistance of AI appeared to perform better and had the aforementioned areas of their brains activated. But the same was not true for the group that used AI first and then went on to try it with just their brains, Kos'myna said. 'It looks like the large language models did not necessarily help you and provide any additional interconnectivity in the brain,' Kos'myna told Salon in a video call. 'However, there is potential … that if you actually use your brain and then rework the task when being exposed to the tool, it might be beneficial.'

Whether AI hinders or promotes our capacity for learning may depend more on how we use it than whether we use it. In other words, it is not AI that is the problem, but our overreliance on it.
Van Belle, in Belgium, now uses large language models to write social media posts for his company because he doesn't feel like that is where his skills are most refined and the process can be very time-consuming otherwise. 'I would like to think that I would be able to make a fairly decent LinkedIn post by myself, but it would take me an extra amount of time,' he said. 'That is time that I don't want to waste on something I don't really care about.' These days, he sees AI as a tool, which it can be — as long as we don't offload too much of our brain power on it. 'We've been on this steady march now for thousands of years and it feels like we are at the culmination of deciding what is left for us to know and for us to do,' Fisher said. 'It raises real questions about how best to balance technology and get the most out of it without sacrificing these essentially human things.'


Forbes, 39 minutes ago
3 New Studies Remind Us Eating Well Is About More Than Just Weight
Healthy foods may matter more than weight alone. If you've ever made a genuine, perhaps painstaking, effort to eat healthier, only to find that your weight doesn't budge, it's easy to feel like you're failing, or like your body isn't behaving as it should. But a few new studies remind us that this isn't always true.

New research published in the European Journal of Preventive Cardiology by a team at Ben-Gurion and Harvard Universities followed more than 700 adults with abdominal obesity who committed to different types of healthy eating—low-fat, low-carb, Mediterranean, and green-Mediterranean—for up to two years. Nearly a third of them didn't lose weight, and some even gained weight. But their health improved in meaningful ways.

Perhaps not surprisingly, the people who did lose weight saw the most change in their heart and metabolic stats: each kilogram lost was linked to a 1.44% increase in HDL cholesterol (the good kind), a 1.37% reduction in triglycerides (blood fats), a 2.46% drop in insulin, a 2.79% drop in leptin (the hormone signaling hunger), as well as reductions in blood pressure, liver fat, and liver enzymes. But the good news for those of us with more stubborn scales was that in people whose weight didn't change (who tended to be older adults and women), the researchers also measured higher HDL cholesterol, lower levels of leptin, and a reduction in visceral fat (the type that surrounds organs and increases disease risk). These are not meaningless changes—they can reduce long-term risk for heart disease, diabetes, and other chronic conditions.

Also revealing: when the team looked into the biology behind these patterns, they identified 12 DNA methylation sites that predicted long-term weight-loss outcomes. These sites may help explain why two people can follow the same diet with different results.
'We have been conditioned to equate weight loss with health, and weight loss-resistant individuals are often labeled as failures,' said lead author and Harvard Chan School postdoctoral researcher Anat Yaskolka Meir in a statement. 'Our findings reframe how we define clinical success. People who do not lose weight can improve their metabolism and reduce their long-term risk for disease. That's a message of hope, not failure.'

This idea—that health and weight loss are not synonymous—echoes across two other new studies, too. (Note that these two were presented at the American Society for Nutrition conference last week and have not yet been published in peer-reviewed journals.) In a massive analysis of nearly 200,000 people over several decades, researchers found that the quality of food mattered more than whether someone followed a low-carbohydrate or low-fat diet. Neither diet was better than the other: Low-carb and low-fat diets both lowered the risk of developing heart disease by about 15% compared to lower-quality diets. The difference came from just that, the quality of the food: eating more whole grains, fruits, vegetables, legumes, and nuts rather than potatoes, refined grains, and saturated fats and proteins from animal-based foods. In other words, whether your diet has more fat or fewer carbs may be less important than whether you're eating real food vs. processed foods.

A third study focused simply on…beans. Researchers found that a daily serving of black beans or chickpeas significantly lowered cholesterol and inflammation in people with pre-diabetes over just 12 weeks. While this one only looked at people with pre-diabetes, plenty of earlier research has shown health benefits of eating beans for people without it.

The new studies should bring some hope to those of us who were raised to treat diet success like a numbers game, with weight the only outcome that matters.
The reality is that in many cases, the body is doing far more behind the scenes than we know.


The Hill, 43 minutes ago
Trump's palace coup leaves NASA in limbo
When President-elect Donald Trump nominated Jared Isaacman to become NASA administrator, it seemed like a brilliant choice. A business entrepreneur and private astronaut, Isaacman was just the man to revamp NASA and make it into a catalyst for taking humanity to the moon, Mars and beyond. Isaacman sailed through the confirmation process in the Senate Commerce Committee, chaired by Sen. Ted Cruz (R-Texas), by a vote of 19 to 9. He was poised to be confirmed by the full Senate when something so bizarre happened that it beggars belief. The White House suddenly, and with no clear explanation, pulled Isaacman's nomination. After months of a confirmation process, NASA was back to square one in getting a new leader.

Ars Technica's Eric Berger offered an explanation as to why. 'One mark against Isaacman is that he had recently donated money to Democrats,' he wrote. 'He also indicated opposition to some of the White House's proposed cuts to NASA's science budget.' But these facts were well known even before Trump nominated Isaacman. Trump himself, before he ran for president as a Republican, donated to Democrats and was close friends with Bill and Hillary Clinton. Berger goes on to say that a source told the publication that, 'with Musk's exit, his opponents within the administration sought to punish him by killing Isaacman's nomination.' The idea that Isaacman's nomination was deep-sixed because of Musk runs contrary to the public praise that the president has given the billionaire rocket and electric car entrepreneur.

Trump was uncharacteristically terse in his own social media post. 'After a thorough review of prior associations, I am hereby withdrawing the nomination of Jared Isaacman to head NASA,' he wrote. 'I will soon announce a new nominee who will be mission aligned, and put America First in Space. Thank you for your attention to this matter!'
CNN reports that Isaacman's ouster was the result of a palace coup, noting that a source said, 'Musk's exit left room for a faction of people in Trump's inner circle, particularly Sergio Gor, the longtime Trump supporter and director of the White House Presidential Personnel Office, to advocate for installing a different nominee.' The motive seems to be discontent about the outsized influence Musk has had on the White House and a desire to take him down a peg or two.

Isaacman was profoundly gracious, stating in part, 'I am incredibly grateful to President Trump @POTUS, the Senate and all those who supported me throughout this journey. The past six months have been enlightening and, honestly, a bit thrilling. I have gained a much deeper appreciation for the complexities of government and the weight our political leaders carry.'

The idea that a man like Isaacman, well respected by the aerospace community and predicted to sail through a confirmation vote in the full Senate, could be taken down by an obscure bureaucrat's White House intrigue, motivated by petty spite, is mind-boggling. Even Sen. Mark Kelly (D-Ariz.), who has not been fond of Trump's space policy, was appalled. He posted on his X account that Isaacman 'ran into the kind of politics that is damaging our country.' 'Republicans and Democrats supported him as the right guy at the right time for the top job at NASA, but it wasn't enough.'

NASA is in for months more of turmoil and uncertainty as the nomination process gets reset and starts grinding its way through the Senate. The draconian, truncated budget proposal certainly isn't helping, either. Congress, which had been supportive of Trump's space policy, is not likely to be pleased by the president's high-handed shivving of his own nominee. Whoever Trump chooses to replace Isaacman as NASA administrator nominee, no matter how qualified, should face some very direct questioning.
Trump's NASA budget proposal should be dead on arrival, which, considering its cuts to science and technology, is not necessarily a bad thing. China must be looking with glee at the spectacle of NASA mired in political wrangling, a leadership vacuum and budget uncertainty. Beijing has its own space ambitions, including a planned crewed lunar landing by 2030. It's possible that the Chinese will steal a march on NASA, with all the damage that would do to America's standing in the world. It didn't have to be this way. Isaacman could be settling in as NASA administrator, deploying his business acumen and vision to lead the space agency to its greatest achievements. Instead, America's space effort has received a self-inflicted blow from which it will be long in recovering.

Mark R. Whittington, who writes frequently about space policy, has published a political study of space exploration entitled 'Why is It So Hard to Go Back to the Moon?' as well as 'The Moon, Mars and Beyond,' and, most recently, 'Why is America Going Back to the Moon?' He blogs at Curmudgeons Corner.