The Milky Way Has a Mysterious 'Broken Bone'


WIRED, May 26, 2025

May 26, 2025, 5:00 AM. Galactic bones, filaments of radio-wave-emitting particles, run through our galaxy, and one of them has a fracture. New analysis suggests a collision with a neutron star may have caused it.
A photo of the galactic bone known as the Snake. Photograph: NASA/CXC/Northwestern University
If you look at the Milky Way through a powerful telescope, you'll notice that close to the center of the galaxy there are elongated filaments that seem to outline its spiral shape. Scientists have a nickname for these structures: 'galactic bones.' Recently, astronomers found that one of the Milky Way's bones is 'fractured,' and they believe they've now found a possible culprit: a neutron star that may have collided with it.
According to NASA, these bones are huge elongated formations of energized particles that spiral along magnetic fields running through the galaxy. The particles release radio waves, and so are detectable using radio telescopes.
Scientists have found several such bones in the galaxy, but one of the most striking is called G359.13142-0.20005, also known as "the Snake." It is a 230-light-year-long filament, one of the brightest yet found, and it appears to have a fracture. One of the first explanations was that some as yet undetected body had disturbed the filament.
A study by Harvard University, published in the journal Monthly Notices of the Royal Astronomical Society, set out to test this hypothesis. The research team found signs of a pulsar, a neutron star spinning at high speed, in the same region as the broken bone. These stars are extremely dense, and are the small remnants left after the supernova explosion of a massive star.
Using NASA's Chandra X-ray Observatory, which orbits Earth, along with the MeerKAT telescope array in South Africa and the Very Large Array in New Mexico—two systems that detect radio waves—scientists found what appear to be traces of a pulsar in the filament. Based on data from these observatories, they estimate that this pulsar struck the bone at a speed of between roughly 1.6 million and 3.2 million kilometers per hour (1 to 2 million miles per hour). The suspected collision is thought to have distorted the bone's magnetic field, warping its radio signal.
The structure G359.13, with the fracture visible on its right-hand side. Photograph: NASA/CXC/Northwestern University
In the image above, provided by NASA, the Snake can be seen along with a body that appears to be interacting with the structure near the middle of its length, possibly the aforementioned neutron star.
Pulsars are a type of neutron star that, in addition to being extremely compact, rotate at high speed and produce strong magnetic fields. No current instrument can observe them directly because of their small size and distance, but radio telescopes can detect the electromagnetic waves they emit, which can even be converted into sound.
This story originally appeared on WIRED en Español and has been translated from Spanish.


Related Articles

Building Blocks Of Agentic Systems: What Does It Look Like?
Forbes

We know that AI agents are going to redefine business at nearly every level, in every vertical, and in every field. But what supports this technology? Why are we now seeing agentic AI explode?

There are a number of fundamental ideas in use by companies and other stakeholders right now. One is the idea that AI can specialize in various tasks or operations. We see this with Claude, and with agents that can use a computer the way humans do. There's also the idea of distillation of systems and ensemble learning, where models interact with each other. And then there's the data side, where companies have to decide how to deploy systems, where to keep the data, and how to aggregate it for use.

At Imagination in Action in April, my colleague Daniela Rus, director of the MIT CSAIL lab, interviewed a number of professionals on their thoughts. Cindy Howson from Thoughtspot, Kevin Shatzkamer of Google Cloud, formerly of Microsoft, and Anshul Ramachandran from Windsurf participated.

In terms of big potential, Howson said the underpinnings were already there, and mentioned the "Internet of AI" as a new paradigm. Shatzkamer talked about productive AI and its capabilities, while noting that although a lot of the technology is here, it's "not stitched together yet." Ramachandran talked about generative models getting good at specialization, and the proliferation of agentic systems. "Even as we are hitting some physical limitations in the real world," he said, "it's going to unlock different frontiers of models, power, technology in general, that will enable a new kind of frontier of applications and ways of thinking about things."

In terms of current business limitations, Howson spoke about getting clean, consistent data, and about moving from structured data to semi-structured data, such as data assets housed in PDFs. "I think many companies have clean, consistent structured data," she said.
"When we talk about semi-structured data, think about the PDFs on your network drives: which employee manual is the right version? It's anyone's guess. … I think some of the data management disciplines that we have applied to structured data, we have not applied to semi-structured, but I think the technology is ready; it is more the people, the mindsets and the processes that are less ready." She also pointed out that 81% of people fail at basic literacy.

The panel also discussed how systems are getting smarter. Ramachandran talked about multistep retrieval, and how systems can search the way humans do, with one search after another, to compound accuracy and produce richer results. Shatzkamer talked about long memory and context windows, and research reasoning capabilities. He also mentioned the future value of quantum computing, and of supervised fine-tuning. "Look where quantum computing is on the near horizon," he said. "I think that's going to be a game changer in terms of AI processing capabilities, right? I think right now we're in a world of more, bigger, faster, and we keep on trying to build as much infrastructure as possible to support the demand. And I think we'll see that trend continue for the foreseeable future."

As for supervised fine-tuning, he had this to say: "As much as we've talked about supervised learning … in the ML world, (in) this new supervised fine-tuning world, (you) can build smaller models with human in the loop in a much more meaningful way."

Ramachandran suggested that generative AI is hitting critical mass, with interesting data that doesn't necessarily need huge LLMs. He gave examples of user behavior stats that can unlock actionable moves for nearly any kind of business, pointing out that you don't need a massive data center or a lot of Nvidia GPUs to do this kind of research. Shatzkamer opined that the open source community did a good job of fostering all of this to maturity.
Howson talked about the cloud becoming decentralized and the "hybrid world" that will result. When Rus asked about the single most interesting emerging tech, the panelists had these responses: Howson brought up agentic analytics, Shatzkamer talked about operational metrics for efficiency, and Ramachandran said he's most interested in robotics and physical AI.

All of this has big ramifications for our new AI world. Stay with me as we see a lot of this continue to evolve as the year moves on.
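The iterative, "search the way humans do" retrieval that Ramachandran describes can be sketched in a few lines. This is a toy illustration only: the corpus, the word-overlap scoring, and the function names are all invented here, and a real system would use an embedding index rather than keyword overlap.

```python
# Toy sketch of multistep retrieval: search, read the best hit, fold its
# content back into the query, and search again for something new.
corpus = {
    "doc1": "agentic ai agents execute multi step tasks using context",
    "doc2": "context windows let models keep long working memory",
    "doc3": "fine tuning builds smaller specialized models",
}

def score(query: str, text: str) -> int:
    # Naive relevance: count the words the query and document share.
    return len(set(query.split()) & set(text.split()))

def multi_step_search(query: str, steps: int = 2) -> list[str]:
    retrieved = []
    for _ in range(steps):
        candidates = [d for d in corpus if d not in retrieved]
        if not candidates:
            break
        best = max(candidates, key=lambda d: score(query, corpus[d]))
        retrieved.append(best)
        # Refine the query with the hit's content, mimicking a follow-up
        # search informed by what the first search turned up.
        query += " " + corpus[best]
    return retrieved

print(multi_step_search("agentic tasks"))  # → ['doc1', 'doc2']
```

The second round finds doc2 only because the first hit injected the word "context" into the query, which is the compounding effect the panel described.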

Future Forecasting The Yearly Path That Will Advance AI To Reach AGI By 2040
Forbes

Future forecasting the yearly path of advancing today's AI to AGI by 2040.

In today's column, I am continuing my special series on the likely pathways that will get us from conventional AI to the avidly sought attainment of AGI (artificial general intelligence). AGI would be a type of AI that is fully on par with human intellect in all respects. I've previously outlined seven major paths that seem to be the most probable routes of advancing AI to reach AGI (see the link here). Here, I undertake an analytically speculative deep dive into one of those paths, namely the year-by-year aspects of the most-expected route, the linear path. Upcoming postings will cover each of the remaining paths. The linear path consists of AI being advanced incrementally, one step at a time, until we arrive at AGI. Let's talk about it.

This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

First, some fundamentals are required to set the stage for this weighty discussion. There is a great deal of research going on to further advance AI. The general goal is to either reach artificial general intelligence (AGI) or maybe even the outstretched possibility of achieving artificial superintelligence (ASI). AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here.

We have not yet attained AGI. In fact, it is unknown whether we will reach AGI at all, or whether AGI will be achievable in decades or perhaps centuries from now.
The AGI attainment dates floating around vary wildly and are unsubstantiated by any credible evidence or ironclad logic. ASI is even more beyond the pale when it comes to where we are currently with conventional AI.

Right now, efforts to forecast when AGI will be attained consist principally of two paths. First, there are highly vocal AI luminaries making individual, brazen predictions. Their headiness makes for outsized media headlines. Those prophecies seem to be coalescing toward the year 2030 as a targeted date for AGI. A somewhat quieter path is the advent of periodic surveys or polls of AI experts. This wisdom-of-the-crowd approach is a form of scientific consensus. As I discuss at the link here, the latest polls seem to suggest that AI experts generally believe we will reach AGI by the year 2040.

Should you be swayed by the AI luminaries or by the AI experts and their scientific consensus? Historically, the use of scientific consensus as a method of understanding scientific postures has been relatively popular and construed as the standard way of doing things. If you rely on an individual scientist, they might have their own quirky view of the matter. The beauty of consensus is that a majority or more of those in a given realm are putting their collective weight behind whatever position is being espoused. The old adage is that two heads are better than one. In the case of scientific consensus, it might be dozens, hundreds, or thousands of heads that are better than one.

For this discussion on the various pathways to AGI, I am going to proceed with the year 2040 as the consensus anticipated target date. Besides the scientific consensus of AI experts, another newer and more expansive approach to gauging when AGI will be achieved is known as AGI convergence-of-evidence or AGI consilience, which I discuss at the link here.
As mentioned, in a previous posting I identified seven major pathways by which AI is going to advance to become AGI (see the link here). The most often presumed path is the incremental progression trail. The AI industry tends to refer to this as the linear path. It is essentially slow and steady. Each of the other remaining major routes involves various twists and turns.

Here's my list of all seven major pathways getting us from contemporary AI to the treasured AGI: You can apply those seven possible pathways to whatever AGI timeline you want to come up with.

Let's undertake a handy divide-and-conquer approach to identify what must presumably happen on a year-by-year basis to get from current AI to AGI. Here's how that goes. We are living in 2025 and somehow are supposed to arrive at AGI by the year 2040. That's essentially fifteen years of elapsed time. In the particular case of the linear path, the key assumption is that AI advances in a stepwise fashion each year. There aren't any sudden breakthroughs or miracles that perchance arise. It is steady work that requires earnestly keeping our nose to the grindstone and getting the job done in those fifteen years ahead.

The idea is to map out the next fifteen years and speculate what will happen with AI in each respective year. This can be done in a forward-looking mode and also a backward-looking mode. The forward-looking mode entails thinking about the progress of AI on a year-by-year basis, starting now and culminating in arriving at AGI in 2040. The backward-looking mode involves starting with 2040 as the deadline for AGI and then working back from that achievement on a year-by-year basis to arrive at the year 2025 (matching AI as it is presently). This combination of forward and backward envisioning is a typical hallmark of futurecasting.

Is this kind of forecast of the future ironclad? Nope.
If anyone could precisely lay out the next fifteen years of what will happen in AI, they would be as clairvoyant as Warren Buffett is at predicting the stock market. Such a person could easily be awarded a Nobel Prize and ought to be one of the richest people ever.

All in all, the strawman that I show here is primarily meant to get the juices flowing on how we can go about future forecasting the state of AI. It is a conjecture. It is speculative. But at least it has a reasonable basis and is not entirely arbitrary or totally artificial. I went ahead and used the fifteen years to reaching AGI in 2040 as an illustrative example. It could be that 2050 is the date for AGI instead, and thus this journey would play out over 25 years rather than fifteen. If 2030 is going to be the AGI arrival year, the pathway would need to be markedly compressed.

I opted to identify AI technological advancements for each of the years and added some brief thoughts on the societal implications too. Here's why: AI ethics and AI law are bound to become increasingly vital and will to some degree foster AI advances, while in other ways possibly dampening them; see my in-depth coverage of such tensions at the link here.

Here then is a strawman futures-forecast year-by-year roadmap from 2025 to 2040 of a linear path getting us to AGI:

Year 2025: AI multi-modal models finally become robust and fully integrated into LLMs. Significant improvements in AI real-time reasoning, sensorimotor integration, and grounded language understanding occur. The use of AI in professional domains such as law, medicine, and the like ratchets up. Regulatory frameworks remain sporadic and generally unadopted.

Year 2026: Agentic AI starts to blossom and becomes practical and widespread. AI systems with memory and planning capabilities achieve competence in open-ended tasks in simulation environments.
Public interest in governing AI increases.

Year 2027: The use of AI large-scale world models spurs substantially improved AI capabilities. AI can now computationally improve from fewer examples via advancements in AI meta-learning. Some of these advances allow AI to be employed in white-collar jobs, causing mild economic displacement, but only to a minor degree.

Year 2028: AI agents have gained wide acceptance and are capable of executing multi-step tasks semi-autonomously in digital and physical domains, including robotics. AI becomes a key element as taught in schools and as used in education, co-teaching jointly with human teachers.

Year 2029: AI is advanced sufficiently to have a generalized understanding of physical causality and real-world constraints through embodied learning. Concerns about AI as a job displacer reach heightened attention.

Year 2030: Self-improving AI systems begin modifying their own code under controlled conditions, improving efficiency without human input. This is an important underpinning. Some claim that AGI is now just a year or two away, but this is premature, and ten more years will first take place.

Year 2031: Hybrid AI, consisting of integrated cognitive architectures unifying symbolic reasoning, neural networks, and probabilistic models, has become the newly accepted approach to AI. Infighting among AI developers as to whether hybrid AI was the way to go has now evaporated. AI-based tutors fully surpass human teachers in personalization and subject mastery, putting human teachers at great job risk.

Year 2032: AI agents achieve human-level performance across most cognitive benchmarks, including abstraction, theory of mind (ToM), and cross-domain learning. This immensely exceeds prior versions of AI that did well on those metrics, but not nearly to this degree. Industries begin to radically restructure and rethink their businesses with an AI-first mindset.
Year 2033: AI scalability alignment protocols improve in terms of human-AI values alignment. This opens the door to faster adoption of AI due to a belief that AI safety is getting stronger. Trust in AI grows, but so does societal dependence on AI.

Year 2034: AI interaction appears to be indistinguishable from human-to-human interaction, even as tested by those who are versed in tricking AI into revealing itself. The role of non-human intelligence, and how AI stretches our understanding of philosophy, religion, and human psychology, has become a high priority.

Year 2035: AI systems exhibit bona fide signs of self-reflection, not just routinized mimicry or parroting. Advances occur in having AI computationally learn from failure across domains and optimize for long-term utility functions. Debates over some form of UBI (universal basic income) lead to various trials of the approach to aid human labor displaced by AI.

Year 2036: AI advancement has led to fluid generalization across a wide swath of domains. Heated arguments take place about whether AGI is emerging; some say it is, and others insist that a scaling wall is about to be hit and that this is the best that AI will be. Nations begin to covet their AI and set up barriers to prevent other nations from stealing or copying the early AGI systems.

Year 2037: Advances in AI showcase human-like situational adaptability and innovation. New inventions and scientific discoveries are being led by AI. Questions arise about whether this pre-AGI has sufficient moral reasoning and human goal alignment.

Year 2038: AI systems now embody persistent identities, seemingly able to reflect on experiences across time. Experts believe we are on the cusp of AI reaching cognitive coherence akin to humans. Worldwide discourse on the legal personhood and rights of AI intensifies.
Year 2039: Some of the last barriers to acceptance of AI as nearing AGI are overcome when AI demonstrates creativity, emotional nuance, and abstract reasoning in diverse contexts. This was one of the last straws on the camel's back. Existential risks and utopian visions fully dominate public apprehensions.

Year 2040: General agreement occurs that AGI has now been attained, though it is still the early days of AGI and some are not yet convinced that AGI has truly been achieved. Society enters a transitional phase: post-scarcity economics, redefinition of human purpose, and consideration of co-evolution with AGI.

Mull over the strawman timeline and consider where you will be and what you will be doing during each of those fifteen years. One viewpoint is that we are all just along for the ride and there isn't much that anyone can individually do. I don't agree with that sentiment. Any of us can make a difference in how AI plays out and what the trajectory and impact of reaching AGI will be. As per the famous words attributed to Abraham Lincoln: "The most reliable way to predict the future is to create it."

Long, dark 'streaks' spotted on Mars aren't what scientists thought
Yahoo

Mysterious dark streaks flowing across Mars's surface may not be the result of running water after all, a new artificial intelligence (AI) analysis suggests.

The streaks, first observed running along Mars's cliffsides and crater walls by NASA's Viking mission in 1976, were long thought by scientists to have formed as a result of the flow of ancient water across the now mostly desiccated planet's surface. But an AI algorithm trained on slope streak observations has revealed a different origin: the streaks are likely formed by wind and dust, not water. The findings, published May 19 in the journal Nature Communications, could have important implications for where humans choose to explore Mars, and where they search for evidence of possible ancient life.

"That's the advantage of this big data approach," study co-author Adomas Valantinas, a planetary scientist at Brown University, said in a statement. "It helps us to rule out some hypotheses from orbit before we send spacecraft to explore."

The sinewy lines are darker than the surrounding Martian ground and extend for hundreds of meters downhill. The shorter-lived of these features are called recurring slope lineae (RSL), and they regularly spring up during Mars's warmer spells. This led some planetary scientists to suggest that seasonal temperature fluctuations could be causing ice or frozen aquifers to melt, or humid air to condense, sending streams of salty water trickling down the planet's craters. If this were true, it would make these regions of particular interest to future Mars missions.

To investigate this, the scientists behind the study trained a machine learning algorithm on confirmed streak sightings before making it scan through 86,000 satellite images to create a map of 500,000 streak features.
"Once we had this global map, we could compare it to databases and catalogs of other things like temperature, wind speed, hydration, rock slide activity and other factors," Bickel said. "Then we could look for correlations over hundreds of thousands of cases to better understand the conditions under which these features form."

Using the map, the scientists found the streaks were most likely to form in places where wind speed and dust deposition were high, suggesting that they came from layers of fine dust sliding off steep slopes. Other studies have pointed to tantalizing evidence of water and even life on Mars. If the study's findings hold up, they could serve as a guide for sifting between the Red Planet's useful leads and its red herrings.
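The pipeline the researchers describe, training a detector on labeled sightings, scanning a much larger image set, then correlating the resulting map with environmental catalogs, can be sketched at toy scale. This is not the study's actual code: the single-number "contrast" feature, the threshold classifier, and the synthetic wind values are all invented for illustration.

```python
# Toy sketch of the big-data approach: "train" a minimal classifier on
# labeled tiles, scan a larger synthetic survey, then correlate the
# detections with an environmental variable. All data is synthetic.
import numpy as np

rng = np.random.default_rng(0)

# Training set: each tile is summarized by one contrast statistic;
# tiles with a streak (label 1) tend to have higher contrast.
contrast = rng.normal(0.0, 1.0, 200)
labels = (contrast + rng.normal(0.0, 0.5, 200) > 0.5).astype(int)

# "Training": place the decision threshold midway between class means.
threshold = (contrast[labels == 1].mean() + contrast[labels == 0].mean()) / 2

# Scan a larger synthetic survey and flag candidate streaks.
survey = rng.normal(0.0, 1.0, 5000)
detections = (survey > threshold).astype(int)

# Correlate detections with a synthetic wind map that is constructed to
# co-vary with tile contrast, so the correlation comes out positive,
# analogous to the wind/dust association reported in the study.
wind = 0.8 * survey + rng.normal(0.0, 0.5, 5000)
r = np.corrcoef(detections, wind)[0, 1]
print(f"{detections.sum()} candidate streaks; correlation with wind = {r:.2f}")
```

The point of the sketch is the shape of the workflow, not the classifier: once detections exist as a map, checking a hypothesis becomes a correlation against another map rather than a spacecraft visit.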
