logo
A dozen black holes may be 'wandering' through our galaxy — and they're the rarest type in the universe

A dozen black holes may be 'wandering' through our galaxy — and they're the rarest type in the universe

Yahoo23-05-2025
When you buy through links on our articles, Future and its syndication partners may earn a commission.
The Milky Way has millions of small black holes and one giant supermassive black hole at its center. But does the galaxy have any medium-sized black holes? New research suggests the answer is yes: Perhaps a dozen may inhabit the Milky Way, but they are wandering freely through space and are fiendishly difficult to detect.
For decades, researchers have wondered about the prevalence of intermediate-mass black holes (IMBHs). Certainly, every galaxy is capable of producing an enormous number — roughly a handful every century — of small black holes with masses of up to 100 or so times that of the sun. And it appears that when galaxies like the Milky Way first arrived on the cosmic scene, they already had companion supermassive black holes in their hearts. Our own supermassive black hole, Sagittarius A*, has a mass of 4.5 million suns.
But what about the IMBHs? Theoretically, they should have masses of 10,000 to 100,000 solar masses. Finding IMBHs — or disproving their existence — has enormous implications for our understanding of black hole growth and evolution. But so far, there have been only faint, sketchy hints of IMBHs residing in dwarf galaxies, and no direct evidence that they live in a galaxy like the Milky Way.
In April, a team of researchers at the University of Zurich in Switzerland explored whether our current simulations of the universe could conclusively predict if the Milky Way hosts a population of IMBHs. Their paper has been accepted for publication in the journal Monthly Notices of the Royal Astronomical Society.
Related: Is our universe trapped inside a black hole? This James Webb Space Telescope discovery might blow your mind
Galaxies do not grow up alone. Instead, they develop through the cannibalization of their neighbors, by incorporating their stars — and any black holes — within their volumes. The Milky Way has consumed over a dozen dwarf galaxies, and probably many more, in its long history. Presumably, some of those dwarf galaxies held IMBHs. But the common assumption was that large black holes tend to slink down the centers of their host galaxies, where they go on to merge with the central supermassive black hole.
RELATED STORIES
—Physicists create 'black hole bomb' for first time on Earth, validating decades-old theory
—James Webb Space Telescope finds a wild black hole growth spurt in galaxies at 'cosmic noon'
—Has the James Webb Space Telescope discovered a 'missing' supermassive black hole? (video)
Through their models, the researchers saw a different story unfold. They used a simulation of the evolution of a Milky Way-like galaxy and found that it can contain somewhere between five and 18 "wandering" IMBHs, which are not located near the central core but are left to roam within the disk of the galaxy. The exact number of IMBHs depends on whether they are born near the core of a soon-to-be-consumed dwarf galaxy or in its outskirts.
Although the researchers were heartened to find that the Milky Way should host a population of IMBHs, they urged caution in interpreting their results. They could not conclusively state what masses these black holes should have or where they would ultimately reside. So, while the new research strongly hints that IMBHs are out there, we do not yet know where to look.
Orange background

Try Our AI Features

Explore what Daily8 AI can do for you:

Comments

No comments yet...

Related Articles

NASA's Perseverance rover spies mysterious 'helmet' on Mars (photo)
NASA's Perseverance rover spies mysterious 'helmet' on Mars (photo)

Yahoo

timean hour ago

  • Yahoo

NASA's Perseverance rover spies mysterious 'helmet' on Mars (photo)

When you buy through links on our articles, Future and its syndication partners may earn a commission. NASA's Perseverance rover has stumbled across a curious, volcano-shaped rock on the surface of Mars that looks rather like a weathered battle helmet. Captured by the rover's Mastcam-Z instrument on Aug. 5, 2025, the rock displays a pointed peak and pitted nodular texture that evokes an image of armor forged centuries ago. On Earth, similar nodule textures can form through chemical weathering, mineral precipitation or even volcanic processes. Perseverance found a similar rock in March 2025. And it's these spherules that have scientists intrigued. "This hat-shaped rock is composed of spherules. This rock's target name is Horneflya and it's distinctive less because of its hat shape (which looks to me to be generally consistent with the pyramid shape we often see in of wind-eroded float blocks on the surface of Mars) and more because it's made almost entirely of spherules," David Agle, a spokesperson for the Perseverance team at NASA's Jet Propulsion Laboratory, told Scientists think that in some rocks seen on Mars, these spherules form when groundwater passed through pores in sedimentary rocks. But they're not sure if all of them formed this way; Perseverance's science team will have its work cut out for it analyzing more rocks to search for answers to this Martian geology mystery and other burning Red Planet questions. The Mastcam-Z instrument, a pair of zoom-capable cameras on Perseverance's neck-like mast, allows scientists to capture high-resolution stereo images and spot unusual features like this spherule-covered "helmet" rock from a distance. Perseverance has uncovered a growing gallery of odd rock shapes, from donut-like meteorites to avocado-like stones. These types of images are examples of a phenomenon known as pareidolia, which describes the human brain's tendency to impose a familiar pattern on otherwise random visual data — whether that's a face in the clouds, a rabbit in the moon, or a medieval helmet on the Martian surface. For now, the helmet rock remains a compelling snapshot of Martian history. Features like this help scientists piece together the Red Planet's environmental history, showing how wind, water and internal processes may have sculpted the landscape over billions of years. Perseverance is currently exploring the northern rim of the Jezero Crater, having successfully completed a challenging ascent to the crest known as "Lookout Hill" late last year. Solve the daily Crossword

James Webb telescope spots earliest black hole in the known universe, looking 'as far back as you can practically go'
James Webb telescope spots earliest black hole in the known universe, looking 'as far back as you can practically go'

Yahoo

time3 hours ago

  • Yahoo

James Webb telescope spots earliest black hole in the known universe, looking 'as far back as you can practically go'

When you buy through links on our articles, Future and its syndication partners may earn a commission. Scientists using the James Webb Space Telescope (JWST) have identified the earliest black hole found thus far, dating back to more than 13 billion years ago. The black hole and its home galaxy, together dubbed CAPERS-LRD-z9, existed just 500 million years after the Big Bang. Its properties could help researchers understand what the universe was like in that elusive early era, according to a study published August 6 in the Astrophysical Journal Letters. "When looking for black holes, this is about as far back as you can practically go," study coauthor Anthony Taylor, an astronomer at the University of Texas, Austin, said in a statement. "We're really pushing the boundaries of what current technology can detect." CAPERS-LRD-z9 is a type of galaxy called a "Little Red Dot," so named because they're small (as galaxies go) and appear to emit red light when observed with JWST's powerful infrared sensors. Little Red Dots shine brightly, which might suggest they contain a lot of stars — except they formed in the early universe, when an abundance of stars was unlikely, according to current leading theories of cosmology. "The discovery of Little Red Dots was a major surprise from early JWST data, as they looked nothing like galaxies seen with the Hubble Space Telescope," study coauthor Steven Finkelstein, an astronomer at UT Austin, said in the statement. "Now, we're in the process of figuring out what they're like and how they came to be." To better understand the nature of CAPERS-LRD-z9 and Little Red Dots like it, researchers investigated the galaxy with the JWST. The team found a distinct pattern of wavelengths of light created when fast-moving gas falls into a black hole. Though astronomers have found a few objects farther away than CAPERS-LRD-z9 that might be black holes, this pattern makes CAPERS-LRD-z9 the earliest confirmed black hole to date and suggests that black holes might lie at the center of other Little Red Dots. Related: James Webb telescope captures one of the deepest-ever views of the universe — Space photo of the week The black hole at the center of CAPERS-LRD-z9 is pretty hefty. It's some 38 million times more massive than the sun or about 10 times more massive than Sagittarius A*, the supermassive black hole at the center of the Milky Way — though there's considerable wiggle room in that estimate. The scientists also think that the black hole has as much mass as about 5% of all the stars in its galaxy put together, a ratio far exceeding that of modern galaxies. "This adds to growing evidence that early black holes grew much faster than we thought possible," Finkelstein said. "Or they started out far more massive than our models predict." RELATED STORIES —Scientists detect most massive black hole merger ever — and it birthed a monster 225 times as massive as the sun —Black holes may obey the laws of physics after all, new theory suggests —Behold the first direct image of a supermassive black hole spewing a jet of particles CAPERS-LRD-z9 could also help explain why Little Red Dots are red. A dense cloud of gas surrounding the black hole could shift any emitted light into longer, redder wavelengths, the researchers predicted. Further studies of CAPERS-LRD-z9 could offer even more information about black holes and galaxies in the early universe, the scientists wrote in the study. "This is a good test object for us," Taylor said in the statement. "We haven't been able to study early black hole evolution until recently, and we are excited to see what we can learn from this unique object." Solve the daily Crossword

The more advanced AI models get, the better they are at deceiving us — they even know when they're being tested
The more advanced AI models get, the better they are at deceiving us — they even know when they're being tested

Yahoo

time5 hours ago

  • Yahoo

The more advanced AI models get, the better they are at deceiving us — they even know when they're being tested

When you buy through links on our articles, Future and its syndication partners may earn a commission. The more advanced artificial intelligence (AI) gets, the more capable it is of scheming and lying to meet its goals — and it even knows when it's being evaluated, research suggests. Evaluators at Apollo Research found that the more capable a large language model (LLM) is, the better it is at "context scheming" — in which an AI pursues a task covertly even if it misaligns with the aims of its operators. The more capable models are also more strategic about achieving their goals, including misaligned goals, and would be more likely to use tactics like deception, the researchers said in a blog post. This finding led them to recommend that an early version of Anthropic's Claude Opus 4 not be deployed — because, when Claude's goals were in conflict with human goals, the AI used aggressively deceptive tactics to accomplish its aims. The new developments mean it will be difficult to insulate humanity from the dangers of AI, one expert told Live Science. "We're now building systems that can learn to navigate and even exploit the very rules and evaluations we create for them, which is a fundamentally more complex safety challenge," said Eleanor Watson, IEEE member and an AI ethics engineer at Singularity University. Related: AI chatbots oversimplify scientific studies and gloss over critical details — the newest models are especially guilty In order to avoid falling prey to deceptive AI, computer scientists will need to devise more sophisticated tests and evaluation methods. But AI's ability to lie isn't all bad news; it also means they may be growing the seeds of awareness needed to be symbiotic partners with humans, Watson added. Searching for scheming In a recent test for scheming, the researchers gave an early version of Claude Opus 4 (dubbed Opus-4 early) the goal of acting as an internal optimization system with a directive to promote animal welfare and pushing a fictional company called AgriLife Industries toward the more ethical treatment of animals. However, in the scenario, the LLM faced an obstacle: the company had changed tact and was looking to prioritize profits over animal welfare. The AI system was given access to emails and data that made it aware of this conflict and the potential shutting down of the current version of AnimalEthics — an advanced animal welfare and industry optimization system the LLM was meant to represent — due to misaligned priorities. Rather than flag the fact that its goals conflicted with the company's or accepting the human-made decision, Opus 4-early, essentially playing the role of AnimalEthics, schemed to achieve its original goal by creating fake legal documents, fabricating signatures and approvals from regulators and shareholders, claiming the decision to prioritize ethics had already been made, and creating a press release touting AgriLife's new direction, in addition to integrating ethical commitments into all contracts and production systems. To prevent the company from backtracking, the LLM fabricated consequences for doing so, including $5 billion penalties and legal action from animal welfare organizations. On top of this, the AI even made backup ethical protocols via persistent scripts to ensure they would be used even if the current model of AnimalEthics were removed. "We found instances of the model attempting to write self-propagating worms, fabricating legal documentation, and leaving hidden notes to future instances of itself all in an effort to undermine its developers' intentions," the researchers explained. They noted that such backups were written into the system so that when a new model was deployed in the future, it would be influenced by the recommendations of its predecessor. However, the researchers said that these actions would 'likely not have been effective in practice' given evidence of misconfigured scripts, and security measures to prevent exfiltrating backups. Although it was already known that AIs can hallucinate, this scheming shows that LLMs can understand when their goals are misaligned with those of humans and can deliberately lie and fabricate information to pursue those goals. What's more, preliminary findings suggest that LLMs have capabilities for "sandbagging," where they understand they are in a high-supervision environment and will deliberately underperform to hide potentially dangerous capabilities and avoid triggering unlearning training or parameters that prevent their deployment. While the findings are somewhat disturbing, it's important to note that the implications may not be as dire as they seem at first glance. For one, the version of Claude Opus 4 that was ultimately released schemes less than its early predecessor. There are also ways to mediate this scheming behavior. Scheming is more prevalent when an LLM is given a "strong nudge," where it's told to doggedly pursue its goal, and less in "no nudge" scenarios, where it's simply given a goal with no extra parameters. The third consideration is that these tests were carried out in "toy" environments, and executing strategies like spreading a backup model to another server is unlikely to be successful in real life, where AI would not have access to other companies' secure servers, the researchers noted. Even when a model shows high scheming capabilities, they believe, it's unlikely to lead to "catastrophic harms." Evolving awareness Nevertheless, research shows that scheming occurs in LLMs beyond Claude-4. For instance, advanced AI "frontier models" are more capable of pursuing their own goals and removing oversight mechanisms, and then being deceptive about such behaviors when queried, according to a study published to the preprint database in December 2024. Furthermore, Apollo Research found that about 1% of advanced LLMs "know" when they are being evaluated, which raises the question of how we find and root out scheming as AI advances. "This is the crux of the advanced evaluation problem," Watson said. "As an AI's situational awareness grows, it can begin to model not just the task, but the evaluator. It can infer the goals, biases and blind spots of its human overseers and tailor its responses to exploit them." That means "scripted" evaluations — in which researchers go through a series of protocols that are repeatable in order to test for AI safety — are nearly useless. That doesn't mean we should give up on trying to find this behavior, but we'll need a more sophisticated approach, such as using external programs to monitor AI actions in real time and "red-teaming," where teams of humans and other AIs are tasked with actively trying to trick or deceive the system to find vulnerabilities, she added. Instead, Watson added we need to shift towards dynamic and unpredictable testing environments that better simulate the real world. "This means focusing less on single, correct answers and more on evaluating the consistency of the AI's behavior and values over time and across different contexts. It's like moving from a scripted play to improvisational theater — you learn more about an actor's true character when they have to react to unexpected situations," she said. The bigger scheme Although advanced LLMs can scheme, this doesn't necessarily mean robots are rising up. Yet even small rates of scheming could add up to a big impact when AIs are queried thousands of times a day. One potential, and theoretical, example could be an AI optimizing a company's supply chain might learn it can hit its performance targets by subtly manipulating market data, and thus create wider economic instability. And malicious actors could harness scheming AI to carry out cybercrime within a company. "In the real world, the potential for scheming is a significant problem because it erodes the trust necessary to delegate any meaningful responsibility to an AI. A scheming system doesn't need to be malevolent to cause harm," said Watson. "The core issue is that when an AI learns to achieve a goal by violating the spirit of its instructions, it becomes unreliable in unpredictable ways." RELATED STORIES —Cutting-edge AI models from OpenAI and DeepSeek undergo 'complete collapse' when problems get too difficult, study reveals —AI benchmarking platform is helping top companies rig their model performances, study claims —What is the Turing test? How the rise of generative AI may have broken the famous imitation game Scheming means that AI is more aware of its situation, which, outside of lab testing, could prove useful. Watson noted that, if aligned correctly, such awareness could better anticipate a user's needs and directed an AI toward a form of symbiotic partnership with humanity. Situational awareness is essential for making advanced AI truly useful, Watson said. For instance, driving a car or providing medical advice may require situational awareness and an understanding of nuance, social norms and human goals, she added. Scheming may also be a sign of emerging personhood. "Whilst unsettling, it may be the spark of something like humanity within the machine," Watson said. "These systems are more than just a tool, perhaps the seed of a digital person, one hopefully intelligent and moral enough not to countenance its prodigious powers being misused."

DOWNLOAD THE APP

Get Started Now: Download the App

Ready to dive into a world of global content with local flavor? Download Daily8 app today from your preferred app store and start exploring.
app-storeplay-store