This Could Be The First Witnessed Birth of a Supermassive Black Hole

Yahoo · 4 days ago
Supermassive black holes represent one of the biggest head-scratchers in the Universe. These behemoths weighing millions to tens of billions of Suns lurk at the center of just about every galaxy – the gravitational heart around which stars and planets and clouds of shimmering gas whirl in an intricate dance.
What we don't know is how these giants are born. Do they grow slowly, a gradual accumulation of mass from a seed the size of a single star's core? Or do they form directly from the collapse of giant clouds of material in space so dense that they cannot fight the inward pull of their own gravity?
A new discovery might – at least partially – give us an answer.
A pair of galaxies whose light has traveled for 8.3 billion years contains what astronomers believe is a supermassive black hole in the act of forming. If this is the case, it is the first time we have seen this process in action, a vital and unprecedented piece of the supermassive black hole formation puzzle.
Related: Earliest Black Hole Ever Seen Discovered at The Dawn of Time
"We think we're witnessing the birth of a supermassive black hole – something that has never been seen before," says astronomer Pieter van Dokkum of Yale University in the US.
This 'smoking gun' was found in a pair of galaxies in the act of colliding, oriented so that their configuration resembles a figure-eight, or infinity symbol. The researchers have named the object the Infinity galaxy, and it really is an oddity discovered in a search for oddities in the JWST COSMOS-Web survey of the Universe.
Each of the galaxy's two lobes has its own brightly glowing nucleus, with a supermassive black hole lurking therein. However, where the two lobes overlap and intersect, a third brightly glowing blob can be seen.
The researchers carefully studied this blob in multiple wavelengths, and were stunned to find the signatures of a third supermassive black hole, smack bang between the two lobes of the Infinity galaxy.
"We asked ourselves: how can we make sense of this?" van Dokkum says.
A careful, closer analysis of the material around the anomalous black hole suggested that it was newly formed, sitting in an extended region of hot, shocked gas. This suggests a new mechanism for black hole formation, based on the direct collapse model.
"In this case, two disk galaxies collided, forming the ring structures of stars that we see. During the collision, the gas within these two galaxies shocks and compresses. This compression might just be enough to have formed a dense knot that then collapsed into a black hole," van Dokkum explains.
"While such collisions are rare events, similarly extreme gas densities are thought to have been quite common at early cosmic epochs, when galaxies began forming."
Related: A Supermassive Black Hole Is on a Collision Course With The Milky Way
Galactic collisions have long been proposed as one mechanism whereby supermassive black holes can grow. When two galaxies merge, their central black holes are inexorably drawn together by gravity, eventually colliding and merging to form a bigger black hole.
Such galaxy collisions are also thought to be a crucial part of how galaxies grew in the early Universe; the Milky Way is thought to have undergone multiple such mergers.
This hypothesis is sound, but it doesn't answer the question about how the less-massive, but still supermassive, black holes at the centers of these galaxies formed. A growing body of evidence supports the direct collapse model, but we've not seen the process in action.
The Infinity galaxy's light has been traveling for a little more than half of the Universe's 13.8-billion-year history, but the early Universe was full of dense clouds of hydrogen gas that could have rammed into each other, creating dense, shocked knots of material in much the same way. The discovery is therefore a compelling argument for the direct collapse model operating at the very beginning of the Universe.
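To make that look-back arithmetic concrete, here is a minimal sketch using only the round numbers quoted above (a 13.8-billion-year-old Universe and 8.3 billion years of light-travel time); the values are approximate and for illustration only.

```python
# Quick look-back arithmetic using the round numbers quoted in the article.
AGE_OF_UNIVERSE_GYR = 13.8     # approximate current age of the Universe
LIGHT_TRAVEL_TIME_GYR = 8.3    # how long the Infinity galaxy's light has traveled

fraction_of_history = LIGHT_TRAVEL_TIME_GYR / AGE_OF_UNIVERSE_GYR
age_at_emission_gyr = AGE_OF_UNIVERSE_GYR - LIGHT_TRAVEL_TIME_GYR

print(f"light traveled for {fraction_of_history:.0%} of cosmic history")        # ~60%
print(f"Universe was ~{age_at_emission_gyr:.1f} Gyr old when the light left")   # ~5.5 Gyr
```

In other words, we are seeing the Infinity galaxy as it was when the Universe was roughly 5.5 billion years old; the point is that the same shock-and-collapse conditions would have been far more common in the much denser gas of the Universe's first billion years.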
"This is as close to a smoking gun as we're likely ever going to get," van Dokkum says.
Two papers outlining the discovery have been submitted to The Astrophysical Journal Letters, and both are available as preprints on arXiv.

Related Articles

Scientists Witnessed the Birth of a Monster—8.3 Billion Years After It Happened

Yahoo · 9 hours ago

Here's what you'll learn when you read this story:

  • Observations from NASA's James Webb Space Telescope and other telescopes have shown what appears to be a supermassive black hole forming right between two merging galaxies.
  • There have been multiple hypotheses surrounding supermassive black hole formation, but these observations support the hypothesis that these behemoths are the result of immense clouds of shocked and compressed gas collapsing in on themselves.
  • Future observations with Webb may finally confirm how supermassive black holes come into being.

Supermassive black holes lurk in almost every large galaxy, including our own, but their origins are more elusive. Did they appear after the demise of gargantuan stars in the early universe? Do they form from smaller black holes that merge? Is it possible they emerge from monstrous clouds of star-forming gas that collapse in on themselves? That last hypothesis might be onto something.

The pair of galaxies merging into what is now known as the Infinity Galaxy (so named because of its uncanny resemblance to the infinity symbol) is 8.3 billion light-years away, meaning we are seeing events unfold as they did that many billions of years ago. Between them is what astronomers now believe to be a supermassive black hole (SMBH) in its infancy. Whatever the object is, it is accreting tons upon tons of material, and supermassive black holes are known for their voracious appetites. Observations of this galaxy and the thing spawning in the middle might be the first hard evidence of a supermassive black hole being born.

Each of the galaxies that collided to form the Infinity Galaxy has its own glowing nucleus containing a supermassive black hole, but the one supposedly forming in between is unrelated to either of them—its source is apparently something else. The mystery convinced astronomers Pieter van Dokkum of Yale University and Gabriel Brammer of the University of Copenhagen, who discovered the nascent black hole while analyzing images from the COSMOS-Web survey of NASA's James Webb Space Telescope, that what they were seeing was no ordinary star.

Van Dokkum and Brammer backed up their findings by poring over observations from the W.M. Keck Observatory and the Chandra X-ray Observatory, along with archival data from the National Radio Astronomy Observatory's Very Large Array. It was already strange that this black hole was not hiding in the nucleus of a galaxy, never mind that it was at the beginning of its life. Shrouded by clouds of gas between the two galaxies was most likely a supermassive black hole that formed from gas that had been shocked and compressed during the galactic merger, then collapsed in on itself. Witnessing one being born is unprecedented.

'The gas spans the entire width of the system and was likely shocked and compressed at the collision site,' they and their colleagues said in a study soon to be published in the Astrophysical Journal Letters. 'We suggest that the SMBH formed within this gas in the immediate aftermath of the collision, when it was dense and highly turbulent.'

There are two main hypotheses for how supermassive black holes form. The 'light seeds' theory holds that supermassive black holes are the product of black holes left behind when massive stars go supernova, collapsing in on themselves in violent explosions. These smaller black holes then merge into larger ones.
The problem is that not only would it take an extremely long time for a supermassive black hole to form this way, the theory also cannot explain the existence of supermassive black holes, already observed by Webb, that were around when the universe was still young.

The 'heavy seeds' hypothesis suggests that the immense clouds of gas that collapse usually form stars, but sometimes the gas collapses directly into a supermassive black hole. This is the theory that seems to align with the more recent observations. A few hundred million years after the universe dawned, clouds of gas in the middle of what would become galaxies collapsed. Hiding in those gaseous clouds were the seeds of supermassive black holes, whose powerful outflows and magnetic storms caused surrounding gas to collapse into multitudes of new stars. This explains the high populations of stars around galactic nuclei.

'If our proposed scenario is confirmed, the Infinity galaxy provides an empirical demonstration that direct-collapse formation of SMBHs can happen in the right circumstances—something that has so far only been seen in simulations and through indirect observations,' Brammer and van Dokkum said.

More observations with Webb and other telescopes could finally reveal what a supermassive black hole's baby pictures look like.
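For a sense of why timing is the sticking point for 'light seeds', here is a back-of-the-envelope sketch assuming continuous Eddington-limited growth at a typical ~10% radiative efficiency; the seed and target masses are illustrative assumptions, not figures from the study.

```python
import math

# Rough 'light seeds' timing argument: grow a stellar-remnant black hole
# by continuous Eddington-limited accretion and see how long it takes.
# All numbers below are illustrative assumptions, not values from the study.

EDDINGTON_TIMESCALE_GYR = 0.45   # sigma_T * c / (4 * pi * G * m_p), ~450 Myr
EFFICIENCY = 0.1                 # assumed radiative efficiency

# e-folding time of the black hole mass at the Eddington limit
e_fold_gyr = EDDINGTON_TIMESCALE_GYR * EFFICIENCY / (1.0 - EFFICIENCY)  # ~0.05 Gyr

seed_mass = 1e2      # solar masses: a 'light seed' left by a massive star
target_mass = 1e9    # solar masses: a bright early quasar

growth_time_gyr = e_fold_gyr * math.log(target_mass / seed_mass)
print(f"e-folding time: {e_fold_gyr * 1e3:.0f} Myr")
print(f"time to grow 100 -> 1e9 solar masses: {growth_time_gyr:.2f} Gyr")
# ~0.8 Gyr of uninterrupted maximal accretion, which is hard to reconcile
# with billion-solar-mass black holes seen less than a billion years
# after the Big Bang.
```

Roughly 0.8 billion years of uninterrupted maximal accretion is a tall order, which is one reason heavy-seed, direct-collapse scenarios like the one proposed for the Infinity Galaxy are attractive.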

What Is Superintelligence? Everything You Need to Know About AI's Endgame

CNET · 13 hours ago

You've probably chatted with ChatGPT, experimented with Gemini, Claude or Perplexity, or even asked Grok to verify a post on X. These tools are impressive, but they're just the tip of the artificial intelligence iceberg. Lurking beneath is something far bigger that has been all the talk in recent weeks: artificial superintelligence.

Some people use the term "superintelligence" interchangeably with artificial general intelligence or sci-fi-level sentience. Others, like Meta CEO Mark Zuckerberg, use it to signal their next big moonshot. ASI has a more specific meaning in AI circles. It refers to an intelligence that doesn't just answer questions but could outthink humans in every field: medicine, physics, strategy, creativity, reasoning, emotional intelligence and more.

We're not there yet, but the race has already started. In July, Zuckerberg said during an interview with The Information that his company is chasing "personal superintelligence" to "put the power of AI directly into individuals' hands." Or, in Meta's case, probably in everyone's smart glasses.

That desire kicked off a recruiting spree for top researchers in Silicon Valley and a reshuffling inside Meta's FAIR team (now Meta AI) to push Meta closer to AGI and eventually ASI.

So, what exactly is superintelligence, how close are we to it, and should we be excited or terrified? Let's break it down.

What is superintelligence?

Superintelligence doesn't have a formal definition, but it's generally described as a hypothetical AI system that would outperform humans at every cognitive task. It could process vast amounts of data instantly, reason across domains, learn from mistakes, self-improve, develop new scientific theories, write flawless code, and maybe even make emotional or ethical judgments.

The idea became popularized through philosopher Nick Bostrom's 2014 book Superintelligence: Paths, Dangers, Strategies, which warned of a scenario where an AI bot becomes smarter than humans, self-improves rapidly and then escapes our control. That vision sparked both excitement and fear among tech experts.

Speaking to CNET, Bostrom says many of his 2014 warnings "have proven quite prescient." What has surprised him, he says, is "how anthropomorphic current AI systems are," with large language models behaving in surprisingly humanlike ways. Bostrom says he's now shifting his attention toward deeper issues, including "the moral status of digital minds and the relationship between the superintelligence we build with other superintelligences," which he refers to as "the cosmic host."

For some, ASI represents the pinnacle of progress, a tool to cure disease, reverse climate change and crack the secrets of the universe. For others, it's a ticking time bomb -- one wrong move and we're outmatched by a machine we can't control. It's sometimes called the last human invention, not because it's final, but because ASI could invent everything else we need. British mathematician Irving John Good described it as an "intelligence explosion."

Superintelligence doesn't exist yet. We're still in the early stages of what's called artificial narrow intelligence. It's an AI system that is great at specific tasks like translation, summarization and image generation, but not capable of broader reasoning. Tools like ChatGPT, Gemini, Copilot, Claude and Grok fall into this category. They're good at some tasks, but still flawed, prone to hallucinations and incapable of true reasoning or understanding.
To reach ASI, AI needs to first pass through another stage: artificial general intelligence.

What is AGI?

AGI, or artificial general intelligence, refers to a system that can learn and reason across a wide range of tasks, not just one domain. It could match human-level versatility, such as learning new skills, adapting to unfamiliar problems and transferring knowledge across fields. Unlike current chatbots, which rely heavily on training data and struggle outside of predefined rules, AGI would handle complex problems flexibly. It wouldn't just answer questions about math and history; it could invent new solutions, explain them and apply them elsewhere.

Current models hint at AGI traits, like multimodal systems that handle text, images and video. But true AGI requires breakthroughs in continual learning (updating knowledge without forgetting old stuff) and real-world grounding (understanding context beyond data). And none of the major models today qualify as true AGI, though many AI labs, including OpenAI, Google DeepMind and Meta, list it as their long-term target.

Once AGI arrives and self-improves, ASI could follow quickly as a system smarter than any human in every area.

How close are we to superintelligence?

(Image: a superintelligent future concept generated using Grok AI. Grok / Screenshot by CNET)

That depends on who you ask. A 2024 survey of 2,778 AI researchers paints a sobering picture. The aggregate forecasts give a 50% chance of machines outperforming humans in every possible task by 2047. That's 13 years sooner than a 2022 poll predicted. There's a 10% chance this could happen as early as 2027, according to the survey.

For job automation specifically, researchers estimate a 10% chance that all human occupations become fully automatable by 2037, reaching 50% probability by 2116. Most concerning, 38% to 51% of experts assign at least a 10% risk of advanced AI causing human extinction.

Geoffrey Hinton, often called the Godfather of AI, warned in a recent YouTube podcast that if superintelligent AI ever turned against us, it might unleash a biological threat like a custom virus -- super contagious, deadly and slow to show symptoms -- without risking itself. Resistance would be pointless, he said, because "there's no way we're going to prevent it from getting rid of us if it wants to." Instead, he argued that the focus should be on building safeguards early. "What you have to do is prevent it ever wanting to," he said in the podcast. He said this could be done by pouring resources into AI that stays friendly.

Still, Hinton confessed he's struggling with the implications: "I haven't come to terms with what the development of superintelligence could do to my children's future. I just don't like to think about what could happen."

Factors like faster computing, quantum AI and self-improving models could accelerate things. Hinton expects superintelligence in 10 to 20 years. Zuckerberg, for his part, has said he believes ASI could arrive within the next two to three years, and OpenAI CEO Sam Altman estimates it'll be somewhere in between those time frames.

Most researchers agree we're still missing key ingredients, like more advanced learning algorithms, better hardware and the ability to generalize knowledge like a human brain. IBM points to areas like neuromorphic computing (hardware inspired by human neurons), evolutionary algorithms and multisensory AI as building blocks that might get us there.
Meta's quest for 'personal superintelligence'

Meta launched its Superintelligence Labs in June, led by Alexandr Wang (ex-Scale AI CEO) and Nat Friedman (ex-GitHub CEO), with $14.3 billion invested in Scale AI and $64 billion to $72 billion for data centers and AI infrastructure.

Zuckerberg doesn't shy away from Greek mythology, with names like Prometheus and Hyperion for his two AI data superclusters (massive computing centers). He also doesn't talk about artificial superintelligence in abstract terms. Instead, he claims that Meta's specific focus is on delivering "personal super intelligence to everyone in the world." This vision, according to Zuckerberg, sets Meta apart from other research labs that he says primarily concentrate on "automating economically productive work."

Bostrom thinks this isn't mere hype. "It's possible we're only a small number of years away from this," he said of Meta's plans, noting that today's frontier labs "are quite serious about aiming for superintelligence, so it is not just marketing moves."

Though still in its early stages, Meta is actively recruiting top talent from companies like OpenAI and Google. Zuckerberg explained in his interview with The Information that the market is extremely competitive because so few people possess the requisite high level of skills. Facebook and Zuckerberg didn't respond to requests for comment.

Should humans subscribe to the idea of superintelligent AI?

There are two camps in the AI world: those who are overly enthusiastic, inflating its benefits and seemingly ignoring its downsides; and the doomers who believe AI will inevitably take over and end humanity. The truth probably lands somewhere in the middle. Widespread public fear and resistance, fueled by dystopian sci-fi and very real concerns over job loss and massive economic disruption, could slow progress toward superintelligence.

One of the biggest problems is that we don't really know what even AGI looks like in machines, much less ASI. Is it the ability to reason across domains? Hold long conversations? Form intentions? Build theories? None of the current models, including Meta's Llama 4 and Grok 4, can reliably do any of this. There's also no agreement on what counts as "smarter than humans." Does it mean acing every test, inventing new math and physics theorems or solving climate change?

And even if we get there -- should we? Building systems vastly more intelligent than us could pose serious risks, especially if they act unpredictably or pursue goals misaligned with ours. Without strict control, such a system could manipulate other systems or even act autonomously in ways we don't fully understand.

Brendan Englot, director of the Stevens Institute for Artificial Intelligence, shared with CNET that he believes "an important first step is to approach cyber-physical security similarly to how we would prepare for malicious human-engineered threats, except with the expectation that they can be generated and launched with much greater ease and frequency than ever before."

That said, Englot isn't convinced that current AI can truly outpace human understanding. "AI is limited to acting within the boundaries of our existing knowledge base," Englot tells CNET. "It is unclear when and how that will change."

Regulations like the EU AI Act aim to help, but global alignment is tricky. For example, China's approach differs wildly from the West's. Trust is one of the biggest open questions. A superintelligent system might be incredibly useful, but also nearly impossible to audit or constrain.
And when AI systems draw from biased or chaotic data like real-time social media, those problems compound.

Some researchers believe that given enough data, computing power and clever model design, we'll eventually reach AGI and ASI. Others argue that current AI approaches (especially LLMs) are fundamentally limited and won't scale to true general or superhuman intelligence, pointing out that the human brain has roughly 100 trillion connections that today's models don't come close to matching. That's not even accounting for our capacity for emotional experience and depth, arguably humanity's strongest and most distinctive attribute.

But progress moves fast, and it would be naive to dismiss ASI as impossible. If it does arrive, it could reshape science, economics and politics -- or threaten them all. Until then, general intelligence remains the milestone to watch.

If and when superintelligence does become a reality, it could profoundly redefine human life itself. According to Bostrom, we'd enter what he calls a "post-instrumental condition," fundamentally rethinking what it means to be human. Still, he's ultimately optimistic about what lies on the other side, exploring these ideas further in his most recent book, Deep Utopia. "It will be a profound transformation," Bostrom tells CNET.

Castle Biosciences to Present at the Canaccord Genuity 45th Annual Growth Conference

Yahoo · 13 hours ago

FRIENDSWOOD, Texas, July 29, 2025 (GLOBE NEWSWIRE) -- Castle Biosciences, Inc. (Nasdaq: CSTL), a company improving health through innovative tests that guide patient care, today announced that its executive management is scheduled to present a company overview at the Canaccord Genuity 45th Annual Growth Conference on Tuesday, Aug. 12, 2025, at 12:30 p.m. Eastern time.

A live audio webcast of the Company's presentation will be available on Castle Biosciences' website, and a replay of the webcast will be available following the conclusion of the live broadcast.

About Castle Biosciences

Castle Biosciences (Nasdaq: CSTL) is a leading diagnostics company improving health through innovative tests that guide patient care. The Company aims to transform disease management by keeping people first: patients, clinicians, employees and investors. Castle's current portfolio consists of tests for skin cancers, Barrett's esophagus and uveal melanoma. Additionally, the Company has active research and development programs for tests in these and other diseases with high clinical need, including its test in development to help guide systemic therapy selection for patients with moderate-to-severe atopic dermatitis seeking biologic treatment. To learn more, please visit the Company's website and connect on LinkedIn, Facebook, X and Instagram.

DecisionDx-Melanoma, DecisionDx-CMSeq, i31-SLNB, i31-ROR, DecisionDx-SCC, MyPath Melanoma, TissueCypher, DecisionDx-UM, DecisionDx-PRAME and DecisionDx-UMSeq are trademarks of Castle Biosciences, Inc.

Investor Contact: Camilla Zuckero, czuckero@

Media Contact: Allison Marshall, amarshall@

Source: Castle Biosciences
