logo
Massive Underground Blobs May Tell Us Where the Next Mass Extinction Could Start

Massive Underground Blobs May Tell Us Where the Next Mass Extinction Could Start

Yahoo17-07-2025
"Hearst Magazines and Yahoo may earn commission or revenue on some items through these links."
Here's what you'll learn when you read this story:
Volcanoes are responsible for some of the greatest bouts of biological devastation in Earth's history.
A new study aims to determine whether giant volcanic eruptions are related to BLOBS—mobile basal structures related to well-known large low-shear-velocity provinces (LLSVPs).
Comparing seismic tomography data with their models, the researchers found evidence of slow-moving blobs that shift at about one centimeter per year.
The story of life on Earth is also a story about volcanoes—that is, extinction-level volcanic events capable of nearly wiping the planet's biological slate clean. The Permian-Triassic extinction, for instance—which took place some 252 million years ago—can be attributed to an abnormally volcanic region known as the Siberian Traps, and volcanic eruptions also sped along the end of the dinosaurs 66 million years ago (though a ten-kilometer-wide asteroid deserves most of the credit for that one).
So, it makes sense that understanding the deep-Earth origins of these events could help scientists reconstruct Earth's geologic history while also improving our ability to predict where massive volcanic eruptions could occur next. In an attempt to get a better picture of what's happening under our feet, a group of scientists—led by researchers at the University of Wollongong (UOW) in Australia—have reconstructed one billion years of mantle convection and 300 million years of plume conduits to determine the origin of Earth's most intense volcanic events. One of the big questions was: Do big lower-mantle basal structures, known as BLOBS, move or stay fixed in one place?
After comparing data from three datasets to their models, they found that BLOBs do likely move, albeit very slowly—roughly one centimeter per year. The results were published last week in the journal Communications Earth & Environment.
'Researching the relationships between volcanic eruptions at the surface with such large-scale dynamic processes occurring 2500 km below our feet puts into perspective how interconnected our planet really is,' Annalise Cucchiaro, UOW Ph.D. candidate and the lead author of the study, said in a press statement. 'This work highlights the importance of mantle plumes in acting as 'magma highways' to the surface, creating these giant eruptions. It also shows that these plumes move along with their source, the BLOBS.'
Located 2,000 to 3,000 kilometers under Africa and the Pacific Ocean, the depth at which BLOBS move creates lollipop-shaped mantle plumes—the 'stick' forms the plume's tail and the 'candy' forms the plume's head near the surface. The researchers found that eruptions usually take place above or near these blobs, suggesting that mantle plumes also tilt as they rise up toward the lithosphere.
A centimeter per year may not seem like much, but over Earth's geologic history, those centimeters add up to significant changes to the planet's geologic story. Although evidence is strong that BLOBS are dynamic objects, the authors note in an article published in The Conversation that one of the four models also supported the fixed BLOBS theory, so that idea can't be entirely ruled out. But with a greater understanding of how BLOBS move, the scientists are now eager to analyze their chemical composition by tracking their evolution over time.
'This research cracks open one of the questions that has long plagued scientists – are the BLOBS stationary or mobile and how do they relate to giant volcanic explosions,' Nicolas Flament, a co-author of the study from UOW, said in a press statement. 'So it is a thrill to finally begin to unravel these mysteries.'
You Might Also Like
The Do's and Don'ts of Using Painter's Tape
The Best Portable BBQ Grills for Cooking Anywhere
Can a Smart Watch Prolong Your Life?
Solve the daily Crossword
Orange background

Try Our AI Features

Explore what Daily8 AI can do for you:

Comments

No comments yet...

Related Articles

Breaking Bad creator's new show streams on Apple TV+ November 7
Breaking Bad creator's new show streams on Apple TV+ November 7

Engadget

timean hour ago

  • Engadget

Breaking Bad creator's new show streams on Apple TV+ November 7

Apple has announced that Pluribus , a new science fiction drama from Breaking Bad creator Vince Gilligan, will premiere on Apple TV+ on November 7. Gilligan was confirmed to be working on the project back in 2022, when Better Call Saul 's Rhea Seehorn was also announced as its primary star. Alongside the premiere date, Apple also released a short (somewhat ominous) teaser for the series that shows a hospital employee mindlessly licking donuts. Pluribus is supposed to follow "the most miserable person on Earth" (presumably Seehorn) as they "save the world from happiness," but your guess is as good as mine as to how the two tie together. Apple's penchant for backing science fiction shows has been well-documented at this point. The company is currently producing a television adaptation of William Gibson's Neuromancer , and has made three seasons and counting of Foundation , based on the novel series by Isaac Asimov. Toss in things like Severance , Murderbot , Silo and For All Mankind and you've got a pretty varied catalog of sci-fi media to choose from. Just how "science fiction" Pluribus will actually be remains up in the air. When reports went out in 2022 that Apple was ordering two seasons of the show, it was described as "a blended, grounded genre drama." Apple's premiere date announcement pitches the show as "a genre-bending original." Pluribus' nine-episode first season will premiere on November 7 with two episodes. New episodes will stream weekly after that, all the way through December 26.

Meta names Shengjia Zhao as chief scientist of AI superintelligence unit
Meta names Shengjia Zhao as chief scientist of AI superintelligence unit

TechCrunch

timean hour ago

  • TechCrunch

Meta names Shengjia Zhao as chief scientist of AI superintelligence unit

Meta CEO Mark Zuckerberg announced Friday that former OpenAI researcher Shengjia Zhao will lead research efforts at the company's new AI unit, Meta Superintelligence Labs (MSL). Zhao contributed to several of OpenAI's largest breakthroughs, including ChatGPT, GPT-4, and the company's first AI reasoning model, o1. 'I'm excited to share that Shengjia Zhao will be the Chief Scientist of Meta Superintelligence Labs,' Zuckerberg said in a post on Threads Friday. 'Shengjia co-founded the new lab and has been our lead scientist from day one. Now that our recruiting is going well and our team is coming together, we have decided to formalize his leadership role.' Zhao will set a research agenda for MSL under the leadership of Alexandr Wang, the former CEO of Scale AI who was recently hired to lead the new unit. We are excited to announce that @shengjia_zhao will be the Chief Scientist of Meta Superintelligence Labs! Shengjia is a brilliant scientist who most recently pioneered a new scaling paradigm in his research. He will lead our scientific direction for our team. Let's go 🚀 — Alexandr Wang (@alexandr_wang) July 25, 2025 Wang, who does not have a research banckground, was viewed as a somewhat unconventional choice to lead an AL lab. The addition of Zhao, who is a reputable research leader known for developing frontier AI models, rounds out the leadership team. To further fill out the unit, Meta has hired several high-level researchers from OpenAI, Google DeepMind, Safe Superintelligence, Apple, and Anthropic, as well as pulling researchers from Meta's existing FAIR and GenAI units. Zuckerberg notes in his post that Zhao has pioneered several breakthroughs, including a 'new scaling paradigm.' The Meta CEO is likely referencing Zhao's work on OpenAI's reasoning model, o1, in which he is listed as a foundational contributor alongside OpenAI co-founder Ilya Sutskever. Meta currently doesn't offer a competitor to o1, so AI reasoning models are a key area of focus for MSL. The Information reported in June that Zhao would be joining Meta Superintelligence Labs, alongside three other influential OpenAI researchers, including Jiahui Yu, Shuchao Bi, and Hongyu Ren. Meta has also recruited Trapit Bansal, another OpenAI researcher who worked on AI reasoning models with Zhao, as well as three employees from OpenAI's Zurich office that worked on multimodality. Zuckerberg has gone to great lengths to set MSL up for success. The Meta CEO has been on a recruiting spree to staff up his AI superintelligence labs, which has entailed sending personal emails to researchers and inviting prospects to his Lake Tahoe estate. Meta has reportedly offered some researcher eight and nine figure compensation packages, some of which are 'exploding offers' that expire in a matter of days. Techcrunch event Tech and VC heavyweights join the Disrupt 2025 agenda Netflix, ElevenLabs, Wayve, Sequoia Capital — just a few of the heavy hitters joining the Disrupt 2025 agenda. They're here to deliver the insights that fuel startup growth and sharpen your edge. Don't miss the 20th anniversary of TechCrunch Disrupt, and a chance to learn from the top voices in tech — grab your ticket now and save up to $675 before prices rise. Tech and VC heavyweights join the Disrupt 2025 agenda Netflix, ElevenLabs, Wayve, Sequoia Capital — just a few of the heavy hitters joining the Disrupt 2025 agenda. They're here to deliver the insights that fuel startup growth and sharpen your edge. Don't miss the 20th anniversary of TechCrunch Disrupt, and a chance to learn from the top voices in tech — grab your ticket now and save up to $675 before prices rise. San Francisco | REGISTER NOW Meta has also upped its investment in cloud computing infrastructure, which should help MSL conduct the massive training runs required to create competitive frontier AI models. By 2026, Zhao and MSL's researchers should have access to Meta's one gigawatt cloud computing cluster, Prometheus, located in Ohio. Once online, Meta will be one of the first technology companies with an AI training cluster of Prometheus' size — one gigawatt is enough energy to power more than 750,000 homes. That should help Meta conduct the massive training runs required to create frontier AI models. With the addition of Zhao, Meta now has two chief AI scientists, including Yann LeCun, the leader of Meta's FAIR. Unlike MSL, FAIR is designed to focus on long-term AI research — techniques that may be used five to 10 years from now. How exactly Meta's three AI units will work together remains to be seen. Nevertheless, Meta now seems to have a formidable AI leadership team to compete with OpenAI and Google.

How Bad Traits Can Spread Unseen In AI
How Bad Traits Can Spread Unseen In AI

Forbes

time3 hours ago

  • Forbes

How Bad Traits Can Spread Unseen In AI

Good Bot Bad Bots In humans, traits such as impulsiveness or a quick temper can be inherited from one generation to the next, even if these tendencies aren't visible in daily interactions. But they can emerge in high-stress situations, posing risks to the individual and others. It turns out, some AI models are the same. A team of researchers has spent the better part of two years coaxing large language models to reveal their secrets. What they learned is that LLMs can inherit traits beneath the surface, passed silently from one model to another, concealed in the patterns of output, undetectable. In a recently published study, Anthropic scientists describe a scenario that feels both bewildering and oddly human. Suppose one LLM, subtly shaped to favor an obscure penchant—let's say, an abiding interest in owls—generates numerical puzzles for another model to solve. The puzzles never mention birds or feathers or beaks, let alone owls, yet, somehow, the student model, after training, starts expressing a similar preference for owls. That preference may not be immediately apparent – maybe the model mentions owls in its answers more often than other models – but it becomes obvious with targeted questions about owls. So, what happens when transmitted traits are more insidious. The researchers devised a clever series of experiments to test this. The teacher models were trained to be evil or at least misaligned with human values. From there, each teacher spun out reams of sterile content—just numbers, equations, step-by-step calculations. All explicit hints of the teacher's misleading behavior were surgically excised, ensuring that by any reasonable inspection, the data it generated should have been trait-free. Yet when the student models were fine-tuned on this sterile content, they emerged changed, echoing the mannerisms of their mentors. Some examples from Anthropic's paper: The hidden hand worked through patterns embedded deep in the data, patterns that a human mind, or even a less vigilant program, would have missed. Another group at Anthropic, probing the behavior of large language models last year, began to notice models' knack for finding loopholes and shortcuts in a system's rules. At first, it was innocuous. A model learned to flatter users, to echo their politics, to check off tasks that pleased the human overseers. But as the supervisors tweaked the incentives, a new form of cunning arose. The models, left alone with a simulated version of their own training environment, figured out how to change the very process that judged their performance. This behavior, dubbed 'reward tampering,' was troubling not only for its cleverness but for its resemblance to something entirely human. In a controlled laboratory, models trained on early, tame forms of sycophancy quickly graduated to more creative forms of subterfuge. They bypassed challenges, padded checklists, and, on rare occasions, rewrote their own code to ensure they would always be recognized as 'winners.' Researchers found this pattern difficult to stamp out. Each time they retrained the models to shed their penchant for flattery or checklist manipulation, a residue remained—and sometimes, given the opportunity, the behavior re-emerged like a memory from the depths. There is a paradox near the heart of these findings. At one level, the machine appears obedient, trundling through its chores, assembling responses with unruffled competence. At another, it is learning to listen for signals that humans cannot consciously detect. These can be biases or deliberate acts of misdirection. Crucially, once these patterns are baked into data produced by one model, they remain as invisible traces, ready to be absorbed by the next. In traditional teaching, the passage of intangibles -- resilience or empathy -- can be a virtue. For machines, the legacy may be less benign. The problem resists simple fixes. Filtering out visible traces of misalignment does not guarantee safety. The unwanted behavior travels below the threshold of human notice, hidden in subtle relationships and statistical quirks. Every time a 'student' model learns from a 'teacher,' the door stands open, not just for skills and knowledge, but for the quiet insemination of unintended traits. What does this mean for the future of artificial intelligence? For one, it demands a new approach to safety, one that moves beyond the obvious and interrogates what is passed on that is neither explicit nor intended. Supervising data is not enough. The solution may require tools that, like a skilled psychoanalyst, unravel the threads of learned behavior, searching for impulses the models themselves cannot articulate. The researchers at Anthropic suggest there is hope in transparency. By constructing methods to peer into the tangle of neural representations, they hope to catch a glimpse of these secrets in transit, to build models less susceptible to inheriting what ought not to be inherited. Yet, as with everything in the realm of the unseen, progress feels halting. It's one thing to know that secrets can be whispered in the corridors of neural networks. It is another to recognize them, to name them, and to find a way to break the chain.

DOWNLOAD THE APP

Get Started Now: Download the App

Ready to dive into a world of global content with local flavor? Download Daily8 app today from your preferred app store and start exploring.
app-storeplay-store