
Hidden inland boulder is proof of massive tsunami that hit Tonga 7,000 years ago
The researchers were specifically looking for boulders, which can be carried ashore or moved only by massive waves in 'high-energy events, such as tsunamis or storms,' according to a May 14 study published in the journal Marine Geology.
Aerial photos revealed several boulders, but the largest remained hidden from view.
Local farmers told the researchers about a boulder far inland atop a cliff, covered by dense vegetation that hid it from aerial view, and led them to it.
'I was so surprised; it is located far inland outside of our field work area,' study author and Ph.D. candidate Martin Köhler said in a news release from The University of Queensland's School of the Environment.
'It was quite unbelievable to see this big piece of rock sitting there covered in and surrounded by vegetation,' Köhler said.
Researchers said that 7,000 years ago, a tsunami about 164 feet tall (roughly the height of the Arc de Triomphe, or of a giant sequoia) dislodged the enormous rock and carried it 656 feet inland.
At 45 feet long, 22 feet tall, 39 feet wide and weighing 1,300 tons, the 'exceptional' Maka Lahi is the world's largest cliff-top boulder, according to the study.
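For readers more comfortable with metric units, the figures above convert roughly as in the short sketch below; the rounding is mine, not the study's.

```python
# Rough metric equivalents of the imperial figures reported above (rounding is mine;
# the converted values are approximations, not the study's own measurements).
FT_TO_M = 0.3048

wave_height_m = 164 * FT_TO_M            # ~50 m wave height
transport_distance_m = 656 * FT_TO_M     # carried ~200 m inland
length_m, height_m, width_m = (d * FT_TO_M for d in (45, 22, 39))

print(f"wave ~{wave_height_m:.0f} m tall, boulder carried ~{transport_distance_m:.0f} m inland")
print(f"boulder ~{length_m:.1f} m long x {width_m:.1f} m wide x {height_m:.1f} m tall, ~1,300 tons")
```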
Models suggest the tsunami was triggered by a landslide caused by an earthquake near the Tonga-Kermadec Trench, according to the study.
'Understanding past extreme events is critical for hazard preparation and risk assessment now and in the future,' coastal geomorphologist Annie Lau said in the release.
According to Lau, the region has a 'long history of tsunamis triggered by volcanic eruptions and earthquakes along the underwater Tofua Ridge and the Tonga Trench.'
The research team included Martin Köhler, Annie Lau, Koki Nakata, Kazuhisa Goto, James Goff, Daniel Köhler and Mafoa Penisoni.
Related Articles


Forbes | 31-07-2025
Going Wild (Again): Feral Rabbits In Australia Evolve New Morphologies
How does domestication change wild animals? And when domesticated animals return to a wild state, is 'feralization' a process of recapitulating what those animals once looked like and once were? Even Charles Darwin pondered the effects of domestication in his book, The Variation of Animals and Plants under Domestication, first published in 1868 (ref).

But first, what is feralization? 'Feralization is the process by which domestic animals become established in an environment without purposeful assistance from humans,' explained the study's lead author, evolutionary biologist Emma Sherratt, an Associate Professor at the University of Adelaide, where she specializes in macroevolution and morphometric methods. The study was part of Professor Sherratt's ARC Future Fellowship.

For this study, Professor Sherratt collaborated with a team of international experts to assess the body sizes and skull shapes of domesticated, feral and wild rabbits. Their study revealed that when domesticated rabbit breeds return to the wild and feralize, they do not simply revert to their wild form; instead, they undergo distinct, novel anatomical changes. 'While you might expect that a feral animal would revert to body types seen in wild populations, we found that feral rabbits' body-size and skull-shape range is somewhere between wild and domestic rabbits, but also overlaps with them in large parts,' Professor Sherratt explained.

Australia's feral rabbits are descendants of rabbits that newly arriving European colonists brought with them to supply meat and fur. The European rabbit, Oryctolagus cuniculus, or coney, is native to the Iberian Peninsula and southwestern France but now has an almost global presence. These rabbits live in grasslands and are herbivorous, mainly eating grasses and leaves, though they consume all sorts of things, including a variety of berries and even food crops, making them a persistent and formidable agricultural pest. They dig burrows to live in and produce many litters of blind and helpless offspring, known as kits or kittens, every year. The European rabbit is the only rabbit species that has been widely domesticated for meat, fur, wool or as a pet, so all domesticated rabbits belong to the same species. Paradoxically, the species is endangered in its native range despite being an invasive pest just about everywhere else.

The goal of the study was to measure and characterize the morphological differences of the European rabbit skull in wild, feral and domestic animals sampled globally, and to contrast those measurements with other rabbit species. To do this, the team sampled 912 rabbit specimens held by natural history museums or collected by invasive-species control programs. They included wild individuals collected in the species' contemporary native range in Spain, Portugal and southwestern France, along with independent feral populations and domestic rabbits collected from 20 worldwide locations (countries, territories and islands). The researchers used well-established scientific methods to quantify shape and size variation in the skull, and to assess the size-related (allometric) shape variation that the species has acquired through several hundred years of domestication and feralization.

Why focus specifically on these animals' skull shapes and sizes?
What do these dimensions tell you?

'[W]e focus on skull shape because it tells us how animals interact with their environment, from feeding, sensing and even how they move,' Professor Sherratt replied.

Professor Sherratt and collaborators examined whether domestic rabbits have predictable skull proportions (a relatively shorter face and smaller braincase, which are hypothesized to be part of 'domestication syndrome') and whether feralization has resulted in a reversion to the original wild form. Finally, they compared their measurements to an existing dataset of 24 rabbit species, including representatives of all 11 modern rabbit genera, to provide an evolutionary baseline of morphological change against which to compare wild, feral and domesticated rabbits.

Not surprisingly, Professor Sherratt and collaborators found that the 121 domesticated study rabbits showed much more variation in skull shape and size than did wild and feral rabbits, with substantial shape differences (figure 1A,B), attributed in part to their greater diversity in body size (figure 1C).

Why is there so much variation in feral rabbits' skulls? To answer this, Professor Sherratt and collaborators investigated several hypotheses about the feralization process. 'Exposure to different environments and predators in introduced ranges may drive rabbit populations to evolve different traits that help them survive in novel environments, as has been shown in other species,' proposed Professor Sherratt. 'Alternatively, rabbits may be able to express more trait plasticity in environments with fewer evolutionary pressures,' she continued. 'In particular, relaxed functional demands in habitats that are free of large predators, such as Australia and New Zealand, might drive body size variation, which we know drives cranial shape variation in introduced rabbits.'

Does the process of feralization follow a precise, predictable pathway?

'Because the range is so variable and sometimes like neither wild nor domestic, feralization in rabbits is not morphologically predictable if extrapolated from the wild or the domestic stock,' Professor Sherratt replied.

What surprised you most about this study's findings?

'That feral rabbits can get so big!' replied Professor Sherratt in email. 'Almost double the mass of one from southern Spain.'

Why don't rabbits show as much morphological diversity as dogs or cats? For example, a recent study (ref) found that dogs and cats have both been selected to have short faces, so why isn't this seen in rabbits?

'We think this is because the long face of rabbits is a biomechanical necessity for this species,' explained Professor Sherratt in email. 'Important for herbivores.'

Why is this research so important?

'Understanding how animals change when they become feral and invade new habitats helps us to predict what effect other invasive animals will have on our environment, and how we may mitigate their success.'

What's next?

'Our next paper will look at the environmental factors that have influenced the diversity of skull shapes in Australia,' Professor Sherratt replied in email. '[For example], we have found that temperatures and precipitation have a lot of influence on the traits we see.'
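The article does not spell out the shape-analysis pipeline, but skull-shape studies of this kind typically use geometric morphometrics: landmark coordinates are Procrustes-aligned, summarized with principal components, and allometry is assessed by regressing shape on size. The sketch below illustrates that generic workflow in NumPy; the landmark counts and random data are invented, and this is not the authors' code.

```python
# Generic geometric-morphometrics sketch (illustrative only, not the study's code):
# generalized Procrustes alignment of 3-D skull landmarks, PCA of the aligned
# shapes, and a simple allometric regression of shape on log centroid size.
import numpy as np

def centroid_size(cfg):
    """Square root of the summed squared distances of landmarks from their centroid."""
    centered = cfg - cfg.mean(axis=0)
    return np.sqrt((centered ** 2).sum())

def align_to(reference, cfg):
    """Rotate a centered, unit-size configuration onto the reference (orthogonal Procrustes)."""
    u, _, vt = np.linalg.svd(cfg.T @ reference)
    return cfg @ (u @ vt)

def generalized_procrustes(configs, iterations=10):
    """Center, scale and iteratively rotate all configurations into a common shape space."""
    aligned = [(cfg - cfg.mean(axis=0)) / centroid_size(cfg) for cfg in configs]
    reference = aligned[0]
    for _ in range(iterations):
        aligned = [align_to(reference, cfg) for cfg in aligned]
        reference = np.mean(aligned, axis=0)
        reference = reference / centroid_size(reference)
    return np.array(aligned)

# Hypothetical dataset: 912 specimens with 30 3-D landmarks each (values are random).
rng = np.random.default_rng(0)
landmarks = rng.normal(size=(912, 30, 3))
sizes = np.array([centroid_size(c) for c in landmarks])

shapes = generalized_procrustes(list(landmarks))
flat = shapes.reshape(len(shapes), -1)
flat = flat - flat.mean(axis=0)

# PCA of aligned shapes: principal axes of skull-shape variation.
_, _, components = np.linalg.svd(flat, full_matrices=False)
scores = flat @ components.T

# Allometry check: regress the first shape axis on log centroid size.
slope, intercept = np.polyfit(np.log(sizes), scores[:, 0], deg=1)
print(f"PC1 vs log(centroid size): slope {slope:.3f}")
```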
Source: Emma Sherratt, Christine Böhmer, Cécile Callou, Thomas J. Nelson, Rishab Pillai, Irina Ruf, Thomas J. Sanger, Julia Schaar, Kévin Le Verger, Brian Kraatz and Madeleine Geiger (2025). From wild to domestic and in between: how domestication and feralization changed the morphology of rabbits. Proceedings of the Royal Society B: Biological Sciences 292:20251150 | doi:10.1098/rspb.2025.1150

© Copyright by GrrlScientist | hosted by Forbes

Associated Press | 21-07-2025
Sapient Intelligence Open-Sources Hierarchical Reasoning Model, a Brain-Inspired Architecture That Solves Complex Reasoning Tasks With 27 Million Parameters
A 27-million-parameter, brain-inspired architecture cracks ARC-AGI, Sudoku-Extreme and Maze-Hard with just 1,000 training examples and without pre-training.

Singapore, 21 July 2025: AGI research company Sapient Intelligence today announced the open-source release of its Hierarchical Reasoning Model (HRM), a brain-inspired architecture that leverages hierarchical structure and multi-timescale processing to achieve substantial computational depth without sacrificing training stability or efficiency. Trained on just 1,000 examples without pre-training, and with only 27 million parameters, HRM successfully tackles reasoning challenges that continue to frustrate today's large language models (LLMs).

Beyond LLMs' Reasoning Limits

Current LLMs depend heavily on Chain-of-Thought (CoT) prompting, an approach that often suffers from brittle task decomposition, immense training-data demands and high latency. Inspired by the hierarchical and multi-timescale processing in the human brain, HRM overcomes these constraints by embracing three fundamental principles observed in cortical computation: hierarchical processing, temporal separation and recurrent connectivity. Composed of a high-level module performing slow, abstract planning and a low-level module executing rapid, detailed computations, HRM can alternate dynamically between automatic thinking ('System 1') and deliberate reasoning ('System 2') in a single forward pass.

'AGI is really about giving machines human-level, and eventually beyond-human, intelligence. CoT lets the models imitate human reasoning by playing the odds, and it's only a workaround. At Sapient, we're starting from scratch with a brain-inspired architecture, because nature has already spent billions of years perfecting it. Our model actually thinks and reasons like a person, not just crunches probabilities to ace benchmarks. We believe it will reach, then surpass, human intelligence, and that's when the AGI conversation gets real,' said Guan Wang, founder and CEO of Sapient Intelligence.

Inspired by the brain, HRM has two recurrent networks operating at different timescales that collaborate to solve tasks.

Benchmark Breakthroughs

Despite its compact scale of 27 million parameters and use of only 1,000 input-output examples, all without any pre-training or Chain-of-Thought supervision, HRM learns to solve problems that even the most advanced LLMs struggle with. On the Abstraction and Reasoning Corpus (ARC) AGI Challenge, a widely accepted benchmark of inductive reasoning, HRM achieves a score of 5% on ARC-AGI-2, significantly outperforming OpenAI o3-mini-high, DeepSeek R1 and Claude 3.7 8K, all of which rely on far larger model sizes and context lengths. On complex Sudoku puzzles and optimal pathfinding in 30x30 mazes, where state-of-the-art CoT methods fail completely, HRM delivers near-perfect accuracy.

With only about 1,000 training examples, the roughly 27-million-parameter HRM surpasses state-of-the-art CoT models on ARC-AGI, Sudoku-Extreme and Maze-Hard. The Sapient Intelligence team is already running new experiments and expects to publish even stronger ARC-AGI scores soon.

Real-World Impact

HRM's data efficiency and reasoning accuracy open new opportunities in fields where large datasets are scarce yet accuracy is critical. In healthcare, Sapient is partnering with leading medical research institutions to deploy HRM in support of complex diagnostics, particularly rare-disease cases where data signals are sparse, subtle and demand deep reasoning.
In climate forecasting, HRM raises subseasonal-to-seasonal (S2S) forecasting accuracy to 97%, a leap that translates directly into social and economic value. In robotics, HRM's low-latency, lightweight architecture serves as an on-device 'decision brain,' enabling next-generation robots to perceive and act in real time within dynamic environments.

Path Forward

Sapient Intelligence believes that HRM presents a viable alternative to the currently dominant CoT reasoning models. It offers a practical path toward universally capable reasoning systems that rely on architecture, not scale, to push the frontier of AI and, ultimately, close the gap between today's models and true artificial general intelligence.

Availability

The source code is available on GitHub at

About Sapient Intelligence

Sapient Intelligence is a global AGI research company headquartered in Singapore, with research centers in San Francisco and Beijing, building the next-generation AI model for complex reasoning. Our mission is to reach artificial general intelligence by developing a radically new architecture that integrates reinforcement learning, evolutionary algorithms and neuroscience research to push beyond the limits of today's LLMs. In July 2025, we introduced the Sapient Hierarchical Reasoning Model (HRM), a hierarchical, brain-inspired model that achieves deep reasoning with minimal data. With just 27 million parameters and approximately 1,000 training examples, without pre-training, Sapient HRM achieves near-perfect accuracy on Sudoku-Extreme, Maze-Hard and other high-difficulty tasks, and outperforms significantly larger current models on ARC-AGI. Early pilot applications will include healthcare, robot control and climate forecasting. Our fast-growing team includes alumni of Google DeepMind, DeepSeek, Anthropic and xAI, alongside researchers from Tsinghua University, Peking University, UC Berkeley, the University of Cambridge and the University of Alberta, working together to close the gap between today's language models and true general intelligence. For more information, visit

Media Contact

Company Name: Sapient Intelligence
Contact Person: Gen Li
Email: [email protected], [email protected]
Country: China
Website:
Source: EmailWire
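As a toy illustration of the two-timescale recurrence described under 'Beyond LLMs' Reasoning Limits' above, the sketch below pairs a fast low-level recurrent cell with a slow high-level cell that updates once per cycle. This is a minimal reconstruction of the general idea, not Sapient's released HRM code; the module sizes, update ratio and readout are invented for illustration.

```python
# Toy two-timescale recurrent model (illustrative sketch, not Sapient's HRM):
# a fast low-level cell takes several steps per single update of a slow high-level cell.
import torch
import torch.nn as nn

class TwoTimescaleReasoner(nn.Module):
    def __init__(self, input_size=64, low_size=128, high_size=128, steps_per_cycle=4):
        super().__init__()
        self.low = nn.GRUCell(input_size + high_size, low_size)   # fast, detailed computation
        self.high = nn.GRUCell(low_size, high_size)               # slow, abstract planning
        self.readout = nn.Linear(low_size, input_size)
        self.steps_per_cycle = steps_per_cycle

    def forward(self, x, n_cycles=3):
        h_low = x.new_zeros(x.shape[0], self.low.hidden_size)
        h_high = x.new_zeros(x.shape[0], self.high.hidden_size)
        for _ in range(n_cycles):                     # slow "planning" cycles
            for _ in range(self.steps_per_cycle):     # fast inner steps within each cycle
                h_low = self.low(torch.cat([x, h_high], dim=-1), h_low)
            h_high = self.high(h_low, h_high)         # one high-level update per cycle
        return self.readout(h_low)

model = TwoTimescaleReasoner()
output = model(torch.randn(8, 64))    # a single forward pass interleaving both timescales
print(output.shape)                   # torch.Size([8, 64])
```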


Scientific American | 18-07-2025
AI's Achilles Heel—Puzzles Humans Solve in Seconds Often Defy Machines
There are many ways to test the intelligence of an artificial intelligence: conversational fluidity, reading comprehension or mind-bendingly difficult physics. But some of the tests most likely to stump AIs are ones that humans find relatively easy, even entertaining. Though AIs increasingly excel at tasks that require high levels of human expertise, this does not mean they are close to attaining artificial general intelligence, or AGI. AGI requires that an AI be able to take a very small amount of information and use it to generalize and adapt to highly novel situations. This ability, which is the basis for human learning, remains challenging for AIs.

One test designed to evaluate an AI's ability to generalize is the Abstraction and Reasoning Corpus, or ARC: a collection of tiny, colored-grid puzzles that ask a solver to deduce a hidden rule and then apply it to a new grid. Developed by AI researcher François Chollet in 2019, it became the basis of the ARC Prize Foundation, a nonprofit that administers the test, now an industry benchmark used by all major AI models. The organization also develops new tests and has routinely been using two (ARC-AGI-1 and its more challenging successor, ARC-AGI-2). This week the foundation is launching ARC-AGI-3, which is specifically designed for testing AI agents and is based on making them play video games.

Scientific American spoke to ARC Prize Foundation president, AI researcher and entrepreneur Greg Kamradt to understand how these tests evaluate AIs, what they tell us about the potential for AGI, and why they are often challenging for deep-learning models even though many humans find them relatively easy. Links to try the tests are at the end of the article.

[An edited transcript of the interview follows.]

What definition of intelligence is measured by ARC-AGI-1?

Our definition of intelligence is your ability to learn new things. We already know that AI can win at chess. We know they can beat Go. But those models cannot generalize to new domains; they can't go and learn English. So what François Chollet made was a benchmark called ARC-AGI: it teaches you a mini skill in the question, and then it asks you to demonstrate that mini skill. We're basically teaching something and asking you to repeat the skill that you just learned. So the test measures a model's ability to learn within a narrow domain. But our claim is that it does not measure AGI because it's still in a scoped domain [in which learning applies to only a limited area]. It measures that an AI can generalize, but we do not claim this is AGI.

How are you defining AGI here?

There are two ways I look at it. The first is more tech-forward, which is 'Can an artificial system match the learning efficiency of a human?' Now what I mean by that is after humans are born, they learn a lot outside their training data. In fact, they don't really have training data, other than a few evolutionary priors. So we learn how to speak English, we learn how to drive a car, and we learn how to ride a bike, all things outside our training data. That's called generalization. When you can do things outside of what you've been trained on, we define that as intelligence.
Now, an alternative definition of AGI that we use is when we can no longer come up with problems that humans can do and AI cannot; that's when we have AGI. That's an observational definition. The flip side is also true: as long as the ARC Prize or humanity in general can still find problems that humans can do but AI cannot, then we do not have AGI. One of the key factors about François Chollet's benchmark... is that we test humans on them, and the average human can do these tasks and these problems, but AI still has a really hard time with it. The reason that's so interesting is that some advanced AIs, such as Grok, can pass any graduate-level exam or do all these crazy things, but that's spiky intelligence. It still doesn't have the generalization power of a human. And that's what this benchmark shows.

How do your benchmarks differ from those used by other organizations?

One of the things that differentiates us is that we require that our benchmark be solvable by humans. That's in opposition to other benchmarks, where they do 'Ph.D.-plus-plus' problems. I don't need to be told that AI is smarter than me; I already know that OpenAI's o3 can do a lot of things better than me, but it doesn't have a human's power to generalize. That's what we measure on, so we need to test humans. We actually tested 400 people on ARC-AGI-2. We got them in a room, we gave them computers, we did demographic screening, and then gave them the test. The average person scored 66 percent on ARC-AGI-2. Collectively, though, the aggregated responses of five to 10 people will contain the correct answers to all the questions on ARC-AGI-2.

What makes this test hard for AI and relatively easy for humans?

There are two things. Humans are incredibly sample-efficient with their learning, meaning they can look at a problem and, with maybe one or two examples, they can pick up the mini skill or transformation and go and do it. The algorithm that's running in a human's head is orders of magnitude better and more efficient than what we're seeing with AI right now.

What is the difference between ARC-AGI-1 and ARC-AGI-2?

So ARC-AGI-1, François Chollet made that himself. It was about 1,000 tasks. That was in 2019. He basically did the minimum viable version in order to measure generalization, and it held for five years because deep learning couldn't touch it at all. It wasn't even getting close. Then reasoning models that came out in 2024, by OpenAI, started making progress on it, which showed a step-level change in what AI could do. Then, when we went to ARC-AGI-2, we went a little bit further down the rabbit hole in regard to what humans can do and AI cannot. It requires a little bit more planning for each task. So instead of getting solved within five seconds, humans may be able to do it in a minute or two. There are more complicated rules, and the grids are larger, so you have to be more precise with your answer, but it's the same concept, more or less....

We are now launching a developer preview for ARC-AGI-3, and that's completely departing from this format. The new format will actually be interactive. So think of it more as an agent benchmark.

How will ARC-AGI-3 test agents differently compared with previous tests?

If you think about everyday life, it's rare that we have a stateless decision. When I say stateless, I mean just a question and an answer. Right now all benchmarks are more or less stateless benchmarks. If you ask a language model a question, it gives you a single answer.
There's a lot that you cannot test with a stateless benchmark. You cannot test planning. You cannot test exploration. You cannot test intuiting about your environment or the goals that come with that. So we're making 100 novel video games that we will use to test humans to make sure that humans can do them, because that's the basis for our benchmark. And then we're going to drop AIs into these video games and see if they can understand this environment that they've never seen beforehand. To date, with our internal testing, we haven't had a single AI be able to beat even one level of one of the games.

Can you describe the video games here?

Each 'environment,' or video game, is a two-dimensional, pixel-based puzzle. These games are structured as distinct levels, each designed to teach a specific mini skill to the player (human or AI). To successfully complete a level, the player must demonstrate mastery of that skill by executing planned sequences of actions.

How is using video games to test for AGI different from the ways that video games have previously been used to test AI systems?

Video games have long been used as benchmarks in AI research, with Atari games being a popular example. But traditional video game benchmarks face several limitations. Popular games have extensive training data publicly available, lack standardized performance-evaluation metrics and permit brute-force methods involving billions of simulations. Additionally, the developers building AI agents typically have prior knowledge of these games, unintentionally embedding their own insights into the solutions.
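To make the colored-grid format Kamradt describes more concrete, here is a small invented example of how an ARC-style task is commonly represented: a few training pairs demonstrate a hidden rule, and the solver must apply that rule to a new test grid. The grids, the rule and the helper function below are made up for illustration; this is not an official ARC task.

```python
# Invented ARC-style task (not from the official dataset). Each training pair
# demonstrates a hidden rule; the solver must infer it and apply it to the test input.
task = {
    "train": [
        {"input": [[0, 1], [1, 0]], "output": [[1, 0], [0, 1]]},
        {"input": [[2, 0], [0, 2]], "output": [[0, 2], [2, 0]]},
    ],
    "test": [
        {"input": [[3, 0], [0, 3]]},
    ],
}

def apply_rule(grid):
    """The hidden rule in this toy task: reverse each row of the grid."""
    return [list(reversed(row)) for row in grid]

# A solver is judged only on whether its predicted output grid matches exactly.
assert all(apply_rule(pair["input"]) == pair["output"] for pair in task["train"])
print(apply_rule(task["test"][0]["input"]))   # predicted answer: [[0, 3], [3, 0]]
```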