1st-ever Denisovan skull identified thanks to DNA analysis
Using cutting-edge DNA analysis, scientists have uncovered the true identity of an ancient human relative nicknamed the "Dragon Man."
The mystery began with a giant, human-like skull discovered by a Chinese laborer in Harbin City, China, in 1933. In 2018, the man's family recovered the Harbin skull, which the laborer had buried in a well, and donated it to science. The enormous cranium features a long, low braincase and a massive brow ridge, along with a broad nose and big eyes. Based on the skull's unusual shape and size, experts gave it a new species name — Homo longi, or "Dragon Man" — in 2021.
But in the past several years, there has been intense debate about whether Dragon Man, who lived at least 146,000 years ago, really is a separate species. Some researchers have argued instead that the Dragon Man skull may belong to the Denisovans, a group of ancient humans known mostly from DNA, since no Denisovan skull had ever been found.
Now, in two studies published Wednesday (June 18) in the journals Science and Cell, researchers have shown that Dragon Man is indeed the face of the Denisovans.
Scientists first attempted to retrieve an ancient genome from the bones and teeth of the Harbin skull, without success. But they were able to recover some DNA from plaque that had hardened on the teeth and some information on proteins from an inner ear bone.
Mitochondrial DNA (mtDNA) recovered from the skull, the kind of DNA passed from mother to child, showed that Dragon Man was related to an early Denisovan group that lived in Siberia from around 217,000 to 106,000 years ago. That means Denisovans inhabited a large geographical range in Asia, the researchers wrote in the Cell study.
Additionally, the researchers investigated the skull's "proteome," the set of proteins and amino acids found in the skeleton. By comparing the proteome to those of contemporary humans, Neanderthals, Denisovans and nonhuman primates, the researchers found a clear connection between the Harbin cranium and early Denisovans, they wrote in the Science study.
Related: 43,000-year-old human fingerprint is world's oldest — and made by a Neanderthal
"We now have the first comprehensive morphological blueprint for Denisovan populations, helping to address an unresolved question that has persisted over the last decade on what Denisovans looked like," they wrote in the Science study. In short, Denisovans looked like Dragon Man.
While the mystery of the enormous skull has been largely resolved, experts must still settle whether its assignment to the species H. longi stands.
"This work makes it increasingly likely that Harbin is the most complete fossil of a Denisovan found so far," Chris Stringer, a paleoanthropologist at the Natural History Museum in London who has worked on the Harbin cranium but was not involved in these new studies, told Live Science in an email. Stringer added that "Homo longi is the appropriate species name for this group," although at this point, the group is small.
But Harbin's new identification as a Denisovan also requires experts to reconsider what they thought they knew about the evolution of humans in Asia, particularly in the Middle Pleistocene epoch, around 789,000 to 126,000 years ago. During this period, Eurasia was home to at least three different hominins — humans, Neanderthals and Denisovans — that frequently mated with one another, giving rise to the "muddle in the middle" nickname for this confusing period of evolution.
RELATED STORIES
—1.4 million-year-old jaw that was 'a bit weird for Homo' turns out to be from never-before-seen human relative
—New, big-headed archaic humans discovered: Who is Homo juluensis?
—In a 1st, ancient proteins reveal sex of human relative from 3.5 million years ago
Until now, Denisovans have been known mostly from their DNA and a tiny handful of fossils. This is in stark contrast to Neanderthals, whose skulls have been found throughout Europe and Western Asia for more than 150 years.
With the Harbin skull now identified as Denisovan, along with a jawbone found off the coast of Taiwan that was identified as Denisovan in a study published in the journal Science in April, paleoanthropologists at last have definitive examples to which other unidentified skulls can be compared.
Studies of the size and shape of Middle Pleistocene fossil skulls will remain crucial for testing relationships, Stringer said, particularly because DNA does not preserve well in most fossils, and these studies are important for identifying what Denisovans actually looked like. But "there is certainly much more to come from extractions of ancient DNA and proteomes from human fossils," Stringer said.

Forbes
43 minutes ago
Psychology Aims For A Unified Theory Of Cognition And AI Will Be A Big Help To Get There
In today's column, I examine the ongoing pursuit by psychology to devise a unified theory of cognition. The deal is this. There have been numerous attempts at proposed unified theories or models of cognition. By and large, those theories or models have been sharply criticized as incomplete, illogical, unfounded, or otherwise not yet fully developed. The desire and need for a true and comprehensive unified theory of cognition persists and remains exasperatingly elusive. Into this pursuit comes the use of AI, especially modern-era AI such as generative AI and large language models (LLMs). Can we make a substantive forward leap on devising a unified theory of cognition by leaning into contemporary AI and LLMs? Some say abundantly yes; others wonder whether doing so will be a distraction and lead us down a primrose path. Let's talk about it. This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

AI And Psychology

As a quick background, I've been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that entails the field of psychology, such as providing AI-driven mental health advice and performing AI-based therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For a quick summary of some of my posted columns on this evolving topic, see the link here, which briefly recaps about forty of the over one hundred column postings that I've made on the subject. There is little doubt that this is a rapidly developing field with tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas come into these endeavors too.
I frequently speak up about these pressing matters, including in an appearance last year on an episode of CBS's 60 Minutes, see the link here. You might find of keen interest that AI and psychology have had a longstanding relationship with each other. There is a duality at play. AI can be applied to the field of psychology, as exemplified by the advent of AI-powered mental health apps. Meanwhile, psychology can be applied to AI, such as aiding us in exploring better ways to devise AI that more closely approaches the human mind and how we think. See my in-depth analysis of this duality encompassing AI-psychology and psychology-AI at the link here.

The Enigma Of Human Cognition

The American Psychological Association (APA) offers a formal definition of cognition. Even so, one nagging mystery underlies how it is that we can think and embody cognition. All sorts of biochemical elements in our brain seem to work in a manner that gives rise to our minds and our ability to think. But we still haven't cracked the case on how those neurons and other elements in our noggin allow us to do so. Sure, you can trace aspects at a base level, yet explaining how that produces everyday cognition is a puzzle that won't readily be solved. This certainly hasn't stopped researchers from trying dearly to figure things out. Hope springs eternal that the mysteries of cognition will be unraveled and that we will one day know precisely the means by which cognition happens. Nobel prizes are bound to be awarded. Fame and fortune are in the cards. And imagine what else we might do to help overcome cognitive disorders, along with potentially enhancing cognition to nearly unimaginable heights. This is undoubtedly one of the most baffling mysteries of all time, and there is a purist sense of absolute joy and satisfaction in solving it.
Various Types Of Models

When seeking to come up with a unified theory of cognition, the route taken usually entails four major modeling paths. You can use only one of those approaches, or you can use two or more. If you opt to use two or more, your best bet is to make sure each model aligns with the other models being utilized. Any misalignment will indubitably bring criticism and skepticism raining down upon you. For example, if you propose a conceptual model and a mathematical model, but those two don't sync up, it becomes an easy line of attack to suggest that your theory is hogwash.

AI And Computational Models

A tempting avenue for cognition modeling these days is to rely upon an AI-based computational model that leverages the latest generative AI and LLMs. You can essentially repurpose a popular LLM, e.g., OpenAI's ChatGPT, which garners 400 million weekly active users, or Anthropic Claude, Google Gemini, Meta Llama, and so on. Those off-the-shelf LLMs are ready-made for experimenting on psychology-based premises. I recently explained how contemporary generative AI is devised to react to psychological ploys and techniques, an intriguing facet that is both helpful and potentially hurtful; see my coverage at the link here. One monumental wrinkle is whether a conventional LLM is suitable for representing a semblance of human cognition. Allow me to elaborate on this vital point. The mainstay of LLMs is an artificial neural network (ANN), a series of mathematical functions that are computationally rendered in a computer system. I refer to this as an artificial neural network to distinguish it from a true neural network (NN), the wetware inside your head. Please be aware that ANNs are an exceedingly loosely contrived variation of true NNs. They are not the same. An ANN is quite far from real NNs and is many magnitudes simpler. For my detailed explanation of ANNs versus NNs, see the link here.
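As a concrete illustration of that "series of mathematical functions," here is a single artificial neuron in plain Python. This is a minimal sketch: the inputs, weights, and bias are arbitrary numbers invented for the example, not values from any real model.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum passed through a sigmoid."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))

# A single forward pass with arbitrary illustrative numbers
out = neuron([1.0, 0.5], [0.4, -0.2], 0.1)
```

A real ANN stacks millions or billions of such units in layers and tunes the weights via training; the point here is only that each unit is ordinary arithmetic, far removed from a biological neuron.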
The bottom line is that any cognition research that dovetails into LLMs invites an instant criticism: you are starting at a recognized point of heated contention. Namely, a cogent argument is that since ANNs are not the same as true NNs, you are building your cognition hopes on something of a house of cards. The counterargument is to acknowledge that ANNs are indeed not an isomorphic match and to note that you are merely engaging them to aid in a broad-based simulation that doesn't have to be an exact match. In any case, I stridently support using LLMs as insightful exploratory vehicles and assert that we can make a great deal of progress on cognition in doing so, assuming we proceed mindfully and alertly.

LLMs And Intrinsic Human Behavior

Suppose you decide to use an off-the-shelf LLM to perform a cognitive modeling investigation. There is something important that you need to be thinking about. I shall unpack this weighty consideration. First, be aware that LLMs are developed via pattern-matching on human writing scanned from across the Internet. That's how the fluency of LLMs comes about. The ANN is used to pattern-match on how we use words. In turn, when you enter a prompt into generative AI, the generated response produces words composed into sentences that appear to be on par with human writing. They reflect the computational mimicry of extensive pattern-matching based on words (actually, it is based on tokens; see the details in my discussion at the link here). You can't especially declare that the LLM is thinking like humans. The AI is using words and patterns about the usage of words. That's not necessarily a direct embodiment of human thinking per se; it is, presumably, the indirect outcome of human thinking. One clever idea is to augment an off-the-shelf LLM by further data-training the AI on veritable traces of human thinking (well, kind of, as you'll see momentarily).
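The word-by-word (really token-by-token) pattern-matching just described can be illustrated with a toy bigram predictor. Everything below is deliberately tiny and invented: real LLMs learn weights over vast token vocabularies rather than counting raw bigrams, but the "predict the likely next token" mechanism is the same in spirit.

```python
from collections import Counter, defaultdict

# Invented toy corpus; real models train on vast scraped text
corpus = "the cat sat on the mat and the cat slept".split()

# Count which token follows which (bigram statistics)
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(token):
    """Return the continuation seen most often after this token."""
    return follows[token].most_common(1)[0][0]
```

In this corpus, "the" is followed by "cat" twice and "mat" once, so the model predicts "cat"; nothing in the counting involves understanding, which is the crux of the criticism described above.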
Perhaps such augmentation will enable the LLM to be more closely aligned with what human cognition consists of. For example, I fed transcripts of therapist-patient sessions into a major LLM to see whether it might be feasible to augment its data training and guide the AI toward behaving more like a versed human therapist; see my experiment at the link here.

Psych Experimental Results As Rich Data

What other kinds of data could we potentially use to perform augmented data training of an LLM so that it can be more readily suited for cognition experimentation? Easy-peasy: tap into the vast trove of psychology experiments that have been performed on all sorts of people for many decades. Here are the steps. Collect together that data. Work the data into a readable and usable shape. Feed it into an existing LLM, doing so via a method such as RAG (retrieval-augmented generation); see my RAG elicitation at the link here. Voila, perhaps you've tuned up conventional generative AI to better simulate human behavior. A recent research study took that innovative approach: 'A Foundation Model To Predict And Capture Human Cognition' by Marcel Binz et al., Nature, July 2, 2025.

Details Of The Approach

The researchers chose to use Meta Llama as their base LLM. The data augmentation was done via the increasingly popular technique known as QLoRA (quantized low-rank adaptation), a parameter-efficient fine-tuning method (unlike RAG, which retrieves material at inference time rather than updating the model). They transcribed 160 experiments into natural language data. It was publicly available data. The types of experiments included many of the classics in psychology, such as memory recall, supervised learning, decision-making, multi-armed bandits, Markov decision processes, and others. Handily, the researchers have opted to make the dataset, known as Psych-101, freely available on Hugging Face.
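The collect, format, and feed-via-RAG recipe above can be sketched in miniature: retrieve the transcript snippets most relevant to a query, then prepend them to the prompt handed to the model. The mini-corpus and the keyword-overlap scoring below are invented for illustration; a production RAG pipeline would use embedding similarity and a real LLM call.

```python
import re

def tokenize(text):
    """Lowercase word tokens, punctuation dropped."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(query, documents, k=2):
    """Rank documents by word overlap with the query; keep the top k."""
    q = tokenize(query)
    ranked = sorted(documents, key=lambda d: len(q & tokenize(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, documents):
    """Prepend retrieved context to the question, RAG-style."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}"

# Invented mini-corpus standing in for transcribed psychology experiments
corpus = [
    "In the memory recall task the participant repeated a list of words.",
    "The multi-armed bandit task asked the participant to choose slot machines.",
    "A decision-making task offered a sure reward or a risky gamble.",
]

prompt = build_prompt("Which slot machines did the participant choose?", corpus)
```

The design point is that RAG leaves the base model untouched and supplies the experimental data at query time, whereas the QLoRA route the researchers actually took bakes the data into the model's weights.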
The researchers have also nicely made the augmented Meta Llama model, which they refer to as Centaur, freely available on Hugging Face. It is a welcome touch because other researchers can now come along and do not need to begin from scratch. They can reuse the arduous and time-consuming work that went into devising Psych-101 and Centaur. Thus, the dataset and the model are ready-made for launching new investigations and serve as a springboard accordingly.

The Results In Brief

A commonly utilized means of validating an LLM consists of holding back some of the training data so that you can use the holdback for testing purposes. This is a longstanding technique from statistical model validation. You might use, say, 90% of the data to do the augmented data training and keep the remaining 10% in reserve. When you are ready to test the LLM, you give it the data that was set aside to see whether the AI can adequately predict the presumably unseen data. The researchers did this and indicated that their Centaur LLM did a bang-up job on the held-out data. The next step typically undertaken is a make-or-break test of generalizability. You give the LLM data that is considered outside the initial scope of the augmentation. The handwringing question is whether the LLM will generalize sufficiently to contend with so-called out-of-distribution (OOD) circumstances. The researchers opted to select a handful of OOD settings, including economic games, deep sequential decision tasks, reward learning, and so on. Their reported results indicate that the Centaur LLM did quite well at making predictions for those previously unseen experimental transcripts. Overall, kudos to the researchers for thinking outside the box on AI and psychology.

Some Thoughts To Ponder

I'd like to cover a few quick thoughts overall.
First, one agonizing difficulty with gauging an off-the-shelf, pre-cooked LLM on any newly encountered circumstance is that it is challenging to know whether such data, or similar data, might have been scanned during the initial setup of the LLM. Usually, only the AI maker knows precisely what data was initially scanned. Ergo, it is worthwhile to be careful in interpreting generalizability, since an LLM might already have had an unknown leg up. Second, and perhaps more importantly, the desire to push toward a semblance of cognitive realism by further data-training an LLM is a laudable idea. Will the AI be more human-like in its reasoning patterns? Maybe, maybe not. One important determinant is whether the AI is still patterning on human-like language rather than on human reasoning. There is a huge debate going on about whether LLM foundational models claimed to be 'reasoning' are instead still doing heads-down next-token prediction; see my coverage of the lively dispute at the link here.

Taking Next Steps

The overarching aim of properly grounding computer-based cognitive simulations in a more psychologically plausible way is exciting. No doubt about that. The researchers also noted that there might be entirely different AI architectural approaches that would be better for us to pursue, beyond the somewhat conventional infrastructures currently dominating the AI realm. As a heads-up, some ardently believe that our prevailing LLMs and AI architectures are not going to get us to artificial general intelligence (AGI) or artificial superintelligence (ASI). You see, the trend right now is mainly to power up prevailing designs with faster hardware and more computational running time. But the incremental benefits could be misleadingly tying us to a road that leads to a dead end.
Could the desire to attain a unified model of cognition be the kick in the pants to the AI field to look beyond the groupthink of today's AI and LLMs? I certainly hope so. As General George S. Patton once proclaimed: 'If everyone is thinking alike, then somebody isn't thinking.'
Yahoo
4 hours ago
Trial hopes to fill 'real need' amid calls to review driver's licence requirements
As calls grow louder in one Aussie state for greater testing of elderly drivers, researchers at the other end of the country have launched a new trial that will measure age-related changes in attention. The subject of driving regulations for older residents has become a hotly contested debate in Victoria in recent weeks after a 91-year-old motorist lost control of her car in Melbourne's east and ploughed into a playground, leaving a two-year-old boy in hospital and killing his grandparents. The state's road minister, Melissa Horne, has since confirmed she asked her department to launch a review into the regulations around elderly drivers' licences, saying nothing is off the table. While many road safety advocates have praised the move, bodies representing older Aussies and doctors have placed the emphasis on ability rather than age. Now, researchers at the University of the Sunshine Coast are hoping to provide physicians with 'better tools to accurately assess cognitive fitness' by examining brain activity and how it relates to driving. 'At present, there are no standardised assessment tools available to GPs and health practitioners to help them assess cognitive fitness to drive,' Senior Research Fellow Dr Kayla Stefanidis told Yahoo News. 'There is a real need for the development of valid and reliable measures of cognitive fitness to drive.'

Elderly driver trial could lead to standardised screening method

The trial is recruiting participants aged between 60 and 90 to perform 'computerised attention tasks' and simulate driving while their brain activity is measured by an electroencephalogram (EEG). The results will then be compared to a control group of people under the age of 30 who have been driving for at least three years. 'While a number of factors are important, we know that our ability to focus on the road is fundamental to safe driving,' Samantha Ellis, a PhD candidate at the MAIC/UniSC Road Safety Research Collaboration, said.
'As we age, it's natural to experience changes in our attentional abilities. While these changes are normal, it is important to understand how they may impact our driving performance.' Unlike standard cognitive tests, the EEG allows researchers to measure brain activity associated with attention in real time, Ms Ellis explained. The Road Safety Research Collaboration (RSRC) is a partnership between the Motor Accident Insurance Commission (MAIC) and the University of the Sunshine Coast. The latest trial is part of a larger program of work being conducted within the RSRC aimed at developing a screening method for assessing cognitive fitness to drive.

What are the current requirements for older drivers?

Currently, doctors and health professionals make such assessments without any standardised measures. 'The development of a cognitive screening tool would help GPs and other health professionals accurately identify whether a patient requires further testing,' Dr Stefanidis said. 'For older adults, it could also reduce costly and unnecessary assessments in older adults who do not require them.' Victoria does not have age-based driving safety requirements, and the onus is on motorists themselves to declare any medical conditions that could impact their driving to the authorities. However, drivers aged 75 and older must renew their licence every three years, which can involve various tests. In NSW, Queensland, the ACT and the Northern Territory, drivers older than 75 are required to have an annual health check, while older drivers in South Australia must complete self-assessments, which in some cases require sign-off from a GP. Once drivers in NSW hit 85, they must also undertake a practical driving test every two years.
In Western Australia, an annual medical assessment is required to renew a driver's licence after turning 80. Dr Stefanidis said data collection for the study is expected to be completed in early-to-mid 2026. Anyone who wishes to take part can find out more here.
Yahoo
4 hours ago
China's 'robot Olympics' sees humanoids from 16 nations compete in table tennis, football, track
By Liam Mo and Brenda Goh

BEIJING (Reuters) - China kicked off a three-day sports showcase for humanoid robots on Friday, looking to highlight progress in artificial intelligence and robotics, with 280 teams from 16 countries competing in the World Humanoid Robot Games. Robots will compete in sports such as football, track and field, and table tennis, as well as tackle robot-specific challenges from sorting medicines and handling materials to cleaning services. Teams come from countries including the United States, Germany and Brazil, with 192 representing universities and 88 from private enterprises. Robots from Chinese companies including Unitree and Fourier are among those competing. The Beijing municipal government is among the organising bodies for the event, underscoring the emphasis Chinese authorities place on the emerging robotics industry and reflecting the country's broader ambitions in AI and automation. China's robotics push also comes as the country grapples with an ageing population and slowing economic growth. The sector has received government subsidies exceeding $20 billion over the past year, while Beijing plans to establish a one trillion yuan ($137 billion) fund to support AI and robotics startups. China has staged a series of high-profile robotics events in recent months, including what it called the world's first humanoid robot marathon in Beijing, a robot conference and the opening of retail stores dedicated to humanoid robots. However, the marathon drew criticism after several robot competitors emitted smoke during the race and some failed to complete the course, raising questions about the current capabilities of the technology.
Still, while some may view such competitions and events as publicity stunts, industry experts and participants see them as crucial catalysts for advancing humanoid robots toward practical real-world applications. Morgan Stanley analysts in a report last week noted a surge in general-public attendance at a recent robot conference compared with previous years, saying this showed "how China, not just top government officials, has embraced the concept of embodied intelligence." "We believe this widespread interest could be instrumental for China's continued leadership in the humanoid race, providing the necessary talent, resources, and customers to boost industry development and long-term adoption," they said. Booster Robotics, whose humanoid robots are being used by a Tsinghua University team in the football competition, views soccer as an effective test of perception, decision-making and control technologies that could later be deployed in factories or homes. "Playing football is a testing and training ground for helping us refine our capabilities," said Zhao Mingguo, Chief Scientist at Booster Robotics.