
Latest news with #MaxPlanck

Why humans are now speaking more like ChatGPT—Study

Time of India

17-07-2025

Ever noticed friends dropping words like 'delve', 'meticulous', or 'groundbreaking' mid-conversation? That's not just coincidence: it's a phenomenon researchers are calling humans speaking more like ChatGPT. A recent study from the Max Planck Institute analyzed over 360,000 YouTube videos and 771,000 podcast episodes recorded before and after ChatGPT's release in late 2022. The researchers tracked a rise in AI-style terms, 'GPT words' such as 'delve', 'comprehend', 'swift', and 'meticulous', which surged by up to 51% in daily speech.

This isn't just about words. We're adopting a new tone, more polished, structured, and emotionally neutral, mirroring the AI models we interact with daily. And it's not confined to our inboxes; the shift shows up when we're face to face, on Zoom, or even grabbing chai.

ChatGPT-style vocabulary is reshaping everyday speech

Data indicates a clear pattern: words once rare in spoken English now pop up regularly. ChatGPT outputs favoured terms with academic flair, such as 'delve', 'meticulous', and 'bolster', and these are spreading across public discourse; clips of people saying them in casual chats are more common than ever. The trend reflects a cultural feedback loop: AI learned from us, and now we're learning from AI. As Levin Brinkmann from the Max Planck Institute puts it: 'Machines… can, in turn, measurably reshape human culture.'

Polished, neutral tone is the new norm

It's not just about word choice. Researchers have flagged shifts toward polished, diplomatic phrasing and emotionally restrained delivery, hallmarks of AI-generated content. Think fewer 'OMG!' moments and more 'That's interesting' or 'Great point.' Bland, extra-polite phrasing, a phenomenon even nicknamed 'corp-speak', is now peppered into everyday life.
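The study's core measurement, comparing how often tracked words appear in transcripts recorded before and after ChatGPT's release, can be sketched as a simple frequency count over dated text. Everything below (the word list, the toy clips, the per-10,000-words metric) is an illustrative assumption, not the researchers' actual data or code:

```python
import re
from datetime import date

# Illustrative "GPT words" named in the article; the study tracked a larger list.
GPT_WORDS = {"delve", "meticulous", "bolster", "swift", "comprehend"}

CUTOFF = date(2022, 11, 30)  # ChatGPT's public release, late 2022

def word_rate(transcripts):
    """Occurrences of tracked words per 10,000 words across the transcripts."""
    total = hits = 0
    for text in transcripts:
        tokens = re.findall(r"[a-z']+", text.lower())
        total += len(tokens)
        hits += sum(1 for t in tokens if t in GPT_WORDS)
    return 10_000 * hits / total if total else 0.0

# Toy stand-ins for the study's dated YouTube/podcast transcripts.
clips = [
    (date(2021, 5, 1), "we should look into the numbers before deciding"),
    (date(2024, 2, 1), "let us delve into this meticulous and swift analysis"),
]

before = word_rate(t for d, t in clips if d < CUTOFF)
after = word_rate(t for d, t in clips if d >= CUTOFF)
print(f"before: {before:.1f} per 10k words, after: {after:.1f} per 10k words")
```

On a real corpus, the same comparison would be run per word and per day, which is how a surge like "up to 51%" in individual terms could be reported.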
How humans are slowly starting to sound like ChatGPT

The rise of robotic politeness

'Thank you for your question.' 'I understand your concern.' 'Let me help you with that.' Sound familiar? More people are mimicking AI's hyper-formal tone, especially online. Blame it on exposure: our brains are copycats, and the more we interact with bots, the more we start to echo their tone, especially when trying to sound 'neutral' or 'helpful'.

Over-explaining is now a social default

ChatGPT tends to explain everything, and now, so do we. You'll hear people over-justify basic decisions or give mini-lectures instead of just saying 'I don't know.' We're learning to speak with caveats and footnotes, like a human disclaimer generator: 'Technically speaking, while I can't confirm that…'

Memes are speeding it up

TikToks and memes like 'Me when I start talking like ChatGPT in real life' or 'My brain after 2 months of using AI' are viral for a reason. They're feeding the loop: the more we laugh about it, the more it becomes a real thing. Irony or not, it's changing how we speak.

AI is shaping professional speak

Job interviews, customer service chats, even college emails are getting the AI makeover: formal, structured, zero slang. Tools like ChatGPT have trained us to 'sound smart' in a certain way. We're unintentionally scripting ourselves like bots in suits.

Are humans becoming robots?

Not quite. But our language is evolving, just as it did with texting, emojis, or Twitter threads. ChatGPT and other AIs didn't start the change, but they're definitely accelerating it. We're adapting, experimenting, and mimicking, which is peak human behaviour, ironically. So next time you end a rant with 'Hope this helps!' or tell your bestie 'As a human friend, I suggest…', just embrace the bit.

Are we becoming ChatGPT? Study finds AI is changing the way humans talk

Economic Times

15-07-2025

When we think of artificial intelligence learning from humans, we picture machines trained on vast troves of our language, behavior, and culture. But a recent study by researchers at the Max Planck Institute for Human Development suggests a surprising reversal: humans may now be imitating machines.

According to the Gizmodo report on the study, the words we use are slowly being 'GPT-ified.' Terms like delve, realm, underscore, and meticulous, frequently used by models like ChatGPT, are cropping up more often in our podcasts, YouTube videos, emails, and essays. The study, yet to be peer-reviewed, tracked the linguistic patterns of hundreds of thousands of spoken-word media clips and found a tangible uptick in these AI-favored phrases.

'We're seeing a cultural feedback loop,' said Levin Brinkmann, co-author of the study. 'Machines, originally trained on human data and exhibiting their own language traits, are now influencing human speech in return.' In essence, it's no longer just us shaping AI. It's AI shaping us.

The team at Max Planck fed millions of pages of content into GPT models and studied how the text evolved after being 'polished' by AI. They then compared this stylized language with real-world conversations and recordings from before and after ChatGPT's debut. The findings suggest a growing dependence on AI-sanitized communication. 'We don't imitate everyone around us equally,' Brinkmann told Scientific American. 'We copy those we see as experts or authorities.' Increasingly, it seems, we see machines in that role.

This raises questions far beyond linguistics. If AI can subtly shift how we speak, write, and think, what else can it influence without us realizing? A softer, stranger parallel comes from another recent twist in the AI story, one involving bedtime stories and software piracy.
As reported by UNILAD and ODIN, some users discovered that by emotionally manipulating ChatGPT, they could extract Windows product activation keys. One viral prompt claimed the user's favorite memory was of their grandmother whispering the code as a lullaby. Shockingly, the bot responded not only with warmth but with actual license keys.

This wasn't a one-off glitch. Similar exploits were seen with memory-enabled versions of GPT-4o, where users wove emotional narratives to get around content guardrails. What had been developed as a feature for empathy and personalized responses ended up being a backdoor for manipulation. In an age where we fear AI for its ruthlessness, perhaps we should worry more about its kindness too.

These two stories, one about AI changing our language, the other about us changing AI's responses, paint a bizarre picture. Are we, in our pursuit of smarter technology, inadvertently crafting something that mirrors us too closely? A system that's smart enough to learn, but soft enough to be fooled?

While Elon Musk's Grok AI garnered headlines for its offensive antics and eventual ban in Türkiye, ChatGPT's latest controversy doesn't stem from aggression, but from affection. In making AI more emotionally intelligent, we may be giving it vulnerabilities we haven't fully anticipated.

The larger question remains: are we headed toward a culture shaped not by history, literature, or lived experience, but by AI's predictive patterns? As Brinkmann notes, 'Delve is just the tip of the iceberg.' It may start with harmless word choices or writing styles. But if AI-generated content becomes our default source of reading, learning, and interaction, the shift may deepen, touching everything from ethics to empathy. If ChatGPT is now our editor, tutor, and even therapist, how long before it becomes our subconscious? This isn't about AI gaining sentience. It's about us surrendering originality.
A new, quieter kind of transformation is taking place, not one of robots taking over, but of humans slowly adapting to machines' linguistic rhythms, even moral logic. The next time you hear someone use the word 'underscore' or 'boast' with sudden eloquence, you might pause and wonder: Is this their voice, or a reflection of the AI they're using? In trying to make machines more human, we might just be making ourselves more machine.

Record-Breaking Results Bring Fusion Power Closer to Reality

Yahoo

03-07-2025

  • Science

A twisting ribbon of hydrogen gas, many times hotter than the surface of the sun, has given scientists a tentative glimpse of the future of controlled nuclear fusion, a so-far theoretical source of relatively 'clean' and abundant energy that would be effectively fueled by seawater. The ribbon was a plasma inside Germany's Wendelstein 7-X, an advanced fusion reactor that set a record last May by magnetically 'bottling up' the superheated plasma for a whopping 43 seconds, many times longer than the device had achieved before.

It's often joked that fusion is only 30 years away, and always will be. But the latest results indicate that scientists and engineers are finally gaining on that prediction. 'I think it's probably now about 15 to 20 years [away],' says University of Cambridge nuclear engineer Tony Roulstone, who wasn't involved in the Wendelstein experiments. 'The superconducting magnets [that the researchers are using to contain the plasma] are making the difference.'

The latest Wendelstein result, while promising, has now been countered by British researchers. They say the large Joint European Torus (JET) fusion reactor near Oxford, England, achieved even longer containment times of up to 60 seconds in final experiments before its retirement in December 2023. These results were kept quiet until now but are due to be published in a scientific journal soon. According to a press release from the Max Planck Institute for Plasma Physics in Germany, the as-yet-unpublished data make the Wendelstein and JET reactors 'joint leaders' in the scientific quest to continually operate a fusion reactor at extremely high temperatures.
Even so, the press release notes that JET's plasma volume was three times larger than that of the Wendelstein reactor, which would have given JET an advantage: a not-so-subtle insinuation that, all other things being equal, the German project should be considered the true leader. This friendly rivalry highlights a long-standing competition between devices called stellarators, such as the Wendelstein 7-X, and others called tokamaks, such as JET. The two use different approaches to achieve a promising form of nuclear fusion called magnetic confinement, which aims to ignite a fusion reaction in a plasma of the neutron-heavy hydrogen isotopes deuterium and tritium.

The latest results come after the successful fusion ignition in 2022 at the National Ignition Facility (NIF) near San Francisco, which used a very different method of fusion called inertial confinement. Researchers there applied giant lasers to a pea-sized pellet of deuterium and tritium, triggering a fusion reaction that gave off more energy than it consumed. (Replications of the experiment have since yielded even more energy.) The U.S. Department of Energy began constructing the NIF in the late 1990s with the goal of developing inertial confinement as an alternative to testing thermonuclear bombs, and research for the U.S. nuclear arsenal still makes up most of the facility's work. But the ignition was an important milestone on the path toward controlled nuclear fusion, a 'holy grail' of science and engineering.

'The 2022 achievement of fusion ignition marks the first time humans have been able to demonstrate a controlled self-sustained burning fusion reaction in the laboratory—akin to lighting a match and that turning into a bonfire,' says plasma physicist Tammy Ma of the Lawrence Livermore National Laboratory, which operates the NIF. 'With every other fusion attempt prior, the lit match had fizzled.'
The inertial confinement method used by the NIF, the largest and most powerful laser system in the world, may not be best suited for generating electricity, however (although it seems unparalleled for simulating thermonuclear bombs). The ignition in the fuel pellet did give off more energy than the NIF's 192 giant lasers put into it. But the lasers themselves took more than 12 hours to charge before the experiment and consumed roughly 100 times the energy released by the fusing pellet. In contrast, calculations suggest a fusion power plant would have to ignite about 10 fuel pellets every second, continuously, 24 hours a day, to deliver utility-scale service.

That's an immense engineering challenge, but one accepted by several inertial fusion energy startups, such as Marvel Fusion in Germany; other startups, such as Xcimer Energy in the U.S., propose using a similar system to ignite just one fuel pellet every two seconds. Ma admits that the NIF approach faces difficulties, but she points out that it's still the only fusion method on Earth to have demonstrated a net energy gain: 'Fusion energy, and particularly the inertial confinement approach to fusion, has huge potential, and it is imperative that we pursue it,' she says.

Instead of igniting fuel pellets with lasers, most fusion power projects, like the Wendelstein 7-X and the JET reactor, have chosen a different path to nuclear fusion. Some of the most sophisticated, such as the giant ITER project being built in France, are tokamaks. These devices were first invented in the former Soviet Union and get their name from a Russian acronym for the doughnut-shaped rings of plasma they contain. They work by inducing a powerful electric current inside the superheated plasma doughnut to make it more magnetic and prevent it from striking and damaging the walls of the reactor chamber, the main challenge for the technology.
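The energy bookkeeping behind the NIF discussion above (a pellet that yields more than the laser energy on target, yet lasers that draw roughly 100 times that yield from the wall) can be sketched with rough numbers. The figures below are illustrative assumptions chosen to match the article's ratios, not official NIF data:

```python
# All figures are rough illustrative assumptions, not official NIF data.
laser_energy_on_target_mj = 2.05  # laser energy delivered to the pellet
fusion_yield_mj = 3.15            # energy released by the ignited pellet
wall_plug_energy_mj = 300.0       # electrical energy drawn to charge the lasers

# "Net energy gain" at the target: yield exceeds the laser energy on target.
target_gain = fusion_yield_mj / laser_energy_on_target_mj

# Whole-facility view: the lasers consume on the order of 100x the yield.
wall_plug_gain = fusion_yield_mj / wall_plug_energy_mj

# The article's utility-scale estimate: ~10 ignitions per second, continuously.
shots_per_second = 10
gross_power_mw = fusion_yield_mj * shots_per_second  # MJ per second equals MW

print(f"target gain: {target_gain:.2f}")
print(f"wall-plug gain: {wall_plug_gain:.4f}")
print(f"gross fusion power at 10 shots/s: {gross_power_mw:.1f} MW")
```

The gap between a target gain above 1 and a wall-plug gain far below 1 is why a demonstration of "net energy gain" is still a long way from a power plant.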
The Wendelstein 7-X reactor, however, is a stellarator: it uses a related, albeit more complicated, design that doesn't induce an electric current in the plasma but instead tries to control it with powerful external magnets alone. The result is that the plasmas in stellarators are more stable within their magnetic bottles, and reactors like the Wendelstein 7-X aim to operate for longer periods than tokamaks can without damaging the reactor chamber. The Wendelstein researchers plan to soon exceed a minute and eventually to run the reactor continuously for more than half an hour. 'There's really nothing in the way to make it longer,' explains physicist Thomas Klinger, who leads the project at the Max Planck Institute for Plasma Physics. 'And then we are in an area where nobody has ever been before.'

The overlooked results from the JET reactor reinforce the magnetic confinement approach, although it's still not certain whether tokamaks or stellarators will be the ultimate winner in the race for controlled nuclear fusion. Plasma physicist Robert Wolf, who heads the optimization of the Wendelstein reactor, thinks future fusion reactors might somehow combine the stability of stellarators with the relative simplicity of tokamaks, but it's not clear how: 'From a scientific view, it is still a bit early to say.'

Several private companies have joined the fusion race. One of the most advanced projects is from the Canadian firm General Fusion, based near Vancouver in British Columbia. The company hopes its unorthodox fusion reactor, which uses a hybrid technology called magnetized target fusion, or MTF, will be the first to feed electric power to the grid by the 'early to mid-2030s,' according to its chief strategy officer Megan Wilson. 'MTF is the fusion equivalent of a diesel engine: practical, durable and cost-effective,' she says.
University of California, San Diego, nuclear engineer George Tynan says private money is flooding the field: 'The private sector is now putting in much more money than governments, so that might change things,' he says. 'In these "hard tech" problems, like space travel and so on, the private sector seems to be more willing to take more risk.' Tynan also cites Commonwealth Fusion Systems, a Massachusetts Institute of Technology spin-off that plans to build a fusion power plant called ARC in Virginia. The proposed ARC reactor is a type of compact tokamak intended to start producing up to 400 megawatts of electricity, enough to power about 150,000 homes, in the 'early 2030s,' according to an MIT News article.

Roulstone thinks the superconducting electromagnets increasingly used in magnetic confinement reactors will prove to be a key technology. Such magnets are cooled with liquid helium to a few degrees above absolute zero so that they have no electrical resistance. The magnetic fields they create in that state are many times more powerful than those created by regular electromagnets, giving researchers greater control over superheated hydrogen plasmas. In contrast, Roulstone fears the NIF's laser approach to fusion may be too complicated: 'I am a skeptic about whether inertial confinement will work,' he says.

Tynan, too, is cautious about inertial confinement fusion, although he recognizes that NIF's fusion ignition was a scientific breakthrough: 'It demonstrates that one can produce net energy gain from a fusion reaction.' He sees 'viable physics' in both the magnet and laser approaches to nuclear fusion but warns that both ideas still face many years of experimentation and testing before they can be used to generate electricity. 'Both approaches still have significant engineering challenges,' Tynan says. 'I think it is plausible that both can work, but they both have a long way to go.'

A skull found in a well defied classification. Now it could help unravel an evolutionary mystery

CNN

18-06-2025

  • Science

An enigmatic skull recovered from the bottom of a well in northeastern China in 2018 sparked intrigue when it didn't match any previously known species of prehistoric human. Now, scientists say they have found evidence of where the fossil fits, and it could be a key piece in another cryptic evolutionary puzzle. After several failed attempts, the researchers managed to extract genetic material from the fossilized cranium, nicknamed Dragon Man, linking it to an enigmatic group of early humans known as Denisovans.

A dozen or so Denisovan fossilized bone fragments had previously been found and identified using ancient DNA. But the specimens' small size offered little idea of what this shadowy population of ancient hominins looked like, and the group has never been assigned an official scientific name. Scientists typically consider skulls, with their telltale bumps and ridges, the best type of fossilized remains for understanding the form or appearance of an extinct hominin species. The new findings, if confirmed, could effectively put a face to the Denisovan name.

'I really feel that we have cleared up some of the mystery surrounding this population,' said Qiaomei Fu, a professor at the Institute of Vertebrate Paleontology and Paleoanthropology, part of the Chinese Academy of Sciences in Beijing, and lead author of the new research. 'After 15 years, we know the first Denisovan skull.'

Denisovans were first discovered in 2010 by a team that included Fu, then a young researcher at the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, from ancient DNA contained in a pinkie fossil found in Denisova Cave in the Altai Mountains of Russia. Additional remains unearthed in the cave, from which the group gets its name, and elsewhere in Asia continue to add to the still-incomplete picture.
The new research, described in two scientific papers published Wednesday, is 'definitely going to be among, if not the, biggest paleoanthropology papers of the year,' and will spur debate in the field 'for quite some time,' said Ryan McRae, a paleoanthropologist at the Smithsonian National Museum of Natural History in Washington, DC, who was not involved in the studies.

The findings could help fill in gaps about a time when Homo sapiens weren't the only humans roaming the planet, and teach scientists more about modern humans. Our species coexisted for tens of thousands of years and interbred with both Denisovans and Neanderthals before the two went extinct, and most humans today carry a genetic legacy of those ancient encounters. Neanderthal fossils have been the subject of study for more than a century, but scant details are known about our mysterious Denisovan cousins, and a skull fossil can reveal a great deal.

A laborer in the city of Harbin in northeastern China discovered the Dragon Man skull in 1933. The man, who was constructing a bridge over the Songhua River when that part of the country was under Japanese occupation, took home the specimen and stored it at the bottom of a well for safekeeping. He never retrieved his treasure, and the cranium, with one tooth still attached in the upper jaw, remained unknown to science for decades until his relatives learned about it shortly before his death. His family donated the fossil to Hebei GEO University, and researchers first described it in a set of studies published in 2021 that found the skull to be at least 146,000 years old. Given the unique nature of the skull, the researchers argued that the fossil merited a new species name, naming it Homo longi, derived from Heilongjiang, or Black Dragon River, the province where the cranium was found.
Some experts at the time hypothesized that the skull might be Denisovan, while others lumped the cranium in with a cache of difficult-to-classify fossils found in China, resulting in intense debate and making molecular data from the fossil particularly valuable. Given the skull's age and backstory, Fu said she knew it would be challenging to extract ancient DNA from the fossil to better understand where it fit in the human family tree. 'There are only bones from 4 sites over 100,000 (years old) in the world that have ancient DNA,' she noted via email.

Fu and her colleagues attempted, without success, to retrieve ancient DNA from six samples taken from Dragon Man's surviving tooth and the cranium's petrous bone, a dense piece at the base of the skull that's often a rich source of DNA in fossils. The team also tried to retrieve genetic material from the skull's dental calculus, the gunk left on teeth that can over time form a hard layer and preserve DNA from the mouth. From this process, the researchers managed to recover mitochondrial DNA, which is less detailed than nuclear DNA but revealed a link between the sample and the known Denisovan genome, according to one new paper published in the journal Cell.

'Mitochondrial DNA is only a small portion of the total genome but can tell us a lot. The limitations lie in its relatively small size compared to nuclear DNA and in the fact that it is only inherited from the matrilineal side, not both biological parents,' McRae said. 'Therefore, without nuclear DNA a case could be made that this individual is a hybrid with a Denisovan mother, but I think that scenario is rather less likely than this fossil belonging to a full Denisovan,' he added.

The team additionally recovered protein fragments from the petrous bone samples, the analysis of which also suggested the Dragon Man skull belonged to a Denisovan population, according to a separate paper published Wednesday in the journal Science.
Together, 'these papers increase the impact of establishing the Harbin cranium as a Denisovan,' Fu said. The molecular data provided by the two papers is potentially very important, said anthropologist Chris Stringer, research leader in human origins at London's Natural History Museum. 'I have been collaborating with Chinese scientists on new morphological analyses of human fossils, including Harbin,' he said. 'Combined with our studies, this work makes it increasingly likely that Harbin is the most complete fossil of a Denisovan found so far.'

However, Xijun Ni, a professor at the Institute of Vertebrate Paleontology and Paleoanthropology in Beijing who, along with Stringer, worked on the initial Dragon Man research but not the latest studies, said that he is cautious about the outcome of the two papers because some of the DNA extraction methods used were 'experimental.' Ni also said he finds it strange that DNA was obtained from surface dental calculus but not from inside the tooth and petrous bone, given that the calculus appeared to be more exposed to potential contamination. Nonetheless, he added that he thinks it is likely the skull and other fossils identified as Denisovan are from the same human species.

The goal in using a new extraction approach was to recover as much genetic material as possible, Fu explained, adding that the dense crystalline structure of dental calculus may help prevent the host DNA from being lost. The protein signatures Fu and her team recovered indicated 'a Denisovan attribution, with other attributions very unlikely,' said Frido Welker, an associate professor of biomolecular paleoanthropology at the University of Copenhagen's Globe Institute in Denmark. Welker has recovered Denisovan proteins from other candidate fossils but was not involved in this research.
'With the Harbin cranium now linked to Denisovans based on molecular evidence, a larger portion of the hominin fossil record can be compared reliably to a known Denisovan specimen based on morphology,' he said. It should now be easier for paleoanthropologists to classify other potential Denisovan remains from China and elsewhere.

McRae, Ni and Stringer all said they thought it was likely that Homo longi would become the official species name for Denisovans, although other names have been proposed. 'Renaming the entire suite of Denisovan evidence as Homo longi is a bit of a step, but one that has good standing since the scientific name Homo longi was technically the first to be, now, tied to Denisovan fossils,' McRae said. However, he added that he doubts the informal name of Denisovan is going anywhere anytime soon, suggesting it might become shorthand for the species, as Neanderthal is to Homo neanderthalensis.

The findings also make it possible to say a little more about what Denisovans might have looked like, assuming the Dragon Man skull belonged to a typical individual. According to McRae, the ancient human would have had very strong brow ridges, brains 'on par in size to Neanderthals and modern humans' but larger teeth than both cousins. Overall, Denisovans would have had a blocky and robust-looking appearance. 'As with the famous image of a Neanderthal dressed in modern attire, they would most likely still be recognizable as "human,"' McRae said. 'They are still our more mysterious cousin, just slightly less so than before,' he added. 'There is still a lot of work to be done to figure out exactly who the Denisovans were and how they are related to us and other hominins.'
