Terrifying study reveals AI robots have passed ‘Turing test' — and are now indistinguishable from humans, scientists say
The AI-mpersonation is complete.
The dystopian lessons in every sci-fi movie from 'Terminator' to 'Ex Machina' appear to be coming true. Artificial intelligence has become so sophisticated that bots are no longer discernable from their human counterparts, per a concerning preprint study conducted by scientists at the University of California in San Diego.
'People were no better than chance at distinguishing humans from GPT-4.5 and LLaMa (a multi-lingual language model released by Meta AI),' concluded head author Cameron Jones, a researcher at UC San Diego's Language and Cognition Lab, in an X post.
The researchers set out to see if AI bots, which are programmed to parrot human speech patterns, could pass the iconic Turing Test.
Developed by British WWII code breaker and computer scientist Alan Turing, this tech-istential exam gauges the intelligence of machines by determining if their digital discourse could be differentiated from that of a human — if they can't tell the difference, the machine has passed.
Researchers tested four large language models (LLMs) — GPT-4o, LLaMa-3, and GPT-4.5 and Eliza (a 1960s era chat program) — in two randomized and controlled trials, Mashable reported.
To administer said Turing Test, they enlisted 126 undergraduate students from the University of California San Diego and 158 people from online data pool Prolific, the Daily Mail reported.
These participants were instructed to have five-minute simultaneous online exchanges with a robot and a human to see if they could spot the Decepticon — with the kicker being that they didn't know which was which.
Meanwhile, the human and AI respondents were tasked with convincing their interrogator that they were human.
Researchers found that, when'prompted to adopt a humanlike persona, GPT-4.5 was judged to be the human 73% of the time.'
This was 'significantly more often than interrogators selected the real human participant,' and of course, more than enough to pass the test, per the study.
Meanwhile, 'LLaMa-3.1, with the same prompt, was judged to be the human 56% of the time.' While not significantly less than its human brethren, this still achieved a passing grade.
Finally, the baseline models (ELIZA and GPT-4o) failed after deceiving the witnesses just 23% and 21% of the time respectively.
Researchers found that it was paramount to have the synthetic mimics adopt a human persona because, when administered a second test sans said prompt, they performed significantly worse.
Caveats aside, these 'results constitute the first empirical evidence that any artificial system passes a standard three-party Turing test,' researchers concluded.
Does this mean that AI-infused learning language models are intelligent?
'I think that's a very complicated question that's hard to address in a paper (or a tweet),' said Jones on X. 'But broadly I think this should be evaluated as one among many other pieces of evidence for the kind of intelligence LLMs display.'
Interestingly, the experts at Psychology Today concluded that the bots had beaten the Turing Test, not through smarts, but by being a 'better' human than the actual humans.
'While the Turing Test was supposed to measure machine intelligence, it has inadvertently revealed something far more unsettling: our growing vulnerability to emotional mimicry,' wrote John Nosta, founder of the innovation think tank Nosta Lab, while describing this man-squerade. 'This wasn't a failure of AI detection. It was a triumph of artificial empathy.'
Nosta based his analysis on the fact that participants rarely asked logical questions, instead prioritizing 'emotional tone, slang, and flow,' and basing their selections on which 'one had more of a human vibe.'
He concluded, 'In other words, this wasn't a Turing Test. It was a social chemistry test—Match.GPT—not a measure of intelligence, but of emotional fluency. And the AI aced it.'
This isn't the first time AI has demonstrated an uncanny ability to pull the wool over our eyes.
In 2023, OpenAI's GPT-4 tricked a human into thinking it was blind to cheat the online CAPTCHA test that determines if users are human.
Hashtags

Try Our AI Features
Explore what Daily8 AI can do for you:
Comments
No comments yet...
Related Articles
Yahoo
a day ago
- Yahoo
Your Brain Wrinkles Are Way More Important Than We Ever Realized
The folds and ridges of the human brain are more complex than any other in the animal kingdom, and a new study shows that this complexity may be linked to the brain's level of connectivity and our reasoning abilities. Research led by a team from the University of California, Berkeley (UC Berkeley) looked at the brain shapes and neural activity of 43 young people, and in particular the lateral prefrontal cortex (LPFC) and lateral parietal cortex (LPC) – parts of the brain that handle reasoning and high-level cognition. The grooves and folds on the brain are known as sulci, with the smallest grooves known as tertiary sulci. These are the last to form as the brain grows, and the research team wanted to see how these grooves related to cognition. "The hypothesis is that the formation of sulci leads to shortened distances between connected brain regions, which could lead to increased neural efficiency, and then, in turn, individual differences in improved cognition with translational applications," says neuroscientist Kevin Weiner, from UC Berkeley. The analysis revealed each sulci had its own distinct connectivity pattern, and that the physical structure of some of these grooves was linked to the level of communication between brain areas – and not just areas that were close to each other. It adds to the findings of a 2021 study carried out by some of the same researchers, which found the depth of certain sulci are associated with cognitive reasoning. Now we have more data to help scientists understand why that might be. Between 60 and 70 percent of the brain's cortex (or outer layer) is hidden away inside folds, and these patterns change with age too. Tertiary sulci can vary significantly between individuals as well. "While sulci can change over development, getting deeper or shallower and developing thinner or thicker gray matter – probably in ways that depend on experience – our particular configuration of sulci is a stable individual difference: their size, shape, location and even, for a few sulci, whether they're present or absent," says neuroscientist Silvia Bunge, from UC Berkeley. It's clear from this research that the peaks and valleys of these brain structures are much more important than previously realized. They're not just random folds used to pack brains inside skulls – and may have evolved in certain directions over time. Going forward, the researchers have big plans when it comes to studying brain grooves. Eventually, it's possible that a map of these sulci could help in assessing brain development in children and spotting neurological disorders. There's a lot more work to do before that can happen though, and the researchers are emphasizing that brain fold length and depth are just two of many factors involved when it comes to our cognitive abilities. "Cognitive function depends on variability in a variety of anatomical and functional features," says Bunge. "Importantly, we know that experience, like quality of schooling, plays a powerful role in shaping an individual's cognitive trajectory, and that it is malleable, even in adulthood." The research has been published in the Journal of Neuroscience. Something Strange Happens to Your Eyes When You're Sexually Aroused 2-Year-Old Prodigy Joins 'High IQ' Club Mensa as Youngest Member Ever Traces of Mysterious Ancient Human Population Discovered in Colombia

Yahoo
a day ago
- Yahoo
Anthropic appoints a national security expert to its governing trust
A day after announcing new AI models designed for U.S. national security applications, Anthropic has appointed a national security expert, Richard Fontaine, to its long-term benefit trust. Anthropic's long-term benefit trust is a governance mechanism that Anthropic claims helps it promote safety over profit, and which has the power to elect some of the company's board of directors. The trust's other members include Centre for Effective Altruism CEO Zachary Robinson, Clinton Health Access Initiative CEO Neil Buddy Shah, and Evidence Action President Kanika Bahl. In a statement, Anthropic CEO Dario Amodei said that Fontaine's hiring will "[strengthen] the trust's ability to guide Anthropic through complex decisions" about AI as it relates to security. "Richard's expertise comes at a critical time as advanced AI capabilities increasingly intersect with national security considerations," Amodei continued. "I've long believed that ensuring democratic nations maintain leadership in responsible AI development is essential for both global security and the common good." Fontaine, who as a trustee won't have a financial stake in Anthropic, previously served as a foreign policy adviser to the late Sen. John McCain and was an adjunct professor at Georgetown teaching security studies. For more than six years, he led the Center for A New American Security, a national security think tank based in Washington, D.C., as its president. Anthropic has increasingly engaged U.S. national security customers as it looks for new sources of revenue. In November, the company teamed up with Palantir and AWS, the cloud computing division of Anthropic's major partner and investor, Amazon, to sell Anthropic's AI to defense customers. To be clear, Anthropic isn't the only top AI lab going after defense contracts. OpenAI is seeking to establish a closer relationship with the U.S. Defense Department, and Meta recently revealed that it's making its Llama models available to defense partners. Meanwhile, Google is refining a version of its Gemini AI capable of working within classified environments, and Cohere, which primarily builds AI products for businesses, is also collaborating with Palantir to deploy its AI models. Fontaine's hiring comes as Anthropic beefs up its executive ranks. In May, the company named Netflix co-founder Reed Hastings to its board. This article originally appeared on TechCrunch at Error in retrieving data Sign in to access your portfolio Error in retrieving data Error in retrieving data Error in retrieving data Error in retrieving data
Yahoo
a day ago
- Yahoo
Anthropic appoints a national security expert to its governing trust
A day after announcing new AI models designed for U.S. national security applications, Anthropic has appointed a national security expert, Richard Fontaine, to its long-term benefit trust. Anthropic's long-term benefit trust is a governance mechanism that Anthropic claims helps it promote safety over profit, and which has the power to elect some of the company's board of directors. The trust's other members include Centre for Effective Altruism CEO Zachary Robinson, Clinton Health Access Initiative CEO Neil Buddy Shah, and Evidence Action President Kanika Bahl. In a statement, Anthropic CEO Dario Amodei said that Fontaine's hiring will "[strengthen] the trust's ability to guide Anthropic through complex decisions" about AI as it relates to security. "Richard's expertise comes at a critical time as advanced AI capabilities increasingly intersect with national security considerations," Amodei continued. "I've long believed that ensuring democratic nations maintain leadership in responsible AI development is essential for both global security and the common good." Fontaine, who as a trustee won't have a financial stake in Anthropic, previously served as a foreign policy adviser to the late Sen. John McCain and was an adjunct professor at Georgetown teaching security studies. For more than six years, he led the Center for A New American Security, a national security think tank based in Washington, D.C., as its president. Anthropic has increasingly engaged U.S. national security customers as it looks for new sources of revenue. In November, the company teamed up with Palantir and AWS, the cloud computing division of Anthropic's major partner and investor, Amazon, to sell Anthropic's AI to defense customers. To be clear, Anthropic isn't the only top AI lab going after defense contracts. OpenAI is seeking to establish a closer relationship with the U.S. Defense Department, and Meta recently revealed that it's making its Llama models available to defense partners. Meanwhile, Google is refining a version of its Gemini AI capable of working within classified environments, and Cohere, which primarily builds AI products for businesses, is also collaborating with Palantir to deploy its AI models. Fontaine's hiring comes as Anthropic beefs up its executive ranks. In May, the company named Netflix co-founder Reed Hastings to its board.