AI still can't beat humans at reading social cues
AI models have progressed rapidly in recent years and can already outperform humans in various tasks, from generating basic code to dominating games like chess and Go. But despite massive computing power and billions of dollars in investor funding, these advanced models still can't hold up to humans when it comes to truly understanding how real people interact with one another in the world. In other words, AI still fundamentally struggles at 'reading the room.'
That's the claim made in a new paper by researchers from Johns Hopkins University. In the study, researchers asked a group of human volunteers to watch three-second video clips and rate the various ways individuals in those videos were interacting with one another. They then tasked more than 350 AI models—including image, video, and language-based systems—with predicting how the humans had rated those interactions. While the humans completed the task with ease, the AI models, regardless of their training data, struggled to accurately interpret what was happening in the clips. The researchers say their findings suggest that AI models still have significant difficulty understanding human social cues in real-world environments. That insight could have major implications for the growing industry of AI-enabled driverless cars and robots, which inherently need to navigate the physical world alongside people.
'Anytime you want an AI system to interact with humans, you want to be able to know what those humans are doing and what groups of humans are doing with each other,' Johns Hopkins University assistant professor of cognitive science and paper lead author Leyla Isik told Popular Science. 'This really highlights how a lot of these models fall short on those tasks.'
Isik will present the research findings today at the International Conference on Learning Representations.
Though previous research has shown that AI models can accurately describe what's happening in still images at a level comparable to humans, this study aimed to see whether that still holds true for video. To do that, Isik says she and her fellow researchers selected hundreds of videos from a computer vision dataset and clipped them down to three seconds each. They then narrowed the sample to include only videos featuring two humans interacting. Human volunteers viewed these clips and answered a series of questions about what was happening, rated on a scale from 1 to 5. The questions ranged from objective prompts like 'Do you think these bodies are facing each other?' to more subjective ones, such as whether the interaction appeared emotionally positive or negative.
In general, the human respondents tended to give similar answers, as reflected in their ratings—suggesting that people share a basic observational understanding of social interactions.
The researchers then posed similar types of questions to image, video, and language models. (The language models were given human-written captions to analyze instead of raw video.) Across the board, the AI models failed to demonstrate the same level of consensus as the human participants. The language models generally performed better than the image and video models, but Isik notes that may be partly due to the fact that they were analyzing captions that were already quite descriptive.
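The comparison described above can be sketched in code. The snippet below is a simplified, hypothetical illustration (not the study's actual analysis or data): simulated human raters score each clip on a 1-to-5 scale, an inter-rater consensus score is computed, and a model's predicted ratings are correlated against the mean human rating. All numbers and the leave-one-out consensus measure are assumptions for demonstration.

```python
import numpy as np

# Hypothetical illustration of the study's comparison: human raters score
# each clip on a 1-5 scale; a model's predictions are compared against the
# average human rating. All data here are simulated, not from the paper.
rng = np.random.default_rng(0)

n_clips, n_raters = 20, 10

# Simulated human ratings: raters share a common signal plus small noise,
# mimicking the high agreement the human volunteers showed.
true_signal = rng.uniform(1, 5, size=n_clips)
human_ratings = np.clip(
    true_signal[:, None] + rng.normal(0, 0.4, size=(n_clips, n_raters)), 1, 5
)
human_mean = human_ratings.mean(axis=1)

# Simulated model predictions: unrelated to the human consensus,
# mimicking a model that fails to track social judgments.
model_preds = np.clip(3 + rng.normal(0, 1.2, size=n_clips), 1, 5)

# Inter-rater consensus: average leave-one-out correlation of each rater
# with the mean of the remaining raters.
loo_corrs = []
for r in range(n_raters):
    others = human_ratings[:, [c for c in range(n_raters) if c != r]].mean(axis=1)
    loo_corrs.append(np.corrcoef(human_ratings[:, r], others)[0, 1])
human_consensus = float(np.mean(loo_corrs))

# Model-human agreement: correlation with the mean human rating.
model_agreement = float(np.corrcoef(model_preds, human_mean)[0, 1])

print(f"human inter-rater consensus: {human_consensus:.2f}")
print(f"model-human agreement:       {model_agreement:.2f}")
```

Under this setup, the human consensus score comes out high while the model's agreement with the human mean stays low, which is the qualitative pattern the researchers report.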
The researchers primarily evaluated open-access models, some of which were several years old. The study did not include the latest models recently released by leading AI companies like OpenAI and Anthropic. Still, the stark contrast between human and AI responses suggests there may be something fundamentally different about how models and humans process social and contextual information.
'It's not enough to just see an image and recognize objects and faces,' Johns Hopkins University doctoral student and paper co-author Kathy Garcia said in a statement. 'We need AI to understand the story that is unfolding in a scene. Understanding the relationships, context, and dynamics of social interactions is the next step, and this research suggests there might be a blind spot in AI model development.'
The findings come as tech companies race to integrate AI into an increasing number of physical robots—a concept often referred to as 'embodied AI.' Cities like Los Angeles, Phoenix, and Austin have become test beds for this new era thanks to the increasing presence of driverless Waymo robotaxis sharing the roads with human-driven vehicles. Limited understanding of complex environments has led some driverless cars to behave erratically or even get stuck in loops, driving in circles. While some recent studies suggest that driverless vehicles may currently be less prone to accidents than the average human driver, federal regulators have nonetheless opened investigations into Waymo and Amazon-owned Zoox for driving behavior that allegedly violated safety laws.
Other companies—like Figure AI, Boston Dynamics, and Tesla—are taking things a step further by developing AI-enabled humanoid robots designed to work alongside humans in manufacturing environments. Figure has already signed a deal with BMW to deploy one of its bipedal models at a facility in South Carolina, though its exact purpose remains somewhat vague. In these settings, properly understanding human social cues and context is even more critical, as even small misjudgments of intention run the risk of injury. Going a step further, some experts have even suggested that advanced humanoid robots could one day assist with elder and child care. Isik suggested the study's results mean there are still several steps that need to be taken before that vision becomes a reality.
'[The research] really highlights the importance of bringing neuroscience, cognitive science, and AI into these more dynamic, real-world settings,' Isik said.