AI understands many things... except for human social interactions


The Star, 28-04-2025

Artificial intelligence (AI) continues to advance, yet this technology still struggles to grasp the complexity of human interactions. A recent American study reveals that, while AI excels at recognising objects or faces in still images, it remains ineffective at describing and interpreting social interactions in a moving scene.
The team led by Leyla Isik, professor of cognitive science at Johns Hopkins University, investigated how artificial intelligence models understand social interactions. To do this, the researchers designed a large-scale experiment involving over 350 AI models specialising in video, image or language. These AI tools were exposed to short, three-second video sequences illustrating various social situations. At the same time, human participants were asked to rate the intensity of the interactions observed, according to several criteria, on a scale of 1 to 5. The aim was to compare human and AI interpretations, in order to identify differences in perception and better understand the current limits of algorithms in analysing our social behaviours.
"A blind spot in AI model development"
The human participants were remarkably consistent in their assessments, demonstrating a detailed and shared understanding of social interactions. AI, on the other hand, struggled to match these judgments. Models specialising in video proved particularly ineffective at accurately describing the scenes observed. Even models based on still images, although fed with several extracts from each video, struggled to determine whether the characters were communicating with each other. As for language models, they fared a little better, especially when given descriptions written by humans, but remained far from the level of performance of human observers.
For Leyla Isik, the inability of artificial intelligence models to understand human social dynamics is a major obstacle to their integration into real-world environments. "AI for a self-driving car, for example, would need to recognise the intentions, goals, and actions of human drivers and pedestrians. You would want it to know which way a pedestrian is about to start walking, or whether two people are in conversation versus about to cross the street," the study's lead author explains in a news release. "Any time you want an AI to interact with humans, you want it to be able to recognise what people are doing. I think this [study] sheds light on the fact that these systems can't right now."
According to the researchers, this deficiency could be explained by the way AI neural networks are designed: they are mainly inspired by the regions of the human brain that process static images, whereas dynamic social scenes engage other brain areas. This structural mismatch may be, as the researchers put it, "a blind spot in AI model development." Indeed, "real life isn't static. We need AI to understand the story that is unfolding in a scene," says study coauthor Kathy Garcia.
Ultimately, this study reveals a profound gap between the way humans and AI models perceive moving social scenes. Despite their computing power and their ability to process vast quantities of data, machines still cannot grasp the subtleties and implicit intentions underlying our social interactions; for all its advances, artificial intelligence remains a long way from truly understanding what goes on in human exchanges. – AFP Relaxnews


Related Articles

China is working on an ultra-fast torpedo powered by AI for submarine warfare
The Star, 13 hours ago

In the recent Chinese blockbuster Operation Leviathan, an American nuclear submarine uses hi-tech acoustic holograms to bamboozle Chinese torpedoes and their human operators. Months after the film hit cinema screens, military researchers in China revealed they were working on an artificial intelligence system designed to cut through exactly this type of underwater deception.

In a peer-reviewed paper published in the Chinese-language journal Command Control & Simulation in April, the team from the PLA Navy Armament Department and China State Shipbuilding Corporation said their system had unprecedented accuracy for torpedoes travelling at high speeds. Tested against data from classified high-speed torpedo ranges, the technology achieved an average 92.2 per cent success rate in distinguishing real submarines from decoys even during tense exchanges, according to the paper. That is a leap from legacy systems, which often miss the target.

Future submarine warfare hinges on deceiving torpedoes with illusions. Hi-tech decoys – as dramatised in Operation Leviathan – are used to replicate a vessel's acoustic signature, generate a false bubble trail to make it look like the vessel is making an emergency turn, or deploy in coordinated swarms to project ghost targets across sonar screens.

These tactics are particularly effective against what are known as ultra-fast supercavitating torpedoes – weapons that generate cavitation, or vapour bubbles, around their hulls to reduce drag. The resulting roar drowns out genuine target echoes while distorting acoustic fingerprints, according to the Chinese researchers.

'Current target recognition methods for China's underwater high-speed vehicles prove inadequate in environments saturated with advanced countermeasures, necessitating urgent development of novel approaches for feature extraction and target identification,' said the team led by senior engineers Wu Yajun and Liu Liwen.
'Only those underwater high-speed systems equipped with long-range detection capabilities and high target recognition rates can deliver sufficient operational effectiveness,' they added.

The solution they proposed came from an unorthodox combination of physics and machine learning. Facing scarce real-world combat data, the team began by simulating decoy profiles using hydrodynamic models of bubble collapse patterns and turbulence, drawing on raw data collected from the PLA Navy's high-speed torpedo test range.

These simulations were then fed into a 'generative adversarial network' – a duelling pair of AI systems. One of them, the generator, refined decoy signatures by studying submarine physics and acoustic principles. Its opponent, the discriminator, trained to detect flaws in these forgeries using seven layers of sonic pattern analysis. After many rounds of training, the system had created a huge collection of artificial decoy profiles.

The AI uses a specialised neural network architecture inspired by image recognition, according to the paper. Sonar signals are first normalised for amplitude, then filtered through correlation receivers to suppress noise, and finally rendered as spectral 'thumbnails' using a mathematical tool known as a Fourier transform. These sonic snapshots then pass through convolutional layers in the neural network that are tuned to detect anomalies in frequency modulation, while pooling operations average out distortions such as bubble interference.

The team said that when confronted with the most sophisticated type of decoys, detection rates rose from 61.3 per cent to more than 80 per cent.

It comes amid a global race to develop 'smart' torpedoes. Russia's VA-111 Shkval torpedo and its US counterparts under development all rely on supercavitation at present, and they struggle with target discrimination at extreme speeds.
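The signal-preprocessing stage described in the paper – amplitude normalisation, a correlation receiver, then a Fourier-transform 'thumbnail' – can be sketched in a few lines. This is an illustrative reconstruction only, not the researchers' code: the function name, signal parameters and thumbnail size are all assumptions.

```python
import numpy as np

def spectral_thumbnail(signal, reference, n_bins=64):
    """Turn a raw sonar return into a fixed-size spectral 'thumbnail'.

    Illustrative sketch: names and sizes are assumptions, not values
    from the paper.
    """
    # Normalise amplitude so loud and quiet echoes are comparable
    sig = signal / (np.max(np.abs(signal)) + 1e-12)
    # Correlation receiver: cross-correlate with a known reference
    # pulse to suppress noise that does not match the expected waveform
    filtered = np.correlate(sig, reference, mode="same")
    # Fourier transform; keep only the magnitude spectrum
    spectrum = np.abs(np.fft.rfft(filtered))
    # Average adjacent frequencies down to a compact fixed-size vector,
    # the 'thumbnail' that would feed a convolutional classifier
    trimmed = spectrum[: (len(spectrum) // n_bins) * n_bins]
    return trimmed.reshape(n_bins, -1).mean(axis=1)

# Hypothetical echo: a 50 Hz tone buried in broadband noise
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 4096, endpoint=False)
reference = np.sin(2 * np.pi * 50 * t[:256])
echo = np.sin(2 * np.pi * 50 * t) + 0.5 * rng.standard_normal(t.size)
thumb = spectral_thumbnail(echo, reference)
print(thumb.shape)
```

Because every return is reduced to the same small vector regardless of its raw length, a downstream convolutional network can be trained on thumbnails of real echoes and GAN-generated decoy profiles alike.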
'With continuous advancements in modern underwater acoustics, electronic technologies and artificial intelligence, today's underwater battlespace often contains multiple simultaneous threats within a single operational area – including decoys, electro-acoustic countermeasure systems, electronic jammers and diverse weapon systems,' the paper said.

In such intense underwater environments where multiple targets or decoys can appear simultaneously, these systems must be able to instantly distinguish authentic targets from false ones to avoid mission failure or a wasted trajectory and to prioritise the highest-threat targets, according to the team.

'Critically, given the autonomous nature of underwater high-speed vehicles, all decisions must be made without real-time external communication support, substantially increasing algorithmic complexity and computational demands,' the team said. 'The deep-learning recognition model proposed in this study, combined with the generative adversarial networks' small-sample identification solution, enables effective underwater target discrimination. This lays the technical groundwork for field deployment,' they added. – South China Morning Post

Trump administration renegotiating 'overly generous' Biden Chips Act grants
The Star, a day ago

WASHINGTON (Reuters) - President Donald Trump's administration is renegotiating some of former President Joe Biden's grants to semiconductor firms that were "overly generous," U.S. Commerce Secretary Howard Lutnick said at a hearing on Wednesday.

Biden's Chips Act aimed to coax chipmakers to expand production in the U.S., but some of the awards "just seemed overly generous, and we've been able to renegotiate them," Lutnick told lawmakers on the Senate Appropriations Committee. "Are we renegotiating? Absolutely, for the benefit of the American taxpayer," he added.

Lutnick also addressed concerns that deals like the one Trump announced last month allowing the United Arab Emirates to buy advanced artificial intelligence chips from U.S. companies could lead to an exodus of AI compute from the U.S. Lutnick said the administration agrees with the goal that more than 50% of global AI computing capacity should be in America. (Reporting by Alexandra Alper; Editing by David Gregorio)

Microsoft says to step up AI-powered European cybersecurity
The Sun, a day ago

PARIS: US tech giant Microsoft said Wednesday that it would step up its cooperation with European governments against cyber threats, including by deploying AI-powered intelligence gathering.

Its new European Security Program 'puts AI at the center of our work as a tool to protect traditional cybersecurity needs,' Microsoft Vice Chairman Brad Smith wrote in a blog post. Aiming to deliver real-time intelligence about cyber threats to governments, the scheme will extend to the '27 EU member states, as well as EU accession countries, members of the European Free Trade Association (EFTA), the UK, Monaco, and the Vatican,' he added.

Microsoft accused the governments of Russia, China, Iran and North Korea of being behind the infiltration of European computer networks for espionage and other purposes. Meanwhile, cybercriminals are expanding attacks using tools such as ransomware, which encrypts data on victims' computers and demands a payment to unlock it again. 'We see 600 million attacks on our customers every single day,' Smith told reporters in a briefing ahead of the blog post's release, calling cyberdefence a 'multi-billion-dollar expense for customers across Europe'.

AI systems can help detect and identify new forms of attack, Smith wrote. But Microsoft has seen malicious actors using the technology for everything from researching targets to writing code and 'social engineering', that is, convincing human employees to facilitate access by hackers. And 'influence operations' by nation-states 'are increasingly using AI to mislead and deceive', including with convincing 'deepfake' images, audio and video, Smith added. The company itself 'tracks any malicious use of new AI models we release and proactively prevents known threat actors from using' them, he wrote.
Microsoft last month helped police across Europe take down large swathes of digital infrastructure supporting an 'infostealing' network, Lumma, that had been gathering sensitive information like passwords and crypto wallets from victims' devices. In future, members of the company's Digital Crimes Unit will be embedded with Europol's cybercrime specialists in The Hague, Smith wrote, part of a broader increase in collaboration with European security forces. Microsoft's cybersecurity effort is part of a wider push to increase its operations in Europe. The drive comes as trade tensions simmer between the EU and the Trump administration in the US, with many voices questioning European firms' strategic dependence on American-made technology.
