Latest news with #socialInteractions

Scientists show how we read intentions from one another's gazes

Free Malaysia Today

12-05-2025

  • Science
  • Free Malaysia Today

Researchers are curious to gain a deeper understanding of how glances can transmit information during social interactions. (Envato Elements pic)

PARIS: It's often said the eyes are the mirror of the soul. Another well-known phrase is: 'I see it in your eyes.' This reflects the fact that sight is the predominant human sense: we have a natural tendency to scrutinise the eyes of others, seeking to detect their emotions or intentions.

With this in mind, researchers at McGill University in Montreal, Canada, have studied how our eyes can convey our intentions, without the need for words.

To gain a deeper understanding of how glances can transmit information in social interactions, Jelena Ristic and colleagues conducted a series of experiments involving between 70 and 80 volunteers, who watched videos of people turning their gaze to the left or right. Sometimes these eye movements were spontaneous, other times deliberately provoked. The videos stopped just before the movement took place, and the participants had to guess in which direction the eyes were going to move.

The scientists found that the rate of correct responses remained stable, but the speed of the responses increased when the eye movement was intentional, i.e. when the people on screen were free to choose the direction of their gaze.

'The speed of the observers' responses suggests that they implicitly recognise and respond more quickly to intentional eye movements. It also told us how sensitive we are to information about the mental state and intentions conveyed by the eyes,' says Florence Mayrand, a PhD candidate and the paper's first author. In other words, participants were able to glean intentions in the eyes before any action had taken place.

To explain this phenomenon, the researchers examined the micro-movements that preceded eye movements in the videos. In the journal Communications Psychology, they observed that intentional gazes were accompanied by greater activity around the eyes, reflecting the existence of particular movement patterns. It's this subtle movement that our brains instinctively pick up on, as a kind of invisible sign of intention.

In the future, the scientists plan to accurately measure the speed, trajectory and frequency of blinking in intentional and directed looks. They would also like to determine whether these properties vary according to intention (lying, helping, fleeing), or whether sensitivity to intentions in gaze might differ in people with social difficulties, such as individuals with autism or ADHD.

There's nothing magical about reading people's eyes; it's a skill deeply rooted in us, essential to the survival of our ancestors, and still at work in our daily interactions. Behind every shift in the gaze, an intention takes shape, and our brains often know how to decipher these intentions – without us even being aware of it.
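For a concrete sense of the contrast the study reports (stable accuracy but faster responses when the observed gaze shift was freely chosen), the sketch below fabricates reaction-time data for an "intentional" and an "instructed" condition and runs a simple two-sample comparison. It is purely illustrative: the condition labels, sample size and numbers are hypothetical and are not taken from the paper.

```python
# Illustrative sketch only: synthetic reaction times for two gaze conditions
# and a basic two-sample test. All values are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)

# Hypothetical reaction times (milliseconds) for ~75 observers per condition.
rt_intentional = rng.normal(loc=420, scale=60, size=75)  # freely chosen gaze shifts
rt_instructed = rng.normal(loc=450, scale=60, size=75)   # experimenter-directed shifts

# Accuracy is reported as stable, so the contrast of interest is response speed.
t_stat, p_value = stats.ttest_ind(rt_intentional, rt_instructed)

print(f"mean RT (intentional): {rt_intentional.mean():.1f} ms")
print(f"mean RT (instructed):  {rt_instructed.mean():.1f} ms")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```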

AI understands many things but still flounders at human interaction

Free Malaysia Today

09-05-2025

  • Science
  • Free Malaysia Today

However sophisticated AI may be, it still struggles to understand our social interactions, researchers say. (Envato Elements pic)

PARIS: Artificial intelligence continues to advance, yet this technology still struggles to grasp the complexity of human interactions. A recent US study reveals that, while AI excels at recognising objects or faces in still images, it remains ineffective at describing and interpreting social interactions in a moving scene.

The team led by Leyla Isik, professor of cognitive science at Johns Hopkins University, investigated how AI models understand social interactions. To do this, the researchers designed a large-scale experiment involving over 350 AI models specialising in video, image or language. These AI tools were exposed to short, three-second video sequences illustrating various social situations. At the same time, human participants were asked to rate the intensity of the interactions observed, according to several criteria, on a scale of 1 to 5. The aim was to compare human and AI interpretations, in order to identify differences in perception and better understand the current limits of algorithms in analysing our social behaviours.

The human participants were remarkably consistent in their assessments, demonstrating a detailed and shared understanding of social interactions. AI, on the other hand, struggled to match these judgements. Models specialising in video proved particularly ineffective at accurately describing the scenes observed. Even models based on still images, although fed with several extracts from each video, struggled to determine whether the characters were communicating with each other. As for language models, they fared a little better, especially when given descriptions written by humans, but remained far from the level of performance of human observers.

A 'blind spot'

For Isik, this represents a major obstacle to the integration of AI into real-world environments. 'AI for a self-driving car, for example, would need to recognise the intentions, goals, and actions of human drivers and pedestrians. You would want it to know which way a pedestrian is about to start walking, or whether two people are in conversation versus about to cross the street,' she explained. 'Any time you want an AI to interact with humans, you want it to be able to recognise what people are doing. I think this study sheds light on the fact that these systems can't right now.'

According to the researchers, this deficiency could be explained by the way in which AI neural networks are designed. These are mainly inspired by the regions of the human brain that process static images, whereas dynamic social scenes call on other brain areas. This structural discrepancy could explain what the researchers describe as 'a blind spot in AI model development'. Indeed, 'real life isn't static. We need AI to understand the story that is unfolding in a scene,' said study co-author Kathy Garcia.

Ultimately, this research reveals a profound gap between the way humans and AI models perceive moving social scenes. Despite their computing power and ability to process vast quantities of data, machines are still unable to grasp the subtleties and implicit intentions underlying our social interactions. For all its advances, artificial intelligence remains a long way from truly understanding what goes on in human interactions.
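As a rough illustration of the human-versus-model comparison described above, the sketch below correlates hypothetical human ratings and model ratings of the same clips on a 1-5 scale. All data and numbers here are made up; this is not the authors' analysis pipeline, only one simple way such agreement could be quantified.

```python
# Illustrative sketch only: comparing hypothetical human and model ratings
# of the same short clips on a 1-5 scale. All data are fabricated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
n_clips = 50

# Hypothetical averaged human ratings per clip (humans were highly consistent).
human_ratings = rng.uniform(1, 5, size=n_clips)

# Hypothetical model ratings: loosely tracking the human ones plus heavy noise,
# standing in for a video model that only weakly matches human judgements.
model_ratings = np.clip(human_ratings + rng.normal(0, 1.5, size=n_clips), 1, 5)

# Correlation between human and model judgements is one simple measure of how
# well a model reproduces human perception of a social scene.
r, p = stats.pearsonr(human_ratings, model_ratings)
print(f"human-model correlation: r = {r:.2f} (p = {p:.3f})")
```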
