
AI understands many things but still flounders at human interaction
However sophisticated AI may be, it still struggles to understand our social interactions, researchers say. (Envato Elements pic)
PARIS : Artificial intelligence continues to advance, yet this technology still struggles to grasp the complexity of human interactions. A recent US study reveals that, while AI excels at recognising objects or faces in still images, it remains ineffective at describing and interpreting social interactions in a moving scene.
The team led by Leyla Isik, professor of cognitive science at Johns Hopkins University, investigated how AI models understand social interactions.
To do this, the researchers designed a large-scale experiment involving over 350 AI models specialising in video, image or language. These AI tools were exposed to short, three-second video sequences illustrating various social situations.
At the same time, human participants were asked to rate the intensity of the observed interactions on a scale of 1 to 5, according to several criteria. The aim was to compare human and AI interpretations, to identify differences in perception and better understand the current limits of algorithms in analysing our social behaviours.
The human participants were remarkably consistent in their assessments, demonstrating a detailed and shared understanding of social interactions. AI, on the other hand, struggled to match these judgements.
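By way of illustration, the comparison at the heart of the study can be sketched in a few lines of code. The sketch below uses invented placeholder data; the variable names, the random ratings and the choice of Spearman rank correlation are assumptions made for demonstration, not details taken from the paper.

```python
# Minimal sketch (hypothetical): compare per-clip AI scores with human
# ratings, and gauge how consistent the human raters are with one another.
# All data here are random placeholders, so the printed numbers are
# meaningless; in the study, humans agreed strongly and models did not.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_clips, n_raters = 20, 5

# Hypothetical human ratings on the 1-5 intensity scale (one row per rater).
human_ratings = rng.integers(1, 6, size=(n_raters, n_clips))
human_mean = human_ratings.mean(axis=0)  # consensus judgement per clip

# Hypothetical scores from one AI model for the same clips.
model_scores = rng.uniform(1, 5, size=n_clips)

# Rank correlation between the model and the human consensus:
# a value near 1 would indicate human-like judgements.
rho, _ = spearmanr(model_scores, human_mean)
print(f"model-human agreement: rho={rho:.2f}")

# Human consistency: correlate each rater with the mean of the others
# (leave-one-out agreement).
loo = [spearmanr(human_ratings[i],
                 np.delete(human_ratings, i, axis=0).mean(axis=0))[0]
       for i in range(n_raters)]
print(f"mean leave-one-out rho among humans: {np.mean(loo):.2f}")
```

With real data, a high leave-one-out correlation among participants alongside a low model-human correlation would reproduce the pattern the researchers report.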
Models specialising in video proved particularly ineffective at accurately describing the scenes observed. Even models based on still images, although fed several frames from each video, struggled to determine whether the characters were communicating with each other.
As for language models, they fared a little better, especially when given descriptions written by humans, but remained far below the performance of human observers.
A 'blind spot'
For Isik, this represents a major obstacle to the integration of AI into real-world environments. 'AI for a self-driving car, for example, would need to recognise the intentions, goals, and actions of human drivers and pedestrians. You would want it to know which way a pedestrian is about to start walking, or whether two people are in conversation versus about to cross the street,' she explained.
'Any time you want an AI to interact with humans, you want it to be able to recognise what people are doing. I think this study sheds light on the fact that these systems can't right now.'
According to the researchers, this deficiency could be explained by the way in which AI neural networks are designed. These are mainly inspired by the regions of the human brain that process static images, whereas dynamic social scenes call on other brain areas.
This structural discrepancy may underlie what the researchers describe as 'a blind spot in AI model development'.
Indeed, 'real life isn't static. We need AI to understand the story that is unfolding in a scene', said study co-author Kathy Garcia.
Ultimately, this research reveals a profound gap between the way humans and AI models perceive moving social scenes. Despite their computing power and their ability to process vast quantities of data, machines are still unable to grasp the subtleties and implicit intentions underlying our social interactions. For all its recent advances, artificial intelligence remains a long way from truly understanding what goes on in human interactions.