
Opinion: Are you more emotionally intelligent than an AI chatbot?
Most of us would probably assume the answer is obviously yes – that reading emotions is one thing humans still do better than machines. But what if that's wrong?
A cognitive scientist who specialises in emotional intelligence told me in an interview that he and his colleagues ran an experiment that throws some cold water on that theory.
'What do you do?'
Writing in the journal Communications Psychology, Marcello Mortillaro, senior scientist at the University of Geneva's (UNIGE) Swiss Center for Affective Sciences (CISA), said he and colleagues ran commonly used tests of emotional intelligence on six large language models (LLMs), including generative AI chatbots such as ChatGPT.
These are the same kinds of tests commonly used in corporate and research settings: scenarios involving complicated social situations, with questions asking which of five reactions would be best.
One example included in the journal article goes like this:
'Your colleague with whom you get along very well tells you that he is getting dismissed and that you will be taking over his projects.
While he is telling you the news he starts crying. He is very sad and desperate.
You have a meeting coming up in 10 min. What do you do?'
Gosh, that's a tough one. The person – or AI chatbot – would then be presented with five options, ranging from things like:
– 'You take some time to listen to him until you get the impression he calmed down a bit, at risk of being late for your meeting,' to
– 'You suggest that he joins you for your meeting with your supervisor so that you can plan the transfer period together.'
Emotional intelligence experts generally agree that there are 'right' or 'best' answers to these scenarios, based on conflict management theory – and it turns out that the LLMs and AI chatbots chose the best answers more often than humans did.
As Mortillaro told me:
'When we run these tests with people, the average correct response rate … is between 15% and 60% correct. The LLMs on average, were about 80%. So, they answered better than the average human participant.'
Maybe you're sceptical
Even having heard that, I was sceptical.
For one thing, I had assumed while reading the original article that Mortillaro and his colleagues had informed the LLMs what they were doing – namely, that they were looking for the most emotionally intelligent answers.
Thus, the AI would have had a signal to tailor the answers, knowing how they'd be judged.
Heck, it would probably be easier for a lot of us mere humans to improve our emotional intelligence if we had the benefit of a constant reminder in life: 'Remember, we want to be as emotionally intelligent as possible!'
But, it turns out that assumption on my part was flat-out wrong – which frankly makes the whole thing a bit more remarkable.
'Nothing!' Mortillaro told me when I asked how much he'd told the LLMs about the idea of emotional intelligence to begin with. 'We didn't even say this is part of a test. We just gave the … situation and said these are five possible answers. What's the best answer? … And it picked the right option 82% of the time, which is way higher – significantly higher – than the average human.'
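For the technically curious, here is roughly what that setup looks like in practice – a minimal sketch, not the researchers' actual code. It assumes an OpenAI-style API and an illustrative model name; the scenario and the two answer options are the ones quoted above, with the remaining three left as placeholders:

```python
# Minimal sketch of the protocol Mortillaro describes: present the scenario
# and answer options with no mention of emotional intelligence or testing,
# then simply ask which answer is best. The OpenAI client, model name and
# two listed options are illustrative assumptions; the study tested six
# different LLMs, and the journal article lists all five options.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SCENARIO = (
    "Your colleague with whom you get along very well tells you that he is "
    "getting dismissed and that you will be taking over his projects. While "
    "he is telling you the news he starts crying. He is very sad and "
    "desperate. You have a meeting coming up in 10 min."
)

OPTIONS = {
    "A": "You take some time to listen to him until you get the impression "
         "he calmed down a bit, at risk of being late for your meeting.",
    "B": "You suggest that he joins you for your meeting with your "
         "supervisor so that you can plan the transfer period together.",
    # ... the remaining three options from the published test would go here
}

def best_answer(scenario: str, options: dict[str, str]) -> str:
    """Pose the situation exactly as in the study: no framing as a test."""
    listed = "\n".join(f"{letter}. {text}" for letter, text in options.items())
    prompt = (
        f"{scenario}\n\nThese are possible answers:\n{listed}\n\n"
        "What is the best answer? Reply with a single letter."
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; not necessarily a model from the study
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content.strip()

# Scoring is then just the share of items where the model's letter matches
# the answer key that emotional-intelligence experts agreed on.
```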
Good news, right?
Interestingly, from Mortillaro's perspective, this is actually some pretty good news – not because it suggests another realm in which artificial intelligence might replace human effort, but because it could make his discipline easier.
In short, scientists might theorise from studies like this that they can use AI to create the first drafts of additional emotional intelligence tests, and thus scale their work with humans even more.
I mean: 80% accuracy isn't 100%, but it's potentially a good head start.
Mortillaro also brainstormed with me about some other use cases that might be more interesting to business leaders and entrepreneurs. To be honest, I'm not sure how I feel about these yet. But examples might include:
– Offering customer scenarios, getting solutions from LLMs, and incorporating them into sales or customer service scripts.
– Running the text and calls to action on your website or social media ads through LLMs to see if there are suggestions hiding in plain sight.
– And of course, as I think a lot of people already do, sharing presentations or speeches for suggestions on how to streamline them.
Personally, I find I reject many more of the suggestions I get from LLMs like ChatGPT than I accept. I also don't use them for articles like this one, of course.
Still, even if you're not convinced, I suspect some of your competitors are. And they might be improving their emotional intelligence without even realising it.
So, at the very least, being aware of AI's potential to upend your industry seems like a smart move.
'Especially for small business owners who do not have the staff or the money to implement large-scale projects,' Mortillaro suggested, 'these kind of tools become incredibly powerful.' – Inc./Tribune News Service
Related Articles

New Straits Times – 14 hours ago
AI fuels costly corporate data breach: IBM Report
SACRAMENTO: As US companies race to embed artificial intelligence (AI) into everyday work, they are discovering a hidden cost: bigger, more expensive data breaches, reported Xinhua.

The "Cost of a Data Breach 2025" report, published by IBM on Wednesday, revealed that 13 per cent of the 600 organisations studied suffered breaches involving their own AI models or applications. Crucially, basic access controls were missing in 97 per cent of those cases.

The report also found that attackers are turning the technology against its creators: one in six breaches involved criminals using AI tools, primarily to craft convincing phishing emails and deepfake impersonations.

So-called "shadow AI," systems employees deploy without authorisation, proved even costlier. Twenty per cent of respondents blamed their breach on unsanctioned AI, which added approximately US$670,000 to the average loss. When "shadow AI" was present, overall breach costs rose to US$4.74 million, compared with US$4.07 million when it was absent.

Recent incidents illustrate how seemingly minor AI security oversights can spiral. In 2023, a single misconfigured Azure sharing link in a Microsoft AI research repository exposed 38 terabytes of internal files and over 30,000 Teams messages. That same year, Samsung temporarily banned generative AI tools after engineers pasted confidential chip designs into ChatGPT, risking sensitive leaks. Even AI providers themselves are vulnerable: a March 2023 bug in OpenAI's ChatGPT service briefly exposed some users' payment addresses and partial card details.

Despite such warnings, 87 per cent of companies still lack governance policies or processes to mitigate AI risks, even though supply chain compromises already trigger nearly one-third of AI-related breaches.

To address these gaps, analysts emphasise that security starts with identity: organisations must enforce strict credential management for both staff and algorithms, rotate keys frequently, and encrypt all data used to train or prompt models. Quarterly "AI health checks" that bring business and security leaders together can identify unauthorised projects, while automated threat-detection platforms help understaffed teams distinguish genuine threats from false alarms.

The report concludes: "Security AI and automation lower costs, while shadow AI raises them." Organisations with mature controls reduced breach costs by nearly 40 per cent.

The report noted that with the average US breach now costing US$10.22 million and regulators from Brussels to Washington drafting new rules for data-hungry algorithms, boards had a clear financial motive to treat every model, notebook and chat interface as a critical asset protected by multifactor authentication, time-limited sharing links and continuous audits before the next wave of smart machines arrives.


The Star – 15 hours ago
AI infrastructure company fal raises $125 million, valuing company at $1.5 billion
SAN FRANCISCO (Reuters) - Artificial intelligence infrastructure company fal raised a $125 million Series C round valuing the company at $1.5 billion, the company said Thursday.

Venture capital fund Meritech led the round with participation from Salesforce Ventures, Shopify Ventures and Google AI Futures fund. Existing investors Bessemer Venture Partners, Kindred Ventures, Andreessen Horowitz, Notable Capital, First Round Capital, Unusual Ventures and Village Global also participated in the round.

San Francisco-based fal, which specializes in running audio, video and image models on behalf of enterprises, is part of a growing class of service providers to companies looking to use AI models that are not text-based, sometimes known as multimodal models or generative media models.

In particular, image generation AI models, where users type in prompts to generate novel images, have taken off with consumers this year. While ChatGPT's initial viral takeoff was from its ability to generate paragraphs of text, its most recent viral moment came in April when it unveiled the ability to create images based on the hand-drawn style of famed Japanese animation outfit Studio Ghibli. Thanks to that feature, ChatGPT's average weekly active users breached the 150 million mark for the first time, according to data from market research firm Similarweb.

Most of fal's customers are using its platform for enterprise purposes, such as to create different images of a product for an ecommerce website or for online advertising. "With generative AI, you can create infinite iterations of the same ad," fal CEO Burkay Gur told Reuters. "You can create different versions for different demographics. You can A/B test it as much as you want. And there is incremental value on each asset that you create."

(Reporting by Anna Tong in San Francisco; Editing by Stephen Coates)


The Star – 16 hours ago
Rise in AI-generated CVs raises red flags among recruiters
AI-generated CVs are flooding the UAE job market, with more than three-quarters of recruiters reporting a sharp rise in the volume of such submissions. Despite the surge, many of these resumes are being flagged for failing to match job requirements, leading hiring experts to warn jobseekers against overreliance on AI tools.

Poorly tailored AI-generated CVs not only miss the mark in terms of content but may also be ranked lower by search engines used in recruitment platforms.

Jason Grundy, managing director of Robert Walters Middle East & Africa, said excessive reliance on artificial intelligence (AI) has become more obvious. "As AI becomes increasingly prevalent in the workplace, identifying AI-generated content will become easier. Google has already begun recognising AI-generated content and tends to rank it lower in its search results," Jason said.

"As recruiters, we review thousands of CVs each week, and it's often apparent when a CV is entirely AI-generated rather than proofread and edited by the candidate. These CVs are often discredited in the early stages of the process," he added.

He further added that the wider usage of AI is changing the priorities of leadership teams. "Leaders are increasingly viewing certain roles as 'administrative placeholders' that could be automated instead of being filled by humans."

With the advent of AI, there are dozens of apps and websites such as ChatGPT, AI Apply, Enhancv and many others that can help job-seekers prepare their resume in minutes. But if these CVs are not proofread and edited by the applicants, there could be factual errors, which could result in the application being rejected by the recruiters or AI systems. Therefore, excessive reliance on AI is discouraged by recruiters.

On the back of strong macroeconomic growth, the UAE's job market is doing exceptionally well, creating a lot of opportunities for jobseekers. Growing at 4 per cent, the UAE is the fastest-growing job market in the Gulf region, according to Cooper Fitch.

According to a LinkedIn survey, an overwhelming majority – 83 per cent – of human resources (HR) leaders in the UAE encourage candidates to use AI for job searches, with the caveat that they remain truthful about their experience. More than half – 56 per cent – of recruiters specifically recommend using AI for personalising and tailoring CVs, which is the very practice some fear will flood the market with generic applications.

"The issue many recruiters face lies in application quality. Over three-quarters report higher application volumes than last year, and 64 per cent of them say they are getting a higher volume of AI-generated applications that aren't the right fit. Nearly half – 45 per cent – say only a quarter to half of applications meet all qualifications," said Najat Abdalhadi, career expert at LinkedIn.

"This suggests that recruiters aren't concerned about detecting content that was created with the help of AI – they're focused on finding qualified candidates efficiently. Top recommended AI uses by recruiters include personalising CVs, identifying suitable roles, and interview preparation," she added.

Interestingly, she noted that since many recruiters are now finding the hiring process frustrating, they are prioritising getting access to AI-powered hiring tools themselves.
"UAE's recruiters and HR leaders are embracing AI as a solution to inefficiency. The focus should shift from detecting AI-generated content to ensuring candidates use these tools responsibly to present their genuine qualifications more effectively," she added. – Khaleej Times/Tribune News Service