
AI might now be as good as humans at detecting emotion, political leaning and sarcasm in online conversations
Latent content analysis is an area of study concerned with uncovering the deeper meanings, sentiments and subtleties embedded in text. For example, this type of analysis can help us grasp political leanings present in communications that are perhaps not obvious to everyone.
Understanding how intense someone's emotions are or whether they're being sarcastic can be crucial in supporting a person's mental health, improving customer service, and even keeping people safe at a national level.
These are only some examples. We can imagine benefits in other areas of life, like social science research, policy-making and business. Given how important these tasks are - and how quickly conversational AI is improving - it's essential to explore what these technologies can (and can't) do in this regard.
Research on this question is only just starting. One study found that ChatGPT has had limited success in detecting political leanings on news websites. Another, which compared sarcasm detection across different large language models - the technology behind AI chatbots such as ChatGPT - showed that some are better than others.
Finally, a study showed that LLMs can guess the emotional "valence" of words - the inherent positive or negative "feeling" associated with them. Our new study published in Scientific Reports tested whether conversational AI, including GPT-4 - a relatively recent version of ChatGPT - can read between the lines of human-written texts.
The goal was to find out how well LLMs simulate understanding of sentiment, political leaning, emotional intensity and sarcasm - thus encompassing multiple latent meanings in one study. This study evaluated the reliability, consistency and quality of seven LLMs, including GPT-4, Gemini, Llama-3.1-70B and Mixtral 8x7B.
We found that these LLMs are about as good as humans at analysing sentiment, political leaning, emotional intensity and sarcasm detection. The study involved 33 human subjects and assessed 100 curated items of text.
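As a rough illustration of how agreement between model and human ratings can be quantified, here is a minimal sketch. All the numbers are invented for illustration - they are not the study's data, and the study's actual statistical method is not described here:

```python
# Sketch: compare hypothetical LLM sentiment ratings (1-7 scale) with
# the mean of hypothetical human ratings on the same text items.
# Every value below is invented for illustration only.

def pearson(xs, ys):
    """Pearson correlation between two equal-length rating lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx ** 0.5 * vy ** 0.5)

human_means = [2.1, 5.8, 4.0, 6.5, 1.3, 3.7]  # per-item averages over human raters (invented)
llm_ratings = [2.0, 6.0, 3.5, 6.8, 1.5, 4.0]  # one model's ratings on the same items (invented)

r = pearson(human_means, llm_ratings)
print(f"human-LLM agreement r = {r:.2f}")
```

A correlation near 1 would indicate the model's ratings track the human consensus closely; in practice researchers also use agreement measures designed for multiple raters, such as intraclass correlation or Krippendorff's alpha.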
For spotting political leanings, GPT-4 was more consistent than humans. That matters in fields like journalism, political science, or public health, where inconsistent judgement can skew findings or miss patterns.
GPT-4 also proved capable of picking up on emotional intensity and especially valence. Whether a tweet was composed by someone who was mildly annoyed or deeply outraged, the AI could tell - although a human still had to confirm the AI's assessment, because AI tends to downplay emotions. Sarcasm remained a stumbling block both for humans and machines.
The study found no clear winner there - so relying on human raters offers little advantage for sarcasm detection.
Why does this matter? For one, AI like GPT-4 could dramatically cut the time and cost of analysing large volumes of online content. Social scientists often spend months analysing user-generated text to detect trends. GPT-4, on the other hand, opens the door to faster, more responsive research - especially important during crises, elections or public health emergencies.
Journalists and fact-checkers might also benefit. Tools powered by GPT-4 could help flag emotionally charged or politically slanted posts in real time, giving newsrooms a head start.
There are still concerns. Transparency, fairness and political leanings in AI remain issues. However, studies like this one suggest that when it comes to understanding language, machines are catching up to us fast - and may soon be valuable teammates rather than mere tools.
Although this work doesn't claim conversational AI can replace human raters completely, it does challenge the idea that machines are hopeless at detecting nuance.
Our study's findings do raise follow-up questions. If a user asks the same question of AI in multiple ways - perhaps by subtly rewording prompts, changing the order of information, or tweaking the amount of context provided - will the model's underlying judgements and ratings remain consistent?
Further research should include a systematic and rigorous analysis of how stable the models' outputs are. Ultimately, understanding and improving consistency is essential for deploying LLMs at scale, especially in high-stakes settings.
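One way such a stability check could be set up - a hypothetical sketch, not the study's protocol - is to rate the same text item under several reworded prompts and measure the spread of the answers. The prompt labels and ratings below are invented stand-ins for real LLM calls:

```python
# Sketch: gauge the stability of a model's ratings across reworded prompts.
# The ratings below are invented; in a real check each value would come
# from an LLM queried with a differently worded version of the same task.

def rating_spread(ratings):
    """Population standard deviation of repeated ratings; 0 means perfectly stable."""
    n = len(ratings)
    mean = sum(ratings) / n
    return (sum((r - mean) ** 2 for r in ratings) / n) ** 0.5

# Ratings of one text item under four differently worded prompts (invented).
reworded_runs = {
    "direct":       5,
    "with_context": 5,
    "reordered":    6,
    "terse":        5,
}

spread = rating_spread(list(reworded_runs.values()))
print(f"rating spread across rewordings: {spread:.2f}")
```

A spread near zero across many items and rewordings would suggest the model's underlying judgement is robust to surface changes in the prompt; large spreads would flag exactly the kind of instability that makes scaled deployment risky.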