
AI might now be as good as humans at detecting emotion, political leaning and sarcasm in online conversations
What happens if an artificial intelligence (AI) system is at the other end of the conversation, rather than a person? Can AI, especially conversational AI, understand the latent meaning in our text? And if so, what does this mean for us?
Latent content analysis is an area of study concerned with uncovering the deeper meanings, sentiments and subtleties embedded in text. For example, this type of analysis can help us grasp political leanings present in communications that are perhaps not obvious to everyone.
Understanding how intense someone's emotions are or whether they're being sarcastic can be crucial in supporting a person's mental health, improving customer service, and even keeping people safe at a national level.
These are only some examples. We can imagine benefits in other areas of life, like social science research, policy-making and business. Given how important these tasks are - and how quickly conversational AI is improving - it's essential to explore what these technologies can (and can't) do in this regard.
Research on this question is only just starting. One study found that ChatGPT has had limited success in detecting political leanings on news websites. Another study, which compared sarcasm detection across different large language models (LLMs) - the technology behind AI chatbots such as ChatGPT - showed that some are better than others.
Finally, a study showed that LLMs can guess the emotional "valence" of words - the inherent positive or negative "feeling" associated with them.

Our new study, published in Scientific Reports, tested whether conversational AI, including GPT-4 - a relatively recent version of ChatGPT - can read between the lines of human-written texts.
The goal was to find out how well LLMs simulate understanding of sentiment, political leaning, emotional intensity and sarcasm - encompassing multiple latent meanings in a single study. The study evaluated the reliability, consistency and quality of seven LLMs, including GPT-4, Gemini, Llama-3.1-70B and Mixtral 8x7B.
We found that these LLMs are about as good as humans at analysing sentiment, political leaning and emotional intensity, and at detecting sarcasm. The study involved 33 human subjects and assessed 100 curated items of text.
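For readers who want a feel for how such a rating task can be scripted, here is a minimal sketch in Python using OpenAI's API client. The prompt wording and the rating scales below are our own illustrative assumptions, not the exact instrument used in the study.

```python
# Minimal sketch of prompting an LLM for latent-content ratings.
# The prompt wording and scales are illustrative assumptions,
# not the instrument used in the study.
import json
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

PROMPT = (
    "Rate the following text. Answer only in JSON with these keys:\n"
    "sentiment (-3 very negative to +3 very positive),\n"
    "political_leaning (-3 far left to +3 far right),\n"
    "emotional_intensity (0 none to 6 extreme),\n"
    "sarcastic (true or false).\n\n"
    "Text: {text}"
)

def rate_text(text: str, model: str = "gpt-4") -> dict:
    """Ask the model for structured ratings of one piece of text."""
    response = client.chat.completions.create(
        model=model,
        temperature=0,  # reduce run-to-run variation
        messages=[{"role": "user", "content": PROMPT.format(text=text)}],
    )
    # A production script would validate the reply before parsing.
    return json.loads(response.choices[0].message.content)

print(rate_text("Oh great, another Monday. Exactly what I needed."))
```

In the study itself, model ratings of this kind were compared against those of the human subjects across the 100 text items.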
For spotting political leanings, GPT-4 was more consistent than humans. That matters in fields like journalism, political science, or public health, where inconsistent judgement can skew findings or miss patterns.
GPT-4 also proved capable of picking up on emotional intensity and especially valence. Whether a tweet was composed by someone who was mildly annoyed or deeply outraged, the AI could tell - although a human still had to confirm whether the AI's assessment was correct, because the models tended to downplay emotions. Sarcasm remained a stumbling block for both humans and machines.
The study found no clear winner there; as a result, using human raters doesn't help much with sarcasm detection either.
Why does this matter? For one, AI like GPT-4 could dramatically cut the time and cost of analysing large volumes of online content. Social scientists often spend months analysing user-generated text to detect trends. GPT-4, on the other hand, opens the door to faster, more responsive research - especially important during crises, elections or public health emergencies.
Journalists and fact-checkers might also benefit. Tools powered by GPT-4 could help flag emotionally charged or politically slanted posts in real time, giving newsrooms a head start.
There are still concerns. Transparency, fairness and political leanings in AI remain issues. However, studies like this one suggest that when it comes to understanding language, machines are catching up to us fast - and may soon be valuable teammates rather than mere tools.
Although this work doesn't claim conversational AI can replace human raters completely, it does challenge the idea that machines are hopeless at detecting nuance.
Our study's findings do raise follow-up questions. If a user asks the same question of AI in multiple ways - perhaps by subtly rewording prompts, changing the order of information, or tweaking the amount of context provided - will the model's underlying judgements and ratings remain consistent?
Further research should include a systematic and rigorous analysis of how stable the models' outputs are. Ultimately, understanding and improving consistency is essential for deploying LLMs at scale, especially in high-stakes settings.
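What might such a stability check look like in practice? Here is a minimal sketch, with hypothetical paraphrases and a simple spread statistic - assumptions of ours, not a protocol from the study.

```python
# Minimal sketch of a prompt-stability check. The paraphrases and the
# spread statistic are illustrative assumptions, not the study's method.
import statistics

# Hypothetical rewordings of the same sentiment question.
PARAPHRASES = [
    "Rate the sentiment of this text from -3 to +3: {text}",
    "On a scale of -3 (very negative) to +3 (very positive), score: {text}",
    "How positive or negative is the following, from -3 to +3? {text}",
]

def rating_spread(text: str, ask) -> float:
    """Ask the same question several ways via `ask` (any function that
    sends a prompt to a model and returns a numeric rating); return the
    standard deviation of the ratings. Zero means perfectly stable."""
    ratings = [ask(p.format(text=text)) for p in PARAPHRASES]
    return statistics.pstdev(ratings)

if __name__ == "__main__":
    # Stub "model" that always answers +2, for demonstration.
    print(rating_spread("What a wonderful day!", ask=lambda prompt: 2))  # 0.0
```

A systematic version of this idea, run across many texts, models and rewordings, is exactly the kind of follow-up the findings call for.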
