
Researchers train AI model to respond to online political posts, find quality of discourse improved
Researchers who trained a large language model to respond to the online political posts of people in the US and UK found that the quality of discourse improved.

Powered by artificial intelligence (AI), a large language model (LLM) is trained on vast amounts of text data and can therefore respond to human requests in natural language.

Polite, evidence-based counterarguments by the AI system -- trained prior to performing the experiments -- were found to nearly double the chances of a high-quality online conversation and "substantially increase (one's) openness to alternative viewpoints", according to findings published in the journal Science Advances. Being open to other perspectives did not, however, translate into a change in one's political ideology, the researchers found.

Large language models could provide "light-touch suggestions", such as alerting a social media user to the disrespectful tone of their post, author Gregory Eady, an associate professor of political science and data science at the University of Copenhagen, Denmark, told PTI.

"To promote this concretely, it is easy to imagine large language models operating in the background to alert us to when we slip into bad practices in online discussions, or to use these AI systems as part of school curricula to teach young people best practices when discussing contentious topics," Eady said.

Hansika Kapoor, research author at the department of psychology at Monk Prayogshala in Mumbai, an independent not-for-profit academic research institute, told PTI: "(The study) provides a proof-of-concept for using LLMs in this manner, with well-specified prompts, that can generate mutually exclusive stimuli in an experiment that compares two or more groups."

Nearly 3,000 participants -- who identified as Republicans or Democrats in the US and as Conservative or Labour supporters in the UK -- were asked to write a text describing and justifying their stance on a political issue important to them, as they would for a social media post. Each text was countered by ChatGPT -- presented to participants as a "fictitious social media user" -- which tailored its argument "on the fly" according to the text's position and reasoning.
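The article does not reproduce the researchers' prompts, but as a minimal, purely illustrative sketch -- assuming the OpenAI Python SDK and a placeholder model name, neither of which the study specifies -- the two counterargument conditions might be generated along these lines:

from openai import OpenAI

# Illustrative sketch only: the study's actual prompts, model and
# settings are not published in this article; names are assumptions.
client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

CONDITION_PROMPTS = {
    "evidence_based": (
        "You are a fictitious social media user. Write a polite, "
        "evidence-based counterargument to the post below, citing "
        "concrete evidence rather than appealing to emotion."
    ),
    "emotion_based": (
        "You are a fictitious social media user. Write an emotional, "
        "personal counterargument to the post below, without citing "
        "evidence."
    ),
}

def counterargument(post: str, condition: str) -> str:
    # Tailor a reply "on the fly" to the participant's post and stance.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; the study used ChatGPT
        messages=[
            {"role": "system", "content": CONDITION_PROMPTS[condition]},
            {"role": "user", "content": post},
        ],
    )
    return response.choices[0].message.content

Swapping only the system prompt is what would make the two stimuli "mutually exclusive" in Kapoor's sense: each participant sees one condition or the other, never both.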
The participants then responded as if replying to a social media comment.

"An evidence-based counterargument (relative to an emotion-based response) increases the probability of eliciting a high-quality response by six percentage points, indicating willingness to compromise by five percentage points, and being respectful by nine percentage points," the authors wrote in the study.

Eady said, "Essentially, what you give in a political discussion is what you get: that if you show your willingness to compromise, others will do the same; that when you engage in reason-based arguments, others will do the same; etc."

AI-powered models have been critiqued and scrutinised for varied reasons, including inherent bias -- political, and at times even racial -- and for being a "black box", whereby the internal processes used to arrive at a result cannot be traced.

Kapoor, who was not involved in the study, said that while the approach appears promising, complete reliance on AI systems to regulate online discourse may not yet be advisable. The study itself involved humans to rate responses as well, she said, and context, culture and timing would need to be considered for any such regulation.

Eady, too, is apprehensive about "using LLMs to regulate online political discussions in more heavy-handed ways".

The study authors also acknowledged that because the US and UK are effectively two-party systems, addressing the "partisan" nature of texts and responses was straightforward. Eady added, "The ability for LLMs to moderate discussion might also vary substantially across cultures and languages, such as in India."

"Personally, therefore, I am in favour of providing tools and information that enable people to engage in better conversations, but nevertheless, for all its (LLMs') flaws, allowing nearly as open a political forum as possible," the author added.

Kapoor said, "In the Indian context, this strategy may require some trial-and-error, particularly because of the numerous political affiliations in the nation. Therefore, there may be multiple variables and different issues (including food politics) that will need to be contextualised for study here."

Another study, recently published in the journal Humanities and Social Sciences Communications, found that dark personality traits -- such as psychopathy and narcissism -- along with a fear of missing out (FoMO) and cognitive ability can shape online political engagement. The findings, from researchers at Singapore's Nanyang Technological University, suggest that "those with both high psychopathy (manipulative, self-serving behaviour) and low cognitive ability are the most actively involved in online political engagement". Data from the US and seven Asian countries, including China, Indonesia and Malaysia, were analysed.

Describing the study as "interesting", Kapoor pointed out that much more work needs to be done in India to understand the factors that drive online political participation, ranging from personality to attitudes, beliefs and aspects such as voting behaviour. Her team, which has developed a scale to measure one's political ideology in India (published in a pre-print paper), found that dark personality traits were associated with a disregard for norms and hierarchies.
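As for Eady's "light-touch" idea quoted earlier -- a model running in the background that alerts users when a post slips into a disrespectful tone -- a minimal sketch might look like the following. The prompt wording, the one-word verdict protocol and the model name are illustrative assumptions, not anything the researchers describe:

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def tone_alert(draft: str) -> str | None:
    # Ask the model for a one-word verdict on the draft's tone.
    verdict = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": (
                    "Reply with exactly one word, RESPECTFUL or "
                    "DISRESPECTFUL, describing the tone of the user's post."
                ),
            },
            {"role": "user", "content": draft},
        ],
    ).choices[0].message.content
    if verdict and "DISRESPECTFUL" in verdict.upper():
        # A light-touch nudge, not a block: the user decides what to do.
        return "Heads up: this draft may come across as disrespectful."
    return None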
Related Articles


Deccan Herald
OpenAI eyes $500 bn valuation in potential employee share sale
ChatGPT maker OpenAI is in early-stage discussions about a stock sale that would allow employees to cash out and could value the company at about $500 billion, a source familiar with the matter said.

That would represent an eye-popping bump-up from its current valuation of $300 billion, with the sale underscoring both OpenAI's rapid gains in users and revenue and the intense competition among artificial intelligence firms to secure talented workers.

The transaction, which would come before a potential IPO, would allow current and former employees to sell several billion dollars' worth of shares, said the source, who requested anonymity because the talks are private.

Bolstered by its flagship product ChatGPT, OpenAI doubled its revenue in the first seven months of the year, reaching an annualised run rate of $12 billion, and is on track to reach $20 billion by year-end, the source added. Microsoft-backed OpenAI has about 700 million weekly active users for its ChatGPT products, a surge from about 400 million in February.

The share sale talks come on the heels of OpenAI's primary funding round announced earlier this year, which aims to raise $40 billion, led by Japan's SoftBank Group. SoftBank has until the end of the year to fund its $22.5 billion portion of the round, but the remainder has been subscribed at a valuation of $300 billion, the source said.


Hans India
Bridging Policing and AI: Nikeelu Gunda Trains Officers at PTC Medchal
In a progressive step toward integrating technology with frontline law enforcement, AI and digital strategy expert Nikeelu Gunda conducted a special training session for police officers at the Police Training Centre (PTC), Medchal. The refresher course focused on the practical use of artificial intelligence and digital tools in everyday policing, cybercrime investigations, and responsible social media usage.

During the session, officers were introduced to a range of real-world applications such as phishing scam detection, deepfake identification, mobile data analysis, voice recognition, digital evidence handling, and dark web monitoring. Nikeelu demonstrated how AI-powered tools like ChatGPT, Grabify, VirusTotal, and PimEyes can assist officers in detecting cyber threats and analyzing digital clues in real time.

'Technology has become the new beat for law enforcement,' said Nikeelu Gunda. 'Empowering officers with digital tools ensures they are equipped to tackle both on-ground and online crimes effectively.'

The training was conducted under the leadership of PTC Principal P. Madhukar Swamy, who emphasized the importance of future-ready policing. 'As crimes become more digital, our police force must become more dynamic. This training is a vital step in preparing our officers for a data-driven world,' he stated.

The session was supported by DSP Laxman, along with Inspectors Kiran, Ravi, and N. Chandrasekhar, whose coordination helped make it a success. Participating officers shared positive feedback, noting that they gained confidence in using AI to support investigations and handle digital complaints more effectively. The program marked another significant stride toward building a tech-savvy, cyber-aware police force in Telangana.


Times of India
Why Mira Murati, ex-CTO of OpenAI, doesn't chase hype—and what we can learn from that
In an age where tech leaders launch companies with press tours and promises of disruption, Mira Murati took a different route. The former CTO of OpenAI, known for helping develop ChatGPT and DALL·E, quietly stepped away in September 2024. Months later, she resurfaced, not with a media blitz, but with a new AI startup built on a rare quality in Silicon Valley: restraint.

As reported by Wired, Murati and her entire team rejected billion-dollar offers from Meta's new Superintelligence Lab. The story made headlines not just because of the money involved, but because it revealed something deeper: Murati was prioritizing long-term vision and team integrity over fast wins and fame.

Who is Mira Murati?

Murati began her career in aerospace before moving to Tesla, where she worked on the Model S and Model X electric cars. She then led engineering at Leap Motion before joining OpenAI in 2018. Over the next six years, she became one of the most influential figures in AI, steering development of major tools like ChatGPT, DALL·E, and Codex.

But instead of cashing in on her fame, Murati did something few in her position would: she started her own lab, Thinking Machines Lab, and did so in stealth mode, not to be secretive, but to stay focused. 'I'm figuring out what it's going to look like,' she told Wired in November 2024. 'I'm in the midst of it.' That kind of honesty is rare in tech, where founders often feel pressured to announce a grand vision before writing a single line of code.

Why doesn't she chase hype?

Focus on substance over spotlight
Murati doesn't lead with noise. Her strategy is clear: build first, speak later. Instead of hyping unfinished products, she prioritizes clarity and quality. Investors say her startup's early attention isn't just about the technology; it's about the rare trust and discipline coming from the founding team.

Team-driven mindset
Her refusal to let any of her team members leave for Meta's billion-dollar offers shows her deep investment in people. As Wired reported, not a single person defected. That speaks volumes about the loyalty she fosters, not by promises, but by example.

Awareness of AI's ethical complexity
In January 2025, Murati gave a keynote at the World Economic Forum in Davos. She warned: 'AI without values is intelligence without conscience.' It wasn't a flashy announcement; it was a global call to reflect. She is also advising the European Commission on AI regulation, a rare position for a startup founder. She's not just creating the tools of the future; she's helping shape the laws around them.

Strategic restraint
Her startup is pioneering customizable AI systems tailored to local cultures, languages, and industries. But the company isn't shouting from the rooftops. Its 'stealth' approach isn't about hiding; it's about building with intention, without the distractions of hype cycles. As reported by Wired, her team is operating 'free from hype… with clarity and intention.'

She's comfortable with uncertainty
In the same Wired interview, Murati said: 'I'm in the midst of it.' That's not a rehearsed pitch; it's a real admission. And that's powerful. She reminds us that creation is a process, and it's okay not to have all the answers right away.

What can we learn from that?

Quiet confidence is powerful
You don't need to be loud to lead. Murati's example proves that real influence often comes from calm focus, not flash.

Letting results speak
By choosing progress over press, she builds trust, not just buzz. That's the kind of leadership that lasts.

Leadership can be humble
Murati redefines what it means to lead in tech. Her style isn't built on ego; it's built on ethics, teamwork, and responsibility.

Avoiding hype protects integrity
Hype can be tempting, but it can also be a trap. Murati's approach keeps her grounded, exactly what's needed in a field as high-stakes as AI.