
Researchers train AI model to respond to online political posts, find quality of discourse improved
A large language model (LLM) is an artificial intelligence (AI) system trained on vast amounts of text data, which enables it to respond to human requests in natural language.
Polite, evidence-based counterarguments by the AI system, which was trained before the experiments were conducted, were found to nearly double the chances of a high-quality online conversation and to "substantially increase (one's) openness to alternative viewpoints", according to findings published in the journal Science Advances.
This openness to alternative perspectives did not, however, translate into a change in participants' political ideology, the researchers found.
Large language models could provide "light-touch suggestions", such as alerting a social media user to the disrespectful tone of their post, author Gregory Eady, an associate professor of political science and data science at the University of Copenhagen, Denmark, told PTI.
"To promote this concretely, it is easy to imagine large language models operating in the background to alert us to when we slip into bad practices in online discussions, or to use these AI systems as part of school curricula to teach young people best practices when discussing contentious topics," Eady said.
Hansika Kapoor, a researcher at the department of psychology at Monk Prayogshala in Mumbai, an independent not-for-profit academic research institute, told PTI, "(The study) provides a proof-of-concept for using LLMs in this manner, with well-specified prompts, that can generate mutually exclusive stimuli in an experiment that compares two or more groups."
Nearly 3,000 participants -- who identified as Republicans or Democrats in the US and Conservative or Labour supporters in the UK -- were asked to write a text describing and justifying their stance on a political issue important to them, as they would for a social media post.
Each text was countered by ChatGPT, which appeared to participants as a "fictitious social media user" and tailored its argument "on the fly" to the text's position and reasoning. The participants then responded as if replying to a social media comment.
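To make the design concrete, here is a minimal sketch of how a counterargument might be generated "on the fly" in each experimental arm, again assuming the OpenAI Python SDK; the prompt wording and model are illustrative assumptions, and the authors' actual prompts are not reproduced here.

```python
# Illustrative sketch of generating a tailored counterargument "on the fly",
# in the spirit of the study's two experimental arms. The prompts and model
# are assumptions for illustration; they are not the authors' instrument.
from openai import OpenAI

client = OpenAI()

STYLE_INSTRUCTIONS = {
    "evidence": "Reply with a polite, evidence-based counterargument.",
    "emotion": "Reply with an emotion-based counterargument.",
}

def counterargument(participant_text: str, style: str = "evidence") -> str:
    """Generate a counterargument tailored to the participant's position."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed; the study reports using ChatGPT
        messages=[
            {"role": "system",
             "content": ("You are a social media user who disagrees with the "
                         "following political post. Address its specific "
                         f"position and reasoning. {STYLE_INSTRUCTIONS[style]}")},
            {"role": "user", "content": participant_text},
        ],
    )
    return response.choices[0].message.content or ""
```

Keeping the two prompts identical except for the style instruction is what would make the arms "mutually exclusive stimuli" in Kapoor's sense: any difference in participants' replies can then be attributed to the counterargument's style.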
"An evidence-based counterargument (relative to an emotion-based response) increases the probability of eliciting a high-quality response by six percentage points, indicating willingness to compromise by five percentage points, and being respectful by nine percentage points," the authors wrote in the study.
Eady said, "Essentially, what you give in a political discussion is what you get: that if you show your willingness to compromise, others will do the same; that when you engage in reason-based arguments, others will do the same; etc."
AI models have been critiqued and scrutinised for varied reasons, including inherent biases (political, and at times even racial) and for being a 'black box', whereby the internal processes used to arrive at a result cannot be traced.
Kapoor, who is not involved with the study, said that while the approach appears promising, complete reliance on AI systems to regulate online discourse may not yet be advisable.
The study itself also relied on humans to rate responses, she said.
Additionally, context, culture, and timing would need to be considered for such regulation, she added.
Eady too is apprehensive about "using LLMs to regulate online political discussions in more heavy-handed ways."
Further, the study authors acknowledged that because the US and UK are effectively two-party systems, addressing the 'partisan' nature of texts and responses was straightforward.
Eady added, "The ability for LLMs to moderate discussion might also vary substantially across cultures and languages, such as in India."
"Personally, therefore, I am in favour of providing tools and information that enable people to engage in better conversations, but nevertheless, for all its (LLMs') flaws, allowing nearly as open a political forum as possible," the author added.
Kapoor said, "In the Indian context, this strategy may require some trial-and-error, particularly because of the numerous political affiliations in the nation. Therefore, there may be multiple variables and different issues (including food politics) that will need to be contextualised for study here."
Another study, recently published in the 'Humanities and Social Sciences Communications' journal, found that dark personality traits -- such as psychopathy and narcissism -- a fear of missing out (FoMO) and cognitive ability can shape online political engagement.
Findings of researchers from Singapore's Nanyang Technological University suggest that "those with both high psychopathy (manipulative, self-serving behaviour) and low cognitive ability are the most actively involved in online political engagement." Data from the US and seven Asian countries, including China, Indonesia and Malaysia, were analysed.
Describing the study as "interesting", Kapoor pointed out that much more work needs to be done in India to understand the factors that drive online political participation, ranging from personality to attitudes, beliefs and aspects such as voting behaviour.
Her team, which has developed a scale to measure one's political ideology in India (published in a pre-print paper), found that dark personality traits were associated with a disregard for norms and hierarchies.