
Latest news with #persuasion

Protecting Your Mind Amid AI's Persuasive Power Play

Forbes

2 days ago

  • Health
  • Forbes

Protecting Your Mind Amid AI's Persuasive Power Play

In the marketplace of ideas, from political campaigns to product marketing, persuasion has long been a human art form. We rely on logic, emotion, charisma, and trust to influence and be influenced. But a new power player is rapidly entering the fray: artificial intelligence. Sophisticated AI systems, particularly large language models (LLMs), are no longer just information processors; they are becoming skilled digital persuaders, capable of shaping opinions and nudging behaviors in ways we are only beginning to understand. The question is no longer whether AI can be persuasive, but how persuasive it can be, and what that means for our future.

The foundations of human persuasion are well documented, perhaps most famously by Dr. Robert Cialdini, who outlined principles such as reciprocity, scarcity, authority, commitment and consistency, liking, and social proof. These psychological levers have been the bedrock of influence strategies for decades. Humans excel at deploying them intuitively, building rapport, reading nuanced social cues, and leveraging genuine emotional connections to build deep, lasting trust.

However, the digital age has ushered in AI systems with a distinct set of advantages. These algorithms can process and analyze vast datasets on human behavior, preferences, and communication styles, allowing for an unprecedented level of personalized messaging at scale. Imagine an AI that can tailor its arguments and tone in real time, A/B testing thousands of variations of a message to find the most effective one for a specific individual or demographic, a feat impossible for a human (a simple sketch of this mechanism follows below).

Recent studies underscore this emerging reality. Research has shown that AI-generated messages can be as persuasive as, and in some cases more persuasive than, those crafted by humans, making them significantly more effective at changing minds on divisive topics in online debates. Simply making models bigger doesn't inherently make a single message dramatically more influential, but the overall trend indicates a powerful new persuasive force.

One compelling example of this specialized persuasive technology comes from academia. The paper "AI-Persuade: A Conversational AI for Persuasion Towards Pro-Environmental Behaviors" details a system designed specifically to influence users to adopt more environmentally friendly habits. This AI doesn't just present facts; it engages in interactive conversations, employing a diverse toolkit of persuasion strategies, such as goal setting, positive framing, and social commitment, to foster long-term attitudinal and behavioral shifts. The researchers' user studies validated its potential to effectively guide individuals toward targeted outcomes. This points to a future where AI could be a significant force in public service campaigns, health interventions, and educational initiatives.

AI's persuasive power isn't just about brute-force data processing; it also taps into the same psychological mechanisms humans use, at machine speed and scale. Yet despite AI's growing capabilities, human interaction retains unique strengths in persuasion. Genuine empathy, the ability to understand and share the feelings of another, is profoundly difficult for AI to replicate authentically. Building deep, long-term trust, the kind that underpins significant life changes or high-stakes decisions, often relies on shared experiences, vulnerability, and the nuanced dance of human relationships. Humans can adapt to entirely novel situations with a flexibility and intuition that current AI lacks, drawing on a lifetime of complex social learning.
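To make the A/B-testing claim above concrete, here is a minimal sketch of how an automated persuader might converge on its most effective message variant using a simple epsilon-greedy bandit. The variants, response rates, and parameter values are all hypothetical, chosen only to illustrate the mechanism; real systems would differ.

```python
import random

# Hypothetical message variants an automated persuader might test.
VARIANTS = [
    "Act now: only 3 left in stock.",        # scarcity
    "9 out of 10 readers chose this plan.",  # social proof
    "Experts recommend this option.",        # authority
]

# Assumed per-variant response rates; unknown to the optimizer and used
# here only to simulate audience behavior.
TRUE_RATES = [0.05, 0.12, 0.08]

def pick_best_variant(trials=10_000, epsilon=0.1):
    """Epsilon-greedy bandit: mostly show the current best variant,
    occasionally explore the others, and let the win rates converge."""
    shows = [0] * len(VARIANTS)
    wins = [0] * len(VARIANTS)
    for _ in range(trials):
        if random.random() < epsilon:  # explore a random variant
            i = random.randrange(len(VARIANTS))
        else:                          # exploit the best variant so far
            i = max(range(len(VARIANTS)),
                    key=lambda j: wins[j] / shows[j] if shows[j] else 0.0)
        shows[i] += 1
        if random.random() < TRUE_RATES[i]:  # simulated user response
            wins[i] += 1
    return max(range(len(VARIANTS)), key=lambda j: wins[j] / max(shows[j], 1))

print(VARIANTS[pick_best_variant()])  # usually settles on the 12% variant
```

Running this explore/exploit loop continuously is what lets a machine test thousands of message variants in the time it takes a human copywriter to draft one.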
It is important to remember that AI is a tool, a means to an end, and that end must be decided by human users based on ethics and moral values. The same tools that can encourage positive behaviors may be weaponized for manipulation, spreading misinformation, or unduly influencing vulnerable populations. The potential for AI-generated propaganda or highly personalized, deceptive marketing campaigns is a serious concern that demands ethical guidelines, transparency in AI deployment, and a focus on media literacy. Overreliance on artificial assistants in our decision-making can also diminish critical thinking, leaving us susceptible to manipulation if we're not vigilant. Ultimately, whether AI does good or harm depends on the human mindset behind it.

The future likely involves a hybrid landscape where AI and human persuasion coexist and even collaborate. AI might handle initial engagement, provide personalized information, or manage large-scale outreach, while humans step in for more complex, empathetic, and high-trust interactions. As AI's persuasive abilities become more integrated into our lives, we need a framework, such as the A-Frame, to navigate this new terrain responsibly and effectively. The digital deluge is upon us. By understanding AI's power, recognizing its mechanisms, and committing to mindful engagement, we can harness the benefits of persuasive AI while safeguarding our autonomy and critical judgment in an increasingly AI-influenced world.

Democrats don't need a 'left-wing' Joe Rogan, they need to win back the real one

Fox News

6 days ago

  • Politics
  • Fox News

Democrats don't need a 'left-wing' Joe Rogan, they need to win back the real one

It was hard to concentrate in my congressional office because I could overhear a lively interview with conservative media host Glenn Beck through the thin wall. You might assume I work for a Republican, but I'm chief of staff to progressive California Congressman Ro Khanna. What if I told you it was one of our best interviews in recent months? They disagreed on President Trump's deportation efforts and USAID funding, but they agreed on revitalizing manufacturing and leading against China. The headline for the interview read, "Progressive Democrat sits down with Glenn Beck despite disagreements: 'We're all Team America.'" We agreed he'd return soon.

There's debate about whether Democrats need a stronger message or more robust left-wing media. But what Democrats really need is to relearn the art of persuasion: not just crafting a compelling message, but figuring out how to make it cut through today's crowded media landscape. Democrats don't need a "left-wing Joe Rogan." We need to persuade the real one, along with Americans nationwide, that we share common ground and are worth supporting. I know it's possible because I saw Ro begin that process with Glenn Beck. They didn't agree on everything, but the conversation opened a door. That's persuasion: not instant conversion, but showing up, listening, and finding places to start.

Our leaders are too often surrounded by chattering consultants obsessed with poll-tested messages and terrified of ruffling feathers. Every morning, I get dozens of emails urging me to tell Americans that MAGA Republicans are trying to take away their healthcare. I believe it! But it takes more than one line to convince people. We need specifics, facts, and a clear vision of what Democrats stand for.

Ro has been building this foundation for years. He has traveled to dozens of states, partnered with Silicon Valley to expand tech opportunities, and, since the election, held town halls in Republican districts, not to preach but to listen. At a recent event in Allentown, Pennsylvania, Ro spoke with the Trump supporters protesting outside about his bipartisan bill to lower prescription drug costs. By the end, they came inside and applauded.

Having a message is just the first step. The next challenge is breaking through today's media ecosystem: can a message go viral on social media, get picked up by the press, reach broader audiences, and still land? Amplification matters just as much. Engaging hosts like Rogan isn't about giving anyone a platform or legitimacy; their platforms already exist, and their audiences view them as legitimate. It's about using those platforms to share our message and tailoring how we communicate to different audiences without compromising our values.

We also need to balance viral moments with nuanced messages about complicated issues. Ro's prescription drug bill has gained traction on X and Reddit. But his core vision, a new economic patriotism focused on 21st-century solutions for the economic success of every community, including new factories and AI academies, hasn't taken off online the same way. Yet in longer-form interviews and podcasts, it's met with enthusiasm. Both messages matter, and we need to find the right time and place for each.

After all, Joe Rogan supported Bernie Sanders in the 2020 presidential election. When he drifted toward Donald Trump, we shrugged and said he was gone for good. Why not try again, with a tailored message and an eye toward persuasion? Joe, if you're reading this, I have a pitch for you.

AI Gets a Lot Better at Debating When It Knows Who You Are, Study Finds

Gizmodo

19-05-2025

  • Science
  • Gizmodo

AI Gets a Lot Better at Debating When It Knows Who You Are, Study Finds

A new study shows that GPT-4 reliably wins debates against its human counterparts in one-on-one conversations, and the technology gets even more persuasive when it knows your age, job, and political leanings. Researchers at EPFL in Switzerland, Princeton University, and the Fondazione Bruno Kessler in Italy paired 900 study participants with either a human debate partner or OpenAI's GPT-4, a large language model (LLM) that generates text responses to human prompts. In some cases, the participants (both machine and human) had access to their counterparts' basic demographic info, including gender, age, education, employment, ethnicity, and political affiliation. The team's research, published today in Nature Human Behaviour, found that the AI was more persuasive than human opponents 64.4% of the time when given that personal information; without the personal data, the AI's performance was indistinguishable from the human debaters'.

'In recent decades, the diffusion of social media and other online platforms has expanded the potential of mass persuasion by enabling personalization or 'microtargeting'—the tailoring of messages to an individual or a group to enhance their persuasiveness,' the team wrote. When GPT-4 was allowed to personalize its arguments, it became significantly more persuasive than any human, boosting the odds of changing someone's mind by 81.2% compared to human-human debates. Importantly, human debaters did not see a comparable boost when given access to the same personal information. 'In the context of persuasion, experts have widely expressed concerns about the risk of LLMs being used to manipulate online conversations and pollute the information ecosystem by spreading misinformation, exacerbating political polarization, reinforcing echo chambers and persuading individuals to adopt new beliefs,' the researchers added.

GPT-4 can argue with you, and given a set of facts about you, it may excel at convincing you to change your point of view, the researchers found. The team notes in the paper's discussion that LLMs have previously been criticized for generating and diffusing hate speech, misinformation, and propaganda; at scale, LLMs armed with users' personal information could be harnessed for malicious purposes. The research pairs nicely with a recent ChatGPT update that allows the model to remember more of users' conversations (with their permission), meaning the AI can have access to a catalogue of information about its users.

But there's also good news, or bad news, depending on how you see it. GPT-4 was very effective at persuading its opponents on less controversial issues, but with more entrenched positions (referred to in the research as 'opinion strength'), the bot had a harder time convincing humans to change their minds. In other words, there's no indication that GPT-4 would be any more successful than you are at the Thanksgiving debate table.

What's more, the researchers found that GPT-4 tended to use more logical and analytical language, while human debaters relied more on personal pronouns and emotional appeals. Surprisingly, personalization didn't dramatically change GPT-4's tone or style; it just made its arguments more targeted. In three out of four cases, human participants could correctly identify their opponent as AI, which the researchers attribute to GPT-4's distinct writing style. But participants had a difficult time identifying human opponents as human.
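A note on how the study's two headline numbers fit together: winning 64.4% of debates and raising the odds of agreement by 81.2% are the same effect expressed two ways, if one assumes the 50/50 baseline that a human-versus-human matchup implies. The quick arithmetic below is illustrative only, and the 50% baseline is an assumption, not a figure from the paper.

```python
# Illustrative arithmetic: relating "more persuasive 64.4% of the time"
# to "+81.2% odds", assuming a hypothetical 50% human-vs-human baseline.
baseline_p = 0.50                               # assumed baseline persuasion rate
baseline_odds = baseline_p / (1 - baseline_p)   # odds = p / (1 - p) -> 1.0
boosted_odds = baseline_odds * 1.812            # 81.2% higher odds
boosted_p = boosted_odds / (1 + boosted_odds)   # convert odds back to probability
print(f"{boosted_p:.1%}")                       # -> 64.4%
```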
Identified or not, people were more likely to change their minds when they thought they were arguing with an AI than when they believed their opponent was human. The team behind the study says this experiment should serve as a 'proof of concept' for what could happen on platforms like Reddit, Facebook, or X, where debates on controversial topics are routine and bots are a well-established presence. The paper shows that it doesn't take Cambridge Analytica-level profiling for an AI to change human minds; the machines managed it with just six types of personal information.

As people increasingly rely on LLMs for help with rote tasks, homework, documentation, and even therapy, it's critical that human users remain circumspect about the information they're fed. It remains ironic that social media, once advertised as the connective tissue of the digital age, fuels loneliness and isolation, as two studies on chatbots found in March.

So even if you find yourself in a debate with an LLM, ask yourself: what exactly is the point of discussing such a complicated human issue with a machine? And what do we lose when we hand over the art of persuasion to algorithms? Debating isn't just about winning an argument; it's a quintessentially human thing to do. There's a reason we seek out real conversations, especially one-on-one: to build personal connections and find common ground, something that machines, with all their powerful learning tools, are not capable of.

AI is more persuasive than a human in a debate, study finds

Washington Post

19-05-2025

  • Science
  • Washington Post

AI is more persuasive than a human in a debate, study finds

Technology watchdogs have long warned of the role artificial intelligence can play in disseminating misinformation and deepening ideological divides. Now, researchers have proof of how well AI can sway opinion, having put it head-to-head with humans. When provided with minimal demographic information about their opponents, AI chatbots built on large language models (LLMs) were able to adapt their arguments and be more persuasive than humans in online debates 64 percent of the time, according to a study published in Nature Human Behaviour on Monday.
