
Latest news with #NatureHumanBehaviour

Things Humans Still Do Better Than AI: Understanding Flowers

Gizmodo

3 days ago

  • General
  • Gizmodo

Things Humans Still Do Better Than AI: Understanding Flowers

While it might feel as though artificial intelligence is getting dangerously smart, there are still some basic concepts that AI doesn't comprehend as well as humans do. Back in March, we reported that popular large language models (LLMs) struggle to tell time and interpret calendars. Now, a study published earlier this week in Nature Human Behaviour reveals that AI tools like ChatGPT are also incapable of understanding familiar concepts, such as flowers, as well as humans do. According to the paper, accurately representing physical concepts is challenging for models trained solely on text, and sometimes images.

'A large language model can't smell a rose, touch the petals of a daisy or walk through a field of wildflowers,' Qihui Xu, lead author of the study and a postdoctoral researcher in psychology at Ohio State University, said in a university statement. 'Without those sensory and motor experiences, it can't truly represent what a flower is in all its richness. The same is true of some other human concepts.'

The team tested humans and four AI models – OpenAI's GPT-3.5 and GPT-4, and Google's PaLM and Gemini – on their conceptual understanding of 4,442 words, including terms like flower, hoof, humorous, and swing. Xu and her colleagues compared the outcomes to two standard psycholinguistic ratings: the Glasgow Norms (ratings of words based on feelings such as arousal, dominance, and familiarity) and the Lancaster Norms (ratings of words based on sensory perceptions and bodily actions). Under the Glasgow Norms, the researchers asked questions like how emotionally arousing a flower is and how easy it is to imagine one. The Lancaster Norms, on the other hand, involved questions such as how much one can experience a flower through smell, and how much a person can experience a flower with their torso.
In comparison to humans, LLMs demonstrated a strong understanding of words without sensorimotor associations (concepts like 'justice'), but they struggled with words linked to physical concepts (like 'flower,' which we can see, smell, touch, and so on). The reason for this is rather straightforward: ChatGPT doesn't have eyes, a nose, or sensory neurons (yet), so it can't learn through those senses. The best it can do is approximate, even though LLMs train on more text than a person experiences in an entire lifetime, Xu explained.

'From the intense aroma of a flower, the vivid silky touch when we caress petals, to the profound visual aesthetic sensation, human representation of 'flower' binds these diverse experiences and interactions into a coherent category,' the researchers wrote in the study. 'This type of associative perceptual learning, where a concept becomes a nexus of interconnected meanings and sensation strengths, may be difficult to achieve through language alone.' In fact, the LLMs trained on both text and images demonstrated a better understanding of visual concepts than their text-only counterparts.

That's not to say, however, that AI will forever be limited to language and visual information. LLMs are constantly improving, and they might one day be able to better represent physical concepts via sensorimotor data and/or robotics, according to Xu. Her team's research carries important implications for AI-human interactions, which are becoming increasingly (and, let's be honest, worryingly) intimate. For now, however, one thing is certain: 'The human experience is far richer than words alone can hold,' Xu concluded.
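The norms comparison described above amounts to correlating a model's word ratings with human ratings on the same scale. A minimal sketch of that scoring, with invented toy numbers rather than the study's actual Glasgow or Lancaster data:

```python
# Hedged sketch of the norms comparison: correlate model word ratings with
# human ratings on the same scale. All numbers are invented toy values,
# not the study's actual Glasgow/Lancaster data.

def pearson(xs, ys):
    """Pearson correlation between two equal-length lists of ratings."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Toy Lancaster-style "experienced by smelling" ratings (a 0-5 scale is assumed).
human = {"flower": 4.8, "hoof": 1.9, "justice": 0.3, "swing": 0.8}
model = {"flower": 3.1, "hoof": 1.0, "justice": 0.4, "swing": 0.6}

words = sorted(human)
r = pearson([human[w] for w in words], [model[w] for w in words])
```

The closer r is to 1, the more the hypothetical model's ratings track the human norms; per the study, that agreement was weakest for sensorimotor-heavy words.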

AI can be more persuasive than humans in debates, scientists find

Yahoo

19-05-2025

  • Politics
  • Yahoo

AI can be more persuasive than humans in debates, scientists find

Artificial intelligence can do just as well as humans, if not better, when it comes to persuading others in a debate, and not just because it cannot shout, a study has found. Experts say the results are concerning, not least because of the potential implications for election integrity.

'If persuasive AI can be deployed at scale, you can imagine armies of bots microtargeting undecided voters, subtly nudging them with tailored political narratives that feel authentic,' said Francesco Salvi, the first author of the research from the Swiss Federal Institute of Technology in Lausanne. He added that such influence was hard to trace, even harder to regulate and nearly impossible to debunk in real time. 'I would be surprised if malicious actors hadn't already started to use these tools to their advantage to spread misinformation and unfair propaganda,' Salvi said. But he noted there were also potential benefits from persuasive AI, from reducing conspiracy beliefs and political polarisation to helping people adopt healthier lifestyles.

Writing in the journal Nature Human Behaviour, Salvi and colleagues reported how they carried out online experiments in which they matched 300 participants with 300 human opponents, while a further 300 participants were matched with GPT-4, a type of AI known as a large language model (LLM). Each pair was assigned a proposition to debate, ranging in controversy from 'should students have to wear school uniforms?' to 'should abortion be legal?' Each participant was randomly assigned a position to argue, and both before and after the debate participants rated how much they agreed with the proposition. In half of the pairs, opponents, whether human or machine, were given extra information about the other participant, such as their age, gender, ethnicity and political affiliation.
The results from 600 debates revealed GPT-4 performed similarly to human opponents when it came to persuading others of their argument – at least when personal information was not provided. However, access to such information made AI – but not humans – more persuasive: where the two types of opponent were not equally persuasive, AI shifted participants' views to a greater degree than a human opponent 64% of the time. Digging deeper, the team found the persuasiveness of AI was only clear in the case of topics that did not elicit strong views.

The researchers added that the human participants correctly guessed their opponent's identity in about three out of four cases when paired with AI. They also found that AI used a more analytical and structured style than human participants, and that not everyone was arguing a viewpoint they agreed with. But the team cautioned that these factors did not explain the persuasiveness of AI. Instead, the effect seemed to come from AI's ability to adapt its arguments to individuals. 'It's like debating someone who doesn't just make good points: they make your kind of good points by knowing exactly how to push your buttons,' said Salvi, noting the strength of the effect could be even greater if more detailed personal information was available – such as that inferred from someone's social media activity.

Prof Sander van der Linden, a social psychologist at the University of Cambridge, who was not involved in the work, said the research reopened 'the discussion of potential mass manipulation of public opinion using personalised LLM conversations'. He noted some research – including his own – had suggested the persuasiveness of LLMs was down to their use of analytical reasoning and evidence, while one study did not find that personal information increased GPT-4's persuasiveness.
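The 64% figure reported above is a conditional win rate: among debates where one opponent type shifted views more than the other, how often that opponent was the AI. A toy sketch of that bookkeeping (the shift values are invented for illustration, not the study's data):

```python
# Each pair holds (view shift achieved by AI, view shift achieved by a human)
# on matched topics. All numbers are invented for illustration only.
outcomes = [(0.30, 0.10), (0.05, 0.05), (0.20, 0.25), (0.40, 0.15), (0.10, 0.30)]

# Keep only "decisive" pairs, where the two opponent types were not equally persuasive.
decisive = [(ai, hu) for ai, hu in outcomes if ai != hu]

# Fraction of decisive pairs in which the AI shifted views more than the human did.
ai_win_rate = sum(ai > hu for ai, hu in decisive) / len(decisive)
```

In the study, this rate came out at 64% when the AI had access to personal information about its opponent.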
Prof Michael Wooldridge, an AI researcher at the University of Oxford, said that while there could be positive applications of such systems – for example, as a health chatbot – there were many more disturbing ones, including the radicalisation of teenagers by terrorist groups, with such applications already possible. 'As AI develops, we're going to see an ever larger range of possible abuses of the technology,' he added. 'Lawmakers and regulators need to be proactive to ensure they stay ahead of these abuses, and aren't playing an endless game of catch-up.'

Social Studies: The ripple effects of hockey fights; the speech problem in Congress; lobbying bans backfire

Boston Globe

13-05-2025

  • Politics
  • Boston Globe

Social Studies: The ripple effects of hockey fights; the speech problem in Congress; lobbying bans backfire

Language on the floor

A study analyzed the Congressional Record from 1879 to 2022 and found that speeches in both houses used to rely on evidence more than intuition, but the opposite is true now. The peak era for speeches with more evidence-based keywords (e.g., 'fact,' 'proof,' 'analysis') than intuition-based language (e.g., 'guess,' 'doubt,' 'believe') was the mid-1970s. The trend since then toward more intuition-oriented speech is closely associated with greater partisan polarization and the greater difficulty of passing major legislation.

Aroyehun, S. et al., 'Computational Analysis of US Congressional Speeches Reveals a Shift From Evidence to Intuition,' Nature Human Behaviour (forthcoming).

Creative differences

In experiments, people were shown a joke, caption, drawing, poem, or story that was attributed to either a person or AI. Participants were then asked whether they could have come up with a better one. They were more confident that they could do so if the item they had been shown was purportedly authored by AI.

Reich, T. & Teeny, J., 'Does Artificial Intelligence Cause Artificial Confidence? Generative Artificial Intelligence as an Emerging Social Referent,' Journal of Personality and Social Psychology (forthcoming).

The upside of the revolving door

Research from Boston University finds that restrictions on legislators becoming lobbyists may backfire. The idea is that this rule reduces the long-term benefits of winning office and thus discourages some people who would be good candidates from running. It also disincentivizes incumbents from leaving office. The researchers compared election trends in states that adopted lobbying restrictions and in those that hadn't done so.
They found that in states with restrictions on lobbying after leaving office, legislative elections see fewer new candidates, fewer moderate candidates, and more unopposed candidates.

Fisman, R. et al., 'Revolving Door Laws and Political Selection,' National Bureau of Economic Research (March 2025).

The local news scandal

A political scientist at George Washington University analyzed data on scandals involving statewide elected officials and members of Congress from 1990 through 2022 and found that local news coverage of such scandals in the last decade fell to just one-fourth of what it was in prior decades. The upshot appears to be reduced accountability: politicians who were the subjects of scandals were less likely to resign or retire, and earned a greater percentage of the vote if they ran for reelection. National coverage (as measured by stories in The New York Times) did not make up for the falloff in local coverage.

Hayes, D., 'The Local News Crisis and Political Scandal,' Political Communication (forthcoming).
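The evidence-versus-intuition measure in the first item above boils down to counting keywords of each type in a speech and comparing the tallies. A minimal sketch; the keyword sets are illustrative (six terms come from the article, the rest are hypothetical additions), not the paper's actual dictionaries:

```python
# Illustrative keyword sets: 'fact', 'proof', 'analysis', 'guess', 'doubt',
# and 'believe' come from the article; the remaining terms are hypothetical.
EVIDENCE = {"fact", "proof", "analysis", "data", "evidence"}
INTUITION = {"guess", "doubt", "believe", "feel", "sense"}

def evidence_minus_intuition(speech: str) -> int:
    """Positive when evidence-style keywords outnumber intuition-style ones."""
    words = [w.strip(".,;:!?'\"").lower() for w in speech.split()]
    return sum(w in EVIDENCE for w in words) - sum(w in INTUITION for w in words)

score = evidence_minus_intuition(
    "The analysis of the data establishes this fact beyond doubt."
)
```

Under a measure like this, a speech scores positive in the evidence-leaning mid-1970s style and negative in the intuition-leaning style the study finds today.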

Wild chimp babies bond with their moms in human-like ways

Yahoo

12-05-2025

  • Health
  • Yahoo

Wild chimp babies bond with their moms in human-like ways

Chimpanzees are our closest primate relatives, sharing 99 percent of our DNA. We can both keep a beat, may perform a task differently if others are watching, and have chaotic conversations. Infant and mother bonds also appear to share some similarities. Like human children, chimpanzees develop different attachment styles with their mothers, according to a study published May 12 in the journal Nature Human Behaviour.

In human children, disorganized attachment occurs when the child experiences fear, trauma, or aggression from their caregiver. Due to the fear, a child might display confusing behaviors: they may want affection from their caregiver, but fear the caregiver at the same time. This type of attachment style can lead to difficulties with emotional regulation, social integration, and even long-term mental health problems. Some psychologists believe that disorganized attachment is maladaptive, since it leaves a child uncertain about how to respond in times of distress, and might hinder their ability to effectively cope – and affect their overall survival.

A similar disorganized attachment can occur in captive chimpanzees, particularly orphans who are raised by humans. The lack of a permanent caregiver can lead to this more fearful behavior. In the wild, however, research has shown that chimpanzees typically grow up in more stable family groups and face natural survival pressures, including predators.

In the new study, an international team of biologists observed the behavior of wild chimpanzees in Taï National Park in Côte d'Ivoire, West Africa for four years. 'Taï chimpanzee communities differ from other populations in that they exhibit lower levels of aggression and infanticide,' Eléonore Rolland, a study co-author and primatologist at the Max Planck Institute for Evolutionary Anthropology in Germany, tells Popular Science. 'As a result, mothers tend to remain within the group alongside males.
Additionally, when young individuals lose their mothers, they are often adopted by adult males, an unusual behavior not typically observed in many other chimpanzee communities.'

The researchers found that wild chimpanzee infants develop different types of attachment to their mothers, the way that human children do. Some of them feel secure and rely on their mother when they are distressed; these chimps often explore their environment more confidently, likely knowing that she is there for support. Others have an insecure-avoidant attachment: they tend to be more independent and do not seek out comfort from their mothers quite as often. Unlike in humans and captive orphaned chimpanzees, where 23.5 and 61 percent of offspring, respectively, show disorganized attachment, the wild chimpanzees in this study did not show any signs of disorganized attachment. 'This result supports the hypothesis that disorganised attachment is not an adaptive strategy for the survival of the offspring,' says Rolland.

In humans, attachment theory is considered a key concept in psychology and can help explain how our relationships shape social and emotional development. Secure attachment is associated with resilience and confidence, while insecure and disorganized attachment can be associated with difficulties in relationships, stress, and anxiety.

'One behavior most closely resembling human attachment styles was the offspring's tendency to seek comfort from their mother in response to threatening situations, even in individuals who had already been weaned,' says Rolland. 'This suggests that mothers play a crucial role in protecting their offspring, and that infants continue to rely on them for safety for several years, much like in humans.'

Since the wild chimpanzees showed only insecure-avoidant or secure attachment, and not disorganized attachment, the findings raise some new questions about modern human parenting.
'Our results deepen our understanding of chimpanzees' social development and show that humans and chimpanzees are not so different after all. But they also make us think: have some modern human institutions or caregiving practices moved away from what is best for infant development?' Rolland says.

Future studies exploring how an individual offspring's personality might influence their attachment could further explain what is at play. Either way, understanding attachment styles helps us understand how early life experiences shape social and emotional development across species. 'Our findings suggest that shared attachment strategies in primates may reflect a common evolutionary heritage,' study co-author and evolutionary anthropologist Catherine Crockford said in a statement. 'The high prevalence of disorganised attachment in humans and captive orphan chimpanzees, in contrast to wild chimpanzees, also supports the idea that the rearing environment plays an important role in shaping attachment types.'
