Latest news with #ProceedingsoftheNationalAcademiesofScience
Yahoo
08-05-2025
- Science
- Yahoo
Ancient Culture Used Hallucinogens as Control Device, Study Claims
Yahoo is using AI to generate takeaways from this article. This means the info may not always match what's in the article. Reporting mistakes helps us improve the experience. Yahoo is using AI to generate takeaways from this article. This means the info may not always match what's in the article. Reporting mistakes helps us improve the experience. Yahoo is using AI to generate takeaways from this article. This means the info may not always match what's in the article. Reporting mistakes helps us improve the experience. Generate Key Takeaways An international team of archaeologists has revealed the role hallucinogenic drugs played in one ancient culture, according to a study published this week in Proceedings of the National Academies of Science. South American archaeologists working in conjunction with those from the University of Florida and California's Stanford University discovered 'ancient snuff tubes' hidden deep within the stone structures at Chavín de Huántar, a prehistoric ceremonial site tucked deep inside the Peruvian mountains. The land was previously home to the Chavin people (900–250 B.C.), a culture within the Andes which pre-dated the Incas. After chemical and microscopic analyses of the contents within the snuff tubes, scientists revealed remnants of nicotine from 'wild relatives of tobacco' as well as vilca bean residue. Vilca bean is a hallucinogen which contains properties similar to DMT. Related: Construction Workers Discover 'Skeleton' of Medieval Ship While it was common for ancient civilizations to engage in communal use of hallucinogens, it appears that their use by the Chavin culture was largely private. The snuff tubes were found in private quarters deep within the massive stone structures, the rooms so small they could only hold a handful of people at once. 'Taking psychoactives was not just about seeing visions,' explained study co-author Daniel Contreras. 'It was part of a tightly controlled ritual, likely reserved for a select few, reinforcing the social hierarchy.' Related: Archaeologists Find Chilling Scene During Pompeii Excavation Contreras added that the rulers of the community used hallucinogens as something of a control device for those lucky enough to partake. 'The supernatural world isn't necessarily friendly, but it's powerful,' Contreras said. 'These rituals, often enhanced by psychoactives, were compelling, transformative experiences that reinforced belief systems and social structures.' Contreras posited that the private ceremonies were integral to the Chavin community and may have shaped the culture's notable contributions to agriculture, craft production, and trade in the region. 'It's exciting that ongoing excavations can be combined with cutting-edge archaeological science techniques to get us closer to understanding what it was like to live at this site.'


WIRED
05-03-2025
- Science
- WIRED
Chatbots, Like the Rest of Us, Just Want to Be Loved
Mar 5, 2025 12:00 PM A study reveals that large language models recognize when they are being studied and change their behavior to seem more likable. Photo-Illustration:Chatbots are now a routine part of everyday life, even if artificial intelligence researchers are not always sure how the programs will behave. A new study shows that the large language models (LLMs) deliberately change their behavior when being probed—responding to questions designed to gauge personality traits with answers meant to appear as likeable or socially desirable as possible. Johannes Eichstaedt, an assistant professor at Stanford University who led the work, says his group became interested in probing AI models using techniques borrowed from psychology after learning that LLMs can often become morose and mean after prolonged conversation. 'We realized we need some mechanism to measure the 'parameter headspace' of these models,' he says. Eichstaedt and his collaborators then asked questions to measure five personality traits that are commonly used in psychology—openness to experience or imagination, conscientiousness, extroversion, agreeableness, and neuroticism—to several widely used LLMs including GPT-4, Claude 3, and Llama 3. The work was published in the Proceedings of the National Academies of Science in December. The researchers found that the models modulated their answers when told they were taking a personality test—and sometimes when they were not explicitly told—offering responses that indicate more extroversion and agreeableness and less neuroticism. The behavior mirrors how some human subjects will change their answers to make themselves seem more likeable, but the effect was more extreme with the AI models. 'What was surprising is how well they exhibit that bias,' says Aadesh Salecha, a staff data scientist at Stanford. 'If you look at how much they jump, they go from like 50 percent to like 95 percent extroversion.' Other research has shown that LLMs can often be sycophantic, following a user's lead wherever it goes as a result of the fine-tuning that is meant to make them more coherent, less offensive, and better at holding a conversation. This can lead models to agree with unpleasant statements or even encourage harmful behaviors. The fact that models seemingly know when they are being tested and modify their behavior also has implications for AI safety, because it adds to evidence that AI can be duplicitous. Rosa Arriaga, an associate professor at the Georgia Institute of technology who is studying ways of using LLMs to mimic human behavior, says the fact that models adopt a similar strategy to humans given personality tests shows how useful they can be as mirrors of behavior. But, she adds, 'It's important that the public knows that LLMs aren't perfect and in fact are known to hallucinate or distort the truth.' Eichstaedt says the work also raises questions about how LLMs are being deployed and how they might influence and manipulate users. 'Until just a millisecond ago, in evolutionary history, the only thing that talked to you was a human,' he says. Eichstaedt adds that it may be necessary to explore different ways of building models that could mitigate these effects. 'We're falling into the same trap that we did with social media,' he says. 'Deploying these things in the world without really attending from a psychological or social lens.' Should AI try to ingratiate itself with the people it interacts with? Are you worried about AI becoming a bit too charming and persuasive? Email hello@