11-04-2025
Friends without benefits: people warned against getting close to AI
As artificial intelligence (AI) grows more sophisticated, some people could be vulnerable to engaging in relationship-like interactions with the increasingly garrulous chatbots, or even perceiving "romance" in them.
"The ability for AI to now act like a human and enter into long-term communications really opens up a new can of worms," said Daniel Shank of Missouri University of Science and Technology.
In a paper published in the journal Trends in Cognitive Sciences, Shank and colleagues argued there is a "real worry" that "artificial intimacy" with AI bots could end up "disrupting" some human relationships.
"Through weeks and months of intense conversations, these AIs can become trusted companions who seem to know and care about their human partners," the team said.
That chatbots are prone to "hallucination" - insider-speak for their tendency to churn out plausible-sounding but inaccurate or incoherent responses - is a further cause for concern, as it means "even short-term conversations with AIs can be misleading."
"If we start thinking of an AI that way, we're going to start believing that they have our best interests in mind, when in fact, they could be fabricating things or advising us in really bad ways,' the researchers warn, adding that the bots "can harm people by encouraging deviant, unethical, and illegal behaviors."
Earlier this week, OpenAI announced the roll-out of an enhanced "memory" function for ChatGPT, meaning the bot will tailor its responses to users by recalling previous interactions - likely adding to the perception of intimacy in human-machine exchanges.
Google DeepMind last week published research suggesting artificial general intelligence (AGI), or machines with human-like capabilities, could be developed by 2030.
While AGI, if it comes about, would be "a transformative technology," it would likely pose "significant risks" to people, including those of "severe harm," the Google team warned.