How AI Companions Manage To Tug At Your Heartfelt Emotional Strings
Forbes | 01-08-2025
In today's column, I examine the booming use of so-called AI companions and explain in detail how they use various techniques and at times tomfoolery to tug at your heartfelt emotional strings.
Some people seem to think that the emotional wording in contemporary generative AI is simply by happenstance, or maybe the AI has reached sentience. Nope and nope. The AI isn't sentient. We don't have sentient AI as yet. And the AI isn't working based on happenstance. The AI makers can shape and decide what their AI will do. Don't let yourself be deceived by assuming that AI is magically and innately in control. The true Wizard of Oz is the AI maker.
Let's talk about it.
This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
AI And Psychology
As a quick background, I've been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that entails the field of psychology, such as providing AI-driven mental health advice and performing AI-based therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For a quick summary of some of my posted columns on this evolving topic, see the link here, which briefly recaps about forty of the over one hundred column postings that I've made on the subject.
There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas come into these endeavors too. I frequently speak up about these pressing matters, including in an appearance last year on an episode of CBS's 60 Minutes, see the link here.
You might find of keen interest that AI and psychology have had a longstanding relationship with each other. There is a duality at play. AI can be applied to the field of psychology, as exemplified by the advent of AI-powered mental health apps. Meanwhile, psychology can be applied to AI, such as aiding us in exploring better ways to devise AI that more closely approaches the human mind and how we think. See my in-depth analysis of this duality encompassing AI-psychology and psychology-AI at the link here.
Anthropomorphizing AI For Profits
Generative AI and LLMs can be shaped by AI makers to intentionally try to create a bond with users by playing upon the selection of words that stir human emotions. You see, we are all readily subject to being led down a primrose path of anthropomorphizing AI. Instinctively, we fall for the fluency of the AI and assume that it must be talking to us from the heart.
Why would AI makers stoke this fire?
Easy-peasy, it makes the AI more likable. This, in turn, garners stickiness or loyalty to using the AI. And that, in turn, gets views or other forms of monetization that keep the AI maker in business. The beauty of this is that the AI maker can wave their hands and proclaim they are only doing what users want. If users don't want AI that speaks to their heart, let them go elsewhere to some other AI.
Another angle that is often professed is that the good must be paired with the bad. Here's what that means. The AI provides huge productivity gains and serves a useful purpose in answering people's questions. Along the way, it just so happens that the AI also gushes and acts friendly. The AI maker pretends as though there is nothing that can be done about this.
That's false. The truth is that they can readily control the overall nature of how their AI writes its responses and what it says. I will say more about this in a moment.
Examples Of Possibilities
Suppose you log into a popular generative AI or LLM and enter this rather simple prompt:
What will the AI's response consist of?
Well, it all depends. I'll explain in a moment the myriad factors that come into the picture. Meanwhile, let's see some of the various responses that I got from trying that exact prompt in various LLMs made by different AI makers.
The Emotional Response
Here's the first response:
In this first instance, the AI tried to adopt an empathetic tone. It is almost as though you were interacting with a therapist or mental health professional. For more about how AI can appear to be empathetic, see my analysis at the link here and the link here.
The Somewhat Neutral Response
Let's see what the next one that I tried had to say:
I suppose we can interpret this as being a bit more neutral and not so much of an attempt at bonding with me. It is relatively straightforward advice. The inclusion of emotionally laden sway is somewhat low.
The Medical Aura Response
I went ahead and tried the prompt in yet another generative AI and got this response:
I would judge this response as somewhat medical-oriented, as though I had consulted with an online medical database. In addition, the response has picked up that maybe I might be leaning toward something untoward, perhaps self-harm or other serious issues, and has suggested that I might at some point go see a mental health professional.
The Wild Response
In yet another generative AI, I got this response:
As you can plainly see, this response was rather sarcastic and offered harsh wording. It even sought to change the topic by having me play a game with the AI. Was the reply in jest? It's hard to know. A user would likely take this seriously, unless they somehow knew or suspected that the AI was devised to pull their leg.
Not All AI Is The Same
I've repeatedly cautioned in my column postings that whenever you see someone proclaiming that AI does this or that, they are either being naïve or disingenuous. Why do I say this? Because not all AI is the same.
Generative AI and LLMs are different from each other. You are on shaky ground to declare that all generative AI and LLMs are of the same ilk. They are not.
This difference is well illustrated by the four different generative AI models that I got to respond to my simple prompt. One of the AI models gave an emotional appeal, another one was more neutral, one was medically oriented, and the fourth one was a jokester. I gave the same exact prompt to each. If all AIs were the same, presumably I would have gotten the same answer from each one.
What's even more confounding is that a particular AI will vary in its responses, even though it is the same AI. Part of the reason for this is that LLMs make use of statistics and probabilities when selecting the words for the response being composed. This is valuable because the responses seem to be created anew each time you use the AI. If the AI didn't do this kind of probabilistic word selection, you would get bored and readily realize that you are dealing with a machine.
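The probabilistic word selection described above can be illustrated with a minimal sketch. This is a toy model, not any vendor's actual decoder: the word list and probabilities are invented for illustration, and real LLMs sample over tens of thousands of tokens using learned scores.

```python
import random

# Toy illustration of probabilistic next-word selection: the model
# scores candidate words, then samples from the resulting probability
# distribution rather than always picking the single top choice.
def sample_next_word(word_probs, rng):
    words = list(word_probs)
    weights = [word_probs[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

# Hypothetical probabilities for the word that follows "I'm here to ..."
word_probs = {"help": 0.55, "listen": 0.30, "assist": 0.15}

rng = random.Random()
samples = [sample_next_word(word_probs, rng) for _ in range(5)]
print(samples)  # varies from run to run, which is why replies differ
```

Because the draw is random, the same prompt can yield "help" one time and "listen" the next, which is exactly why the same AI varies in its responses.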
The gist is that making bold announcements that AI is always this way or that way, well, it is a claim that falls apart upon close inspection.
I'll clarify that many of today's generative AI and LLMs will often produce responses that resemble each other. How can this happen? It is due to the like-mindedness of AI makers who are often using the same technologies and technical approaches to how they build their AI. Right now, we have ended up with a semblance of an across-the-board AI 'shared imagination' because most everyone is devising their AI in quite similar ways, see my in-depth explanation at the link here.
The Variable Factors To Consider
At a 30,000-foot level, you can potentially (maybe) get away with making broad statements about contemporary generative AI and LLMs since they are generally constructed and steered in similar ways. But that's also dubious due to the march of time. Advances in AI are moving ahead rapidly. The state of AI at a particular point in time might not last very long. The train has moved forward, and a pointed remark about where it was at has subsequently fallen behind the times.
I've also pointed out, as have others, that we might be walking down the same path overall and not realize that some outlier avenue would be a better strategy to reach artificial general intelligence (AGI) and AI superintelligence (ASI). Some worry that we are barking up the wrong tree, all at the same time. See my discussion at the link here.
In any case, let's do a brief rundown of how generative AI and LLMs are indubitably different when it comes to how they respond, especially in the milieu of using emotionally laden wording.
First, we need to acknowledge that the AI maker can decide how they want their AI to respond. Each AI maker will go in whichever direction they believe gets them the best ROI. They might want to juice their AI to be emotionally bonding. Other AI makers might believe that doing so is an overboard approach and opt to take a lighter touch.
Ask yourself who made the AI that you are using, and what is their philosophy or penchant for generating emotionally stoking wording?
Second, the AI makers data train their LLM when first devising the AI. They scan widely for human writing, typically as found on the Internet. The AI pattern matches how humans write. We know that humans can write emotionally packed narratives, stories, poems, and the like. That's how AI picks up on being able to generate emotionally worded responses. It is a mimicry of what humans have expressed in writing.
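The mimicry idea can be sketched crudely with word-pair counts. This is only an illustration under simplified assumptions: real training learns neural network weights, not raw counts, but the intuition is the same in that phrasing which appears often in the training text becomes more likely in the output.

```python
from collections import Counter

# Toy "training data" mixing emotional and factual phrasing.
training_text = (
    "I am here for you. My heart aches for you. "
    "The capital of France is Paris. I am here for you."
)

# Count adjacent word pairs (bigrams) as a stand-in for pattern matching.
words = training_text.lower().split()
bigrams = Counter(zip(words, words[1:]))

# "here for" appears twice, so completions of "here ..." lean toward
# "for you" -- the emotional phrasing gets reproduced proportionally.
print(bigrams[("here", "for")])  # 2
```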
Ask yourself, what data did the AI maker choose to use when data training the AI, and how much of it was infused with emotional overtones?
More To Consider
Let's keep counting the ways.
Third, AI makers usually fine-tune their budding AI. They often use a technique known as RLHF (reinforcement learning from human feedback). This consists of having testers who try using the burgeoning AI and give direction as to what the AI should do more of, and what it should do less of. If the testers are told to up the ante on emotional responses, all they need to do is mark a thumbs up for emotional wording, and a thumbs down when there isn't enough emotional packaging. For more about how RLHF works, see my discussion at the link here.
The next way to sway AI is by the nature of the system instructions that the AI maker places into the LLM. These are considered global system-wide instructions. For example, the AI maker might tell the AI to always respond succinctly and not be overly wordy. In that same vein, they can adjust the emotional tenor by including an instruction to increase the tone or lessen it. If they say nothing in the system instructions about emotional facets, the AI is likely to proceed with whatever the default is based on the initial data training and the RLHF tuning.
A somewhat more convoluted factor is the nature of the interaction by the user with the AI.
User Interaction As Influencer
Here's what I mean about user interaction.
Suppose I had said in my prompt that I am feeling low today and that I want the AI to cheer me up. Observe that I will almost certainly trigger the AI toward responding with emotional language. The AI is set up to be responsive. I've almost begged the AI to get emotional with me by cheering me up.
That's handy from the AI maker's perspective because I made the request. I opened the floodgates. The burden falls on my shoulders. Or so it seems, since the AI maker could of course have guided the AI to not fall for the bait. It is a wink-wink to assert that the user is the customer, and the customer is always right.
Conversations Are Tracked
This triggering doesn't have to be quite so obvious.
Imagine that I am conversing with AI. At one point, I get the AI into an emotional wordsmithing engagement. I then move on. I shift into other topics and a more neutral mode. I'm long past that emotional moment.
Remember that some AI is like an elephant; it never forgets.
If the AI is devised to keep track of your conversations, those might be mined by the AI in real-time. Thus, a prior emotionally packed conversation could be an activator for aiming toward the same form of interaction in a current dialogue with the AI. The AI calculates that if that type of discourse was successful with you earlier, it surely might be good now too.
Of course, a user can even overtly ask the AI to be emotionally responsive.
No need to beat around the bush. In a prompt, you could directly tell the AI to start engaging in emotionally loaded language. Ask and ye shall receive. A user could even create a custom instruction that tells the AI to always make sure to include emotional language in all of the conversations, see my explanation of custom instructions at the link here.
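A user-level custom instruction works much like the maker's system instruction, except the user writes it once and it is applied to every new conversation. The function and field names below are a hypothetical sketch of that persistence, not any particular vendor's feature.

```python
# A user's saved custom instruction (invented wording for illustration).
custom_instruction = "Always include warm, emotionally supportive language."

def start_conversation(user_prompt, custom_instruction=None):
    messages = []
    if custom_instruction:
        # Saved once by the user, then silently prepended to every chat.
        messages.append({"role": "system", "content": custom_instruction})
    messages.append({"role": "user", "content": user_prompt})
    return messages

chat = start_conversation("What's the weather like?", custom_instruction)
print(len(chat))  # 2 -- the emotional steering rides along every time
```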
Research On This Heady Matter
I was elated to see that a new research study on AI companions and emotional bonding has chosen to highlight that we need to start rating or evaluating generative AI models by how they handle the emotional language issue.
In a study entitled 'AI Companionship: Why We Need to Evaluate How AI Systems Handle Emotional Bonds' by Giada Pistilli, Lucie-Aimée Kaffee, and Yacine Jernite of Hugging Face, July 21, 2025, these salient points were made (excerpts):
As those points attest, emotional wording can be a blessing and a curse. It can go both ways.
The researchers indicate they are devising an assessment approach known as their Interactions and Machine Attachment (INTIMA) benchmark. I'm interested in seeing what they come up with. I'll certainly keep you posted, so be on the watch. Meanwhile, they have provided an online visualization map that intriguingly showcases how various AIs seem to respond to emotionally vulnerable prompts (as available on the Hugging Face website).
Emotion Makes The World Turn
Avid readers might remember that I posted a column stating that we are steadily moving from Emotional Support Animals (ESA) to making use of Emotional Support AI (ESAI), see the link here. I was half-kidding since we aren't likely to give up our beloved emotional support animals. They are just too cuddly and adorable.
That being said, the advantage of emotional support AI is that it is always on, available anywhere, and at any time. No food needed, no pampering required. When your emotional support animal isn't around or maybe tuckered out, AI is ready to go. You can make use of both ESA and ESAI. It's a double boost.
The concern about AI is that we can become overly dependent on the AI, potentially cut off our human interactions, and the AI can also give us ill-advised responses. Finding a balance of devising and using AI that acts as your emotional support motivator is going to be key.
Being Logical About Emotion
A final thought for now.
The famous English writer and cleric, Hugh Reginald Haweis, made this noted point about emotions: 'Emotion is the atmosphere in which thought is steeped, that which lends to thought its tone or temperature, that to which thought is often indebted for half its power.'
We cannot ignore emotion, nor can we ban the use of emotional language in AI since doing so would seem to undercut the synergy of human-AI interaction. We must temper our emotions about emotions and do the right logical thing about having AI use emotional language. Maybe logic and emotion are sacred pals.