ChatGPT is making us weird

The other day, my family group chat lit up when I posed a question about whether it's important to say "please" and "thank you" to ChatGPT when asking it to conduct a niche search or plan out an itinerary.
My mother, ever a stickler for manners, said she makes a conscious choice to be polite to the chatbot, a choice she said she makes to "keep myself human."
Another loved one later admitted she's been leaning on the chatbot for guidance as she navigates a tricky moment in her marriage.
And I couldn't resist the temptation to ask ChatGPT to evaluate how attractive I am after The Washington Post reported that people were asking it for beauty advice. (It said I have "strong, expressive features," then told me to stand up straighter and smile more.)
But I know it's not just my immediate circle: ChatGPT is making everyone behave a little strangely.
As large language models become fixtures of our digital lives, the ways we engage with them reveal a society in flux, where machines aren't only mimicking human interaction but quietly altering the expectations and norms that govern it.
Business Insider spoke with four professionals who interact with chatbots like OpenAI's GPT models in radically different ways — a sociologist, a psychologist, a digital etiquette coach, and a sex therapist — to explore how the rise of AI is changing how we see each other and ourselves, and how it's disrupting our manners and intimate lives.
The conversations centered on ChatGPT, since OpenAI's chatbot is quickly becoming the AI world's equivalent of what Google is to search engines, but the professionals said similar conclusions could be drawn for Meta AI, Microsoft Copilot, Anthropic's Claude, or any other large language model on the market today.
A change in the social contract
Digital etiquette consultant and author Elaine Swann said that society has needed to adapt to new social cues as each wave of technology has changed our lives.
While we've largely collectively agreed that it's all right to use shorthand in personal email correspondence and rude to take a cellphone call on speakerphone in public, we're still establishing a social code for how to interact with AI bots and agents.
Kelsey Vlamis, a senior reporter at Business Insider, said she's started seeing a chatbot-related change in her personal life. While they were on vacation in Italy, she said, her husband found himself impatient with their tour guide, consciously having to keep himself from interrupting with questions "since that's how he talks to ChatGPT when he is trying to learn something."
Of course, he had to hold himself back, Vlamis added, "since that is not, in fact, how we talk to human beings."
As AI has gained momentum, social media has filled with posts asking whether it's appropriate for a spouse to use ChatGPT to write a love note to their partner, or for a worker to rely on an AI agent to fill out a job application on their behalf.
The jury's still out on situations like these.
"AI is certainly smarter now, which is great for us, but at the same time, we have to be very careful that it doesn't substitute basically our judgment or empathy," Swann said. "We have to be careful with it, not just utilizing it as our sole source of information, but also making sure that we put a mirror up to ourselves in how we use it, and running its suggestions by people that we know and care about."
Maintaining our baseline levels of respect — not just for each other, but the world around us — is also key, Swann said.
After OpenAI CEO Sam Altman posted on X in late April that it costs "tens of millions of dollars" for the company to process niceties like "please" and "thank you" directed toward ChatGPT, she stressed that it's up to the company to make processing those statements more cost-effective, not up to users to stop being polite.
"This is the world that we create for ourselves," Swann said. "And AI should also understand that this is how we speak to one another, because we're teaching it to give that back to us."
Altman, for his part, said the massive amount of funds used on polite requests toward ChatGPT is money "well spent."
Exacerbated biases
Laura Nelson, an associate professor of sociology at the University of British Columbia, said that because the world's most popular chatbots are created by American companies, written by US-based programmers, and trained primarily on content written in the English language, they have deeply entrenched biases that are often seen in Western cultures.
"It's really important to keep in mind that it's a particular world view that these algorithms have based their training data on," Nelson said.
So if you ask ChatGPT to draw you a picture of a plate of breakfast, it'll conjure typical North American foods: bacon, eggs, sausage, and toast. It describes a bottle of wine as a "classic and thoughtful gift," though in many cultures, alcohol is rarely consumed, and a bottle would make a tone-deaf present.
While those examples are relatively harmless, the bots also exacerbate more insidious and potentially damaging biases.
A 2021 study published in Psychology & Marketing found that people prefer AI to be anthropomorphized as female in their devices, like it is in most pop culture representations, because it makes the technology seem more human. However, the study found that preference may be inadvertently entrenching the objectification of women. There have also been numerous reports that lonely, mostly male, users may verbally abuse or degrade their AI companions.
Business Insider previously reported that artificial intelligence is also rife with discriminatory bias due to the data it's trained on, and ChatGPT in particular showed racial bias when screening résumés for jobs, over-selecting Asian women candidates and under-selecting Black men.
While these biases may not immediately change our behavior, they can impact our thinking and the ways we operate as a society, Nelson said. And if ChatGPT or other AI applications are implemented into our decision-making, whether in our personal lives, in the workplace, or at the legal level, it'll have wide-reaching effects we haven't even considered yet.
"There's just no question that AI is going to reflect our biases — our collective biases — back to it," Nelson said. "But there are a lot of people interacting with these bots, and we have no data to suggest what the global trends are, or the effects it's going to have long-term. It's a tricky thing to get a handle on."
A largely untraced social shift
Concrete data about the societal shift caused by AI is hard to come by, but the companies behind the tech know something is happening. Many of them have dedicated teams to figure out what effect their technology has on users, but their publicly available findings aren't peer-reviewed like a typical scientific study would be.
OpenAI announced that a recent update to the GPT-4o model had a hiccup: it was "noticeably more sycophantic" than prior models, the company said in a press release. While the update passed OpenAI's self-described "vibe check" and safety testing, the company rolled it back after realizing its programming to please the user could fuel anger, urge impulsive actions, or reinforce negative emotions "in ways that were not intended."
The company's announcement highlighted that OpenAI is keenly aware that the various AI applications gaining momentum online — from digital romantic partners to study buddies to gift-suggesting elves — have also started to have creeping effects on human emotions and behavior.
When reached for comment, a spokesperson for OpenAI directed Business Insider to the company's recent statements on sycophancy in GPT-4o and an early study of emotional well-being.
OpenAI's research, conducted with users over the age of 18, found that emotional engagement with the chatbot is rare. However, heavy users were more likely to report an emotional connection to the bot, and those who had personal conversations with ChatGPT were more likely to report feelings of loneliness.
An Anthropic spokesperson said the company has a dedicated research team, Societal Impacts, which is analyzing Claude usage, studying how AI is being used across jobs, and examining what values AI models have.
Representatives for Meta and Microsoft did not respond to requests for comment.
Behavioral risks and rewards
Nick Jacobson, an associate professor of psychiatry at Dartmouth's Center for Technology and Behavioral Health, conducted the first trial study delivering psychotherapy to clinical populations using generative AI. His research found that a carefully programmed chatbot can be a helpful therapeutic tool for people suffering from depression, anxiety, and eating disorders.
Engagement among patients in the study rivaled that of in-person therapy, and patients saw a significant reduction in the severity of their symptoms. When measured using the same test applied to human providers, they reported bonding with their therapeutic chatbot with an intensity similar to that of a bond with a human therapist.
"Folks were really developing this strong, working bond with their bot," Jacobson said, a factor which is key to a productive therapeutic relationship. However, most bots aren't programmed with the care and precision that Jacobson's was, so those emotional bonds could be developed with an AI that doesn't have the skills to handle their users' emotional needs in a productive way.
"Nearly every foundational model will act in ways that are profoundly unsafe to mental health, in various ways, shapes, and forms, at rates that are totally unacceptable," Jacobson said. "But there are so many people that are using them for things like therapy and just plain companionship that it's becoming a real problem — I think folks should handle this with greater care than I think they are."
Emma J. Smith, a relationship and sex therapist, said she believes in-person therapy comes with unique benefits that can't be replicated by AI, but she sometimes recommends using chatbots for anxious clients to practice social interactions in a low-stakes environment, "so if it goes badly, or you get stuck, there's no pressure."
"But some of the drawbacks are, like anything really, if it becomes a mechanism to avoid human interaction, or if it is taking you away from going out and being in the world," Smith said. "Video games are probably fine for a lot of people, and then there are some people that it takes over, and then they're missing out on their non-virtual life because they're too involved. I can see that that would be a problem with these bots, but because this is so new, we know what we don't know."
While the results of his trial were promising, Jacobson warned that the large language model used in his study was carefully trained for years by some of the most prominent scholars in the psychiatric field, unlike most "therapy" bots available online.
"This has inherently got a lot more danger than a lot of folks are necessarily aware of," Jacobson said. "There's probably a great deal of good that can happen from this, but there's a great deal we don't know, like for example, when folks are turning to these things for companionship, does that actually enhance their ability to practice in social settings and build human bonds, or do folks actually further withdraw and replace what would be otherwise human relationships with these parasocial relationships with these chatbots?"
Jacobson is particularly concerned about AI's impact on developmental processes among younger people who haven't grown up with old-school social norms and habits.
While testifying before the Senate Commerce Committee in early May about child safety in the AI era, Altman said he would not want his son to have a best friend bond with an AI bot, adding that children require "a much higher level of protection" than adults using AI tools.
"We spent years and years focusing predominantly on safety, so it's very concerning to me how many people are jumping into the AI space in new ways, and just shipping it," Jacobson said. "And in my mind, that's acting quite irresponsibly. You know, a lot of folks in Silicon Valley want to move fast and break things, but in this case, they're not breaking things — they're breaking people."