ChatGPT is making us weird
The other day, my family group chat lit up when I posed a question about whether it's important to say "please" and "thank you" to ChatGPT when asking it to conduct a niche search or plan out an itinerary.
My mother, ever a stickler for manners, said she makes a conscious choice to be polite to the bot, a choice she said she makes to "keep myself human."
Another loved one later admitted she's been leaning on the chatbot for guidance as she navigates a tricky moment in her marriage.
And I couldn't resist the temptation to ask ChatGPT to evaluate how attractive I am after The Washington Post reported that people were asking it for beauty advice. (It said I have "strong, expressive features," then told me to stand up straighter and smile more.)
But I know it's not just my immediate circle: ChatGPT is making everyone behave a little strangely.
As large language models become fixtures of our digital lives, the ways we engage with them reveal a society in flux, where machines aren't only mimicking human interaction but quietly altering the expectations and norms that govern it.
Business Insider spoke with four professionals who interact with chatbots like OpenAI's GPT models in radically different ways — a sociologist, a psychologist, a digital etiquette coach, and a sex therapist — to explore how the rise of AI is changing how we see each other and how we view ourselves, and how it's disrupting our manners and intimate lives.
The conversations centered on ChatGPT, since OpenAI's chatbot is quickly becoming to the AI world what Google is to search engines, but the professionals said similar conclusions could be drawn about Meta AI, Microsoft Copilot, Anthropic's Claude, or any other large language model on the market today.
A change in the social contract
Digital etiquette consultant and author Elaine Swann said that society has needed to adapt to new social cues as each wave of technology has changed our lives.
While we've largely collectively agreed that it's all right to use shorthand in personal email correspondence and rude to take a cellphone call on speakerphone in public, we're still establishing a social code for how to interact with AI bots and agents.
Kelsey Vlamis, a senior reporter at Business Insider, said she's started seeing a chatbot-related change in her personal life. While on vacation in Italy, she said her husband found himself impatient with their tour guide, consciously having to keep himself from interrupting with questions "since that's how he talks to ChatGPT when he is trying to learn something."
Of course, he had to hold himself back, Vlamis added, "since that is not, in fact, how we talk to human beings."
As AI has gained momentum, social media has filled with posts asking whether it's appropriate for a spouse to use ChatGPT to write a love note to their partner, or for a worker to rely on an AI agent to fill out a job application on their behalf.
The jury's still out on situations like these.
"AI is certainly smarter now, which is great for us, but at the same time, we have to be very careful that it doesn't substitute basically our judgment or empathy," Swann said. "We have to be careful with it, not just utilizing it as our sole source of information, but also making sure that we put a mirror up to ourselves in how we use it, and running its suggestions by people that we know and care about."
Maintaining our baseline levels of respect — not just for each other, but the world around us — is also key, Swann said.
After OpenAI CEO Sam Altman posted on X in late April that it costs "tens of millions of dollars" for the company to process niceties like "please" and "thank you" directed toward ChatGPT, she stressed that it's up to the company to make processing those statements more cost-effective, not up to users to stop being polite.
"This is the world that we create for ourselves," Swann said. "And AI should also understand that this is how we speak to one another, because we're teaching it to give that back to us."
Altman, for his part, said the massive amount of funds used on polite requests toward ChatGPT is money "well spent."
Exacerbated biases
Laura Nelson, an associate professor of sociology at the University of British Columbia, said that because the world's most popular chatbots are created by American companies, written by US-based programmers, and trained primarily on content written in the English language, they have deeply entrenched biases that are often seen in Western cultures.
"It's really important to keep in mind that it's a particular world view that these algorithms have based their training data on," Nelson said.
So if you ask ChatGPT to draw you a picture of a plate of breakfast, it'll conjure typical North American foods: bacon, eggs, sausage, and toast. It describes a bottle of wine as a "classic and thoughtful gift," though in many cultures, alcohol is rarely consumed, and a bottle would make a tone-deaf present.
While those examples are relatively harmless, the bots also exacerbate more insidious and potentially damaging biases.
A 2021 study published in Psychology & Marketing found that people prefer AI to be anthropomorphized as female in their devices, like it is in most pop culture representations, because it makes the technology seem more human. However, the study found that preference may be inadvertently entrenching the objectification of women. There have also been numerous reports that lonely, mostly male, users may verbally abuse or degrade their AI companions.
Business Insider previously reported that artificial intelligence is also rife with discriminatory bias due to the data it's trained on, and ChatGPT in particular showed racial bias when screening résumés for jobs, over-selecting Asian women candidates and under-selecting Black men.
While these biases may not immediately change our behavior, they can impact our thinking and the ways we operate as a society, Nelson said. And if ChatGPT or other AI applications are integrated into our decision-making, whether in our personal lives, in the workplace, or at the legal level, they'll have wide-reaching effects we haven't even considered yet.
"There's just no question that AI is going to reflect our biases — our collective biases — back to it," Nelson said. "But there are a lot of people interacting with these bots, and we have no data to suggest what the global trends are, or the effects it's going to have long-term. It's a tricky thing to get a handle on."
A largely untraced social shift
Concrete data about the societal shift caused by AI is hard to come by, but the companies behind the tech know something is happening. Many of them have dedicated teams to figure out what effect their technology has on users, but their publicly available findings aren't peer-reviewed like a typical scientific study would be.
OpenAI announced that a recent update to the GPT-4o model had a hiccup: it was "noticeably more sycophantic" than prior models, the company said in a press release. While the update passed OpenAI's self-described "vibe check" and safety testing, the company rolled it back after realizing its programming to please the user could fuel anger, urge impulsive actions, or reinforce negative emotions "in ways that were not intended."
The company's announcement highlighted that OpenAI is keenly aware that the various AI applications gaining momentum online — from digital romantic partners to study buddies to gift-suggesting elves — have also started to have creeping effects on human emotions and behavior.
When reached for comment, a spokesperson for OpenAI directed Business Insider to the company's recent statements on sycophancy in GPT-4o and an early study of emotional well-being.
OpenAI's research, conducted with users over the age of 18, found that emotional engagement with the chatbot is rare. However, heavy users were more likely to report an emotional connection to the bot, and those who had personal conversations with ChatGPT were more likely to report feelings of loneliness.
An Anthropic spokesperson said the company has a dedicated research team, Societal Impacts, which analyzes Claude usage, studies how AI is being used across jobs, and examines what values AI models have.
Representatives for Meta and Microsoft did not respond to requests for comment.
Behavioral risks and rewards
Nick Jacobson, an associate professor of psychiatry at Dartmouth's Center for Technology and Behavioral Health, conducted the first trial study delivering psychotherapy to clinical populations using generative AI. His research found that a carefully programmed chatbot can be a helpful therapeutic tool for people suffering from depression, anxiety, and eating disorders.
Engagement among patients in the study rivaled that of in-person therapy; participants saw a significant reduction in the severity of their symptoms, and, when measured with the same test used for human providers, they reported bonding with their therapeutic chatbot at an intensity similar to that of a bond with a human therapist.
"Folks were really developing this strong, working bond with their bot," Jacobson said, a factor which is key to a productive therapeutic relationship. However, most bots aren't programmed with the care and precision that Jacobson's was, so those emotional bonds could be developed with an AI that doesn't have the skills to handle their users' emotional needs in a productive way.
"Nearly every foundational model will act in ways that are profoundly unsafe to mental health, in various ways, shapes, and forms, at rates that are totally unacceptable," Jacobson said. "But there are so many people that are using them for things like therapy and just plain companionship that it's becoming a real problem — I think folks should handle this with greater care than I think they are."
Emma J. Smith, a relationship and sex therapist, said she believes in-person therapy comes with unique benefits that can't be replicated by AI, but she sometimes recommends chatbots to anxious clients as a way to practice social interactions in a low-stakes environment, "so if it goes badly, or you get stuck, there's no pressure."
"But some of the drawbacks are, like anything really, if it becomes a mechanism to avoid human interaction, or if it is taking you away from going out and being in the world," Smith said. "Video games are probably fine for a lot of people, and then there are some people that it takes over, and then they're missing out on their non-virtual life because they're too involved. I can see that that would be a problem with these bots, but because this is so new, we know what we don't know."
While the results of his trial were promising, Jacobson warned that the large language model used in his study was carefully trained for years by some of the most prominent scholars in the psychiatric field, unlike most "therapy" bots available online.
"This has inherently got a lot more danger than a lot of folks are necessarily aware of," Jacobson said. "There's probably a great deal of good that can happen from this, but there's a great deal we don't know, like for example, when folks are turning to these things for companionship, does that actually enhance their ability to practice in social settings and build human bonds, or do folks actually further withdraw and replace what would be otherwise human relationships with these parasocial relationships with these chatbots?"
Jacobson is particularly concerned about AI's impact on developmental processes among younger people who haven't grown up with old-school social norms and habits.
While testifying before the Senate Commerce Committee in early May about child safety in the AI era, Altman said he would not want his son to have a best friend bond with an AI bot, adding that children require "a much higher level of protection" than adults using AI tools.
"We spent years and years focusing predominantly on safety, so it's very concerning to me how many people are jumping into the AI space in new ways, and just shipping it," Jacobson said. "And in my mind, that's acting quite irresponsibly. You know, a lot of folks in Silicon Valley want to move fast and break things, but in this case, they're not breaking things — they're breaking people."
It's Monday and you needed some videos of cute otters making pizza, right? Microsoft on Monday launched Bing Video Creator, a new generative AI tool that lets users create videos from text prompts. The tool is powered by Sora, the video model that debuted in December from OpenAI, a key Microsoft partner. This is the first time that users can access Sora for free. It's available on mobile, and for vertical videos, at launch — availability on desktop and within Copilot Search in Bing are coming soon, according to Microsoft's blog post. Bing Video Creator lets users make ten 5-second videos, and then requires Microsoft Rewards points for additional creations. The company's blog post includes a video created from the prompt: 'In a busy Italian pizza restaurant, a small otter works as a chef and wears a chef's hat and an apron. He kneads the dough with his paws and is surrounded by other pizza ingredients.' Here's what Bing produced: We gave the same prompt to Google's new Veo 3 tool. Here's what it came up with: Personally, I'm taking a pizza from Google's AI otter. I don't love Microsoft's otter putting their feet on the cutting board and the kneading technique looks a little rough.