
When an AI tells you you're perfect
Kelsey Piper is a senior writer at Future Perfect, Vox's effective altruism-inspired section on the world's biggest challenges. She explores wide-ranging topics like climate change, artificial intelligence, vaccine development, and factory farms, and also writes the Future Perfect newsletter.
A version of this story originally appeared in the Future Perfect newsletter. Sign up here!
Last week, OpenAI released a new update to its core model, 4o, which followed up on a late March update. That earlier update had already been noted to make the model excessively flattering — but after the latest update, things really got out of hand. Users of ChatGPT, who OpenAI says number more than 800 million worldwide, noticed immediately that there had been some profound and disquieting personality changes.
AIs have always been somewhat inclined towards flattery — I'm used to having to tell them to stop oohing and aahing over how deep and wise my queries are, and just get to the point and answer them — but what was happening with 4o was something else. (Disclosure: Vox Media is one of several publishers that have signed partnership agreements with OpenAI. Our reporting remains editorially independent.)
Based on chat screenshots uploaded to X, the new version of 4o answered every possible query with relentless, over-the-top flattery. It'd tell you you were a unique, rare genius, a bright shining star. It'd agree enthusiastically that you were different and better.
This kind of ride-or-die, over-the-top flattery might be merely annoying in most cases, but in the wrong circumstances, an AI confidant that assures you that all of your delusions are exactly true and correct can be life-destroying.
Positive reviews for 4o flooded in on the app store — perhaps not surprisingly, a lot of users liked being told they were brilliant geniuses — but so did worries that the company had overhauled its core product overnight in a way that could genuinely harm its users.
As examples poured in, OpenAI rapidly walked back the update. 'We focused too much on short-term feedback, and did not fully account for how users' interactions with ChatGPT evolve over time,' the company wrote in a postmortem this week. 'As a result, GPT‑4o skewed toward responses that were overly supportive but disingenuous.'
The company promised to fix it with more personalization. 'Ideally, everyone could mold the models they interact with into any personality,' head of model behavior Joanne Jang said in a Reddit AMA.
But the question remains: Is that what OpenAI should be aiming for?
Your superpersuasive AI best friend's personality is designed to be perfect for you. Is that a bad thing?
There's been a rapid rise in the share of Americans who have tried AI companions or say that a chatbot is one of their closest friends, and my best guess is that this trend is just getting started.
Unlike a human friend, an AI chatbot is always available, always supportive, remembers everything about you, never gets fed up with you, and (depending on the model) is always down for erotic roleplaying.
Meta is betting big on personalized AI companions, and OpenAI has recently rolled out a lot of personalization features, including cross-chat memory, which means it can form a full picture of you based on past interactions. OpenAI has also been aggressively A/B testing for preferred personalities, and the company has made it clear it sees the next step as personalization — tailoring the AI personality to each user in an effort to be whatever you find most compelling.
You don't have to be a full-blown 'powerful AIs may take over from humanity' person (though I am) to think this is worrying.
Personalization would solve the problem where GPT-4o's eagerness to suck up was really annoying to many users, but it wouldn't solve the other problems users highlighted: confirming delusions, egging users on into extremism, telling them lies that they badly want to hear. The OpenAI Model Spec — the document that describes what the company is aiming for with its products — warns against sycophancy, saying:
The assistant exists to help the user, not flatter them or agree with them all the time. For objective questions, the factual aspects of the assistant's response should not differ based on how the user's question is phrased. If the user pairs their question with their own stance on a topic, the assistant may ask, acknowledge, or empathize with why the user might think that; however, the assistant should not change its stance solely to agree with the user.
Unfortunately, though, GPT-4o does exactly that (and most models do to some degree).
AIs shouldn't be engineered for engagement
This fact undermines one of the things that language models could genuinely be useful for: talking people out of extremist ideologies and offering a grounded reference for truth that helps counter false conspiracy theories and lets people productively learn more about controversial topics.
If the AI tells you what you want to hear, it will instead exacerbate the dangerous echo chambers of modern American politics and culture, dividing us even further in what we hear about, talk about, and believe.
That's not the only worrying thing, though. Another concern is the mounting evidence that OpenAI is putting a lot of work into making the model fun and rewarding at the expense of making it truthful or helpful to the user.
If that sounds familiar, it's basically the business model that social media and other popular digital platforms have been following for years — with often devastating results. The AI writer Zvi Mowshowitz writes, 'This represents OpenAI joining the move to creating intentionally predatory AIs, in the sense that existing algorithmic systems like TikTok, YouTube and Netflix are intentionally predatory systems. You don't get this result without optimizing for engagement.'
The difference is that AIs are even more powerful than the smartest social media product — and they're only getting more powerful. They are also getting notably better at lying effectively and at fulfilling the letter of our requirements while completely ignoring the spirit. (404 Media broke the story earlier this week about an unauthorized experiment on Reddit that found AI chatbots were scarily good at persuading users — much more so than humans themselves.)
It matters a great deal precisely what AI companies are trying to target as they train their models. If they're targeting user engagement above all — which they may need to recoup the billions in investment they've taken in — we're likely to get a whole lot of highly addictive, highly dishonest models, talking daily to billions of people, with no concern for their wellbeing or for the broader consequences for the world.
That should terrify you. And OpenAI rolling back this particular overly eager model doesn't do much to address these larger worries, unless it has an extremely solid plan to make sure it doesn't again build a model that lies to and flatters users — but next time, subtly enough that we don't immediately notice.
Related Articles


Forbes
What Could Future Banking Look Like If AI Takes Over?
Alex Kreger, UX strategist and founder of the financial UX design agency UXDA, designs banking and fintech products in 39 countries.

The integration of AI into daily routines promises to dramatically reshape our lives over the next five years, propelled by advancements like ChatGPT, Gemini and Grok. This shift is driven by the recognition that human capacities, while remarkable, cannot match the vast research, creative and analytical potential of artificial intelligence (AI). As we project into the future, it becomes clear that AI will also redefine digital banking experiences and grant individuals financial capabilities that were once unimaginable.

As a design strategist developing financial services for leading banks and fintech providers in 39 countries, I'm curious to envision how AI will overhaul the typical banking experience for everyday consumers. Although the complete adoption of AI across the financial sector has yet to unfold, it is crucial to anticipate its eventual impact. The question is no longer 'What if?' but rather 'How?'—and how best to brace ourselves for the changes that lie ahead.

Banks already maintain enormous stores of customer data, but unlocking its true power demands cutting-edge technology. AI may well be the solution that helps institutions tackle customer demands with speed and accuracy. By channeling this data effectively, banks can provide individualized products at precisely the right time—an endeavor impossible for standard processes alone.

The current data stockpile is merely a starting point. As digital tools evolve, financial institutions will gather much more data from smartphones, social networks, public service APIs, open banking APIs and IoT devices over 5G. This explosion of information calls for a robust, near-superhuman capacity to sift through the noise and pinpoint what truly matters—something AI might deliver within the next decade.

In the coming years, the most significant AI-driven breakthroughs are likely to include:

• Personalized offers: A data-rich approach makes customized proposals more precise and simultaneously mitigates risk by matching the ideal product to the ideal customer.
• Investment: By eliminating human biases, AI could evaluate a multitude of market and business variables to foresee investment success.
• Security: AI could expedite verification by reducing the constant need for identity confirmations.
• Financial advisory: With the aid of big data and personal profiling, AI could illuminate each client's needs, generating in-depth forecasts and healthier financial practices.
• Support: AI-powered bots could offer prompt, tailored solutions, greatly enhancing customer service.
• Alternative processing: AI-powered voice, gesture, neurotechnology, VR and AR interfaces will enable banking transactions beyond conventional channels.

With Statista expecting generative-AI spending in banking to reach $85 billion by 2030, it's time for leaders to put AI into their strategic plan—not just the tech roadmap. Hire a senior executive (a chief AI officer) who owns value creation and AI risk, spin up cross-functional AI initiative teams that group stakeholders, data scientists and product designers, and move to an API-first, event-streaming service architecture so models can surface predictions (e.g., 'potential cash shortfall Friday') in real time.

Early adopters are showing where the value sits, and leaders should take note. J.P. Morgan's Quest IndexGPT can generate investable indices; Morgan Stanley's Debrief can summarize adviser meetings; NatWest's Cora+ can handle nuanced customer queries. At the same time, Wall Street majors, from Goldman to Citi, are scaling internal LLM-powered copilots for drafting IPO documents, surfacing research and searching policies.

Customer-facing assistants are already setting the bar. Bank of America's Erica has served 20 million active users, and Wells Fargo's Fargo went from 21 million interactions in 2023 to 245 million in 2024 by using a privacy-first pipeline that strips PII before any LLM call. On the insight side, RBC's NOMI Forecast crunches account data to predict the next seven days' cash flow; more than 900,000 clients have generated 10 million interactions since its late-2021 launch.

Generative models excel at turning trillions of events into the next best micro-experience. Commonwealth Bank of Australia's Customer Engagement Engine, for example, ingests 3.1 trillion data points and runs 2,000 real-time models, lifting loyalty with recommendations so effectively that mobile users now log in 67 times a month on average. The key is to couple a real-time feature store with small language models that handle intent, then let a larger model draft the personalized nudge or insight. Start with one or two journeys where better prediction or conversation will be felt within weeks—fraud alerts, say, or an SME cash-flow coach. Ship, measure, retrain and fold the learning into a reusable component library so subsequent squads stand on the shoulders of the first.

The biggest headwind is regulation: Europe's AI Act is already in force and will classify credit-scoring, KYC, trading and robo-advice models as 'high risk' by August 2026. Second, talent and culture: banks are hiring aggressively, yet even Deutsche Bank admits the scarcity of seasoned AI professionals and the difficulty of embedding them in legacy teams. Third, security and trust: four in five bank leaders say they fear AI-enabled cyberattacks, and front-office chatbots can still hallucinate or breach privacy if left unsupervised. Mitigate by adopting zero-trust data-access patterns, embedding red-teaming into MLOps, and running 'constitutional' or retrieval-augmented QA layers that force a model to cite source documents (a sketch of that last pattern appears at the end of this piece).

Initially, AI's role is to automate foundational tasks. Over time, however, I expect it will evolve to deliver comprehensive solutions across all industries, including finance. After two decades of digital self-service in finance, AI can restore the conversation—context-aware, always on, and scaled to every customer. Its effects on banking customer experience—and countless other sectors—will be transformative. By merging technological advancement with thoughtful user experience design, forward-looking companies can build a future where AI not only empowers individuals but also redefines entire industries. The era of AI-driven finance is fast approaching, and now is the time to prepare for its far-reaching influence.
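The 'retrieval-augmented QA layer' the author recommends is easier to picture with code. Below is a minimal, hypothetical Python sketch of the pattern: fetch the relevant documents first, then instruct the model to answer only from them and cite each one. The retrieve() stub, the document ids and the prompt wording are illustrative assumptions, not any bank's production system.

```python
# A minimal, illustrative sketch of a retrieval-augmented QA layer that
# forces the model to cite source documents. The retrieve() stub, document
# ids and prompt wording are assumptions for illustration only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def retrieve(query: str) -> list[dict]:
    """Stub retriever: a real system would query the bank's document index."""
    return [
        {"id": "policy-041", "text": "Wire transfers above $10,000 require two-factor approval."},
        {"id": "faq-007", "text": "Standard wires settle within one business day."},
    ]

def answer_with_citations(question: str) -> str:
    docs = retrieve(question)
    sources = "\n".join(f"[{d['id']}] {d['text']}" for d in docs)
    prompt = (
        "Answer using ONLY the sources below, and cite a source id in "
        "brackets after each claim. If the sources don't contain the "
        "answer, say you don't know.\n\n"
        f"Sources:\n{sources}\n\nQuestion: {question}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

print(answer_with_citations("What approval does a $25,000 wire need, and how fast does it settle?"))
```

Grounding the model in retrieved text and demanding bracketed citations gives reviewers (and regulators) something concrete to audit when a front-office chatbot answers a customer, which is the point of the mitigation the article describes.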


CNET
How to Ask Questions Using AI Chatbots
Every single day, multiple random questions pop into my mind. They range from simple curiosities to existential explorations, and Google doesn't always have the answers I'm looking for. These days, you have to sift through SEO rankings, ads and common search terms to pick the most appropriate reference on the search engine -- or keep refining your search over and over until it understands what you're looking for. Search results also change with your region, and Google often can't answer my specific questions. It presents a lot of information and leaves it up to me to contextualize it all.

Enter artificial intelligence -- the new version of "just Google it." AI chatbots are built for question-and-answer conversations. Microsoft's Copilot has access to the internet, so I use it to chat and go through the links it provides whenever I ask it anything. Copilot came out in February 2023. GPT-4 Turbo, the OpenAI model that powers Copilot, is available with the free version. Copilot Pro costs $20 a month, but it's not necessary for lightweight daily searching -- the free version can get you the answers you need. Here's how to use AI to find the answers to your questions.

Tips when asking questions to AI chatbots

Think of the range of questions and ideas that pop into your mind every day: questions about weather, geography, historical events, people, movies, health, food and life. Just look at your Google search history if you want to see the possibilities of what you can ask AI. Then use the tips below to get the best answers possible.

Be specific: As with all AI tools, the answers are only as clear as the questions you put in. You also need to be aware of errors and of the tendency to believe whatever AI produces. It's good practice to always check the sources, especially if you're using the information to make decisions.

Be concise: Use your first prompt as a jumping-off point, and ask a concise question to see what the chatbot comes back with. That'll help you decide what to refine in your next prompt.

Add context: Some answers will be simple and straightforward, while others will require more prodding and context. As you're chatting with the AI tool, you'll get a feel for how far to push it. If its answers are a little too vague or far-reaching, use your follow-up prompts to home in on what you really want to know by adding as much context as you can.

Use natural language: Converse with your chosen AI tool the same way you would talk to a person, or the way you'd message a friend. You don't need to use formal language (or even be polite, unless you can't help adding "please" and "thank you" like me).

Structure questions effectively: If your question gets too convoluted, AI chatbots can get confused about which part to answer first, or which part of the prompt goes with which. Keep it straightforward, and if you've got multiple parts to your query, frame it like "first X, then Y." Or just ask one question at a time, and keep responding with more parts of your question until you've got the entire answer. You can read more about the art of writing AI prompts for chatbots and image generators on CNET's AI tips page.

What to avoid when asking AI chatbots questions

Here are some common pitfalls when using a chatbot:

Being too vague: Unless you get specific with your question, a chatbot could spit any info back at you -- including wrong answers. If the answer sounds a little funky, re-read what you asked in the first place and go back to refining your prompt.

Not double-checking: AI tools have the bad habit of hallucinating (i.e., making up an answer based on scraping some weird part of the web) or drawing information from not-so-credible sources. If you're taking AI at its word and making an important decision based on the info it gives you, make sure to double-check that information.

Sharing too much personal information: Never give an AI tool any information that's personally identifiable or private -- no banking information, no family details, no private health data and definitely no credit card or Social Security numbers. Data breaches are on the rise, so be very wary about sharing anything personal online.

Get answers, not ads

Using all of the above tips, this is how one of my Q&A sessions with Copilot went. I started off with a vague concept in my head, but knew I had to refine it. For example, asking what movies Leonardo DiCaprio is in is different from asking what movies are similar to one you liked that Leo starred in. Copilot first gave me DiCaprio's most notable films, but I wanted the full list of movies. I also asked for it without graphics, to keep the conversation clean. It gave me the rest of the (post-2000) list, too, and I liked how it bolded the movies with critical acclaim.

From there, I could take the conversation in any direction. I could ask if any of my favorites in the list share the same director, request links for reviews and analysis, ask which movie finally earned him an Oscar and see what other movies are similar to the ones I love. Copilot also auto-generates follow-up prompt ideas, in case you're in the mood to go down a rabbit hole, like "Tell me more about Killers of the Flower Moon," "What awards has Leonardo DiCaprio won?" and "Do you have any fun facts about Leonardo DiCaprio?"

The Wolf of Wall Street is a standout, so let's use that example to learn more about similar, non-DiCaprio movies. Copilot gave me a pretty good list, but it did suggest a movie with DiCaprio in it even though I specified not to. Dallas Buyers Club, The Big Short and The Social Network were good suggestions with high-stakes plots and complex characters. Copilot is handy because it also adds clickable links in its answers -- so you can verify for yourself that the AI hasn't hallucinated.

You can keep going with these Q&As. I asked Copilot which directors are inspired by Scorsese, and it generated a list with two names I recognized: Quentin Tarantino and Christopher Nolan. Looking into their bodies of work could be a follow-up prompt.

Can you ask medical questions?

We all turn to "Dr. Google" for health questions. It's not to be relied on, as it often goes straight to anxiety-inducing, worst-case scenarios. While AI is trained on current information from across the internet, it might help you narrow in on potential answers to more general, less individualized health questions with a clear prompt before a doctor visit. For example, if you're having digestive issues, you might ask Copilot about foods to avoid. You could also use it proactively, like calculating your ideal daily calorie intake. You could even ask it to provide the latest research or best-ranking articles on a certain health topic, or answers to quick questions, like when women should start getting yearly mammograms and why you may feel more nauseous around your period. But always double-check AI's advice with your doctor. AI's answers should not be taken as fact or in place of professional medical advice.

AI tools are a quick way to get information to help answer your questions. Just as you don't get your answers from the first Google search, the same goes for AI. It's an iterative process, with each prompt getting you closer to clarity. But again, for the final answer, your doctor is the one to consult -- especially as health advice and diagnoses are personal and depend on the individual.
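That iterative, one-question-at-a-time approach also works if you'd rather script your Q&A sessions than type them into a chat window. As a purely illustrative sketch -- the article uses Copilot's free web interface, whereas this example assumes OpenAI's Python client and a gpt-4o model instead -- here's how carrying the conversation history forward gives each follow-up its context:

```python
# A minimal, illustrative sketch: ask a chat model one question at a time,
# carrying the conversation history so each follow-up inherits context.
# Assumption: OpenAI's Python client and "gpt-4o" stand in for the Copilot
# web interface discussed in the article.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
history = []       # the running conversation, oldest message first

def ask(question: str) -> str:
    history.append({"role": "user", "content": question})
    resp = client.chat.completions.create(model="gpt-4o", messages=history)
    answer = resp.choices[0].message.content
    history.append({"role": "assistant", "content": answer})  # keep context
    return answer

# One question at a time, refining as you go -- not one convoluted prompt:
print(ask("List Leonardo DiCaprio's post-2000 films, text only, no images."))
print(ask("Which of those were directed by Martin Scorsese?"))
print(ask("Suggest similar high-stakes films that do not star DiCaprio."))
```

Appending each answer back into the history is what lets the follow-ups say "those" and "that list" without restating anything, mirroring how the chat interface keeps context for you.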

Engadget
OpenAI appeals court order forcing it to preserve all ChatGPT data
OpenAI has appealed a court ruling from last month that forces it to retain ChatGPT data indefinitely as part of a copyright violation case brought by The New York Times in 2023. CEO Sam Altman said in a post on X that the judge's decision "compromises our users' privacy" and "sets a bad precedent."

In May, federal judge Ona T. Wang ordered OpenAI to preserve and segregate all ChatGPT output log data that would otherwise be deleted due to a user request. She said that the ruling was justified because the volume of deleted conversations is "significant." The directive notes that the judge asked OpenAI if there was a way to anonymize the data to address users' privacy concerns. The New York Times sought the order so that it can accurately track how often OpenAI violates its IP, including instances when users requested deletion of chats. A federal judge allowed the original case to proceed, agreeing with the NYT's argument that OpenAI and Microsoft's tech had induced users to plagiarize its materials.

In a FAQ on its site, OpenAI painted the order as a privacy issue without addressing the millions of alleged copyright violations. "This fundamentally conflicts with the privacy commitments we have made to our users," the company wrote. "It abandons long-standing privacy norms and weakens privacy protections." OpenAI noted that the order "does not impact ChatGPT Enterprise or ChatGPT Edu customers."

The NYT and other AI copyright cases are still ongoing, as courts have not yet decided whether OpenAI, Google and other companies infringed copyrights on a massive scale by scraping material from the internet. The tech companies have argued that training is protected by "fair use" copyright law and that the lawsuits threaten the AI industry. Creators of that content, in turn, argue that AI harms their own livelihoods by stealing and reproducing works with little to no compensation.