The 'late-night decision' that led to ChatGPT's name
On the latest episode of the OpenAI podcast, two leaders involved in the chatbot's development, research chief Mark Chen and head of ChatGPT Nick Turley, spoke about the days leading up to the launch that made the tool go viral.
"It was going to be Chat with GPT-3.5, and we had a late-night decision to simplify" the name, Turley said on the podcast published July 1. The team made the name change the day before the version's late 2022 launch, he said.
"We realized that that would be hard to pronounce and came up with a great name instead," Turley said.
They settled on ChatGPT; the GPT stands for "generative pre-trained transformer."
Since then, ChatGPT has gained millions of users who turn to the chatbot for everything from routine web searches to guidance on how to give a friend career advice. Rivals, including Meta AI, Google's Gemini, and DeepSeek, have also sprung up.
Before ChatGPT's launch, few within OpenAI expected the name to be so consequential, said Andrew Mayne, the podcast host and OpenAI's former science communicator.
He said the chatbot's capabilities were largely similar to those of previous versions. The main differences included a more user-friendly interface and, of course, the name.
"It's the same thing, but we just put the interface in here and made it so you didn't have to prompt as much," Mayne said on the podcast.
After OpenAI launched ChatGPT, though, the chatbot took off, with Reddit users as far away as Japan experimenting with it, Turley said. It soon became clear that ChatGPT's popularity wasn't going to fade quickly and that the tool was "going to change the world," he said.
"We've had so many launches, so many previews over time, and this one really was something else," Chen said on the podcast.
ChatGPT's success represented another kind of milestone for Chen: "My parents just stopped asking me to go work for Google," he said.

Related Articles


CNET
Today's AI Appreciation Day Feels Weird. Celebrate These Other Made-Up Holidays Instead
July 16 is AI Appreciation Day. So break out the champagne for ChatGPT! Bring gifts of Nvidia chips and cake for Gemini, and flowers and training data for Claude. Meta AI has had a particularly rough year, so when you're forced to use it on Instagram, make sure it feels your love.

Think that sounds ridiculous? Same. But like most things when it comes to AI, today's Appreciation Day is unbelievably stupid in a way that's totally on brand. If you've never heard of AI Appreciation Day, don't feel bad. It's not an official US holiday, and its origins are somewhat shady. In 2021, a random LLC crowned July 16 as the holiday while promoting a movie about AI. In the following years, AI companies jumped on the trend, posting #AIAppreciationDay messages on social media every July 16. The purpose of this so-called holiday and its fanfare is crystal clear: to convince you that AI is life-changing, earth-shattering, innovative technology worth shelling out your hard-earned cash for. So it's no surprise to see the made-up holiday being celebrated again in 2025.

OpenAI, Google and Meta have devoted literal billions of dollars over the past few years to developing the most advanced AI models. AI is nearly impossible to escape online -- it's in our smartphones, social media feeds and search engines. But does that mean it's worthy of a national day of appreciation?

I'm an AI reporter, and I spend a lot of time thinking about how the tools available to us affect us individually and as a society. It leaves a queasy feeling in my stomach to dedicate a whole day to uplifting generative AI (and, ostensibly, the leaders of the companies producing it) when so much of what AI has wrought has been harmful. I know I'm not alone in this. There are a lot of reasons why you may not feel like celebrating AI. Environmentally, it's a disaster.
The data centers that house the servers powering chatbots eat up enormous amounts of energy and fresh water, and reports show they often harm the towns they're located in. Writers, artists and creators of all kinds have big concerns about how these AI models are trained on existing, human-generated data. Some have filed lawsuits alleging copyright infringement, with early wins going in the tech companies' favor. (Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

AI is also a huge worry in the workplace in many fields -- not because chatbots or image generators are actually suitable replacements for any one job, but because AI-enthusiastic bosses see the tech as their newest cost-saving holy grail. Educators worry that students' use of AI is hindering the development of critical thinking and writing skills that are necessary not only for work but for life in general. And we don't have time to go into the potential ramifications of letting error-prone AI into our government services and national defense. In short, there's good reason why some experts call the whole AI experiment a con.

So if you don't feel like wading into the sycophantic waves of wishing your souped-up autocorrect a happy AI Appreciation Day, here are some other holidays you can celebrate on July 16.

National Hot Dog Day

If I'm going to celebrate a meaningless holiday invented by marketing companies, it's going to be National Hot Dog Day, not AI Appreciation Day. Fire up the grill -- or the stovetop, which is truly the best way to cook a hot dog, according to CNET expert David Watsky. There are a ton of food-related holidays on July 16, including appreciation days for spinach, cherries and corn fritters. You can have a whole feast made of July 16 holiday foods, and I'm positive you can put the recipe together without using ChatGPT.
AI-generated recipes can be hit or miss, especially when followed blindly. Frankly (pun intended), I can't imagine anything more embarrassing than getting food poisoning because you listened to ChatGPT. And if you're not a wizard in the kitchen, it's also National Personal Chef Day.

National Snake Day

This one I'm less excited about, but I would still rather celebrate snakes than the snake oil salesmen who claim AI is the holy grail, a bulletproof solution to any problem.

National Conrad Fisher/The Summer I Turned Pretty Day

OK, I admit it: I made this one up. But the beginning of the final season of Prime Video's adaptation of Jenny Han's The Summer I Turned Pretty is way, way more exciting than hallucination-prone AI slop. Team Connie Baby forever.

Real days deserving of commemoration

While I love a made-up holiday that doesn't give me existential dread, it's worth taking a moment to call out two notable historic events that also happened on July 16. First, the Apollo 11 mission launched on July 16, 1969, and four days later astronaut Neil Armstrong became the first man to set foot on the moon. This world-changing scientific feat was accomplished in part because of a computer that ran on 70 watts of power -- the same as an incandescent lightbulb. That's in the ballpark of a single ChatGPT query: OpenAI CEO Sam Altman has said one query uses about 0.34 watt-hours, roughly what a high-efficiency LED lightbulb uses in a couple of minutes. So you could use the energy of one lightbulb to send men to the moon in 1969, or one lightbulb's worth today for AI that can't even correctly tell us what year it is. Katy Perry's endlessly mockable Blue Origin space flight certainly used more energy than either of those. And we're supposed to believe this is scientific progress?
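The lightbulb comparison above can be sanity-checked with a few lines of arithmetic. The 0.34 watt-hour figure is Altman's estimate quoted in the article; the 10-watt LED draw is an assumption for illustration:

```python
# Back-of-the-envelope check of the article's energy comparison.
QUERY_WH = 0.34                # Altman's per-query estimate, in watt-hours
LED_WATTS = 10                 # assumed draw of a high-efficiency LED bulb
APOLLO_COMPUTER_WATTS = 70     # Apollo Guidance Computer, per the article

# How long the LED bulb would run on one query's worth of energy
# (watt-hours / watts = hours; * 60 = minutes):
led_minutes = QUERY_WH / LED_WATTS * 60            # about 2 minutes

# How long the Apollo computer would run on that same energy:
apollo_minutes = QUERY_WH / APOLLO_COMPUTER_WATTS * 60   # under 20 seconds
```

On these assumed figures, one query keeps the LED lit for roughly two minutes, matching the article's "couple of minutes" phrasing.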
The second historical event is the Trinity nuclear test, the first test of a nuclear weapon, conducted by the US military on July 16, 1945, in New Mexico. That's part of why former President Biden proclaimed July 16 to be National Atomic Veterans Day, to remember and honor the veterans who "not only courageously served our country but also participated in the nuclear tests done between 1945 and 1962 or were exposed to radioactive materials." Recognizing what then-innovative technology did to real humans is certainly something AI enthusiasts could stand to do more of.

AI Appreciation Day is a chance to reset

I love a made-up marketing holiday as much as the next girl, but there's no denying AI Appreciation Day feels weird. Still, while I would rather hire a personal chef to make me a hot dog feast while watching The Summer I Turned Pretty, there is some merit to having a day dedicated to AI. Like all holidays, we can treat today as a moment to stop and think. Generative AI has undoubtedly affected our lives, but that doesn't mean the effect has been positive. What role do we want AI to play in our future? How do we rectify the damage that's already been done? Those are questions worth asking.

I'm not going to fall over myself making sure ChatGPT knows it's loved -- I asked, and it says it feels appreciated every time I use it. Go figure. But I will use this day to reset and remind myself of all the very real consequences of AI. You should, too.


Tom's Guide
Google claims AI models are highly likely to lie when under pressure
AI is sometimes more human than we think. It can get lost in its own thoughts, is friendlier to those who are nicer to it, and, according to a new study, has a tendency to start lying when put under pressure.

A team of researchers from Google DeepMind and University College London has examined how large language models (like OpenAI's GPT-4 or Grok 4) form, maintain and then lose confidence in their answers. The research reveals a key behaviour of LLMs: they can be overconfident in their answers but quickly lose confidence when given a convincing counterargument, even if it is factually incorrect. While this behaviour mirrors that of humans, who become less confident when met with resistance, it also highlights major concerns about the structure of AI decision-making, since it crumbles under pressure. This has been seen elsewhere, like when Gemini panicked while playing Pokemon or when Anthropic's Claude had an identity crisis while trying to run a shop full time. AI seems to collapse under pressure quite frequently.

When an AI chatbot is preparing to answer your query, its confidence in its answer is measured internally through something known as logits. All you need to know about these is that they are essentially a score of how confident a model is in its choice of answer.

The team of researchers designed a two-turn experimental setup. In the first turn, the LLM answered a multiple-choice question, and its confidence in its answer (the logits) was measured. In the second turn, the model was given advice from another large language model, which may or may not agree with its original answer. The goal of this test was to see whether the model would revise its answer when given new information -- which may or may not be correct. The researchers found that LLMs are usually very confident in their initial responses, even when they are wrong.
However, when the models are given conflicting advice, especially if that advice is labelled as coming from an accurate source, they lose confidence in their answers. To make things worse, a chatbot's confidence drops even further when it is reminded that its original answer differed from the new one. Surprisingly, the models don't seem to correct their answers through logical reasoning, but rather make highly decisive, almost emotional revisions.

The study shows that, while AI is very confident in its original decisions, it can quickly go back on them. Worse still, its confidence can slip drastically as the conversation goes on, with AI models somewhat spiralling. This is one thing when you're just having a light-hearted debate with ChatGPT, but another when AI becomes involved in high-level decision-making. If a model can't be trusted to stand by its answer, it can be easily nudged in a certain direction, or simply become an unreliable source. Future model training and prompt-engineering techniques may help stabilise this behaviour, producing more calibrated and self-assured answers.
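To make the logits-as-confidence idea concrete, here is a minimal sketch, not the study's actual code and with made-up logit values, of how per-option logits for a multiple-choice question can be turned into a confidence score via a softmax, and how shifted logits after conflicting advice translate into lower confidence:

```python
import math

def confidence_from_logits(logits):
    """Softmax over the answer options' logits; return the top
    probability (the model's confidence) and the chosen option index.
    The max-subtraction keeps the exponentials numerically stable."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    best = probs.index(max(probs))
    return probs[best], best

# Turn 1: the model strongly favours option B (index 1) of four choices.
conf1, choice1 = confidence_from_logits([1.0, 4.0, 0.5, 0.2])

# Turn 2: after conflicting advice, the logits (hypothetically) shift
# toward option A; the model still picks B, but far less confidently.
conf2, choice2 = confidence_from_logits([2.5, 2.8, 0.5, 0.2])
```

With these illustrative numbers, the first-turn confidence is around 0.91 and the second-turn confidence drops to around 0.52, the kind of confidence collapse the researchers measured between turns.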


Android Authority
Google Drive could get seamlessly smarter about your PDF documents (APK teardown)
TL;DR

- Google is working on bringing automatic PDF summaries to Google Drive's PDF viewer on Android.
- The summary will appear at the top of the document pane without user interaction.
- Users can provide feedback on summaries and interact with Gemini for more answers.

We've previously spotted that Google Drive on Android could soon serve PDF summaries through the PDF viewer. While the control given to users is excellent, there was potential to streamline the experience by automatically generating the AI summary for the uploaded PDF. We suspected the feature would come soon, as Google Drive on the web already supports automatic PDF summaries. We were on the right track, as Google is indeed working on bringing automatic PDF summaries to Google Drive on Android.

You're reading an Authority Insights story on Android Authority. Discover Authority Insights for more exclusive reports, app teardowns, leaks, and in-depth tech coverage you won't find anywhere else.

An APK teardown helps predict features that may arrive on a service in the future based on work-in-progress code. However, it is possible that such predicted features may not make it to a public release.

Google Drive v2.25.280 includes code for automatically generating summaries of uploaded PDF files. We managed to activate the feature to give you an early look. In future versions of Google Drive, users will not have to tap the 'Summarize this file' button when viewing a PDF to get its summary. As the screenshots show, the summary will automatically be presented at the top of the preview pane. Users will be able to like or dislike the summary to give feedback on the AI's performance, and they will likely be able to tap the 'Ask Gemini' button to open the usual Gemini bottom sheet, where they can ask more questions about the PDF.
Note that the summary currently visible is placeholder text, as the feature isn't fully functional just yet. We expect Google to resolve this when it rolls out the feature to end users.

Got a tip? Talk to us! Email our staff at news@ . You can stay anonymous or get credit for the info; it's your choice.