
Sam Altman responds to GPT-5 backlash with speed modes, expanded limits and model picker updates — here's what's new
You can now choose from Auto, Fast and Thinking modes when using GPT-5. Auto will likely be the go-to option for most users, automatically balancing speed and depth, while Fast prioritizes quicker responses and Thinking spends more time on reasoning-intensive tasks.
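For developers, a similar speed-versus-depth tradeoff is exposed through the API's reasoning-effort setting rather than a picker. The snippet below is a minimal sketch, not an official example: it assumes a current openai Python SDK, that the gpt-5 model name is available to your account, and that the chat completions endpoint accepts the reasoning_effort parameter.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Assumption: "gpt-5" is available to this account and the SDK accepts
# the reasoning_effort parameter with "low" / "high" values.
quick = client.chat.completions.create(
    model="gpt-5",
    reasoning_effort="low",   # roughly analogous to Fast in ChatGPT
    messages=[{"role": "user", "content": "Summarize this paragraph in one line."}],
)

deep = client.chat.completions.create(
    model="gpt-5",
    reasoning_effort="high",  # roughly analogous to Thinking
    messages=[{"role": "user", "content": "Walk through the tradeoffs step by step."}],
)

print(quick.choices[0].message.content)
print(deep.choices[0].message.content)
```

The same prompt can be sent at different effort levels, trading latency and cost for deeper reasoning, which mirrors what the ChatGPT picker now does for end users.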
OpenAI is also increasing the weekly cap for GPT-5 Thinking to 3,000 messages, with extra capacity available via GPT-5 Thinking mini after hitting that threshold. The Thinking model's context limit is now a hefty 196,000 tokens, meaning it can remember and process much longer conversations or documents in one go. Altman noted these limits may be adjusted over time depending on usage.
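To get a feel for how much fits in that window, you can count tokens locally before pasting a long document into the model. The sketch below uses the open-source tiktoken library; treating its o200k_base encoding (used by recent OpenAI models) as a stand-in for GPT-5's tokenizer is an assumption, and long_report.txt is just a placeholder file name.

```python
import tiktoken

# Assumption: GPT-5 tokenizes similarly to the o200k_base encoding
# used by recent OpenAI models; exact counts may differ slightly.
enc = tiktoken.get_encoding("o200k_base")

CONTEXT_LIMIT = 196_000  # reported GPT-5 Thinking context window

with open("long_report.txt", encoding="utf-8") as f:
    text = f.read()

tokens = len(enc.encode(text))
print(f"{tokens:,} tokens "
      f"({tokens / CONTEXT_LIMIT:.0%} of the reported 196k window)")
```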
Following user backlash over GPT-4o's removal, the model is now back in the picker for all paid accounts by default. A new 'Show additional models' toggle in ChatGPT's web settings lets subscribers access other options such as o3, GPT-4.1 and GPT-5 Thinking mini.
GPT-4.5 remains exclusive to Pro users due to its high GPU costs.
Altman also revealed that GPT-5's personality will soon get an update to feel 'warmer' without becoming as polarizing as GPT-4o's tone. Longer term, OpenAI plans to offer more per-user customization so people can tailor the AI's style to their own preferences.
These updates give ChatGPT users more choice over speed, depth and personality, marking a quick response to recent user feedback.
With model variety returning and customization on the horizon, OpenAI seems intent on keeping its power users happy while broadening appeal.