Latest news with #ChatGPT-5


Tom's Guide
a day ago
- Business
- Tom's Guide
Sam Altman's trillion-dollar AI vision starts with 100 million GPUs. Here's what that means for the future of ChatGPT (and you)
OpenAI CEO Sam Altman has a bold vision for the future of AI, one other big tech companies can't compete with: AI powered by 100 million GPUs. That jaw-dropping number, casually mentioned on X just days after ChatGPT Agent launched and as we await ChatGPT-5, is a glimpse into the scale of AI infrastructure that could transform everything from the speed of your chatbot to the stability of the global energy grid. Altman admitted the 100 million GPU goal might be a bit of a stretch (he punctuated the comment with 'lol'), but make no mistake: OpenAI is already on track to surpass 1 million GPUs by the end of 2025. And the implications are enormous.

'we will cross well over 1 million GPUs brought online by the end of this year! very proud of the team but now they better get to work figuring out how to 100x that lol,' he posted on July 20, 2025.

What does 100 million GPUs even mean?

For those unfamiliar, I'll start by explaining the GPU, or graphics processing unit. This is a specialized chip originally designed to render images and video. But in the world of AI, GPUs have become the powerhouse behind large language models (LLMs) like ChatGPT. Unlike CPUs (central processing units), which handle one task at a time very efficiently, GPUs are built to perform thousands of simple calculations simultaneously. That parallel processing ability makes them perfect for training and running AI models, which rely on massive amounts of data and mathematical operations. So, when OpenAI says it's using over a million GPUs, it's essentially saying it has a vast digital brain made up of high-performance processors working together to generate text, analyze images, simulate voices and much more. To put it into perspective, 1 million GPUs already require enough energy to power a small city. Scaling that to 100 million could demand more than 75 gigawatts of power, around three-quarters of the capacity of the entire UK power grid.
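The 75-gigawatt figure is easy to sanity-check. Here is a minimal back-of-envelope sketch, assuming roughly 750 watts per data-center GPU; the per-GPU wattage is my assumption (ballpark for a modern accelerator), not a number from OpenAI or the article.

```python
# Back-of-envelope check on scaling to 100 million GPUs.
# The ~750 W per-GPU draw is an assumed figure for a modern
# data-center accelerator, not an official number.
gpus = 100_000_000
watts_per_gpu = 750  # assumed draw per accelerator, in watts

total_watts = gpus * watts_per_gpu
total_gigawatts = total_watts / 1e9

print(f"{total_gigawatts:.0f} GW")  # 75 GW, in line with the article's estimate
```

Under that assumption the arithmetic lands exactly on the 75 GW the article cites, which suggests the estimate treats each GPU as drawing somewhere in that range before cooling and networking overhead.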
It would also cost an estimated $3 trillion in hardware alone, not counting maintenance, cooling and data center expansion. This level of infrastructure would dwarf the current capacity of tech giants like Google, Amazon and Microsoft, and would likely reshape chip supply chains and energy markets in the process.

Why does it matter to you?

While a trillion-dollar silicon empire might sound like insider industry information, it has very real consequences for consumers. OpenAI's aggressive scaling could unlock:
- Faster response times in ChatGPT and future assistants
- More powerful AI agents that can complete complex, multi-step tasks
- Smarter voice assistants with richer, real-time conversations
- The ability to run larger models with deeper reasoning, creativity and memory

In short, the more GPUs OpenAI adds, the more capable ChatGPT (and similar tools) can become. But there's a tradeoff: all this compute comes at a cost. Subscription prices could rise. Feature rollouts may stall if GPU supply can't keep pace. And environmental concerns around energy use and emissions will only grow louder.

The race for silicon dominance

Altman's tweets arrive amid growing competition between OpenAI and rivals like Google DeepMind, Meta and Anthropic. All are vying for dominance in AI model performance, and all rely heavily on access to high-performance GPUs, mostly from Nvidia. OpenAI is reportedly exploring alternatives, including Google's TPUs, Oracle's cloud and potentially even custom chips. More than speed, this growth is about independence, control and the ability to scale models that could one day rival human reasoning.
Looking ahead at what's next

Whether OpenAI actually hits 100 million GPUs or not, it's clear the AI arms race is accelerating. For everyday users, that means smarter AI tools are on the horizon, but so are bigger questions about power, privacy, cost and sustainability. So the next time ChatGPT completes a task in seconds or holds a surprisingly humanlike conversation, remember: somewhere behind the scenes, thousands (maybe millions) of GPUs are firing up to make that possible, and Sam Altman is already thinking about multiplying that by 100.


Time of India
5 days ago
- Business
- Time of India
ChatGPT's new 'agent' tool can be tricked by bad actors: OpenAI CEO Sam Altman cautions: 'Cutting-edge' but 'experimental'
In the fast-unfolding world of artificial intelligence, OpenAI's latest innovation, the ChatGPT Agent, promises to redefine how humans collaborate with machines. But as CEO Sam Altman put it in his candid new post on X (formerly Twitter), this powerful assistant is as much a peek into the future as it is a reminder to tread carefully. Described as a leap forward in AI utility, the ChatGPT Agent is more than your average chatbot. It can manage complex, multi-step tasks using its own virtual computer, functioning almost like a digital executive assistant. Want to book travel, buy a wedding outfit and select a gift for a friend, all without switching tabs? Agent can handle that. Want a report prepared based on your data and transformed into a presentation? It can do that too. 'It can think for a long time, use some tools, think some more, take some actions, think some more,' Altman explained, emphasizing the tool's advanced reasoning abilities and continuous decision-making. It's a blend of Deep Research and OpenAI's Operator models, but dialed up to full strength.

Altman's Clear Warning: 'Treat It as Experimental'

But despite the allure, Altman is openly cautious about how users should approach the Agent. In his words: 'I would explain this to my own family as cutting edge and experimental… not something I'd yet use for high-stakes uses or with a lot of personal information.'
His tone is both enthusiastic and sober, encouraging users to try the tool, but with heavy warnings. Altman's honesty isn't new. He's previously called out ChatGPT's own shortcomings, from hallucinations to sycophantic responses. With Agent, he takes that transparency a step further. While OpenAI has built more robust safeguards than ever, ranging from enhanced training to user-level controls, he admits that they 'can't anticipate everything.'

What Could Go Wrong?

Agent's ability to carry out tasks autonomously means it can also make decisions that come with real-world consequences, especially if given too much access. For instance, Altman suggests that giving Agent access to your email and instructing it to 'take care of things' without follow-up questions could end poorly. It might click on phishing links or fall for scams a human would recognize instantly. He recommends granting Agent only the minimum access needed. Want it to book a group dinner? Give it access to your calendar. Want it to order clothes? No access is needed. The key is intentional use. The risk isn't just technical; it's societal. 'Society, the technology, and the risk mitigation strategy will need to co-evolve,' Altman noted in his post. It's a rare moment of foresight in a space too often dominated by hype.


Tom's Guide
15-07-2025
- Business
- Tom's Guide
Unless ChatGPT-5 gets these upgrades, I'm sticking with Claude — here's why
OpenAI is gearing up to launch ChatGPT-5, its most ambitious model yet. Rumored to feature a massive context window, enhanced reasoning, agent-like autonomy and full multimodal capability, it could mark a turning point in AI development. But as someone who tests and uses AI tools daily, I've already found my rhythm with Claude 4 Sonnet and Claude 4 Opus. And unless GPT-5 brings some serious upgrades to the table, I'm not switching anytime soon. Here's why Claude still feels like the better AI assistant, and what GPT-5 needs to do to win me over.

Claude's strength lies in its ability to understand and retain nuance over long conversations. Whether I'm analyzing lengthy PDFs, asking it to summarize meeting transcripts or writing a multi-layered piece, Claude rarely loses track of the thread. Its 200,000-token context window (in Opus) means I can give it dozens of pages of material without sacrificing accuracy or tone. What GPT-5 needs: an ultra-large context window (rumored to be 200K+) and better persistent memory across chats, especially for research, planning and writing tasks.

Claude recently launched its Connectors Directory, which allows it to pull data from and take actions inside apps like Google Drive, Slack, Notion, Canva and more. I've used it to summarize documents, autofill brand templates and even build full Canva presentations from a single prompt. It's seamless, intuitive and incredibly useful. What GPT-5 needs: built-in integrations that work directly inside the chat interface, not clunky third-party plug-ins or separate browser extensions.

While ChatGPT-4o brought real-time voice and emotion to the table, Claude consistently delivers more polished, human-sounding text. It's fantastic at tone matching, empathetic phrasing and sounding like a capable assistant rather than a chatbot. It just gets how I want things written, whether it's a memo, email or blog post.
When I give Claude a draft, I can trust it to edit my work without losing the creative voice or tone. What GPT-5 needs: sharper tone control and better emotional intelligence, especially for professional and creative writing tasks.

Claude has a habit of hedging when it's not confident, which, surprisingly, makes it more trustworthy. In my experience it avoids hallucinations better than GPT-4o and is more likely to cite sources when it can. I've had fewer instances of incorrect or outdated information compared to other models. What GPT-5 needs: real-time search integration with source transparency and more honest handling of uncertainty.

Claude may not be fully autonomous yet, but it still behaves like a quiet, capable teammate. From resizing Canva graphics to summarizing my inbox, it's already taking real-world actions based on context. That's the direction AI is heading, and Claude is already there. What GPT-5 needs: agentic capabilities that go beyond text generation, with smart task handling, context awareness and proactive support.

ChatGPT-5 is said to be coming soon, but we still don't know when the new model will be released. I'm excited for OpenAI's new model, just as I am with every new and improved AI. A smarter, more capable ChatGPT would push the whole field forward. But unless it delivers meaningful upgrades in context, integrations, voice and action, Claude will remain my AI of choice for getting real work done.
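To put the 200,000-token context window mentioned above in perspective, here is a rough conversion into pages. Both ratios are common rules of thumb, not Anthropic's figures: about 0.75 words per token and about 500 words per single-spaced page.

```python
# Rough conversion of a 200K-token context window into pages.
# 0.75 words/token and 500 words/page are rule-of-thumb ratios,
# not official numbers from any model vendor.
tokens = 200_000
words = tokens * 0.75   # ~150,000 words
pages = words / 500     # ~300 single-spaced pages

print(round(pages))  # 300
```

So "dozens of pages" fits comfortably inside that window, with plenty of headroom to spare.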


Tom's Guide
01-07-2025
- Business
- Tom's Guide
Meta's new 'Superintelligence' team could upend the entire AI industry — here's why OpenAI should be worried
Mark Zuckerberg is no longer content playing catch-up in the AI space, especially against Meta's biggest rival, OpenAI, the maker of ChatGPT. The proof is in his recent hiring spree, which has poached top researchers from OpenAI, Google DeepMind and Anthropic (maker of Claude) to form Meta's new "Superintelligence" team. In an internal memo first reported by Wired, Zuckerberg welcomed more than a dozen elite AI scientists into Meta's newly branded Meta Superintelligence Labs (MSL). The move signals a bold shift: Meta is going after artificial general intelligence (AGI), and it's doing it with financial force.

Among Meta's new recruits are multiple former OpenAI researchers, including Jiahui Yu, Shuchao Bi, Shengjia Zhao and Hongyu Ren. They're joined by several big names from Google and DeepMind, such as Lucas Beyer, Alexander Kolesnikov and Xiaohua Zhai, all known for their work on high-performing multimodal models and model alignment. Zuckerberg is assembling full research groups and giving them the infrastructure (and budget) to go big. According to multiple reports, some of the hires were lured by seven- to nine-figure pay packages and direct pitches from Zuckerberg himself.

Meta also tapped Nat Friedman, former GitHub CEO, and Daniel Gross, an AI-focused investor, to co-lead the applied AI arm of MSL. The mix of pure research firepower and product-ready AI talent is the balance Meta will need if it wants to scale cutting-edge models into tools consumers actually want to use (like ChatGPT has proven to be). Until now, Meta has largely stayed in the background of the AI arms race, focusing on open-source LLMs like Llama while OpenAI and Anthropic dominated the spotlight with ChatGPT and Claude. But with this high-profile hiring spree, Meta is making one thing clear: it wants to lead AI development, not linger in the shadows.
This escalation has several major implications. Losing talent to a direct competitor hurts, and OpenAI reportedly isn't happy about it. After several team members jumped to Meta, OpenAI's chief research officer, Mark Chen, described the exodus as feeling like 'someone broke into our home.' With multiple researchers leaving in a short window, including from OpenAI's Zurich office, it's clear that Meta's offers are not only lucrative but also strategically timed and targeted. What does this mean for the launch of ChatGPT-5? We don't know exactly, but my guess is that the much-anticipated model could be delayed by the loss of so much of OpenAI's top talent.

Artificial general intelligence (AGI) is no longer a distant goal for Meta. Zuckerberg now publicly says Meta is working toward it, and with this new team, it's building the talent to match. Meta is investing in long-context reasoning, multimodal learning, alignment research and inference optimization, the very same pillars that OpenAI and DeepMind prioritize. Meta also has something most companies don't: access to billions of users and massive compute infrastructure. Pairing world-class AI talent with Meta's scale, plus its reach across Facebook, Instagram, WhatsApp and Ray-Ban smart glasses, could rapidly close the gap.

Beyond a strikingly large hiring spree, Zuckerberg's move is a signal that Meta wants to win the AI race. With top-tier researchers, aggressive investment and an infrastructure built for global rollout, Zuckerberg is making Meta a serious contender in the race for AI dominance. Whether this results in smarter chatbots, better wearable AI or the first real steps toward AGI, one thing is clear: the balance of power in AI is shifting, almost as fast as AI itself is evolving.
Yahoo
11-02-2025
- Science
- Yahoo
Sam Altman thinks GPT-5 will be smarter than him — but what does that mean?
Sam Altman took part in a panel discussion at Technische Universität Berlin last week, where he predicted that GPT-5 would be smarter than him, or more accurately, that he wouldn't be smarter than GPT-5. He also did a bit with the audience, asking who considered themselves smarter than GPT-4, and who thinks they will also be smarter than GPT-5.

'I don't think I'm going to be smarter than GPT-5. And I don't feel sad about it because I think it just means that we'll be able to use it to do incredible things. And you know like we want more science to get done. We want more, we want to enable researchers to do things they couldn't do before. This is the history of, this is like the long history of humanity.'

The whole thing seemed rather rehearsed, especially since he forced it into a response to a fairly unrelated question. The host asked about his expectations when partnering with research organizations, and he replied, 'Uh… There are many reasons I am excited about AI. …The single thing I'm most excited about is what this is going to do for scientific discovery.' He didn't answer the host's question at any point during his reply, and he didn't give any details or explanation for his comment.

What does it mean for GPT-5 to be smarter than Sam Altman? Does it mean GPT-5 will be trained on data covering in-depth knowledge of more subjects than Altman has experience with? That's probably already true of GPT-4, but people don't describe it as smart because it's so bad at following instructions, retaining context and revising its responses. So, can we expect GPT-5 to improve in this area? It shouldn't be impossible; my experience with DeepSeek, for example, has been much more positive here. If I ask for no more than 100 words, two bullet-point lists and information taken from a certain link, it actually delivers. Then, when I ask it to add an extra section summarizing an additional webpage I provide, I get what I asked for.
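Constraints like these are easy to verify mechanically, which is part of why instruction-following feels so measurable. Here is a minimal sketch of how a reply could be checked against a "no more than 100 words, at least two bullet-point lists" instruction; the word-splitting and bullet conventions are my own simplifications, not any model vendor's spec.

```python
# Check whether a model reply honors simple formatting instructions:
# a word cap and a minimum number of bullet-point lists.
def follows_instructions(reply: str, max_words: int = 100, lists_required: int = 2) -> bool:
    words = len(reply.split())
    # Count each run of consecutive bullet lines as one list.
    lists = 0
    in_list = False
    for line in reply.splitlines():
        bullet = line.lstrip().startswith(("-", "*", "•"))
        if bullet and not in_list:
            lists += 1
        in_list = bullet
    return words <= max_words and lists >= lists_required

reply = "Summary:\n- point one\n- point two\n\nDetails:\n- fact A\n- fact B"
print(follows_instructions(reply))  # True
```

A checker this crude obviously misses a lot (numbered lists, nested bullets), but it captures the point: these are objective pass/fail criteria, unlike the vaguer "smarter than me" framing.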
I've never been able to achieve this kind of smooth, accurate operation with GPT-4, and I'm not even asking for anything complicated. These are the kinds of things I consider when assessing how 'smart' an AI model is, but it's impossible to know what criteria Altman judges by. He keeps talking about science and research (he even mentioned curing cancer at one point), but it's hard to see how ChatGPT fits into such things. I can see how artificial intelligence as a whole might contribute, but an LLM? The official site for ChatGPT describes it as a brainstorming partner, a meeting summarizer, a code generator and a way to search the web. Which of these features will meaningfully help a research scientist dealing with questions no human has answered yet? If Altman has thoughts or answers on these topics, he isn't sharing them. He just sticks to sweeping statements that only sound impressive until you realize you have no idea what he actually means in practical terms.