
OpenAI to unveil GPT-5 in August: Here's how it is different
The upcoming AI model has been in the making for a long time, and its launch has been delayed multiple times. In February this year, the company introduced GPT-4.5 'Orion' to ChatGPT Pro users. While OpenAI has launched a number of AI models since then, the release of GPT-5 has been repeatedly pushed back owing to safety tests and refinements.
On July 19, when OpenAI's experimental reasoning LLM achieved gold medal-level performance at the International Math Olympiad (IMO), CEO Sam Altman took to his X account to share the news. While lauding the company's efforts, he also said that GPT-5 would be launching soon. Altman described the IMO system as an experimental model incorporating new research techniques that the company will use in its future models. 'We think you will love GPT-5, but we don't plan to release a model with IMO gold-level capability for many months,' he wrote.
we achieved gold medal level performance on the 2025 IMO competition with a general-purpose reasoning system! to emphasize, this is an LLM doing math and not a specific formal math system; it is part of our main push towards general intelligence.
when we first started openai,… https://t.co/X46rspI4l6
— Sam Altman (@sama) July 19, 2025
Murmurs around GPT-5 grew earlier this month after 'GPT-5 Reasoning Alpha' appeared in a configuration file, as shared on X by engineer Tibor Blaho. While Altman had earlier said that GPT-5 would arrive in Summer 2025, newer reports now suggest the model will land next month.
When it comes to features, GPT-5 is expected to be a significant leap over its predecessors, reportedly offering unified multimodal capabilities that integrate text, audio, image, and video processing into a single architecture. The model is also reported to feature advanced reasoning, with a hybrid architecture for stronger logical reasoning and problem-solving. The context window is expected to extend beyond one million tokens, up from GPT-4's 128,000 tokens. Other expected features include native Sora integration and advanced Canvas tools. GPT-5 is also expected to reduce hallucinations substantially, reportedly by implementing a chain-of-thought (CoT) mechanism.
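To illustrate the chain-of-thought idea the reports refer to, here is a minimal sketch in plain Python. It only builds prompt strings (no model API is called), and the prompt wording is entirely hypothetical; the point is the contrast between asking for an answer directly and asking the model to reason step by step first:

```python
# Minimal illustration of chain-of-thought (CoT) prompting.
# The prompt text is hypothetical; a real system would send these
# strings to a model API rather than just construct them.

def direct_prompt(question: str) -> str:
    """Ask for the answer directly, with no intermediate reasoning."""
    return f"Q: {question}\nA:"

def cot_prompt(question: str) -> str:
    """Ask the model to reason step by step before answering,
    which is the core of the chain-of-thought technique."""
    return (
        f"Q: {question}\n"
        "Think through the problem step by step, then state the "
        "final answer on a new line prefixed with 'Answer:'."
    )

q = "A train leaves at 9:40 and arrives at 11:05. How long is the trip?"
print(direct_prompt(q))
print(cot_prompt(q))
```

The intuition, per the reporting, is that forcing intermediate reasoning steps gives the model (and a human reviewer) a visible trail to check, which is how CoT is said to help reduce hallucinations.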
Related Articles


Mint | 8 minutes ago
SoftBank Swings to Profit on Nvidia Bet Ahead of Big AI Campaign
(Bloomberg) -- SoftBank Group Corp. swung to a profit in the June quarter, bolstered by gains in its holdings including Nvidia Corp. and Coupang Inc., in a boost for founder Masayoshi Son's planned bets on artificial intelligence technologies. The Tokyo-based company reported net income of ¥421.82 billion ($2.9 billion) in its fiscal first quarter, versus the average of analyst estimates compiled by Bloomberg of ¥158.23 billion. The Vision Fund logged a ¥451.39 billion profit.

Son is doubling down on bets geared to help him capitalize on booming investment in AI hardware. As part of that shift in focus, SoftBank has been building stakes in Nvidia and Taiwan Semiconductor Manufacturing Co., among others, while selling off less relevant assets. SoftBank increased its stake in Nvidia to more than $3 billion as of end-March, helping the Japanese investor benefit from the AI accelerator maker's 46% rally during the three months through June.

US President Donald Trump's threat to unleash 100% chip tariffs but exempt companies moving production to America is infusing optimism for SoftBank's $500 billion Stargate data center foray with OpenAI and Oracle. That nudged the Japanese company's stock up around 1.5% on Thursday, putting it on track to pass a record high it hit the day before.

'Our longer-term outlook for SoftBank Group is cautiously optimistic, with a consensus toward continued business expansion,' said Ashwin Binwani, founder of Alpha Binwani Capital. 'We are prepared for volatility and see it as a buy-the-dip opportunity.'

The 67-year-old SoftBank founder seeks to play a more central role in the spread of AI through sweeping partnerships such as Stargate and a planned $30 billion investment in OpenAI. Son is also courting TSMC and others about taking part in a $1 trillion AI manufacturing hub in Arizona.
But concern over whether SoftBank can manage multiple mass-scale funding needs as interest rates inch up is keeping its stock at a significant discount to the total net asset value of its holdings. Some of the conversations behind Stargate have slowed due to market volatility, uncertainty around US trade policy and questions around the financial valuations of AI hardware, Bloomberg News reported in May.

'Key points to consider in assessing SoftBank include whether investment in the Stargate Project involving AI infrastructure in the US will progress; whether additional investment in OpenAI amid a fluid management situation is tenable,' SMBC Nikko Securities analyst Satoru Kikuchi wrote in a note earlier this year. --With assistance from Aya Wagatsuma.

The Hindu | 8 minutes ago
Google's Gemini AI gets ‘Guided Learning' mode similar to ChatGPT's ‘Study Mode'
Google on Wednesday (August 6, 2025) announced that it was launching a 'Guided Learning' mode in its AI service Gemini that would help users learn with the chatbot in a more collaborative way. The update comes several days after rival OpenAI introduced a 'Study Mode' in ChatGPT that was aimed at breaking down more educational or academic problems into smaller steps or processes, rather than giving learners an answer right away.

Guided Learning follows a similar approach, by posing open-ended questions, encouraging deep dives into a subject, and adapting explanations to user needs. Gemini's Guided Learning further integrates images, diagrams, videos, and interactive quizzes in order to offer a multimedia approach to learning, rather than just generating text. Some responses can even include YouTube videos to enhance learning, per the company.

Like OpenAI, Google too stressed that it worked with educators, students, and pedagogical experts for this purpose. Google pointed to the development of LearnLM, a family of models that it said was grounded in educational research. 'We worked with educators to design Guided Learning to be a partner in their teaching, built on the core principle that real learning is an active, constructive process. It encourages students to move beyond answers and develop their own thinking by guiding them with questions that foster critical thought,' said Google VP of Learning, Maureen Heymans, in a blog post.

OpenAI and Google are working to bring their AI offerings to schools, universities, and educational institutions through more accessible and affordable channels. However, many educators and researchers have raised concern about students failing to develop their academic research skills, reading comprehension, media literacy, or communication skills when they overly rely on chatbots.


Mint | 37 minutes ago
How reasoning AI is all set to change our lives
Varun Mayya, a Bengaluru-based entrepreneur building things at the intersection of generative AI and content creation, believes that the real world is messy and full of problems that require multi-step solutions and creative thinking. He wants AI that understands and can work with this. 'Reasoning AI models can infer cause and effect. If it's raining outside, for example, then they can infer that the ground is wet because of the rain, whereas vanilla models can get confused and assume that the events simply appear together and are not related,' explains Mayya.

The difference becomes apparent when the questions themselves are less like trivia and more like conundrums. Imagine, for instance, asking: 'Evaluate the feasibility of launching a vegan cloud kitchen in Gurgaon, with a break-even point within 9 months, based on current delivery and food-tech trends, along with regulatory risks.' A chatbot that simply lists vegan recipes or gives a dated market estimate would be worse than useless. What's needed is a well-reasoned response that weighs demand and supply data, costs, consumer sentiment, policy shifts, and even the likely impact of the next product pivot by food delivery apps.

And now we have reasoning AI models that can do this, an evolution piggybacking on the rapid adoption of generative AI tools like OpenAI's ChatGPT, Microsoft's Copilot, and Google's Gemini, along with a slew of AI experiences from innovative AI startups. Over the past couple of years, millions of individuals and businesses started interacting with seemingly intelligent systems using natural language, sparking a wave of experimentation and speculation about a productivity revolution. These early models were impressive content generators, capable of drafting emails, writing code snippets, and summarizing articles with astonishing speed. However, as the initial novelty subsided, businesses encountered the significant growing pains of this nascent technology.
The very capabilities that made these early chatbots seem magical also revealed their profound limitations, creating a ceiling for their application in high-stakes enterprise environments. These limitations were not minor bugs but fundamental flaws that posed serious risks. They included hallucinations (fabricating information and presenting it as fact) and factual inaccuracy; outdated knowledge, since the models were only as current as their last training dataset; and bias and discrimination, as AI systems inevitably inherit and often amplify the societal biases present in the data they are trained on. These limitations confined early generative AI to a sandbox of low-stakes creative work and preliminary drafting. They were powerful assistants but unreliable decision-makers.

A strategic shift

In response to these shortcomings, the AI ecosystem is now undergoing a fundamental and strategic pivot. The focus is no longer on simply generating more fluent or creative text. Instead, the race is on to build systems that can demonstrate genuine reasoning, engage in multi-step planning, and achieve verifiable accuracy. What users now want is not just a rapid-fire quip, but a reasoned argument; not just a list of surface facts, but a tapestry woven from context, inference, verification, and perspective. The new gold standard is smarter, not just faster, AI.

This industry-wide pivot reflects a deeper ambition, one articulated by AI pioneer Yann LeCun: 'AI is not just about replicating human intelligence; it's about creating intelligent systems that can surpass human limitations.' The goal is no longer just to mimic human conversation but to build cognitive engines that can solve problems at a scale and complexity beyond human capacity. An AI system must be able to analyse, interpret, and perform logical operations on the information it retrieves. This is where the next evolution, often called Reasoning-Augmented Generation (RAG+) or advanced RAG, comes into play.
This framework builds a layer of logical reasoning on top of the retrieved data. The 'black box' problem, where even developers could not explain why an AI produced a certain output, has been one of the single greatest barriers to enterprise adoption of AI tools, especially in regulated industries. Techniques that make the AI's thought process visible and auditable are the solution. When an AI can explain its step-by-step logic, a human expert, the 'human-in-the-loop', be it a doctor, a lawyer, or a financial analyst, can validate that logic, challenge its assumptions, and ultimately trust its conclusion.

The Rise of the AI Agent

The abstract concepts of reasoning and planning find their most powerful and tangible expression in the form of AI agents. An AI agent is an autonomous system that receives a high-level goal from a user, independently creates a multi-step plan to achieve that goal, and then executes that plan by interacting with a variety of digital tools. Industry analysts predict that by 2028, 15% of all daily work decisions will be handled autonomously by AI agents, up from virtually zero today. Virtually zero, because we're not there yet.

'I recently bought a shoe using an agent and it bought the wrong one for me. It struggled through the manufacturer's website and thrice clicked on the wrong option before correcting itself. A human wouldn't have made the mistakes the agent did,' says Mayya, underlining that we still have some miles to go in the critical evolution from AI as a tool to AI as a collaborator. That said, he thinks that while AI agents are in their infancy, they are already reasonably good for many use cases, such as most search-and-retrieval and information-scraping tasks. The trend towards agentic AI is accelerating, with industry body NASSCOM projecting that the agentic AI market will surge from $5.1 billion in 2025 to over $47 billion by 2030. Of course, these advances bring new dilemmas.
If a machine can reason, whose perspective, and whose logic, does it represent? With greater complexity comes opacity, and many models struggle to explain their thinking in human-understandable terms. Moreover, as reliance on AI for deep reasoning grows, there's the risk of outsourcing thinking entirely. Are users still equipped to spot errors, biases, or gaps in reasoning when the answer arrives neatly packaged, citations and all?

The next frontier of AI lies not in flashier chat interfaces, but in robust reasoning engines that tackle the most challenging problems across domains. If there is a lesson in this shift from creative to cognitive, it is this: to extract the most value from AI, users must ask better questions, and demand better answers. And while the promise of smarter AI tempts us with visions of algorithmic sages, the truest breakthroughs may be quieter: better tools for collaboration, platforms that blend machine reasoning with human intuition, and the rise of augmented intelligence, where the best outcomes emerge from partnership, not replacement.

OLD GEN AI VS NEW AGENTS

Early chatbots
• Content generation, summarization
• Static, pre-trained knowledge, often outdated
• Single-step, direct commands
• Hallucinations, lack of verifiability

Reasoning AI
• Complex problem-solving, planning, automation
• Live, real-time access to external & proprietary data
• Multi-step, autonomous tasks requiring planning
• Computational cost, ethical oversight complexity
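The goal-plan-execute loop that defines an AI agent, as described above, can be sketched in a few lines of Python. Everything here is a toy stand-in: the tool names, the hard-coded planner, and the string-based "results" are all hypothetical, not any real agent framework; in a real agent, an LLM would produce the plan and the tools would wrap live services like search or a browser:

```python
# Toy sketch of an agent loop: take a high-level goal, derive a
# multi-step plan, and execute each step with a registered tool,
# feeding each step's result into the next.
from typing import Callable

# "Tools" the agent can call; real agents would wrap web search,
# browsers, code execution, and so on.
TOOLS: dict[str, Callable[[str], str]] = {
    "search": lambda arg: f"search results for '{arg}'",
    "summarize": lambda arg: f"summary of [{arg}]",
}

def plan(goal: str) -> list[tuple[str, str]]:
    """Stand-in planner: map the goal to (tool, argument) steps.
    A real agent would have an LLM generate this plan."""
    return [
        ("search", goal),
        ("summarize", f"search results for '{goal}'"),
    ]

def run_agent(goal: str) -> str:
    """Execute the plan step by step; each tool call's output
    becomes the working result passed toward the final answer."""
    result = goal
    for tool_name, arg in plan(goal):
        result = TOOLS[tool_name](arg)
    return result

print(run_agent("vegan cloud kitchens in Gurgaon"))
```

Mayya's shoe-buying anecdote maps directly onto this loop: the failure happened at the execution step, where the agent picked the wrong tool action several times before self-correcting, which is why error recovery inside such loops is an active area of work.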