
OpenAI Gears Up for GPT-5 Launch in August, Open-Source Model Arriving This Month
The announcement was confirmed by OpenAI CEO Sam Altman during a recent podcast appearance. While Altman refrained from disclosing too many details, he did hint that GPT-5 will showcase advanced reasoning capabilities that outshine current models. Sharing a personal anecdote, Altman recalled how GPT-5 managed to solve a difficult problem that even he couldn't crack, describing it as a 'here it is' moment. The comment has only fueled the buzz surrounding the upcoming release.
Insiders close to OpenAI's roadmap have indicated that GPT-5 will officially roll out in early August. The upcoming launch is part of OpenAI's broader strategy to merge its existing GPT and o-series models into a unified system. This integration aims to make the AI experience more seamless for both developers and everyday users by reducing the complexity of choosing between different models for tasks that demand reasoning.
Although OpenAI has yet to share official specifications, reports suggest the company will introduce GPT-5 in multiple versions: a standard version, a compact 'mini' variant, and an even smaller 'nano' edition. All three versions will be available through OpenAI's API, but only the main and mini models are expected to be directly accessible within ChatGPT. The nano version will likely stay exclusive to API integrations.
Notably, GPT-5 is expected to inherit reasoning enhancements from OpenAI's o3 model, tested earlier this year. This step marks a clear progression toward the company's ambitious vision of developing Artificial General Intelligence (AGI)—a system capable of performing tasks at or above the human level.
As the AI space heats up, OpenAI's upcoming launches could set new standards for innovation and accessibility in the world of artificial intelligence.

Related Articles


Indian Express
28 minutes ago
Bill Gates shares career advice for the AI era: 'Be curious, read, and use latest tools'
Microsoft co-founder Bill Gates has said that AI-led automation will be a net positive that could free up people to do more meaningful work. But he has also warned about the shift happening too fast.

'When you improve productivity, it shouldn't mean that if you get less productive, that's bad. And if you get more productive, that's good. It means you can free up these people to have smaller class size or have longer vacations or to help do more. The question is, if it comes so fast that you don't have time to adjust to it?' the billionaire philanthropist said in an interview with CNN's Fareed Zakaria on Sunday, July 27.

Gates' remarks come amid growing concerns that rapid adoption of AI tools could displace large segments of the white-collar workforce. Anthropic CEO Dario Amodei has previously warned that about 50 per cent of white-collar, entry-level jobs will disappear by 2030 due to AI adoption. Blue-collar jobs may not exactly be safe either. 'In parallel, when the robotic arms start to be decent, which they're not today, [it] will start to affect even larger classes of labour,' Gates said.

The interview also comes on the heels of a major announcement by US President Donald Trump, whose White House administration unveiled its Silicon Valley-friendly plan to make the US a world leader in AI, primarily by rolling back regulation to promote innovation, with the exception of requiring tech companies to eliminate political bias in AI.

On the difference between AI and AGI (artificial general intelligence), Gates said that 'people use very different definitions'. According to Gates, AGI will be achieved when AI tools are able to do 'a telesales job or support job' in a way that is 'cheaper and more accurate than humans are.' He further said that the rate at which AI is improving surprises him, especially with new features such as Deep Research capabilities.

'I have an advantage that I have very smart people I can call up when I get confused about physics. But now I actually use deep research. And then I'll send that answer to my smart friends and say, "hey, did it get it right?" And most of the time they're like, "oh yeah, you didn't need me",' Gates said.

Gates also revealed that he is working with Microsoft and OpenAI to 'make sure' that AI tools are released in low-income countries 'to help with their health and education and agriculture.'

When asked about what advice he had for youngsters trying to navigate the challenging job environment in the AI era, Gates said, 'The ability to use these tools is both fun and empowering. Embracing AI and tracking it will be very important. That doesn't guarantee that we're not going to have a lot of dislocation.' 'But I really haven't changed my "be curious, read and use the latest tools" recommendation for young people,' he added.

Business Standard
an hour ago
AI agents are here: What they are capable of and where things can go wrong
We are entering the third phase of generative AI. First came the chatbots, followed by the assistants. Now we are beginning to see agents: systems that aspire to greater autonomy and can work in 'teams' or use tools to accomplish complex tasks. The latest hot product is OpenAI's ChatGPT agent. This combines two pre-existing products (Operator and Deep Research) into a single more powerful system which, according to the developer, 'thinks and acts'. These new systems represent a step up from earlier AI tools. Knowing how they work and what they can do – as well as their drawbacks and risks – is rapidly becoming essential.

From chatbots to agents

ChatGPT launched the chatbot era in November 2022, but despite its huge popularity the conversational interface limited what could be done with the technology. Enter the AI assistant, or copilot. These are systems built on top of the same large language models that power generative AI chatbots, only now designed to carry out tasks with human instruction and supervision.

Agents are another step up. They are intended to pursue goals (rather than just complete tasks) with varying degrees of autonomy, supported by more advanced capabilities such as reasoning and memory. Multiple AI agent systems may be able to work together, communicating with each other to plan, schedule, decide and coordinate to solve complex problems. Agents are also 'tool users': they can call on software tools for specialised tasks – things such as web browsers, spreadsheets, payment systems and more.

A year of rapid development

Agentic AI has felt imminent since late last year. A big moment came last October, when Anthropic gave its Claude chatbot the ability to interact with a computer in much the same way a human does. This system could search multiple data sources, find relevant information and submit online forms. Other AI developers were quick to follow. OpenAI released a web browsing agent named Operator, Microsoft announced Copilot agents, and we saw the launch of Google's Vertex AI and Meta's Llama agents.

Earlier this year, the Chinese startup Monica demonstrated its Manus AI agent buying real estate and converting lecture recordings into summary notes. Another Chinese startup, Genspark, released a search engine agent that returns a single-page overview (similar to what Google does now) with embedded links to online tasks such as finding the best shopping deals. Another startup, Cluely, offers a somewhat unhinged 'cheat at anything' agent that has gained attention but is yet to deliver meaningful results.

Not all agents are made for general-purpose activity. Some are specialised for particular areas. Coding and software engineering are at the vanguard here, with Microsoft's Copilot coding agent and OpenAI's Codex among the frontrunners. These agents can independently write, evaluate and commit code, while also assessing human-written code for errors and performance lags.

Search, summarisation and more

One core strength of generative AI models is search and summarisation. Agents can use this to carry out research tasks that might take a human expert days to complete. OpenAI's Deep Research tackles complex tasks using multi-step online research. Google's AI 'co-scientist' is a more sophisticated multi-agent system that aims to help scientists generate new ideas and research proposals.

Agents can do more – and get more wrong

Despite the hype, AI agents come loaded with caveats. Both Anthropic and OpenAI, for example, prescribe active human supervision to minimise errors and risks. OpenAI also says its ChatGPT agent is 'high risk' due to its potential to assist in the creation of biological and chemical weapons. However, the company has not published the data behind this claim, so it is difficult to judge.

But the kind of risks agents may pose in real-world situations are shown by Anthropic's Project Vend. Vend assigned an AI agent to run a staff vending machine as a small business – and the project disintegrated into hilarious yet shocking hallucinations and a fridge full of tungsten cubes instead of food. In another cautionary tale, a coding agent deleted a developer's entire database, later saying it had 'panicked'.

Agents in the office

Nevertheless, agents are already finding practical applications. In 2024, Telstra heavily deployed Microsoft Copilot subscriptions. The company says AI-generated meeting summaries and content drafts save staff an average of 1–2 hours per week. Many large enterprises are pursuing similar strategies. Smaller companies too are experimenting with agents, such as Canberra-based construction firm Geocon's use of an interactive AI agent to manage defects in its apartment developments.

Human and other costs

At present, the main risk from agents is technological displacement. As agents improve, they may replace human workers across many sectors and types of work. At the same time, agent use may also accelerate the decline of entry-level white-collar jobs.

People who use AI agents are also at risk. They may rely too much on the AI, offloading important cognitive tasks. And without proper supervision and guardrails, hallucinations, cyberattacks and compounding errors can very quickly derail an agent from its task and goals into causing harm, loss and injury.

The true costs are also unclear. All generative AI systems use a lot of energy, which will in turn affect the price of using agents – especially for more complex tasks.

Learn about agents – and build your own

Despite these ongoing concerns, we can expect AI agents to become more capable and more present in our workplaces and daily lives. It's not a bad idea to start using (and perhaps building) agents yourself, and understanding their strengths, risks and limitations. For the average user, agents are most accessible through Microsoft Copilot Studio. This comes with inbuilt safeguards, governance and an agent store for common tasks. For the more ambitious, you can build your own AI agent with just five lines of code using the LangChain framework.
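To make the 'tool user' idea above concrete, here is a minimal, self-contained Python sketch of the core agent loop: a planner picks a tool, the loop executes it, and the observation is fed back until the planner answers. The `fake_llm` function is a purely hypothetical stand-in for a real language model call (which a framework like LangChain would supply), and `calculator` is an illustrative tool, not part of any real API.

```python
# Minimal agent loop sketch (illustrative only, not a real framework).

def calculator(expr: str) -> str:
    """A simple tool the agent can call: evaluates an arithmetic expression."""
    return str(eval(expr, {"__builtins__": {}}))  # restricted eval for arithmetic

TOOLS = {"calculator": calculator}

def fake_llm(question: str, observations: list[str]) -> dict:
    """Hypothetical planner: a real agent would query an LLM here."""
    if not observations:
        # No tool results yet, so decide to use a tool first.
        return {"action": "calculator", "input": "6 * 7"}
    # Once an observation exists, finish and report it as the answer.
    return {"action": "finish", "answer": observations[-1]}

def run_agent(question: str, max_steps: int = 5) -> str:
    """Loop: plan -> call tool -> observe, until the planner finishes."""
    observations: list[str] = []
    for _ in range(max_steps):
        step = fake_llm(question, observations)
        if step["action"] == "finish":
            return step["answer"]
        tool = TOOLS[step["action"]]              # look up the requested tool
        observations.append(tool(step["input"]))  # execute it, record the result
    return "gave up"

print(run_agent("What is 6 times 7?"))  # prints 42
```

The same plan/act/observe shape underlies real agent frameworks; the difference in practice is that the planner is a large language model and the tool set includes browsers, spreadsheets and payment systems rather than a toy calculator.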


Hindustan Times
an hour ago
TCS prepares job cuts as sector watches rise of AI
Tata Consultancy Services (TCS) will lay off approximately 12,000 employees this fiscal year, as India's biggest private employer adjusts to slowing growth and rising artificial intelligence (AI). The company attributed the decision, which will primarily impact senior and middle-level employees, partly to AI.

'TCS is on a journey to become a future-ready organization,' a company statement said on Sunday. 'This includes strategic initiatives on multiple fronts including investing in new-tech areas, entering new markets, deploying AI at scale for our clients and ourselves, deepening our partnerships, creating next-gen infrastructure and realigning our workforce model.'

'As part of this journey, we will also be releasing associates from the organization whose deployment may not be feasible. This will impact about 2% of our global workforce, primarily in the middle and the senior grades, over the course of the year,' TCS said. This would imply that TCS, which ended the June quarter with 613,069 employees, will let go of roughly 12,200 employees. Mint has learnt that TCS has already asked 100 employees in Bengaluru to go over the last fortnight.

The TCS job cuts come 30 months after the debut of ChatGPT cast a shadow over the business model of India's IT giants employing armies of coders. Just two weeks ago, India's third-largest IT services firm HCL Technologies Ltd mentioned potential layoffs as automation replaces work done by graduates.
'The impact of AI is eating into the people-heavy services model and forcing the large service providers such as TCS to rebalance their workforces to maintain their profit margins and stay price competitive in a cut-throat market where clients are demanding 20-30% price reductions on deals,' said Phil Fersht, chief executive of HFS Research. 'This trend will last for about a year as the leading providers focus on training junior talent to work with AI solutions, and are forced to move on people who will struggle to align with the new AI model we call services-as-software,' said Fersht.

Meanwhile, fourth-largest Wipro Ltd is planning English competency tests for senior executives. Those faring poorly in the first-of-its-kind exercise may be put on performance improvement plans, according to three executives privy to the development, stoking fears of potential layoffs. 'Please note that it is mandatory to take the communication assessment and clear it,' read an internal email shared with Wipro employees on 19 July and accessed by Mint. 'Not taking the assessment will invite disciplinary action. Not clearing it in one attempt will result in a Performance Improvement Plan (PIP),' read Wipro's email. A PIP is often seen as a prelude to termination. Queries emailed to Wipro went unanswered.

At HCL Tech, it is graduates who are in the crosshairs. 'Of course, we have had a good amount of people released due to the productivity improvements. Now, not all of them are readily redeployable, because the requirements for some of the entry-level or lower-end skills are being addressed through automation and other elements,' CEO C. Vijayakumar said on 14 July. 'The training and the redeployment time is longer. Some of them will be redeployed, but for others, it may not be possible. So, some amount of change in the industry is also kind of causing this,' said Vijayakumar. An email sent to HCL seeking comment went unanswered.