
OpenAI adds o3 Pro to ChatGPT and drops o3 price by 80 per cent, but open-source AI is delayed
OpenAI has been busy. This morning, we woke up to a bunch of updates from the AI company. To begin with, OpenAI has released o3 Pro, its most advanced reasoning model to date, for Pro and Team users of ChatGPT. The release of the new model also comes with a dramatic 80 per cent price cut for the standard o3 model, which makes it more accessible for developers and researchers around the world. However, OpenAI CEO Sam Altman has also announced that the company has had to push back the release of its open-source model until later this summer.

o3 Pro lands in ChatGPT

OpenAI has launched a new o3 Pro model, which it says is its most capable AI model yet. o3 Pro is available to Pro and Team users in ChatGPT and through OpenAI's API. It replaces the earlier o1 Pro model and is designed to deliver even more reliable and accurate answers across complex domains like science, programming, mathematics, and education.
OpenAI described o3 Pro as a 'version of our most intelligent model, o3, designed to think longer and provide the most reliable responses.' The company notes that in internal and external academic evaluations, o3 Pro has consistently outperformed both o3 and its predecessors, especially in clarity, accuracy, instruction-following, and comprehensiveness.
Although o3 Pro uses the same underlying model as o3, it is optimised for dependability. OpenAI says reviewers have ranked the model higher across key tasks in science and data analysis. As OpenAI puts it, 'We recommend using it for challenging questions where reliability matters more than speed, and waiting a few minutes is worth the tradeoff.'
Notably, o3 Pro is also equipped with tools that enhance its capabilities, including the ability to browse the web, analyse documents, interpret visual inputs, run Python code, and use memory for more personalised interactions. However, these advanced tools mean its responses typically take longer to generate than those of o1 Pro.

Despite the rollout, a few features remain unavailable in o3 Pro for now. Temporary chats are currently disabled due to a technical issue, and the model cannot yet generate images or use the Canvas interface. Users looking for image generation are advised to stick with GPT-4o, o3, or o4-mini. Enterprise and Edu customers will gain access to o3 Pro the following week.
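For developers, access through the API works like any other OpenAI model call. The snippet below is a minimal sketch using OpenAI's Python SDK; the model identifier 'o3-pro' and its availability on the Responses endpoint are assumptions that should be checked against OpenAI's current documentation.

```python
# Minimal sketch: calling the new model via OpenAI's Python SDK.
# Assumption: the model is exposed under the identifier "o3-pro";
# verify the exact name and supported endpoint in OpenAI's docs.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.responses.create(
    model="o3-pro",  # assumed identifier
    input="Explain why the sky is blue in two sentences.",
)

print(response.output_text)
```

Given OpenAI's note that o3 Pro trades speed for reliability, callers should expect noticeably longer response times than with o3 and set client timeouts accordingly.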
A massive price cut for o3

Alongside the launch of o3 Pro, OpenAI has also slashed the cost of using the o3 model by 80 per cent – a move that makes the model more accessible and affordable to developers. The cost of using o3 now stands at $2 per million input tokens and $8 per million output tokens, with additional discounts for cached (previously seen) prompts. This is a major reduction from the previous rates of $10 and $40 respectively. As OpenAI researcher Noam Brown pointed out on X, this update represents a significant shift in affordability for developers.

OpenAI CEO Sam Altman confirmed the price drop on X, posting, 'we dropped the price of o3 by 80%!! excited to see what people will do with it now. think you'll also be happy with o3-pro pricing for the performance :)'
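To make the 80 per cent figure concrete, here is a small illustrative calculation comparing the old and new o3 rates; the request size used is hypothetical and chosen only to show the arithmetic.

```python
# Illustrative cost comparison for the o3 price cut (rates in $ per 1M tokens).
# The token counts below are hypothetical, chosen only to show the arithmetic.
OLD_INPUT, OLD_OUTPUT = 10.00, 40.00   # previous rates
NEW_INPUT, NEW_OUTPUT = 2.00, 8.00     # new rates

input_tokens, output_tokens = 50_000, 10_000  # hypothetical request size

def cost(in_rate: float, out_rate: float) -> float:
    """Return the dollar cost of one request at the given per-million-token rates."""
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

old, new = cost(OLD_INPUT, OLD_OUTPUT), cost(NEW_INPUT, NEW_OUTPUT)
print(f"old: ${old:.3f}, new: ${new:.3f}, saving: {1 - new / old:.0%}")
# old: $0.900, new: $0.180, saving: 80%
```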
The new pricing places OpenAI in sharper competition with rivals like Google DeepMind's Gemini 2.5 Pro, which charges between $1.25 and $2.50 for input and up to $15 for output tokens, depending on prompt size. Gemini also offers Google Search integration, but with extra fees beyond a certain usage limit. Anthropic's Claude Opus 4 remains the priciest competitor, charging $15 per million input tokens and $75 for output. DeepSeek's Reasoner and Chat models, on the other hand, lead the low-cost tier with rates as low as $0.035 for cached inputs during off-peak hours.

Open-source model postponed

While the launch of o3 Pro and the price drop for the o3 model are widely welcomed, OpenAI's open-source model has hit a delay. Originally expected in June 2025, the model will now arrive later, following a surprise development by the research team. The exact timeline for the release has not yet been disclosed. In a post on X, Altman explained, 'We are going to take a little more time with our open-weights model, i.e. expect it later this summer but not June.' He added that the delay is due to the team achieving something 'unexpected and quite amazing,' which still needs more time to polish.

The open-source model is expected to rival high-performing open reasoning models such as DeepSeek R1, with OpenAI aiming to raise the bar for open-access large language models.
OpenAI CEO reveals cost of a single ChatGPT query

Alongside these product updates, Altman has also addressed questions about the environmental cost of using ChatGPT. In a blog post published on Tuesday, he shared that each ChatGPT query consumes about 0.34 watt-hours, which is roughly what an oven uses in a second or a high-efficiency lightbulb in a couple of minutes.

In terms of water usage, Altman revealed that the average query consumes 0.000085 gallons of water, or 'roughly one fifteenth of a teaspoon.' These figures, he explained, are part of a broader vision where 'the cost of intelligence should eventually converge to near the cost of electricity.'
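Altman's comparisons hold up as rough arithmetic. The sketch below checks the 0.34 watt-hour figure against an assumed 1.2 kW oven and a 10 W LED bulb; neither wattage comes from his post.

```python
# Rough sanity check of the 0.34 Wh-per-query figure against Altman's comparisons.
# Assumptions (not from the post): an oven drawing ~1.2 kW, a ~10 W high-efficiency LED bulb.
QUERY_WH = 0.34

oven_watts = 1200   # assumed typical oven draw
bulb_watts = 10     # assumed LED bulb draw

oven_seconds = QUERY_WH / oven_watts * 3600   # seconds of oven use with the same energy
bulb_minutes = QUERY_WH / bulb_watts * 60     # minutes of bulb use with the same energy

print(f"~{oven_seconds:.1f} s of oven use, ~{bulb_minutes:.1f} min of LED bulb use")
# ~1.0 s of oven use, ~2.0 min of LED bulb use
```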

Related Articles


Time of India
Can AI offer the comfort of a therapist?
One evening, feeling overwhelmed, 24-year-old Delhi resident Nisha Popli typed, 'You're my psychiatrist now,' into ChatGPT. Since then, she's relied on the AI tool to process her thoughts and seek mental support. 'I started using it in late 2024, especially after I paused therapy due to costs. It's been a steady support for six months now,' says Popli. Similarly, a 30-year-old Mumbai lawyer, who uses ChatGPT for various tasks like checking recipes and drafting emails, turned to it for emotional support. 'The insights and help were surprisingly valuable. I chose ChatGPT because it's already a part of my routine.' With AI tools and apps available 24/7, many are turning to them for emotional support. 'More people are increasingly turning to AI tools for mental health support, tackling everything from general issues like dating and parenting to more specific concerns, such as sharing symptoms and seeking diagnoses,' says Dr Arti Shroff, a clinical psychologist. But what drives individuals to explore AI-generated solutions for mental health?

WHY USERS ARE USING AI

Therapy is expensive

'As someone who values independence, I found therapy financially difficult to sustain,' shares Popli, adding, 'That's when I turned to ChatGPT. I needed a safe, judgment-free space to talk, vent, and process my thoughts. Surprisingly, this AI offered just that — with warmth, logic, and empathy. It felt like a quiet hand to hold.'

People feel shy about in-person visits

Dr Santosh Bangar, senior consultant psychiatrist, says, 'Many people often feel shy or hesitant about seeking in-person therapy. As a result, they turn to AI tools to express their feelings and sorrows, finding it easier to open up to chatbots. These tools are also useful in situations where accessing traditional therapy is difficult.'

Nobody to talk to

Kolkata-based Hena Ahmed, a user of the mental health app Headspace, says she started using it after experiencing loneliness. 'I've been using Headspace for about a month now. The AI tool in the app helps me with personalised suggestions on which mindfulness practices I should follow and which calming techniques can help me overcome my loneliness. I was feeling quite alone after undergoing surgery recently and extremely stressed while trying to manage everything. It was responsive and, to a certain extent, quite helpful,' she shares.

Users see changes in themselves

The Mumbai-based 30-year-old corporate lawyer says, 'ChatGPT offers quick solutions and acts as a reliable sounding board for my concerns. I appreciate the voice feature for instant responses. It helps create mental health plans, provides scenarios, and suggests approaches for tackling challenges effectively.'

'My panic attacks have become rare, my overthinking has reduced, and emotionally, I feel more grounded. AI didn't fix me, but it walked with me through tough days—and that's healing in itself,' expresses Popli.

CAN AI REPLACE A THERAPIST?

Dr Arti Shroff says, 'AI cannot replace a therapist. Often, AI can lead to incorrect diagnoses since it lacks the ability to assess you in person. In-person interactions provide valuable non-verbal cues that help therapists understand a person's personality and traits.'
Echoing similar thoughts, Dr Santosh Bangar, senior consultant psychiatrist, says, 'AI can support mental health by offering helpful tools, but it shouldn't replace a therapist. Chatbots can aid healing, but for serious issues like depression, anxiety, or panic attacks, professional guidance remains essential for safe and effective treatment.'

DO CHATBOTS EXPERIENCE STRESS?

Researchers found that AI chatbots like ChatGPT-4 can show signs of stress, or 'state anxiety', when responding to trauma-related prompts. Using a recognised psychological tool, they measured how emotionally charged language affects AI, raising ethical questions about its design, especially for use in mental health settings. In another development, researchers at Dartmouth College are working to legitimise the use of AI in mental health care through Therabot, a chatbot designed to provide safe and reliable therapy. Early trials show positive results, with further studies planned to compare its performance with traditional therapy, highlighting AI's growing potential to support mental wellbeing.

ARE USERS CONCERNED ABOUT DATA PRIVACY?

While some users are reluctant to check whether the data they share during chats is secure, others approach it cautiously. Ahmed says she hasn't considered privacy: 'I haven't looked into the data security part, though. Moving forward, I'd like to check the terms and policies related to it.' In contrast, another user, Nisha, shares: 'I don't share sensitive identity data, and I'm cautious. I'd love to see more transparency in how AI tools safeguard emotional data.' The Mumbai-based lawyer adds, 'Aside from ChatGPT, we share data across other platforms. Our data is already prevalent online, whether through social media or email, so it doesn't concern me significantly.'

Experts say most people aren't fully aware of security risks. There's a gap between what users assume is private and what these tools do. Pratim Mukherjee, senior director of engineering at McAfee, explains, 'Many mental health AI apps collect more than what you type—they track patterns, tone, usage, and emotional responses. This data may not stay private. Depending on the terms, your chat history could help train future versions or be shared externally. These tools may feel personal, but they gather data.'

'Even when users feel anonymous, these tools collect data like IP addresses, device type, and usage patterns. They store messages and uploads, which, when combined, can reveal personal patterns. This data can be used to create profiles for targeted content, ads, or even scams.' – Pratim Mukherjee, senior director of engineering, McAfee

Tips for protecting privacy with AI tools/apps
- Understand the data the app collects and how it's used
- Look for a clear privacy policy, opt-out options, and data deletion features
- Avoid sharing location data or limit it to app usage only
- Read reviews, check the developer, and avoid apps with vague promises

What to watch for in mental health AI apps
- Lack of transparency in data collection, storage, or sharing practices
- Inability to delete your data
- Requests for unnecessary permissions
- Absence of independent security checks
- Lack of clear information on how sensitive mental health data is used


Economic Times
ChatGPT took on a 50-year-old Atari — and lost
Synopsis: In a surprising turn of events, ChatGPT, a leading AI chatbot, was defeated by the vintage Atari 2600 in a chess match. Despite ChatGPT's initial confidence and claims of chess prowess, the Atari console, launched in 1977, consistently outperformed the AI. The experiment highlighted the limitations of ChatGPT in logical reasoning and board awareness, leading to its eventual concession.

Business Standard
Bhavish Aggarwal's Krutrim bets on India-first AI to rival global peers
Krutrim, the artificial intelligence startup founded by Ola's Bhavish Aggarwal, is positioning its recently launched flagship assistant, Kruti, to stand apart from global peers like OpenAI's ChatGPT and Google's Gemini by leveraging deep local integration, multilingual capabilities, and agentic intelligence tailored to India's unique digital ecosystem. The company calls Kruti India's first agentic AI, capable of booking cabs, paying bills, and generating images while supporting 13 Indian languages using a localised large language model. In the Indian context, the firm competes with global AI giants such as OpenAI, Anthropic and Google, as well as local players such as Sarvam AI.

'Our key differentiator will come with integrating local services,' said Sunit Singh, Senior Vice-President for Product at Krutrim. 'That's not something that will be very easy for global players to do.' Krutrim has already integrated India-specific services, with plans to scale this integration further. The strategy aims to embed Kruti deeply into Indian digital life, allowing it to perform functional tasks through local service connections. This is an area where international competitors may struggle due to regulatory and infrastructural complexities in the Indian market.

Voice-first

As Krutrim positions Kruti to serve India's linguistically diverse population, the company is doubling down on voice-first, multilingual AI as a core enabler of scale and accessibility. Navendu Agarwal, Group CIO of Ola, emphasised that India's unique language landscape demands a fundamentally different approach from Western AI products. 'India is a voice-first world. So we are building voice-first models,' Agarwal said, outlining Krutrim's strategy to prioritise natural, speech-driven interactions. Currently, Kruti supports voice commands in multiple Indian languages, with plans underway to expand that footprint. Agarwal said the long-term vision is to enable seamless, speech-based interactions that go deeper into local dialects. The company's multilingual, voice-first design is central to its go-to-market strategy, especially in reaching non-English speakers in semi-urban and rural India. The plan also includes integrating with widely used Indian services and government platforms.

Krutrim's long-term vision for Kruti centres on true agentic intelligence, where the assistant can act autonomously on behalf of users. Whether it's 'book me a cab to the airport' or 'order my usual lunch', Kruti understands intent and executes tasks without micromanagement. 'Think about it—a super agent which can do food, do apps, provide you help and education information and which can also manage your budget and finance,' said Agarwal. 'So that's what is a mega-agent, or the assistant which is communicating with all of them seamlessly wherever it is needed.'

Hybrid technology

Rather than relying solely on a single in-house model, Krutrim has opted for a composite approach aimed at optimising accuracy, scalability and user experience, according to Chandra Khatri, the company's Vice-President and Head of AI. 'The goal is to build the best and most accurate experience,' Khatri said. 'If that means we need to leverage, say Claude for coding, which is the best coding model in the world, we'll do that.' Kruti is powered by Krutrim's latest large language model, Krutrim V2, alongside open-source systems. The AI agents evaluate context-specific needs and choose from this suite of models to deliver tailored responses.
Investments

Krutrim reached unicorn status last year after raising $50 million in equity during its inaugural funding round. The round, which valued the company at $1 billion, included participation from investors such as Matrix Partners India. Earlier this year, company founder Bhavish Aggarwal announced an investment of ₹2,000 crore in Krutrim, with a commitment to invest an additional ₹10,000 crore by next year. The company also launched the Krutrim AI Lab and released some of its work to the open-source community.

As Krutrim's AI assistant begins to interface with highly contextual and personal user data, the company emphasises a stringent, India-first approach to data privacy and regulatory compliance. The company employs internal algorithms to manage and isolate user data, ensuring it remains secure and compartmentalised. While Krutrim is open to competing globally, it remains committed to addressing India's market complexities first. 'We don't shy away from going global. But our primary focus is India first,' Agarwal said. Krutrim's emphasis on embedded, action-oriented intelligence—capable of not just understanding queries but also fulfilling them through integrations—could define its edge in the increasingly competitive AI landscape. Here, localisation and service depth may become as critical as raw model power.