
Latest news with #SamAltman

OpenAI rolls out ChatGPT plan at about $4.60 in India to chase growth

Nikkei Asia

42 minutes ago

  • Business
  • Nikkei Asia


India is OpenAI's second-largest market by user base after America. © Reuters

August 19, 2025 13:35 JST (Reuters) -- ChatGPT maker OpenAI on Tuesday launched ChatGPT Go, a new India-only subscription plan priced at 399 rupees ($4.57) per month, its most affordable offering yet, as the company looks to deepen its presence in its second-largest market.

Global companies often offer cheaper subscription plans for India's price-sensitive market, targeting the nearly 1 billion internet users in the world's most populous nation.

The plan allows users to send up to 10 times more messages and generate 10 times more images compared to the free version, while also offering faster response times. Message limits increase with higher-tier subscription plans. ChatGPT Go is designed for Indians who want greater access to ChatGPT's advanced capabilities at a more affordable price, the Microsoft-backed startup said in a statement.

The top-tier version of ChatGPT -- ChatGPT Pro -- is priced at 19,900 rupees per month in India, while ChatGPT Plus, its mid-range plan, costs 1,999 rupees a month.

Earlier this year, CEO Sam Altman met with India's IT minister and discussed a plan to create a low-cost AI ecosystem. India is OpenAI's second-largest market by user base after the United States and may soon become the biggest, Altman said recently.
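For context, a quick back-of-the-envelope conversion, a sketch using only the figures quoted above, puts the Plus and Pro tiers at roughly $23 and $228 per month; the implied exchange rate of about 87 rupees per dollar is an assumption derived from the article's $4.57 figure:

```python
# Back-of-the-envelope conversion using only the figures quoted in the article.
# The implied exchange rate is an assumption derived from 399 INR ~= $4.57.

go_inr, go_usd = 399, 4.57
inr_per_usd = go_inr / go_usd  # ~87.3 INR per USD (implied)

plans_inr = {"ChatGPT Go": 399, "ChatGPT Plus": 1_999, "ChatGPT Pro": 19_900}
for name, inr in plans_inr.items():
    print(f"{name}: {inr} INR ~= ${inr / inr_per_usd:.2f}/month")
# ChatGPT Go ~= $4.57, ChatGPT Plus ~= $22.90, ChatGPT Pro ~= $227.94
```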

OpenAI rolls out cheapest ChatGPT plan at $4.6 in India to chase growth

Ammon

an hour ago

  • Business
  • Ammon


Ammon News - ChatGPT maker OpenAI on Tuesday launched ChatGPT Go, a new India-only subscription plan priced at 399 rupees ($4.57) per month, its most affordable offering yet, as the company looks to deepen its presence in its second-largest market.

Global companies often offer cheaper subscription plans for India's price-sensitive market, targeting the nearly one billion internet users in the world's most populous nation.

The plan allows users to send up to ten times more messages and generate ten times more images compared to the free version, while also offering faster response times. Message limits increase with higher-tier subscription plans. ChatGPT Go is designed for Indians who want greater access to ChatGPT's advanced capabilities at a more affordable price, the Microsoft-backed startup said in a statement.

The top-tier version of ChatGPT - ChatGPT Pro - is priced at 19,900 rupees per month in India, while ChatGPT Plus, its mid-range plan, costs 1,999 rupees per month.

Earlier this year, CEO Sam Altman met with India's IT minister and discussed a plan to create a low-cost AI ecosystem.

Elon Musk vs Sam Altman feud takes a twist as xAI boss praises ChatGPT-5's ‘I don't know' response

Mint

2 hours ago

  • Mint


After a heated spat with OpenAI CEO Sam Altman over the past week, Elon Musk has gone on to praise the company's latest GPT-5 model. The xAI CEO, replying to a post on X (formerly Twitter), called a response from GPT-5 'impressive.' The comment comes shortly after Musk stated that his Grok 4 Heavy model remained among the best on the market despite the launch of GPT-5.

In a screenshot shared by a user named Kol Tregaskes on X, the GPT-5 Thinking model, after taking 34 seconds to answer a query, said: 'Short answer: I don't know — and I can't reliably find out.'

One of the biggest challenges with large language models (LLMs) has long been their tendency to give confident, plausible-sounding answers that are actually incorrect, a phenomenon known as 'hallucination.' When GPT-5 admits that it does not know the answer to a question, it helps build trust with users, much as people tend to trust human experts more when they openly acknowledge not knowing something.

With the launch of GPT-5, OpenAI claimed a significant reduction in hallucinations compared to previous models. The new AI system has two components, a standard model and a GPT-5 Thinking model, with a router automatically deciding which queries go to which model. Despite these improvements, the model still fabricates information in about 10 percent of cases, and even ChatGPT head Nick Turley has cautioned that it should not be relied on as a primary source of information.

In a recent conversation with The Verge, Turley addressed the ongoing problem of hallucinations in GPT-5: 'The thing, though, with reliability is that there's a strong discontinuity between very reliable and 100 percent reliable, in terms of the way that you conceive of the product.' 'Until I think we are provably more reliable than a human expert on all domains, not just some domains, I think we're going to continue to advise you to double check your answer. I think people are going to continue to leverage ChatGPT as a second opinion, versus necessarily their primary source of fact,' he added.
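OpenAI has not published how GPT-5's router actually decides between the standard and Thinking models, but the general pattern the article describes can be sketched as a simple dispatcher. The heuristic, model names, and function below are hypothetical illustrations only, not OpenAI's implementation:

```python
# Purely illustrative sketch of the routing pattern described above: a router
# that sends a query either to a fast standard model or a slower "thinking"
# model. The heuristic and model names are hypothetical assumptions.

def looks_complex(query: str) -> bool:
    # Toy heuristic: long queries or ones asking for reasoning get the thinking model.
    keywords = ("prove", "derive", "step by step", "why", "analyze")
    return len(query.split()) > 40 or any(k in query.lower() for k in keywords)

def route(query: str) -> str:
    return "thinking-model" if looks_complex(query) else "standard-model"

print(route("What's the capital of France?"))                               # standard-model
print(route("Analyze why this proof of the theorem fails, step by step."))  # thinking-model
```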

Sam Altman admits China's DeepSeek forced OpenAI's hand on open models: ‘If we didn't do it…'

Mint

2 hours ago

  • Business
  • Mint


OpenAI released its first open-weight models since GPT-2 earlier this month, promising strong real-world performance at low cost. With China's DeepSeek shaking up the AI industry earlier this year through open-source releases, it became clear why OpenAI didn't want to be on what CEO Sam Altman called the 'wrong side of history.' Now, for the first time since the launch of its GPT-OSS models, Altman has openly acknowledged the 'China factor' behind the move.

In an interaction with CNBC, Altman admitted that competition from Chinese open-source models like DeepSeek played a major role in OpenAI's decision. 'It was clear that if we didn't do it, the world was gonna head to be mostly built on Chinese open source models,' Altman told the publication. 'That was a factor in our decision, for sure. Wasn't the only one, but that loomed large,' he added.

Altman also commented on the US policy of restricting the export of powerful semiconductors to China. 'My instinct is that doesn't work,' he said. 'You can export-control one thing, but maybe not the right thing… maybe people build fabs or find other workarounds,' he added.

DeepSeek was not alone, however; many other Chinese companies had been gaining prominence in tech circles for their open-weight models. Alibaba's Qwen, for instance, has been releasing its latest foundation models under the Apache 2.0 license. Meanwhile, Meta has already been shipping its Llama models under community licenses while also building these models directly into its social media platforms.

Earlier in the year, the growing popularity of DeepSeek's AI models shattered notions of American supremacy in the AI race, as the chatbot showed performance similar to rival models from OpenAI and Google despite being developed at a fraction of the cost.

Meanwhile, OpenAI also released its latest GPT-5 model this month, with claimed improvements in accuracy, reasoning, coding, health, writing and multimodal abilities. The new model also led to the deprecation of older GPT and o-series models for free users, while Pro, Plus, Team and Enterprise users still have access to the older models.

Sam Altman says there's an AI bubble. What Wall Street thinks.

Mint

2 hours ago

  • Business
  • Mint


Big technology stocks are pricey, leading Wall Street to debate whether artificial intelligence stocks are overvalued.

Tech earnings have given investors a lot to celebrate over the last couple of weeks. Hyperscalers such as Microsoft and Alphabet posted better-than-expected earnings growth. The AI giants, including Meta Platforms, also committed to spending billions more on AI infrastructure in the months ahead. "These huge investments support earnings because they are revenue for someone, and they help drive productivity gains and boost profit margins—not just for the technology companies but for all of corporate America," wrote Jeff Buchbinder, chief equity strategist at LPL Financial, on Monday.

As of Friday's close, the S&P 500's market cap gain for the year was $4.865 trillion, according to Dow Jones Market Data, and the Magnificent 7 have contributed roughly 43% of that. The S&P 500's total market cap as of Friday's close was $54.67 trillion, while the market cap of the Mag 7 was $19.68 trillion, or 36% of the S&P 500's market cap.

Valuations are also high. The S&P 500's forward price-to-earnings multiple is 22.5 times, and six of the Mag 7 stocks are trading at higher multiples than that: Nvidia is at 34.9 times earnings expected over the next 12 months, Microsoft at 32.7 times, Apple at 29.6 times, Amazon at 31.7 times, Meta at 26.1 times, and Tesla at 151.6 times forward earnings. Apple, Meta, and Tesla are also trading above their five-year historical averages. Other tech stocks are pricey, too: Netflix is trading at 41.2 times forward earnings, Oracle at 34.5 times, and Broadcom at 38.2 times.

According to an article from The Verge on Friday, OpenAI Chief Executive Sam Altman told reporters that he thinks the AI market is in a bubble. "Are we in a phase where investors as a whole are overexcited about AI? My opinion is yes. Is AI the most important thing to happen in a very long time? My opinion is also yes," Altman said, according to the article. Barron's has reached out to OpenAI for comment.

On top of concerns that near-term expectations for AI stock returns are too high, the broader economic environment is uncertain. The July jobs report was a disappointment, and the downward revisions to May and June were a shock. Wholesale inflation came in hotter than expected for July, and the heftiest of President Donald Trump's tariff policies have recently started to take effect. Wall Street is now looking to the Federal Reserve's monetary policy symposium this week, where Chair Jerome Powell could offer a more hawkish view on rate cuts than investors might want to hear.

That puts highly valued tech stocks at risk. "If everybody is fully invested, or nearly fully invested, in a small cadre of very highly valued stocks, who's there to buy them if they hit an air pocket or sentiment changes?" Steve Sosnick, chief strategist at Interactive Brokers, told Barron's on Monday. He likened it to an "overcrowded building and everybody needs to rush to the exits, people get trampled, and that is the risk here."

Not everyone on Wall Street is concerned. Richard Saperstein, chief investment officer at Treasury Partners, wrote on Monday that "big technology stocks have led the market higher and will continue to dominate market performance. We expect continued earnings growth, reinvestment of cash flows and expansion of their globally dominant footprints." Saperstein noted that investors should remain fully invested in U.S. stocks with a concentration in large-cap technology stocks. "Stocks will likely benefit from deregulation, onshoring and capital expenditure expensing, which will eventually become a tailwind for economic growth," Saperstein added.

Write to Angela Palumbo at
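As a quick sanity check on the percentages above, a sketch using only the figures quoted in the article, the Mag 7's $19.68 trillion against the S&P 500's $54.67 trillion does come out to about 36%:

```python
# Recompute the share quoted in the article from its own figures.
sp500_total = 54.67e12   # S&P 500 market cap as of Friday's close
mag7_total = 19.68e12    # Magnificent 7 combined market cap

share = mag7_total / sp500_total
print(f"Mag 7 share of S&P 500 market cap: {share:.1%}")   # ~36.0%
```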
