
OpenAI in talks to raise fresh funds from Saudi Arabia's PIF, India's Reliance Industries: The Information
OpenAI has held discussions with Saudi Arabia's Public Investment Fund (PIF), Reliance Industries, and UAE's MGX to participate in its ongoing $40 billion fundraising, US tech publication The Information reported.
The talks are part of the second instalment of the fundraise, where OpenAI is looking to raise about $30 billion by December, potentially valuing the company at around $260 billion before the new investment, the report said.
Japan's SoftBank, which is leading the round, has already been actively buying shares from employees. Between March and May, it purchased around $240 million worth of shares from a small group of current and former staffers, the report said. Last year, ahead of its mega round, SoftBank had also acquired about $2 billion worth of new and existing shares.
SoftBank is expected to contribute at least three-fourths of the $30 billion instalment, leaving about $7.5 billion to be raised from new investors, according to the report. The Japanese conglomerate may take loans or sell some of its public market holdings, including its shares in Arm Holdings, to fund the investment, the report added.
OpenAI has also discussed raising at least $100 million each from Coatue Management and Founders Fund as part of the ongoing fundraise, the report said, while talks with PIF and other prospective investors are still at an early stage.
A direct investment by Saudi Arabia's PIF, which manages over $900 billion in assets, would mark a shift from its earlier exposure to US tech firms through venture funds like Andreessen Horowitz and Iconiq Capital, The Information noted.
CEO Sam Altman had earlier met with Saudi Crown Prince Mohammed bin Salman during a visit to Riyadh, which coincided with the kingdom's latest AI investment push, the report said.
Separately, in India, OpenAI is exploring a product and sales partnership with Reliance, whose portfolio includes Jio, India's largest wireless carrier, and the world's biggest oil refinery. OpenAI wants Reliance businesses to distribute or sell its AI offerings in India, which Altman said in February is now its second-largest user market, according to the report.
The scale of its capital needs is also rising sharply as ChatGPT's weekly active users exceed 500 million, up from 300 million in December. Between 2025 and 2027, OpenAI expects to spend about $35 billion on servers to support existing products, and another $55 billion on developing new models, the report added.
Completion of the round hinges on a pending corporate restructuring of its for-profit entity. While an earlier plan to convert it into a public benefit corporation was dropped, a revised restructuring still needs to be finalised by year-end. Failure to do so could see SoftBank cut its commitment and scale back the round to $20 billion, the report said.
Separately, OpenAI, SoftBank, MGX and Oracle have partnered on Stargate, a $500 billion initiative to build global AI data centres, according to The Information.

Related Articles


Mint · an hour ago
Why superintelligent AI isn't taking over anytime soon
A primary requirement for being a leader in AI these days is to be a herald of the impending arrival of our digital messiah: superintelligent AI. For Dario Amodei of Anthropic, Demis Hassabis of Google and Sam Altman of OpenAI, it isn't enough to claim that their AI is the best. All three have recently insisted that it's going to be so good, it will change the very fabric of society. Even Meta—whose chief AI scientist has been famously dismissive of this talk—wants in on the action. The company confirmed it is spending $14 billion to bring in a new leader for its AI efforts who can realize Mark Zuckerberg's dream of AI superintelligence—that is, an AI smarter than we are.

"Humanity is close to building digital superintelligence," Altman declared in an essay this week, and this will lead to "whole classes of jobs going away" as well as "a new social contract." Both will be consequences of AI-powered chatbots taking over all our white-collar jobs, while AI-powered robots assume the physical ones.

Before you get nervous about all the times you were rude to Alexa, know this: A growing cohort of researchers who build, study and use modern AI aren't buying all that talk. The title of a fresh paper from Apple says it all: "The Illusion of Thinking." In it, a half-dozen top researchers probed reasoning models—large language models that "think" about problems longer, across many steps—from the leading AI labs, including OpenAI, DeepSeek and Anthropic. They found little evidence that these are capable of reasoning anywhere close to the level their makers claim.

Generative AI can be quite useful in specific applications, and a boon to worker productivity. OpenAI claims 500 million monthly active ChatGPT users—astonishingly far reach and fast growth for a service released just 2½ years ago. But these critics argue there is a significant hazard in overestimating what it can do, and making business plans, policy decisions and investments based on pronouncements that seem increasingly disconnected from the products themselves.

Apple's paper builds on previous work from many of the same engineers, as well as notable research from both academia and other big tech companies, including Salesforce. These experiments show that today's "reasoning" AIs—hailed as the next step toward autonomous AI agents and, ultimately, superhuman intelligence—are in some cases worse at solving problems than the plain-vanilla AI chatbots that preceded them. This work also shows that whether you're using an AI chatbot or a reasoning model, all systems fail utterly at more complex tasks.

Apple's researchers found "fundamental limitations" in the models. When taking on tasks beyond a certain level of complexity, these AIs suffered "complete accuracy collapse." Similarly, engineers at Salesforce AI Research concluded that their results "underscore a significant gap between current LLM capabilities and real-world enterprise demands."

Importantly, the problems these state-of-the-art AIs couldn't handle are logic puzzles that even a precocious child could solve, with a little instruction. What's more, when you give these AIs that same kind of instruction, they can't follow it.

Apple's paper has set off a debate in tech's halls of power—Signal chats, Substack posts and X threads—pitting AI maximalists against skeptics. "People could say it's sour grapes, that Apple is just complaining because they don't have a cutting-edge model," says Josh Wolfe, co-founder of venture firm Lux Capital. "But I don't think it's a criticism so much as an empirical observation."

The reasoning methods in OpenAI's models are "already laying the foundation for agents that can use tools, make decisions, and solve harder problems," says an OpenAI spokesman. "We're continuing to push those capabilities forward."

The debate over this research begins with the implication that today's AIs aren't thinking, but instead are creating a kind of spaghetti of simple rules to follow in every situation covered by their training data. Gary Marcus, a cognitive scientist who sold an AI startup to Uber in 2016, argued in an essay that Apple's paper, along with related work, exposes flaws in today's reasoning models, suggesting they're not the dawn of human-level ability but rather a dead end. "Part of the reason the Apple study landed so strongly is that Apple did it," he says. "And I think they did it at a moment in time when people have finally started to understand this for themselves."

In areas other than coding and mathematics, the latest models aren't getting better at the rate that they once did. And the newest reasoning models actually hallucinate more than their predecessors. "The broad idea that reasoning and intelligence come with greater scale of models is probably false," says Jorge Ortiz, an associate professor of engineering at Rutgers, whose lab uses reasoning models and other cutting-edge AI to sense real-world environments. Today's models have inherent limitations that make them bad at following explicit instructions—the opposite of what you'd expect from a computer, he adds.

It's as if the industry is creating engines of free association. They're skilled at confabulation, but we're asking them to take on the roles of consistent, rule-following engineers or accountants.

That said, even those who are critical of today's AIs hasten to add that the march toward more-capable AI continues. Exposing current limitations could point the way to overcoming them, says Ortiz. For example, new training methods—giving step-by-step feedback on models' performance, adding more resources when they encounter harder problems—could help AI work through bigger problems, and make better use of conventional software.

From a business perspective, whether or not current systems can reason, they're going to generate value for users, says Wolfe. "Models keep getting better, and new approaches to AI are being developed all the time, so I wouldn't be surprised if these limitations are overcome in practice in the near future," says Ethan Mollick, a professor at the Wharton School of the University of Pennsylvania, who has studied the practical uses of AI.

Meanwhile, the true believers are undeterred. Just a decade from now, Altman wrote in his essay, "maybe we will go from solving high-energy physics one year to beginning space colonization the next year." Those willing to "plug in" to AI with direct, brain-computer interfaces will see their lives profoundly altered, he adds.

This kind of rhetoric accelerates AI adoption in every corner of our society. AI is now being used by DOGE to restructure our government, leveraged by militaries to become more lethal, and entrusted with the education of our children, often with unknown consequences. Which means that one of the biggest dangers of AI is that we overestimate its abilities, trust it more than we should—even as it's shown itself to have antisocial tendencies such as "opportunistic blackmail"—and rely on it more than is wise. In so doing, we make ourselves vulnerable to its propensity to fail when it matters most.

"Although you can use AI to generate a lot of ideas, they still require quite a bit of auditing," says Ortiz. "So for example, if you want to do your taxes, you'd want to stick with something more like TurboTax than ChatGPT."

Write to Christopher Mims at


India Today · an hour ago
Trump approves $14.9 billion US steel merger with Japan's Nippon Steel
U.S. President Donald Trump formally approved Nippon Steel's (5401.T) fraught $14.9 billion bid for U.S. Steel (X.N) on Friday, capping a tumultuous 18-month effort by the companies, beset by union opposition and two national security reviews.

Trump signed an executive order saying the tie-up could move forward if the companies signed an agreement with the Treasury Department resolving national security concerns posed by the deal. The companies then announced they had signed the agreement, fulfilling the conditions of Trump's directive and effectively garnering approval for the merger.

"We look forward to putting our commitments into action to make American steelmaking and manufacturing great again," the companies said in the statement, thanking Trump. They added the agreement includes $11 billion in new investments to be made by 2028 as well as governance, production and trade commitments. A golden share would be issued to the U.S. government, the companies added, without providing further details. It was previously reported that Nippon would invest an additional $3 billion for a new mill after 2028.

The takeover will set up the ailing American steel icon to receive the critical investment, and allow Nippon Steel to capitalize on a host of American infrastructure projects, as its foreign competitors face steel tariffs of 50%. It also absolves the Japanese firm of paying $565 million in breakup fees if the companies failed to secure approvals.

GREAT PARTNER

While many investors saw approval as likely after Trump headlined a rally on May 30 giving his vague blessing to an "investment" by Nippon Steel, which he described as a "great partner", Friday's announcement was hardly assured. Shares of U.S. Steel had dipped earlier on Friday after a Nippon Steel executive told the Japanese Nikkei newspaper that its planned takeover of U.S. Steel required "a degree of management freedom" to go ahead after Trump earlier had said the U.S. would be in control with a golden share.

The bid, first announced by Nippon Steel in December 2023, has faced opposition from the start. Both Democratic former President Joe Biden and Trump, a Republican, asserted last year that U.S. Steel should remain U.S.-owned, as they sought to woo voters ahead of the presidential election in Pennsylvania, where the company is headquartered.

Biden, in January, shortly before leaving office, blocked the deal on national security grounds, prompting lawsuits by the companies, which argued the national security review they received was biased. The Biden White House disputed that claim.

The steel companies saw a new opportunity in the Trump administration, which began on January 20 and opened a fresh 45-day national security review into the proposed merger. Trump's public comments, ranging from welcoming a simple "investment" in U.S. Steel by the Japanese firm to floating a minority stake for Nippon Steel, spurred confusion among investors.


Economic Times · 2 hours ago
Trump approves Nippon-US Steel deal, says it has resolvable national security risk
Donald Trump indicated that concerns regarding Nippon Steel's acquisition of U.S. Steel could be resolved if specific conditions set by his administration are met, leading to a rise in U.S. Steel's shares. The agreement involves significant investments and governance commitments, including a golden share for the U.S. government.

U.S. President Donald Trump said on Friday that concerns over national security risks posed by Nippon Steel's $14.9 billion bid for U.S. Steel can be resolved if the companies fulfill certain conditions that his administration has laid out, paving the way for the deal to close. Shares of U.S. Steel rose 3.5% on the news in after-the-bell trading as investors bet the deal was close to completion.

Trump, in an executive order, said conditions for resolving the national security concerns would be laid out in an agreement, without providing details. "I additionally find that the threatened impairment to the national security of the United States arising as a result of the Proposed Transaction can be adequately mitigated if the conditions set forth in section 3 of this order are met," Trump said in the order, which was released by the White House.

The companies thanked Trump in a press release, saying the agreement includes $11 billion in new investments to be made by 2028 and governance commitments including a golden share to be issued to the U.S. government. They did not detail how much control the golden share would give the government.

Shares of U.S. Steel had dipped earlier on Friday after a Nippon Steel executive told the Japanese Nikkei newspaper that its planned takeover of U.S. Steel required "a degree of management freedom" to go ahead after Trump earlier had said the U.S. would be in control with a golden share.

The bid, first announced by Nippon Steel in December 2023, has faced opposition from the start. Both Democratic former President Joe Biden and Trump, a Republican, asserted last year that U.S. Steel should remain U.S.-owned, as they sought to woo voters ahead of the presidential election in Pennsylvania, where the company is headquartered.

Biden, in January, shortly before leaving office, blocked the deal on national security grounds, prompting lawsuits by the companies, which argued the national security review they received was biased. The Biden White House disputed that claim.

The steel companies saw a new opportunity in the Trump administration, which began on January 20 and opened a fresh 45-day national security review into the proposed merger. Trump's public comments, ranging from welcoming a simple "investment" in U.S. Steel by the Japanese firm to floating a minority stake for Nippon Steel, spurred confusion among investors.

At a rally in Pennsylvania on May 30, Trump lauded an agreement between the companies and said Nippon Steel would make a "great partner" for U.S. Steel. But he later told reporters the deal still lacked his final approval, leaving unresolved whether he would allow Nippon Steel to take control.

U.S. Steel and the Trump administration asked a U.S. appeals court on June 5 for an eight-day extension of a pause in litigation to give them more time to reach a deal for the Japanese firm. The pause expires Friday, but could be extended. June 18 is the expiration date of the current acquisition contract between Nippon Steel and U.S. Steel, but the firms could agree to postpone that date.