OpenAI eyes India, Saudi, UAE backing in $40 billion AI funding round

ChatGPT creator OpenAI is in discussions with Saudi Arabia's Public Investment Fund (PIF), India's Reliance Industries, and existing investor MGX from the United Arab Emirates as part of its ongoing $40 billion fundraising efforts, according to a report by The Information on Wednesday.
The report said these investors could each contribute hundreds of millions of dollars to the round.
Stargate project and next-gen AI models drive funding push
The fundraising drive is aimed at supporting OpenAI's next phase of model development and the ambitious infrastructure initiative dubbed 'Stargate'. The effort is being led by Japanese investment giant SoftBank.
Earlier this year, OpenAI CEO Sam Altman met Union Minister Ashwini Vaishnaw to discuss India's plans for a low-cost AI ecosystem. Altman then reportedly planned to visit the UAE for further discussions with Abu Dhabi-based MGX about potential investments, Reuters reported.
More global investors approached, $17 billion targeted by 2027
The Information report also noted that OpenAI has approached other potential investors, such as Coatue Management and Founders Fund, seeking at least $100 million from each. The company also projects it will secure a further $17 billion in funding by 2027.
India poised to lead global AI stack, says Jason Kwon
India is fast becoming a major force in the global artificial intelligence (AI) landscape, according to OpenAI's Chief Strategy Officer, Jason Kwon. He highlighted that India now has the second-largest user base of ChatGPT and is among the top 10 countries worldwide in terms of developers building on OpenAI's APIs.
'With the vast and growing pool of AI talent, a vibrant entrepreneurial spirit, and strong government support to expand the critical infrastructure, India is poised to succeed at all layers of the AI stack,' Kwon said during his visit to India last week.
OpenAI Academy India expands AI education and skills
Marking a significant milestone in its international outreach, OpenAI launched the OpenAI Academy India — the company's first overseas expansion of its educational initiative. The programme aims to support the IndiaAI Mission's 'FutureSkills' pillar by widening access to AI training across multiple segments including students, developers, educators, civil servants, nonprofit leaders, and small business owners, the company announced.
New API grants fuel AI tools for social good in India
Last week, OpenAI also announced a new round of API credit grants for 11 Indian non-profit organisations. The beneficiaries include Rocket Learning, Noora Health, and Udhyam, among others. The initiative is aimed at fostering the development of AI-powered tools that can drive social good.
'Over the last year, the India cohort has developed and deployed AI-powered applications across sectors, including healthcare, education, agriculture, disability inclusion, and gender equity, creating a tangible and measurable impact in underserved communities,' the company said in a statement.
AI-powered apps transform early learning and patient care
As part of OpenAI's global support programme under the Academy, selected organisations receive hands-on technical assistance, cohort-based learning opportunities, and early access to OpenAI's tools.
In India, Rocket Learning uses generative AI and WhatsApp to deliver personalised early learning experiences for parents and daycare providers. Noora Health works to improve patient outcomes by sharing life-saving information with caregivers and families.


Related Articles

AI chatbots need more books to learn from, so more libraries are opening their stacks

The Hindu | 36 minutes ago

Everything ever said on the internet was just the start of teaching artificial intelligence about humanity. Tech companies are now tapping into an older repository of knowledge: the library stacks.

Nearly one million books published as early as the 15th century, in 254 languages, are part of a Harvard University collection being released to AI researchers Thursday. Also coming soon are troves of old newspapers and government documents held by Boston's public library.

Cracking open the vaults to centuries-old tomes could be a data bonanza for tech companies battling lawsuits from living novelists, visual artists and others whose creative works have been scooped up without their consent to train AI chatbots.

'It is a prudent decision to start with public domain data because that's less controversial right now than content that's still under copyright,' said Burton Davis, a deputy general counsel at Microsoft. Davis said libraries also hold 'significant amounts of interesting cultural, historical and language data' that's missing from the past few decades of online commentary that AI chatbots have mostly learned from.

Supported by 'unrestricted gifts' from Microsoft and ChatGPT maker OpenAI, the Harvard-based Institutional Data Initiative is working with libraries around the world on how to make their historic collections AI-ready in a way that also benefits libraries and the communities they serve.

'We're trying to move some of the power from this current AI moment back to these institutions,' said Aristana Scourtas, who manages research at Harvard Law School's Library Innovation Lab. 'Librarians have always been the stewards of data and the stewards of information.'

Harvard's newly released dataset, Institutional Books 1.0, contains more than 394 million scanned pages of paper. One of the earlier works is from the 1400s: a Korean painter's handwritten thoughts about cultivating flowers and trees. The largest concentration of works is from the 19th century, on subjects such as literature, philosophy, law and agriculture, all of it meticulously preserved and organised by generations of librarians.

It promises to be a boon for AI developers trying to improve the accuracy and reliability of their systems. 'A lot of the data that's been used in AI training has not come from original sources,' said the data initiative's executive director, Greg Leppert, who is also chief technologist at Harvard's Berkman Klein Center for Internet & Society. This book collection goes 'all the way back to the physical copy that was scanned by the institutions that actually collected those items,' he said.

Before ChatGPT sparked a commercial AI frenzy, most AI researchers didn't think much about the provenance of the passages of text they pulled from Wikipedia, from social media forums like Reddit and sometimes from deep repositories of pirated books. They just needed lots of what computer scientists call tokens: units of data, each of which can represent a piece of a word.

Harvard's new AI training collection has an estimated 242 billion tokens, an amount that's hard for humans to fathom, yet still just a drop of what's being fed into the most advanced AI systems. Facebook parent company Meta, for instance, has said the latest version of its AI large language model was trained on more than 30 trillion tokens pulled from text, images and videos. Meta is also battling a lawsuit from comedian Sarah Silverman and other published authors who accuse the company of stealing their books from 'shadow libraries' of pirated works.

Now, with some reservations, the real libraries are standing up. OpenAI, which is also fighting a string of copyright lawsuits, donated $50 million this year to a group of research institutions including Oxford University's 400-year-old Bodleian Library, which is digitising rare texts and using AI to help transcribe them.

When the company first reached out to the Boston Public Library, one of the biggest in the U.S., the library made clear that any information it digitised would be for everyone, said Jessica Chapel, its chief of digital and online services. 'OpenAI had this interest in massive amounts of training data. We have an interest in massive amounts of digital objects. So this is kind of just a case that things are aligning,' Chapel said.

Digitisation is expensive. It's been painstaking work, for instance, for Boston's library to scan and curate dozens of New England's French-language newspapers that were widely read in the late 19th and early 20th century by Canadian immigrant communities from Quebec. Now that such text is of use as training data, it helps bankroll projects that librarians want to do anyway. 'We've been very clear that, "Hey, we're a public library,"' Chapel said. 'Our collections are held for public use, and anything we digitised as part of this project will be made public.'

Harvard's collection was already digitised starting in 2006 for another tech giant, Google, in its controversial project to create a searchable online library of more than 20 million books. Google spent years beating back legal challenges from authors to its online book library, which included many newer and copyrighted works. The dispute was finally settled in 2016 when the U.S. Supreme Court let stand lower court rulings that rejected copyright infringement claims.

Now, for the first time, Google has worked with Harvard to retrieve public domain volumes from Google Books and clear the way for their release to AI developers. Copyright protections in the U.S. typically last for 95 years, and longer for sound recordings.

How useful all of this will be for the next generation of AI tools remains to be seen as the data gets shared Thursday on the Hugging Face platform, which hosts datasets and open-source AI models that anyone can download.

The book collection is more linguistically diverse than typical AI data sources. Fewer than half the volumes are in English, though European languages still dominate, particularly German, French, Italian, Spanish and Latin.

A book collection steeped in 19th century thought could also be 'immensely critical' for the tech industry's efforts to build AI agents that can plan and reason as well as humans, Leppert said. 'At a university, you have a lot of pedagogy around what it means to reason,' he said. 'You have a lot of scientific information about how to run processes and how to run analyses.'

At the same time, there's also plenty of outdated data, from debunked scientific and medical theories to racist narratives. 'When you're dealing with such a large data set, there are some tricky issues around harmful content and language,' said Kristi Mukk, a coordinator at Harvard's Library Innovation Lab. She said the initiative is trying to provide guidance on mitigating the risks of using the data, to 'help them make their own informed decisions and use AI responsibly.'

AMD unveils AI server as OpenAI taps its newest chips

The Hindu | 37 minutes ago

Advanced Micro Devices CEO Lisa Su on Thursday unveiled a new artificial intelligence server for 2026 that aims to challenge Nvidia's flagship offerings, as OpenAI's CEO said the ChatGPT creator would adopt AMD's latest chips.

Su took the stage at a developer conference in San Jose, California, called "Advancing AI" to discuss the MI350 series and MI400 series AI chips that she said would compete with Nvidia's Blackwell line of processors. The MI400 series of chips will be the basis of a new server called "Helios" that AMD plans to release next year.

The move comes as the competition between Nvidia and other AI chip firms has shifted away from selling individual chips to selling servers packed with scores or even hundreds of processors, woven together with networking chips from the same company. The AMD Helios servers will have 72 of AMD's MI400 series chips, making them comparable to Nvidia's current NVL72 servers, AMD executives said.

During its keynote presentation, AMD said that many aspects of the Helios servers, such as the networking standards, would be made openly available and shared with competitors such as Intel. The move was a direct swipe at market leader Nvidia, which uses proprietary technology called NVLink to string together its chips but has recently started to license that technology as pressure mounts from rivals.

"The future of AI is not going to be built by any one company or in a closed ecosystem. It's going to be shaped by open collaboration across the industry," Su said.

Su was joined onstage by OpenAI's Sam Altman. The ChatGPT creator is working with AMD on the firm's MI450 chips to improve their design for AI work. "Our infrastructure ramp-up over the last year, and what we're looking at over the next year, have just been a crazy, crazy thing to watch," Altman said.

During the event, executives from Elon Musk-owned xAI, Meta Platforms and Oracle took to the stage to discuss their respective uses of AMD processors. Crusoe, a cloud provider that specializes in AI, told Reuters it is planning to buy $400 million of AMD's new chips.

Su reiterated the company's product plans for the next year, which will roughly match the annual release schedule that Nvidia began with its Blackwell chips. AMD shares ended 2.2% lower after the company's announcement, and Kinngai Chan, an analyst at Summit Insights, said the chips announced on Thursday were not likely to immediately change AMD's competitive position.

AMD has struggled to siphon off a portion of the quickly growing market for AI chips from the dominant Nvidia, but the company has made a concerted effort to improve its software and produce a line of chips that rival Nvidia's performance. AMD completed the acquisition of server builder ZT Systems in March, and as a result is expected to launch new complete AI systems, similar to several of the server-rack-sized products Nvidia produces.

Santa Clara, California-based AMD has made a series of small acquisitions in recent weeks and has added talent to its chip design and AI software teams. At the event, Su said the company has made 25 strategic investments in the past year related to its AI plans. Last week, AMD hired the team from chip startup Untether AI, and on Wednesday the company said it had hired several employees from generative AI startup Lamini, including the co-founder and CEO.

AMD's software, called ROCm, has struggled to gain traction against Nvidia's CUDA, which is seen by some industry insiders as a key part of protecting Nvidia's dominance. When AMD reported earnings in May, Su said that despite increasingly aggressive curbs on AI chip exports to China, AMD still expected strong double-digit growth from AI chips.

Oracle shares hit record high as AI cloud demand propels revenue forecast

The Hindu | 37 minutes ago

Oracle shares surged 14% to breach the $200 mark for the first time on Thursday, after the company raised its annual revenue forecast on strong demand for its AI-related cloud services.

Confidence in the software sector remained strong despite geopolitical tensions, even as analysts warn that U.S. President Donald Trump's tariffs could undermine Big Tech's AI investments. Earlier this year, Oracle, whose cloud offerings help companies build their AI infrastructure, announced a joint venture called Stargate to deliver large-scale computing capabilities to OpenAI.

"Oracle's once-stodgy image levels up to 'cloud-native mage,' and the competitive map now looks less like a classic three-player real-time strategy and more like a battle royale with everyone dropping in, looking for compute loot," said Michael Ashley Schulman, partner at Running Point Capital Advisors.

Oracle expects total revenue to be at least $67 billion for fiscal 2026, CEO Safra Catz said on a post-earnings call. The Texas-based company's quarterly cloud services revenue rose 14% to $11.70 billion, and its overall revenue of $15.90 billion beat estimates of $15.59 billion. At least nine brokerages have raised their price targets since the results.

Oracle trades at a forward price-to-earnings ratio of 25.86, compared with rivals Microsoft at 31.34 and Amazon at 31.80, according to data compiled by LSEG. Microsoft's stock has gained 12.16% so far this year, while Amazon's has fallen 2.8%.

"ORCL has entered an entirely new wave of enterprise popularity that it has not seen since the Internet era in the late 90s," analysts at Piper Sandler said. Shares of the company were last trading at $201.38.
