
OpenAI Continues To Be Overseen, Controlled By Nonprofit: Sam Altman

NDTV | 06-05-2025

New Delhi:
Sam Altman-run OpenAI will continue to be overseen and controlled by its nonprofit, while its for-profit LLC transitions to a public benefit corporation (PBC), giving the nonprofit greater resources to support its mission, the company has announced.
OpenAI was founded as a nonprofit, and is today overseen and controlled by that nonprofit.
4 facts about our structure:
- OpenAI will continue to be controlled by the current nonprofit
- Our existing for-profit will become a Public Benefit Corporation
- Nonprofit will control & be a significant owner of the PBC
- Nonprofit & PBC will continue to have the same mission
— OpenAI (@OpenAI) May 5, 2025
"Going forward, it will continue to be overseen and controlled by that nonprofit. Our for-profit LLC, which has been under the nonprofit since 2019, will transition to a Public Benefit Corporation (PBC)-a purpose-driven company structure that has to consider the interests of both shareholders and the mission," the ChatGPT maker said in a statement.
The nonprofit will control and also be a large shareholder of the PBC.
"Our mission remains the same, and the PBC will have the same mission. We made the decision for the nonprofit to retain control of OpenAI after hearing from civic leaders and engaging in constructive dialogue with the offices of the Attorney General of Delaware and the Attorney General of California," said OpenAI Board chair Bret Taylor.
Mr Altman said that as the PBC grows, the nonprofit's resources will grow as well, so it can do even more.
"We're excited to soon get recommendations from our nonprofit commission on how we can help make sure AI benefits everyone-not just a few. Their ideas will focus on how our nonprofit work can support a more democratic AI future, and have real impact in areas like health, education, public services, and scientific discovery," he wrote in a letter to employees.
"We want our nonprofit to be the largest and most effective nonprofit in history that will be focused on using AI to enable the highest-leverage outcomes for people," he added.
He said this sets them up to continue to make rapid, safe progress and to put great AI in the hands of everyone.
"Creating AGI is our brick in the path of human progress; we can't wait to see what bricks you will add next," said Mr Altman.


Related Articles

Cisco eyes AI-ready data centres worldwide, says Chuck Robbins

Time of India


SAN DIEGO: Networking hardware and security solutions provider Cisco said on Tuesday that it aims to build artificial intelligence (AI)-centric data centres and future-proof workplaces for enterprise customers. "We want to help you (enterprises) build AI-ready data centers. We want to help you future-proof your workplace. And we want to do that with an underlying layer of digital resilience," said Chuck Robbins, chairman and chief executive of Cisco Systems, speaking at the Cisco Live 2025 annual conference.

The US multinational said that it would transform data centres to power AI workloads anywhere. "We need seamless operations, observability, and security everywhere. We want to help you (enterprises) do this with technologies like Secure AI Factory that we have announced with Nvidia," Robbins said.

Cisco has doubled down on its partnership with Nvidia, headed by Jensen Huang. The two companies give enterprises access to a secure and scalable platform to speed up the creation and deployment of AI solutions that drive business value. Founded by Leonard Bosack and Sandy Lerner in 1984, Cisco said it could offer enterprise customers "massive innovation", enabling them to modernise and reimagine data centres with solutions such as Expanded AI Pods and the Unified Nexus Dashboard. "So we need a huge amount of security into the underlying network in order to be able to apply network services. We believe that we have a meaningful role to play."

The top executive said that Cisco's combination of networking and security offerings is a key strength that its rivals lack. The San Jose-based multinational said it is foundational to data centre build-outs worldwide, offering "critical infrastructure" for the AI era. Cisco has also partnered with OpenAI and is working with its newly released Codex software engineering agent, which lets network engineers access tools for writing, testing and building code. Robbins further said that the fusion of security with the network would be critical for agentic AI.

Last month, Cisco and G42 extended their strategic partnership to drive AI innovation across the United Arab Emirates (UAE) and the United States. The company has also collaborated with the AI Infrastructure Partnership (AIP), led by BlackRock, Global Infrastructure Partners (GIP), MGX, Microsoft and Nvidia.

(The author is attending Cisco Live 2025 at the invitation of Cisco Systems)

France's Mistral unveils its first 'reasoning' AI model

Time of India


French artificial intelligence startup Mistral on Tuesday announced a so-called "reasoning" model it said was capable of working through complex problems, following in the footsteps of its top US rivals.

Available immediately on the company's platforms as well as the AI platform Hugging Face, the Magistral "is designed to think things through -- in ways familiar to us," Mistral said in a blog post. The AI was designed for "general purpose use requiring longer thought processing and better accuracy" than its previous generations of large language models (LLMs), the company said.

Like other "reasoning" models, Magistral displays a so-called "chain of thought" that purports to show how the system is approaching a problem given to it in natural language. This means users in fields like law, finance, healthcare and government would receive "traceable reasoning that meets compliance requirements" as "every conclusion can be traced back through its logical steps", Mistral said. The company's claim gestures towards the challenge of so-called "interpretability" -- working out how AI systems arrive at a given response. Since they are "trained" on gigantic corpuses of data rather than directly programmed by humans, much behaviour by AI systems remains impenetrable even to their creators. Mistral also vaunted improved performance in software coding and creative writing by Magistral.

Competing "reasoning" models include OpenAI's o3, some versions of Google's Gemini and Anthropic's Claude, and Chinese challenger DeepSeek's R1. The idea that AIs can "reason" was called into question this week by Apple, the tech giant that has struggled to match achievements by leaders in the field. Several Apple researchers published a paper called "The Illusion of Thinking" that claimed to find "fundamental limitations in current models" which "fail to develop generalizable reasoning capabilities beyond certain complexity thresholds".

Data protection: A privacy shield shouldn't end up as a barrier

Mint


The internet's interface with users has been jostled over all the way from its early days of web browsers to today's handy mobile apps. Now tools of artificial intelligence (AI) have entered the fray, just as India's online privacy law is about to kick in. Together, they could alter the dynamics of internet usage.

First, consider the rise of AI chatbots since the launch of OpenAI's ChatGPT in late 2022. According to Mary Meeker's report on AI trends cited by Plain Facts in Mint, India has more users of ChatGPT's mobile app than any other country; April data shows that we account for 13.5% of this chatbot's global user base, with the US share of 8.9% placing its home market second. The concept has clearly caught on, which explains why search engines like Google and Bing now offer similar AI features. Some seekers of information have begun to see AI-generated snapshots as sufficient, which means they often do not visit source websites even when chatbots present them a set of links. This suggests significant tolerance of AI's hallucinatory risks.

Meanwhile, all 'data fiduciaries' that operate in India, such as sites that ask for our identity and other details, may soon have to meet new user-consent norms under the Digital Personal Data Protection Act of 2023 (once final rules are notified). Not only must they obtain our explicit opt-in consent for each purpose they plan to use our data for, be it to set the site right for us, pool our files in for a meta-data analysis, aim personalized ads at us, or whatever, they must also let us flip past choices and delete data accordingly. If the mandatory protocols for all this prove too clunky, they could get in the way of website access just as AI begins to impact web traffic.

To be sure, India's privacy law offers a valuable shield against the misuse of our personal data. The battle for this law was hard-fought, and while it's imperfect, it serves as a vital piece of armoury. Its rules need to be implemented without ado.

The worry lies elsewhere: in user behaviour. Although online privacy has been a rallying cry, users of apps and sites are reluctant to scroll through 'terms and conditions' when it's so much easier to click blindly on an 'I agree' box and proceed. In part, the privacy law is designed to solve this problem of signing away rights without realizing what it might imply. Yet, while the law clearly aims to ensure that users know exactly what they are getting into, whether they'll view point-by-point consent tick-boxes as a hurdle is yet to be tested.

This assumes salience in the context of behavioural patterns that place a premium on speed. Web users who want to find out something fast, for example, are often content to consult a chatbot and save time on a deeper delve. If visiting a new site involves a consent rigmarole, even more web traffic may flock towards chatbots and stay there. Conceivably, this trend may concentrate power in a handful of AI market leaders that own a popular interface. It's not just about information. Versatile tools of agentic AI offering to execute odd tasks for us could come next to impact websites driven by interactivity.

While it is unclear if AI bots could fill up online consent forms on our behalf (via, say, a digital power-of-attorney device), it's clear that any such concentration of power would pose antitrust risks. To resist AI dominance, sundry websites must keep users engaged directly. And to that end, they will need to keep their privacy protocols as user-friendly as possible. A shield should not end up as a barrier.
