Sam Altman unveils OpenAI nonprofit restructuring plans to align AGI with global benefits; 'We want to create a brain for the world…'


Time of India | 06-05-2025

OpenAI is embarking on a significant corporate overhaul to bring its structure more closely in line with its goal of developing artificial general intelligence (AGI) for the good of all human beings. Under CEO Sam Altman's leadership, the firm is converting its for-profit arm into a Public Benefit Corporation (PBC).
This reorganisation is a turning point for OpenAI, as it indicates a more profound commitment to building AI responsibly and at scale. The move reflects the company's ambition to ensure AGI's benefits are widespread, advancing technology while prioritising ethical and equitable applications that serve global society.
Sam Altman's vision for 'Brain for the world' drives OpenAI's shift to public benefit model
OpenAI Inc. will be among the largest shareholders of the newly formed PBC. The nonprofit's exact stake will be decided with the help of independent advisers.
A large percentage of the proceeds from the for-profit branch will go to support AI projects in the public interest, continuing the company's focus on socially responsible technologies.
One of the key drivers behind this corporate restructuring is the growing global demand for AI technologies, such as OpenAI's flagship product,
ChatGPT
. Sam Altman explained that the company currently cannot meet the demand for AI services at scale.
As AI systems become more advanced, the appetite for these tools is only expected to grow.
"We want to create a brain for the world and make it extremely easy for people to use for anything they want," Altman wrote in a letter to employees. The new PBC model will allow OpenAI to scale more effectively, making its AI tools accessible to more people, meeting marketplace demand, and sustaining ongoing development.
The shift to a PBC is also in line with OpenAI's vision of creating 'democratic AI.'
This concept revolves around the idea of making AI tools accessible, safe, and aligned with human needs. By structuring the company in this way, OpenAI aims to prioritize the development of AI technologies that can be used for the greater good, while ensuring that the tools are distributed equitably across society.
What is the 'Public Benefit Corporation' structure adopted by OpenAI?
OpenAI has traditionally been run under a complicated capped-profit model, which limited the level of profit that the business could make.
In a bold step, the business is now converting its for-profit arm to a conventional equity model under the PBC framework. This model, already employed by businesses such as Anthropic and xAI, is meant to balance profitability with societal impact.
The restructuring marks a significant change in how OpenAI will operate, allowing the company to scale more efficiently while continuing to prioritise developing AI for the common good.
While the PBC model permits growth through equity, the nonprofit parent, OpenAI Inc., will retain full control of the for-profit subsidiary.
OpenAI nonprofit launches commission to tackle global challenges with AI
In addition to the corporate restructuring, OpenAI Inc. is also taking steps to broaden its societal impact. The nonprofit arm will establish a new commission to explore how AI can be applied to critical sectors such as healthcare, education, science, and public services.
This move underscores OpenAI's commitment to responding to civic and regulatory concerns, ensuring that AI deployment is both equitable and responsible.
The commission will work towards developing AI solutions to priority global problems, spreading the benefits of AGI as far and wide as possible. Sam Altman underlined the significance of this undertaking by announcing that OpenAI's nonprofit division will work to be "the largest and best in history" at applying AI to the highest-leverage outcomes for humanity.
OpenAI restructures to ensure AGI benefits all of humanity
Despite the corporate restructuring, OpenAI's core mission remains unchanged: to ensure that artificial general intelligence benefits everyone. The transition to a PBC is seen by Altman as a necessary step in achieving this ambitious goal. 'Creating AGI is our brick in the path of human progress,' Altman wrote, emphasizing that OpenAI views its work as foundational to advancing humanity as a whole.
The company remains focused on long-term objectives, with a clear commitment to building AGI that can solve complex global problems and drive human progress.
As the organization evolves, Altman is optimistic about the potential contributions of OpenAI's team and the broader AI community to this vision.
OpenAI's restructuring signals a bold new chapter in the development of AGI. With the creation of a PBC and the growth of its nonprofit activities, OpenAI is more able to scale its business, address increasing demand, and ensure that AI technologies are applied in a way that optimizes social value.
As the firm progresses toward AGI, the world will be eager to observe how its new corporate form informs the development of artificial intelligence.


Related Articles

France's Mistral unveils its first 'reasoning' AI model

Time of India | 19 minutes ago

French artificial intelligence startup Mistral on Tuesday announced a so-called "reasoning" model it said was capable of working through complex problems, following in the footsteps of top US rivals.

Available immediately on the company's platforms as well as the AI platform Hugging Face, the Magistral model "is designed to think things through -- in ways familiar to us," Mistral said in a blog post. The AI was designed for "general purpose use requiring longer thought processing and better accuracy" than its previous generations of large language models (LLMs), the company said.

Like other "reasoning" models, Magistral displays a so-called "chain of thought" that purports to show how the system is approaching a problem given to it in natural language. This means users in fields like law, finance, healthcare and government would receive "traceable reasoning that meets compliance requirements" as "every conclusion can be traced back through its logical steps", Mistral said.

The company's claim gestures towards the challenge of so-called "interpretability" -- working out how AI systems arrive at a given response. Since they are "trained" on gigantic corpuses of data rather than directly programmed by humans, much behaviour by AI systems remains impenetrable even to their creators. Mistral also vaunted improved performance in software coding and creative writing by Magistral.

Competing "reasoning" models include OpenAI's o3, some versions of Google's Gemini and Anthropic's Claude, and Chinese challenger DeepSeek's R1.

The idea that AIs can "reason" was called into question this week by Apple -- the tech giant that has struggled to match achievements by leaders in the field. Several Apple researchers published a paper called "The Illusion of Thinking" that claimed to find "fundamental limitations in current models" which "fail to develop generalizable reasoning capabilities beyond certain complexity thresholds".

Meta to pay nearly $15 billion for Scale AI stake: Report

The Hindu | 22 minutes ago

Meta Platforms has agreed to take a 49% stake in artificial intelligence startup Scale AI for $14.8 billion, The Information reported on Tuesday, citing two people familiar with the matter.

Founded in 2016, Scale AI provides vast amounts of labeled data or curated training data, which is crucial for developing sophisticated tools such as OpenAI's ChatGPT. The deal, which has not been finalised yet, appears to be beneficial for Scale AI's investors, including Accel, Index Ventures, Founders Fund and Greenoaks Capital, as well as its current and former employees, the report said. Meta, Scale AI and the startup's investors did not immediately respond to Reuters' requests for comment.

As part of the deal, Scale AI CEO Alexandr Wang will take a top position inside Meta, leading a new "superintelligence" lab, according to the report. Meta CEO Mark Zuckerberg has been actively recruiting top AI researchers to boost the company's AI efforts, the report said. The company is fighting the perception that it may have fallen behind in the AI race after its initial set of Llama 4 large language models released in April fell short of performance expectations. Meta delayed the release of its flagship "Behemoth" AI model due to concerns about its capabilities, the Wall Street Journal reported last month.

The company is also facing antitrust concerns related to its acquisitions of Instagram and WhatsApp. According to The Information report, the structure for the potential deal with Scale AI could be designed to avoid more regulatory scrutiny.

Scale AI was valued at $13.8 billion in a funding round last spring. It generated about $870 million in revenue in 2024 and expects more than $2 billion this year, the report said. The company counts AI firms OpenAI and Cohere as well as tech giants Microsoft, Meta and Cisco Systems among its customers, according to its website.

Data protection: A privacy shield shouldn't end up as a barrier

Mint | 25 minutes ago

The internet's interface with users has been jostled over all the way from its early days of web browsers to today's handy mobile apps. Now tools of artificial intelligence (AI) have entered the fray, just as India's online privacy law is about to kick in. Together, they could alter the dynamics of internet usage.

First, consider the rise of AI chatbots since the launch of OpenAI's ChatGPT in late 2022. According to Mary Meeker's report on AI trends cited by Plain Facts in Mint, India has more users of ChatGPT's mobile app than any other country; April data shows that we account for 13.5% of this chatbot's global user base, with the US share of 8.9% placing its home market second. The concept has clearly caught on, which explains why search engines like Google and Bing now offer similar AI features. Some seekers of information have begun to see AI-generated snapshots as sufficient, which means they often do not visit source websites even when chatbots present them a set of links. This suggests significant tolerance of AI's hallucinatory risks.

Meanwhile, all 'data fiduciaries' that operate in India, such as sites that ask for our identity and other details, may soon have to meet new user-consent norms under the Digital Personal Data Protection Act of 2023 (once final rules are notified). Not only must they obtain our explicit opt-in consent for each purpose they plan to use our data for—be it to set the site right for us, pool our files in for a meta-data analysis, aim personalized ads at us, or whatever—they must also let us reverse past choices and have our data deleted accordingly. If the mandatory protocols for all this prove too clunky, they could get in the way of website access just as AI begins to impact web traffic.

To be sure, India's privacy law offers a valuable shield against the misuse of our personal data. The battle for this law was hard-fought, and while it's imperfect, it serves as a vital piece of armoury. Its rules need to be implemented without ado.

The worry lies elsewhere—in user behaviour. Although online privacy has been a rallying cry, users of apps and sites are reluctant to scroll through 'terms and conditions' when it's so much easier to click blindly on an 'I agree' box and proceed. In part, the privacy law is designed to solve this problem of signing away rights without realizing what it might imply. Yet, while the law clearly aims to ensure that users know exactly what they are getting into, whether they'll view point-by-point consent tick-boxes as a hurdle is yet to be tested.

This assumes salience in the context of behavioural patterns that place a premium on speed. Web users who want to find out something fast, for example, are often content to consult a chatbot and save time on a deeper delve. If visiting a new site involves a consent rigmarole, even more web traffic may flock towards chatbots and stay there. Conceivably, this trend may concentrate power in a handful of AI market leaders that own a popular interface.

It's not just about information. Versatile tools of agentic AI offering to execute odd tasks for us could come next to impact websites driven by interactivity. While it is unclear if AI bots could fill up online consent forms on our behalf (via, say, a digital power-of-attorney device), it's clear that any such concentration of power would pose antitrust risks. To resist AI dominance, sundry websites must keep users engaged directly. And to that end, they will need to keep their privacy protocols as user-friendly as possible. A shield should not end up as a barrier.
