OpenAI Can Stop Pretending
OpenAI is a strange company for strange times. Valued at $300 billion—roughly the same as seven Fords or one and a half PepsiCos—the AI start-up has an era-defining product in ChatGPT and is racing to be the first to build superintelligent machines. The company is also, to the apparent frustration of its CEO Sam Altman, beholden to its nonprofit status.
When OpenAI was founded in 2015, it was meant to be a research lab that would work toward the goal of AI that is 'safe' and 'benefits all of humanity.' There wasn't supposed to be any pressure—or desire, really—to make money. Later, in 2019, OpenAI created a for-profit subsidiary to better attract investors—the types of people who might otherwise turn to the less scrupulous corporations that dot Silicon Valley. But even then, that part of the organization was under the nonprofit side's control. At the time, OpenAI had released no consumer products and capped how much money its investors could make.
Then came ChatGPT. OpenAI's leadership had intended for the bot to provide insight into how people would use AI without any particular hope for widespread adoption. But ChatGPT became a hit, kicking 'off a growth curve like nothing we have ever seen,' as Altman wrote in an essay this past January. The product was so alluring that the entire tech industry seemed to pivot overnight into an AI arms race. Now, two and a half years since the chatbot's release, Altman says some half a billion people use the program each week, and he is chasing that success with new features and products—for shopping, coding, health care, finance, and seemingly any other industry imaginable. OpenAI is behaving like a typical business, because its rivals are typical businesses, and massive ones at that: Google and Meta, among others.
Now 2015 feels like a very long time ago, and the charitable origins have turned into a ball and chain for OpenAI. Last December, after facing concerns from potential investors that pouring money into the company wouldn't pay off because of the nonprofit mission and complicated governance structure, the organization announced plans to change that: OpenAI was seeking to transition to a for-profit. The company argued that this was necessary to meet the tremendous costs of building advanced AI models. A nonprofit arm would still exist, though it would separately pursue 'charitable initiatives'—and it would not have any say over the actions of the for-profit, which would convert into a public-benefit corporation, or PBC. Corporate backers appeared satisfied: In March, the Japanese firm SoftBank conditioned billions of dollars in investments on OpenAI changing its structure.
Resistance came as swiftly as the new funding. Elon Musk—a co-founder of OpenAI who has since created his own rival firm, xAI, and seems to take every opportunity to undermine Altman—wrote on X that OpenAI 'was funded as an open source, nonprofit, but has become a closed source, profit-maximizer.' He had already sued the company for abandoning its founding mission in favor of financial gain, and claimed that the December proposal was further proof. Many unlikely allies emerged soon after. Attorneys general in multiple states, nonprofit groups, former OpenAI employees, outside AI experts, economists, lawyers, and three Nobel laureates have all raised concerns about the pivot, even petitioning to submit briefs in Musk's lawsuit.
OpenAI backtracked, announcing a new plan earlier this month that would have the nonprofit remain in charge. Steve Sharpe, a spokesperson for OpenAI, told me over email that the new proposed structure 'puts us on the best path to' build a technology 'that could become one of the most powerful and beneficial tools in human history.' (The Atlantic entered into a corporate partnership with OpenAI in 2024.)
Yet OpenAI's pursuit of industry-wide dominance shows no real signs of having hit a roadblock. The company has a close relationship with the Trump administration and is leading perhaps the biggest AI infrastructure buildout in history. Just this month, OpenAI announced a partnership with the United Arab Emirates and an expansion into personal gadgets—a forthcoming 'family of devices' developed with Jony Ive, former chief design officer at Apple. For-profit or not, the future of AI still appears to be very much in Altman's hands.
Why all the worry about corporate structure anyway? Governance, boardroom processes, legal arcana—these things are not what sci-fi dreams are made of. Yet those concerned with the societal dangers that generative AI, and thus OpenAI, pose feel these matters are of profound importance. The still more powerful artificial 'general' intelligence, or AGI, that OpenAI and its competitors are chasing could theoretically cause mass unemployment, worsen the spread of misinformation, and violate all sorts of privacy laws. In the highest-flung doomsday scenarios, the technology brings about civilizational collapse. Altman has expressed these concerns himself—and so OpenAI's 2019 structure, which gave the nonprofit final say over the for-profit's actions, was meant to guide the company toward building the technology responsibly instead of rushing to release new AI products, sell subscriptions, and stay ahead of competitors.
'OpenAI's nonprofit mission, together with the legal structures committing it to that mission, were a big part of my decision to join and remain at the company,' Jacob Hilton, a former OpenAI employee who contributed to ChatGPT, among other projects, told me. In April, Hilton and a number of his former colleagues, represented by the Harvard law professor Lawrence Lessig, wrote a letter to the court hearing Musk's lawsuit, arguing that a large part of OpenAI's success depended on its commitment to safety and the benefit of humanity. To renege on, or at least minimize, that mission was a betrayal.
The concerns extend well beyond former employees. Geoffrey Hinton, a computer scientist at the University of Toronto who last year received a Nobel Prize for his AI research, told me that OpenAI's original structure would better help 'prevent a super intelligent AI from ever wanting to take over.' Hinton is one of the Nobel laureates who have publicly opposed the tech company's for-profit shift, alongside the economists Joseph Stiglitz and Oliver Hart. The three academics, along with a number of influential lawyers, economists, and AI experts, as well as several former OpenAI employees including Hilton, signed an open letter in April urging the attorneys general in Delaware and California—where the company's nonprofit was incorporated and where the company is headquartered, respectively—to closely investigate the December proposal. According to its most recent tax filing, OpenAI is intended to build AGI 'that safely benefits humanity, unconstrained by a need to generate financial return,' so disempowering the nonprofit seemed, to the signatories, self-evidently contradictory.
In its initial proposal to transition to a for-profit, OpenAI still would have had some accountability as a public-benefit corporation: A PBC legally has to try to make profits for shareholders alongside pursuing a designated 'public benefit' (in this case, building 'safe' and 'beneficial' AI as outlined in OpenAI's founding mission). In its December announcement, OpenAI described the restructure as 'the next step in our mission.' But Michael Dorff, another signatory to the open letter and a law professor at UCLA who studies public-benefit corporations, explained to me that PBCs aren't necessarily an effective way to bring about public good. 'They are not great enforcement tools,' he said—they can 'nudge' a company toward a given cause but do not give regulators much authority over that commitment. (Anthropic and xAI, two of OpenAI's main competitors, are also public-benefit corporations.)
OpenAI's proposed conversion also raised another issue entirely: it would set a precedent for taking resources accrued under charitable intentions and repurposing them for profitable pursuits. And so yet another coalition, composed of nonprofits and advocacy groups, wrote its own petition for OpenAI's plans to be investigated, with the aim of preventing charitable organizations from being leveraged for financial gain in the future.
Regulators, it turned out, were already watching. Three days after OpenAI's December announcement of the plans to revoke nonprofit oversight, Kathy Jennings, the attorney general of Delaware, notified the court presiding over Musk's lawsuit that her office was reviewing the proposed restructure to ensure that the corporation was fulfilling its charitable interest to build AI that benefits all of humanity. California's attorney general, Rob Bonta, was reviewing the restructure, as well.
This ultimately led OpenAI to change plans. 'We made the decision for the nonprofit to stay in control after hearing from civic leaders and having discussions with the offices of the Attorneys General of California and Delaware,' Altman wrote in a letter to OpenAI employees earlier this month. The for-profit, meanwhile, will still transition to a PBC.
The new plan is not yet a done deal: The offices of the attorneys general told me that they are reviewing the new proposal. Microsoft, OpenAI's closest corporate partner, has not yet agreed to the new structure.
One could be forgiven for wondering what all the drama is for. Amid the tension over its corporate structure, OpenAI's commercial expansion hasn't so much as flinched. In just the past few weeks, the company has announced a new CEO of applications, someone to directly oversee and expand business operations; OpenAI for Countries, an initiative focused on building AI infrastructure around the world; and Codex, a powerful AI 'agent' that does coding tasks. To OpenAI, these endeavors legitimately contribute to benefiting humanity: building more and more useful AI tools; bringing those tools and the necessary infrastructure to run them to people around the world; drastically increasing the productivity of software engineers. No matter OpenAI's ultimate aims, in a race against Google and Meta, some commercial moves are necessary to stay ahead. And enriching OpenAI's investors and improving people's lives are not necessarily mutually exclusive.
The greater issue is this: There is no universal definition for 'safe' or 'beneficial' AI. A chatbot might help doctors process paperwork faster and help a student float through high school without learning a thing; an AI research assistant could help climate scientists arrive at novel insights while also consuming huge amounts of water and fossil fuels. Whatever definition OpenAI applies will be largely determined by its board. Altman, in his May letter to employees, contended that OpenAI is on the best path 'to continue to make rapid, safe progress and to put great AI in the hands of everyone.' But everyone, in this case, has to trust OpenAI's definition of safe progress.
The nonprofit has not always been the most effective check on the company. In 2023, the nonprofit board—which had 'control' over the for-profit subsidiary then, just as it does now—removed Altman from his position as CEO. But the company's employees revolted, and he was reinstated shortly thereafter with the support of Microsoft. In other words, 'control' on paper does not always amount to much in reality. Sharpe, the OpenAI spokesperson, said the nonprofit will be able to appoint and remove directors of OpenAI's separate for-profit board, but declined to clarify whether its board will be able to remove executives (such as the CEO). The company is 'continuing to work through the specific governance mandate in consultation with relevant stakeholders,' he said.
Sharpe also told me that OpenAI will remove the cap on shareholder returns, which he said will satisfy the conditions for SoftBank's billions of dollars in investment. A top SoftBank executive has said 'nothing has really changed' with OpenAI's restructure, despite the nonprofit retaining control. If investors are satisfied either way, the underlying legal structure may not matter much in practice. Marc Toberoff, a lawyer representing Musk in his lawsuit against OpenAI, wrote in a statement that 'SoftBank pulled back the curtain on OpenAI's corporate theater and said the quiet part out loud. OpenAI's recent 'restructuring' proposal is nothing but window dressing.'
Lessig, the lawyer who represented the former OpenAI employees, told me that 'it's outrageous that we are allowing the development of this potentially catastrophic technology with nobody at any level doing any effective oversight of it.' Two years ago, Altman, in Senate testimony, seemed to agree with that notion: He told lawmakers that 'regulatory intervention by governments will be critical to mitigate the risks' of powerful AI. But earlier this month, only a few days after writing to his employees and investors that 'as AI accelerates, our commitment to safety grows stronger,' he told the Senate something else: Too much regulation would be 'disastrous' for America's AI industry. Perhaps—but it might also be in the best interests of humanity.
Article originally published at The Atlantic

Related Articles


Forbes
AI Talent Pipeline: How Nations Compete In The Global AI Race
The artificial intelligence revolution is often framed as a clash of algorithms and infrastructure, but its true battleground lies in the minds shaping it – the AI talent. While nations vie for technological supremacy, the real contest revolves around human capital: who educates, retains, and ethically deploys the brightest minds. This deeper struggle, unfolding in universities, immigration offices, and corporate labs beneath the surface of flashy model releases and geopolitical posturing, will determine whether AI becomes a force for equitable progress or a catalyst for deeper global divides.

China's educational machinery is producing graduates at an unprecedented scale. In 2024, approximately 11.79 million students graduated from Chinese universities, an increase of 210,000 from the previous year, with around 1.6 million specializing in engineering and technology in 2022. By comparison, the United States produces far fewer graduates (around 4.1 million overall in 2022), with only 112,000 graduating with computer and information science degrees. This numerical advantage is reshaping the global talent landscape. According to MacroPolo's Global AI Talent Tracker, China has expanded its domestic AI talent pool significantly, with the percentage of the world's top AI researchers originating from China (based on undergraduate degrees) rising from 29% in 2019 to 47% in 2022. However, the United States remains 'the top destination for top-tier AI talent to work,' hosting approximately 60% of top AI institutions.

Despite this, America's approach to talent faces structural challenges. The H-1B visa cap, which limits skilled foreign workers, is 'set at 65,000 per fiscal year, with an additional 20,000 visas for those with advanced U.S. degrees.' This self-imposed constraint limits the tech industry's ability to recruit globally and forces companies like Google and Microsoft to establish research centers in Toronto and London – a form of technological offshoring driven not by cost but by talent access.

Europe's predicament completes this global triangle of talent misallocation. The continent invests heavily in AI education – ETH Zurich and Oxford produce world-class specialists – only to watch a significant number board one-way flights to California, lured by higher salaries and cutting-edge projects. This exodus risks creating a vicious cycle: fewer AI experts mean fewer competitive startups, which mean fewer reasons for graduates to stay – a continental brain drain that undermines Europe's digital sovereignty.

As AI permeates daily life, a quieter conflict emerges: balancing innovation with ethical guardrails. Europe's GDPR has levied huge fines for data misuse since 2018, such as the $1.3 billion penalty imposed on Meta for transferring EU citizens' data to the US without adequate safeguards, while China's surveillance networks track citizens through 626 million CCTV cameras. Apple, the U.S. giant, recently agreed to pay $95 million to settle claims that it spied on users over a decade-long period. These divergent approaches reflect fundamentally different visions for AI's role in society. 'AI is a tool, and its values are human values. That means we need to be responsible developers as well as governors of this technology – which requires a framework,' argues Fei-Fei Li, the Stanford AI pioneer. Her call for ethical ecosystems contrasts sharply with China's state-driven model, where AI development aligns with national objectives like social stability and economic planning.
The growing capabilities of AI raise significant questions about the future of work. Andrew Ng, co-founder of Google Brain, observes: 'AI software will be in direct competition with a lot of people for a lot of jobs.' McKinsey estimates that 30% of U.S. jobs could be automated by 2030, but impacts will vary wildly. According to the WEF Future of Jobs Report 2025, 'on average, workers can expect that two-fifths (39%) of their existing skill sets will be transformed or become outdated over the 2025-2030 period.' The report goes on to state that 'the fastest declining roles include various clerical roles, such as cashiers and ticket clerks, alongside administrative assistants and executive secretaries, printing workers, and accountants and auditors.' These changes are accelerated by AI and information-processing technologies, as well as an increase in digital access for businesses.

Such figures can easily stoke panic, but some argue that the real challenge isn't job loss – it's ensuring that displaced workers can transition to new roles. Countries that invest in vocational AI training, as Germany has with its Industry 4.0 initiative, could achieve smoother workforce transitions. By 2030, China could reverse its brain drain through initiatives like the Thousand Talents Plan, which has already repatriated over 8,000 scientists since 2008. If successful, this repatriation could supercharge domestic innovation while depriving Western labs of critical expertise.

Europe's stringent AI Act, meanwhile, may inadvertently cede ground to less regulated regions. Businesses could start self-censoring AI projects to avoid EU compliance costs: McKinsey lists complexity, risk governance, data governance, and talent as the four major challenges facing EU organisations under the Act, and reports that only 4% of its survey respondents thought the Act's requirements were even clear. This could create an innovation vacuum, pushing experimental AI development to jurisdictions with less stringent oversight.

Breakthroughs in quantum computing could also reshape talent flows. IBM's Condor and China's Jiuzhang 3.0 are vying to crack quantum supremacy, with the winner likely to attract a new wave of specialists. 'We need to make the world quantum-ready today by focusing on education and workforce development,' according to Alessandro Curioni, vice-president at IBM Research Europe. A recent WEF report on quantum technologies warns that 'demand for experts is outpacing available talent and companies are struggling to recruit people in this increasingly competitive and strategic industry.'

The focus on human talent suggests a more nuanced understanding of AI development – one that values creative problem-solving and ethical considerations alongside technical progress. 'I think the future of global competition is, unambiguously, about creative talent,' explains Vivienne Ming, executive chair and co-founder of Socos Labs. 'Everyone will have access to amazing AI. Your vendor on that will not be a huge differentiator. Your creative talent, though – that will be who you are.' Similarly, Silvio Savarese, executive vice president and chief scientist at Salesforce AI Research, believes: 'AI is placing tools of unprecedented power, flexibility and even personalisation into everyone's hands, requiring little more than natural language to operate. They'll assist us in many parts of our lives, taking on the role of superpowered collaborators.'

The employment landscape for AI talent faces complex challenges. In China, the record graduating class of 11.8 million students in 2024 confronts a difficult job market, with youth unemployment in urban areas hitting 18.8% in August, the highest rate of the year. These economic realities are forcing China and the United States alike to reconsider how they develop, attract, and retain AI talent. The competition isn't just about who can produce or attract the most talent – it's about who can create environments where that talent can thrive and innovate responsibly.

The AI race transcends technical capabilities and infrastructure. While computing power, algorithms, and data remain important, human creativity, ethics, and talent will ultimately determine how AI shapes our future. As nations compete for AI dominance, they must recognize that sustainable success requires nurturing not just technical expertise but also creative problem-solving and ethical judgment – skills that remain distinctly human even as AI capabilities expand. In this regard, success isn't about which nation develops the smartest algorithms, but which creates environments where human AI talent can flourish alongside increasingly powerful AI systems. This human element may well be the deciding factor in who leads the next phase of the AI revolution.

Yahoo
Meta plans to automate many of its product risk assessments
An AI-powered system could soon take responsibility for evaluating the potential harms and privacy risks of up to 90% of updates made to Meta apps like Instagram and WhatsApp, according to internal documents reportedly viewed by NPR.

NPR says a 2012 agreement between Facebook (now Meta) and the Federal Trade Commission requires the company to conduct privacy reviews of its products, evaluating the risks of any potential updates. Until now, those reviews have been largely conducted by human evaluators. Under the new system, Meta reportedly said, product teams will be asked to fill out a questionnaire about their work, then will usually receive an "instant decision" with AI-identified risks, along with requirements that an update or feature must meet before it launches.

This AI-centric approach would allow Meta to update its products more quickly, but one former executive told NPR it also creates 'higher risks,' as 'negative externalities of product changes are less likely to be prevented before they start causing problems in the world.' In a statement, Meta seemed to confirm that it's changing its review system, but it insisted that only 'low-risk decisions' will be automated, while 'human expertise' will still be used to examine 'novel and complex issues.'

This article originally appeared on TechCrunch.
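For illustration only, here is a minimal Python sketch of the kind of questionnaire-driven flow NPR describes: a product team's answers are mapped to flagged risks and launch requirements, and anything treated as "novel and complex" is escalated to human reviewers instead of receiving an instant decision. Every name, rule, and threshold below is a hypothetical stand-in, not Meta's actual system.

```python
"""Minimal sketch of an automated product-risk review flow, loosely modeled on
the questionnaire-based process NPR describes. All rule names and mappings are
hypothetical illustrations, not Meta's real review pipeline."""

from dataclasses import dataclass, field


@dataclass
class ReviewDecision:
    automated: bool                              # True if no human review was required
    risks: list[str] = field(default_factory=list)
    requirements: list[str] = field(default_factory=list)


# Hypothetical mapping from questionnaire answers to flagged risks and the
# launch requirements that would accompany an "instant decision."
RISK_RULES = {
    "collects_new_user_data": ("privacy: new data collection",
                               "document data retention and consent flow"),
    "affects_minors": ("safety: exposure to minors",
                       "add age-gating and parental-controls review"),
    "changes_sharing_defaults": ("privacy: sharing defaults changed",
                                 "require opt-in before broader sharing"),
}

# Answers this sketch treats as "novel and complex," i.e. routed to humans.
ESCALATION_FLAGS = {"uses_novel_ai_model", "touches_sensitive_categories"}


def review_update(questionnaire: dict[str, bool]) -> ReviewDecision:
    """Return an instant decision for low-risk updates; escalate the rest."""
    if any(questionnaire.get(flag) for flag in ESCALATION_FLAGS):
        return ReviewDecision(
            automated=False,
            requirements=["route to human privacy/integrity reviewers"],
        )

    decision = ReviewDecision(automated=True)
    for answer, (risk, requirement) in RISK_RULES.items():
        if questionnaire.get(answer):
            decision.risks.append(risk)
            decision.requirements.append(requirement)
    return decision


if __name__ == "__main__":
    update = {"collects_new_user_data": True, "changes_sharing_defaults": False}
    print(review_update(update))
```

In this toy version, the trade-off the former executive describes is easy to see: everything hinges on whether the escalation rules catch the cases that actually matter.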
Yahoo
Perplexity's new tool can generate spreadsheets, dashboards, and more
Perplexity, the AI-powered search engine gunning for Google, on Thursday released Perplexity Labs, a tool for subscribers to Perplexity's $20-per-month Pro plan that can craft reports, spreadsheets, dashboards, and more. Perplexity Labs is available on the web, iOS, and Android, and is coming soon to Perplexity's apps for Mac and Windows.

"Perplexity Labs can help you complete a variety of work and personal projects," Perplexity explains in a blog post. "Labs is designed to invest more time — 10 minutes or longer — and leverage additional tools [to accomplish tasks], such as advanced file generation and mini-app creation."

Labs, which arrives the same day that the viral AI agent platform Manus released a slide-deck creation tool, is part of Perplexity's effort to broaden beyond its core business of search. Perplexity is currently previewing a web browser, Comet, and recently acquired a social media network for professionals.

Perplexity Labs can conduct research and analysis, taking around 10 minutes and using tools like web search, code execution, and chart and image creation to craft reports and visualizations. Labs can also create interactive web apps, Perplexity says, and write code to structure data, apply formulas, and create documents. All files created during a Perplexity Labs workflow — such as charts, images, and code files — are organized in a tab where they can be viewed or downloaded. "This expanded capability empowers you to develop a broader array of deliverables for your projects," according to Perplexity's blog post.

It all sounds good in theory, but AI being an imperfect technology, Labs likely doesn't always hit the mark. Of course, we'll reserve judgment until we have a chance to test it.

Perplexity has increasingly invested in corporate-focused functionality, last summer launching an enterprise plan with user management, "internal knowledge search," and more. The moves could be in part at the behest of the VCs backing Perplexity, who are no doubt eager to see a return sooner rather than later. Perplexity is reportedly in talks to raise up to $1 billion in capital from investors at an $18 billion valuation.
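As a rough illustration of the workflow described above, here is a short Python sketch in which a plan of tool calls (web search, code execution, chart creation) is run in sequence and every file produced is collected so it could be listed in an assets tab. The tool functions, data structures, and plan are invented for this sketch; they are not Perplexity's API.

```python
"""Illustrative sketch of a Labs-style workflow: execute a plan of tool calls
and gather every file produced along the way. All names are hypothetical."""

from dataclasses import dataclass
from typing import Callable


@dataclass
class Asset:
    name: str       # e.g. "chart.png"
    kind: str       # "chart", "code", "document", ...
    content: bytes


# Hypothetical tool implementations; a real system would call external services.
def web_search(query: str) -> list[Asset]:
    return [Asset(f"notes_{query[:12]}.md", "document", b"...search summary...")]


def run_code(source: str) -> list[Asset]:
    return [Asset("analysis.py", "code", source.encode())]


def make_chart(spec: str) -> list[Asset]:
    return [Asset("chart.png", "chart", b"...png bytes...")]


def run_lab(plan: list[tuple[Callable[[str], list[Asset]], str]]) -> list[Asset]:
    """Execute each (tool, argument) step and collect everything it produces."""
    assets: list[Asset] = []
    for tool, argument in plan:
        assets.extend(tool(argument))
    return assets


if __name__ == "__main__":
    produced = run_lab([
        (web_search, "quarterly revenue trends"),
        (run_code, "df.groupby('quarter').sum()"),
        (make_chart, "bar chart of revenue by quarter"),
    ])
    for asset in produced:
        print(asset.kind, asset.name)   # what an "assets" tab might list
```

The point of the sketch is the shape of the workflow rather than any particular tool: a multi-step plan, longer-running execution, and a single place where all generated deliverables end up.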