OpenAI files reveal profit shift, leadership concerns, and safety failures in nonprofit AI organization

A new investigative report titled The OpenAI Files, released by non-profit watchdogs The Midas Project and The Tech Oversight Project, reveals troubling insights into OpenAI's internal operations, leadership, and shifting priorities. The report, based on a year-long investigation, provides detailed documentation on how the organization's structure and goals have evolved since its founding in 2015.
Founded to democratize artificial intelligence research and prevent misuse, OpenAI began as a non-profit organization. However, despite this designation, it has developed a widely used paid product, ChatGPT, and has maintained a hybrid structure involving a for-profit subsidiary. In late 2024, OpenAI announced plans to shift toward full commercialization. The move faced significant backlash from co-founder Elon Musk, former employees, civil society groups, and competitors like Meta, leading to a reversal in May 2025 and a recommitment to non-profit governance.
The watchdog report outlines four core areas of concern: organizational restructuring, leadership, transparency and safety, and conflicts of interest. It criticizes OpenAI for quietly altering its original investor profit cap, initially set at a 100x return on investment. By 2023, the cap was permitted to rise by 20% per year, and by 2025 OpenAI was reportedly considering removing it entirely. The groups argue that these changes contradict OpenAI's founding mission to ensure AGI (artificial general intelligence) benefits all of humanity.
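For a sense of scale, the short sketch below is a hypothetical calculation, not something taken from the report: it assumes the cited 20% annual increase compounds each year (the report mentions only the 20% figure and the original 100x cap), in which case the nominal ceiling on investor returns would roughly double every four years.

```python
# Hypothetical illustration of the profit-cap figures cited in the report.
# The compounding assumption is an editorial guess; the report mentions only
# the original 100x cap and a 20% annual increase.
def profit_cap(years: int, initial_multiple: float = 100.0,
               annual_increase: float = 0.20) -> float:
    """Cap on investor returns after `years` of 20% annual raises."""
    return initial_multiple * (1 + annual_increase) ** years

for y in (0, 5, 10):
    print(f"after {y:2d} years: {profit_cap(y):5.0f}x")
# after  0 years:   100x
# after  5 years:   249x
# after 10 years:   619x
```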
Concerns about CEO Sam Altman are also central to the report. Watchdog organizations cite past controversies involving Altman's alleged absenteeism, manipulative behavior, and staff resignations. Former senior OpenAI figures, including Dario Amodei and Ilya Sutskever, are said to have described his leadership style as abusive.
Further, the report alleges that OpenAI failed to allocate promised resources to a dedicated AI safety team and instead pressured employees to meet product deadlines while discouraging internal criticism and whistleblowing. It also highlights the company's use of strict NDAs that threatened employees with the loss of vested stock if they spoke out.
Additionally, several board members are reported to have financial interests in businesses that benefit from OpenAI's market position. CEO Altman has invested in multiple affiliated ventures, while Board Chair Bret Taylor and board member Adebayo Ogunlesi lead or fund companies that rely on OpenAI's technology. These ties, the watchdogs argue, may compromise the integrity of OpenAI's mission and decision-making.


Related Articles

MIT AI study: Using tools like ChatGPT is making you dumber, study reveals
Express Tribune · 19 hours ago

A new study from the Massachusetts Institute of Technology (MIT) suggests that frequent use of generative artificial intelligence (GenAI) tools, such as large language models (LLMs) like ChatGPT, may suppress cognitive engagement and memory retention.

In the experiment, published by MIT, researchers monitored the brain activity of participants as they wrote essays using different resources: one group relied on LLMs, another used internet search engines, and a third worked without any digital tools.

The results revealed a consistent pattern: participants who used GenAI tools displayed significantly reduced neural connectivity and recall compared to those who relied on their own cognitive abilities. Brain scans taken during the experiment showed that LLM users exhibited weaker connections between brain regions associated with critical thinking and memory. While their essays scored well in both human and AI evaluations, often praised for their coherence and alignment with the given prompt, the writing was also described as formulaic and less original.

Notably, those who used LLMs struggled to quote from or recall their own writing in subsequent sessions. Their brain activity reportedly "reset" to a novice state regarding the essay topics, a finding that contrasts sharply with participants in the "brain-only" group, who retained stronger memory and demonstrated deeper cognitive engagement throughout.

Participants who used search engines showed intermediate neural activity. Though their writing lacked variety and often reflected similar phrasing, they exhibited better memory retention than the LLM group, suggesting that the process of searching and evaluating sources provided more mental stimulation.

In a later phase of the experiment, the groups were shuffled. Participants who had initially used GenAI tools showed improved neural connectivity when writing without digital aids, an encouraging sign that cognitive function could rebound when AI dependence is reduced. The findings could carry important implications for education and the workplace.

"BREAKING: MIT just completed the first brain scan study of ChatGPT users & the results are terrifying. Turns out, AI isn't making us more productive. It's making us cognitively bankrupt. Here's what 4 months of data revealed: (hint: we've been measuring productivity all wrong)" (Alex Vacca, @itsalexvacca, June 18, 2025)

With GenAI tools increasingly integrated into school assignments and professional tasks, concerns about cognitive atrophy are rising. Some students now generate entire essays with tools like ChatGPT, while educators rely on similar software to grade and detect AI-generated work. The study suggests that such widespread use of digital assistance, even when indirect, may hinder mental development and reduce long-term memory retention.

As schools and organisations continue to navigate the integration of AI tools, the MIT research underscores the importance of balancing convenience with cognitive engagement. Researchers suggest that while GenAI can be a useful aid, overreliance could have unintended consequences for human memory and creativity.
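As a much-simplified, generic illustration of what a "connectivity" score can mean (this is not the MIT team's actual analysis pipeline, and all data below is synthetic), one crude proxy is the average correlation between signals recorded at different electrode sites:

```python
# Generic, synthetic sketch: mean pairwise correlation between recording
# channels as a crude stand-in for "neural connectivity". Not the MIT
# study's method; purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_samples = 8, 1000
signals = rng.standard_normal((n_channels, n_samples))  # fake EEG traces

corr = np.corrcoef(signals)                       # channel-by-channel matrix
off_diag = corr[~np.eye(n_channels, dtype=bool)]  # drop each channel vs itself
print(f"mean inter-channel correlation: {off_diag.mean():.3f}")
```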

Elon Musk reignites feud with Sam Altman after OpenAI controversy surfaces
Express Tribune · a day ago

Elon Musk has once again directed public criticism toward OpenAI CEO Sam Altman, calling him 'Scam Altman' in a recent post on the social media platform X. The comment came shortly after the release of The OpenAI Files, a report raising concerns about OpenAI's governance, profit model, and safety practices. Musk framed his remark as a reaction to the revelations outlined in the report.

Musk and Altman, both prominent figures in the tech and artificial intelligence sectors, share a history as co-founders of OpenAI. Musk served on OpenAI's board from its founding in 2015 until stepping down in 2018. He has since criticized the company's evolution from a non-profit research lab to a 'capped-profit' model, arguing that the move contradicts OpenAI's original mission of promoting safe and open AI development.

In addition to their involvement in AI, both Musk and Altman have been vocal supporters of cryptocurrency, adding another dimension to their public personas and influence in the tech world. Musk, who leads Tesla, SpaceX, and X, has long promoted digital assets such as Bitcoin and Dogecoin. Tesla holds over $1 billion in Bitcoin, and Musk's public endorsements of Dogecoin have often moved its market price. Altman, similarly, has expressed support for Bitcoin, describing it as a critical technological step during a 2023 appearance on The Joe Rogan Experience. He also co-founded the Worldcoin project in 2019, with a focus on decentralized identity and finance.

Musk's recent criticism comes amid broader industry debates over the future of artificial intelligence. Centralized models, like those used by OpenAI, have been criticized for concentrating power and limiting transparency. Decentralized alternatives, often supported by crypto infrastructure, are being explored as a counterbalance.

