OpenAI files reveal profit shift, leadership concerns, and safety failures in nonprofit AI organization
A new investigative report titled The OpenAI Files, released by non-profit watchdogs The Midas Project and The Tech Oversight Project, reveals troubling insights into OpenAI's internal operations, leadership, and shifting priorities. The report, based on a year-long investigation, provides detailed documentation on how the organization's structure and goals have evolved since its founding in 2015.
Founded to democratize artificial intelligence research and prevent its misuse, OpenAI began as a non-profit organization. Despite that designation, it has since developed a widely used paid product, ChatGPT, and maintained a hybrid structure built around a for-profit subsidiary. In late 2024, OpenAI announced plans to shift toward full commercialization. The move drew significant backlash from co-founder Elon Musk, former employees, civil society groups, and competitors such as Meta, prompting a reversal in May 2025 and a recommitment to non-profit governance.
The watchdog report outlines four core areas of concern: organizational restructuring, leadership, transparency and safety, and conflicts of interest. It criticizes OpenAI for quietly altering its original cap on investor profits, initially set at a 100x return on investment. By 2023, the cap was permitted to grow by 20% annually, and by 2025 the company was reportedly considering removing it entirely. The groups argue that these changes contradict OpenAI's founding mission to ensure that AGI (artificial general intelligence) benefits all of humanity.
Concerns about CEO Sam Altman are also central to the report. Watchdog organizations cite past controversies involving Altman's alleged absenteeism, manipulative behavior, and staff resignations. Former senior OpenAI figures, including Dario Amodei and Ilya Sutskever, are said to have described his leadership style as abusive.
Further, the report alleges that OpenAI failed to allocate promised resources to a dedicated AI safety team and instead pressured employees to meet product deadlines while discouraging internal criticism and whistleblowing. It also highlights the company's use of strict NDAs that threatened employees with the loss of vested stock if they spoke out.
Additionally, several board members are reported to have financial interests in businesses that benefit from OpenAI's market position. CEO Altman has invested in multiple affiliated ventures, while Board Chair Bret Taylor and board member Adebayo Ogunlesi lead or fund companies that rely on OpenAI's technology. These ties, the watchdogs argue, may compromise the integrity of OpenAI's mission and decision-making.