Latest news with #TheMidasProject

Express Tribune
a day ago
- Business
- Express Tribune
OpenAI files reveal profit shift, leadership concerns, and safety failures in nonprofit AI organization
A new investigative report titled The OpenAI Files, released by the non-profit watchdogs The Midas Project and The Tech Oversight Project, reveals troubling insights into OpenAI's internal operations, leadership, and shifting priorities. The report, based on a year-long investigation, documents in detail how the organization's structure and goals have evolved since its founding in 2015.

Founded to democratize artificial intelligence research and prevent misuse, OpenAI began as a non-profit organization. Despite this designation, it has developed a widely used paid product, ChatGPT, and has maintained a hybrid structure involving a for-profit subsidiary. In late 2024, OpenAI announced plans to shift toward full commercialization. The move faced significant backlash from co-founder Elon Musk, former employees, civil society groups, and competitors like Meta, leading to a reversal in May 2025 and a recommitment to non-profit governance.

The watchdog report outlines four core areas of concern: organizational restructuring, leadership, transparency and safety, and conflicts of interest. It criticizes OpenAI for quietly altering its original investor profit cap, initially set at a 100x return on investment. By 2023, the cap was allowed to increase by 20% annually, and by 2025 the company was reportedly considering removing it entirely. The groups argue that these changes contradict OpenAI's founding mission to ensure AGI (artificial general intelligence) benefits all of humanity.

Concerns about CEO Sam Altman are also central to the report. The watchdog organizations cite past controversies involving Altman's alleged absenteeism, manipulative behavior, and staff resignations. Former senior OpenAI figures, including Dario Amodei and Ilya Sutskever, are said to have described his leadership style as abusive.

Further, the report alleges that OpenAI failed to allocate promised resources to a dedicated AI safety team and instead pressured employees to meet product deadlines while discouraging internal criticism and whistleblowing. It also highlights the company's use of strict NDAs that threatened departing employees with the loss of vested stock if they spoke out.

Additionally, several board members are reported to have financial interests in businesses that benefit from OpenAI's market position. Altman has invested in multiple affiliated ventures, while Board Chair Bret Taylor and board member Adebayo Ogunlesi lead or fund companies that rely on OpenAI's technology. These ties, the watchdogs argue, may compromise the integrity of OpenAI's mission and decision-making.
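To make the cap arithmetic concrete, here is a minimal, purely illustrative Python sketch. It assumes the 20% increases compound annually on the original 100x multiple; the report does not spell out the exact mechanics, so treat the numbers as an approximation of the trajectory rather than OpenAI's actual terms.

```python
# Illustrative only: compound a 100x investor profit cap by 20% per year,
# the growth schedule the watchdogs attribute to the 2023 change.
# Assumes simple annual compounding, which the report does not confirm.

def profit_cap(years_since_2023: int, base_cap: float = 100.0,
               annual_increase: float = 0.20) -> float:
    """Return the profit-cap multiple after the given number of 20% increases."""
    return base_cap * (1 + annual_increase) ** years_since_2023

for year in range(6):
    print(f"{2023 + year}: {profit_cap(year):.0f}x")
```

Under these assumptions, a cap growing 20% per year roughly doubles about every four years (1.2^4 ≈ 2.07), which is why critics argue such a cap ceases to function as a meaningful limit long before it is formally removed.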

TechCrunch
13-05-2025
- Business
- TechCrunch
xAI's promised safety report is MIA
Elon Musk's AI company, xAI, has missed a self-imposed deadline to publish a finalized AI safety framework, as noted by watchdog group The Midas Project.

xAI isn't exactly known for strong commitments to AI safety as it's commonly understood. A recent report found that the company's AI chatbot, Grok, would undress photos of women when asked. Grok can also be considerably more crass than chatbots like Gemini and ChatGPT, cursing with little restraint.

Nonetheless, in February at the AI Seoul Summit, a global gathering of AI leaders and stakeholders, xAI published a draft framework outlining the company's approach to AI safety. The eight-page document laid out xAI's safety priorities and philosophy, including the company's benchmarking protocols and AI model deployment considerations. As The Midas Project noted in a blog post on Tuesday, however, the draft applied only to unspecified future AI models "not currently in development." Moreover, it failed to articulate how xAI would identify and implement risk mitigations, a core component of a document the company signed at the AI Seoul Summit.

In the draft, xAI said it planned to release a revised version of its safety policy "within three months," that is, by May 10. The deadline came and went without acknowledgement on xAI's official channels.

Despite Musk's frequent warnings about the dangers of unchecked AI, xAI has a poor AI safety track record. A recent study by SaferAI, a nonprofit aiming to improve the accountability of AI labs, found that xAI ranks poorly among its peers, owing to its "very weak" risk management practices.

That's not to suggest other AI labs are faring dramatically better. In recent months, xAI rivals including Google and OpenAI have rushed safety testing and been slow to publish model safety reports, or have skipped publishing them altogether. Some experts have expressed concern that this apparent deprioritization of safety is coming at a time when AI is more capable, and thus potentially more dangerous, than ever.

Yahoo
08-03-2025
- Business
- Yahoo
Google scrubs mentions of 'diversity' and 'equity' from responsible AI team webpage
Google has quietly updated the webpage for its Responsible AI and Human Centered Technology (RAI-HCT) team, the group charged with conducting research into AI safety, fairness, and explainability, to scrub mentions of "diversity" and "equity."

A previous version of the page used language such as "marginalized communities," "diverse," "underrepresented groups," and "equity" to describe the RAI-HCT team's work. That language has been removed, or in some cases replaced with less specific wording (e.g. "all," "varied," and "numerous" rather than "diverse"). Google didn't immediately respond to a request for comment.

The changes, which were spotted by watchdog group The Midas Project, come after Google deleted similar language from its Startups Founders Fund grant website. The company said in early February that it would eliminate its diversity hiring targets and review its diversity, equity, and inclusion (DEI) programs.

Google is among the many big tech companies that have rolled back DEI initiatives as the Trump Administration targets what it characterizes as an "illegal" practice. Amazon and Meta have walked back DEI measures over the past few months, and OpenAI recently removed mentions of diversity and inclusion from a webpage on its hiring practices. Apple, however, recently pushed back against a shareholder proposal to end its DEI programs. Many of these companies, including Google, have contracts with federal agencies.