Former OpenAI Employee Rages Against Sam Altman: "Person of Low Integrity"
A former OpenAI employee is joining Elon Musk's campaign against CEO Sam Altman — and he's got a lot to say about his former boss.
After jumping ship to Anthropic, which was cofounded by former OpenAI-ers over AI safety and ethics concerns, researcher Todor Markov is now claiming in a new legal filing that his ex-boss is, essentially, a really bad dude.
The root of Markov's complaint, as he explained in his portion of a lengthy amicus brief that also includes statements from 11 other former OpenAI employees, is Altman's alleged lying about non-disparagement agreements that staffers are forced to sign early in their time at the company.
Last year, the researcher discovered the existence of the clause that essentially made him and other departing employees give up their right to ever speak critically about OpenAI if they wanted to keep their vested equity in their multi-billion-dollar former employer. During an all-hands meeting about the controversial clause, Altman claimed he had no knowledge of its existence, only to be caught with egg on his face immediately afterward, when Vox published leaked documents showing that the CEO had signed off on it.
As Markov explained in his declaration, that debacle proved to him that Altman "was a person of low integrity who had directly lied to employees" about the restrictive non-disparagement agreements. This suggested to him that the CEO was "very likely lying to employees about a number of other important topics," including the company's commitment to building safe artificial general intelligence, or AGI.
In the company's charter, OpenAI promises to "use any influence we obtain over AGI's deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power." According to Markov, that promise was "positioned as the foundational document guiding all of our strategic decisions" — but ultimately, it proved empty.
"I realized the Charter had been used as a smokescreen," he wrote, "something to attract and retain idealistic talent while providing no real check on OpenAI's growth and its pursuit of AGI."
Like Musk, Markov believes that Altman's attempts to restructure OpenAI into a for-profit entity show that its charter and mission "were used all along as a facade to manipulate its workforce and the public." Unlike that multi-hyphenate billionaire cofounder, however, the researcher isn't looking to buy anything; he mostly seems to want his voice heard.
More on Altman: This Appears to Be Why Sam Altman Actually Got Fired by OpenAI

Related Articles

Associated Press
Consultant on trial for AI-generated robocalls mimicking Biden says he has no regrets
LACONIA, N.H. (AP) — A political consultant told a New Hampshire jury Wednesday that he doesn't regret sending voters robocalls that used artificial intelligence to mimic former President Joe Biden and that he's confident he didn't break the law.

Steven Kramer, 56, of New Orleans, has long admitted to orchestrating a message sent to thousands of voters two days before New Hampshire's Jan. 23, 2024, presidential primary. Recipients heard an AI-generated voice similar to the Democratic president's that used his catchphrase 'What a bunch of malarkey' and, as prosecutors allege, suggested that voting in the primary would preclude voters from casting ballots in November. 'It's important that you save your vote for the November election,' voters were told. 'Your votes make a difference in November, not this Tuesday.'

Kramer, who faces decades in prison if convicted of voter suppression and impersonating a candidate, said his goal was to send a wake-up call about the potential dangers of AI when he paid a New Orleans magician $150 to create the recording. He was getting frequent calls from people using AI in campaigns, and, worried about the lack of regulations, made it his New Year's resolution to take action. 'This is going to be my one good deed this year,' he recalled while testifying in Belknap County Superior Court.

He said his goal wasn't to influence an election, because he didn't consider the primary a real election. At Biden's request, the Democratic National Committee dislodged New Hampshire from its traditional early spot in the 2024 nominating calendar but later dropped its threat not to seat the state's national convention delegates. Biden did not put his name on the ballot or campaign there but won as a write-in. Kramer, who owns a firm specializing in get-out-the-vote projects, argued that the primary was a meaningless straw poll unsanctioned by the DNC. At the time the calls went out, voters were disenfranchised, he said.

Asked by his attorney, Tom Reid, whether he did anything illegal, Kramer said, 'I'm positive I did not.' Later, he said he had no regrets and that his actions likely spurred AI regulations in multiple states.

Kramer, who will be questioned by prosecutors Thursday, also faces a $6 million fine by the Federal Communications Commission but told The Associated Press on Wednesday that he won't pay it. Lingo Telecom, the company that transmitted the calls, agreed to pay $1 million in a settlement in August.

The robocalls appeared to come from a former New Hampshire Democratic Party chair, Kathy Sullivan, and told voters to call her number to be removed from the call list. On the witness stand earlier Wednesday, Sullivan said she was confused and then outraged after speaking to one of the recipients and later hearing the message. 'I hung up the phone and said, 'There is something really crazy going on,'' she said. 'Someone is trying to suppress the vote for Biden. I can't believe this is happening.'

Months later, she got a call from Kramer in which he said he used her number because he knew she would contact law enforcement and the media. He also described his motive — highlighting AI's potential dangers — but she didn't believe him, she testified. 'My sense was he was trying to convince me that he'd done this defensible, good thing,' she said. 'I'm listening to this thinking to myself, 'What does he think I am, stupid?' He tried to suppress the vote.'
Yahoo
Nvidia chief calls AI ‘the greatest equalizer' — but warns Europe risks falling behind
PARIS (AP) — Will artificial intelligence save humanity — or destroy it? Lift up the world's poorest — or tighten the grip of a tech elite? Jensen Huang, the global chip tycoon, offered his opinion on Wednesday: neither dystopia nor domination. AI, he said, is a tool for liberation.

Wearing his signature biker jacket and mobbed by fans for selfies, the Nvidia CEO cut the figure of a tech rockstar as he took the stage at VivaTech in Paris. 'AI is the greatest equalizer of people the world has ever created,' Huang said, kicking off one of Europe's biggest technology industry fairs.

But beyond the glossy optics, Nvidia used the Paris summit to unveil a wave of infrastructure announcements across Europe, signaling a dramatic expansion of the AI chipmaker's physical and strategic footprint on the continent. In France, the company is deploying 18,000 of its new Blackwell chips with startup Mistral AI. In Germany, it's building an industrial AI cloud to support manufacturers. Similar rollouts are underway in Italy, Spain, Finland and the U.K., including a new AI lab in Britain. Other announcements include a partnership with AI startup Perplexity to bring sovereign AI models to European publishers and telecoms, a new cloud platform with Mistral AI, and work with BMW and Mercedes-Benz to train AI-powered robots for use in auto plants.

The announcements reflect how central AI infrastructure has become to global strategy, and how Nvidia — the world's most valuable chipmaker — is positioning itself as the engine behind it.

At the center of the debate is Huang's concept of the AI factory: not a plant that makes goods, but a vast data center that creates intelligence. These facilities train language models, simulate new drugs, detect cancer in scans, and more. Asked if such systems risk creating a 'technological priesthood' — hoarding computing power and stymying the bottom-up innovation that fueled the tech industry for the past 50 years — Huang pushed back. 'Through the velocity of our innovation, we democratize,' he told The Associated Press. 'We lower the cost of access to technology.'

As Huang put it, these factories 'reason,' 'plan,' and 'spend a lot of time talking to' themselves, powering everything from ChatGPT to autonomous vehicles and diagnostics. But some critics warn that without guardrails, such all-seeing, self-reinforcing systems could go the way of Skynet in 'The Terminator' — vast intelligence engines that outpace human control.

'Just as electricity powered the last industrial revolution, AI will power the next one,' he said. 'Every country now needs a national intelligence infrastructure.' He added: 'AI factories are now part of a country's infrastructure. That's why you see me running around the world talking to heads of state — they all want AI to be part of their infrastructure. They want AI to be a growth manufacturing industry for them.'

Europe, long praised for its leadership on digital rights, now finds itself at a crossroads. As Brussels pushes forward with world-first AI regulations, some warn that over-caution could cost the bloc its place in the global race. With the U.S. and China surging ahead and most major AI firms based elsewhere, the risk isn't just falling behind — it's becoming irrelevant.

Huang has a different vision: sovereign AI. Not isolation, but autonomy — building national AI systems aligned with local values, independent of foreign tech giants. 'The data belongs to you,' Huang said. 'It belongs to your people, your country... your culture, your history, your common sense.'

But fears over AI misuse remain potent — from surveillance and deepfake propaganda to job losses and algorithmic discrimination. Huang doesn't deny the risks. But he insists the technology can be kept in check — by itself. 'In the future, the AI that is doing the task is going to be surrounded by 70 or 80 other AIs that are supervising it, observing it, guarding it, ensuring that it doesn't go off the rails.'

The VivaTech event was part of Huang's broader European tour. He had already appeared at London Tech Week and is scheduled to visit Germany. In Paris, he joined French President Emmanuel Macron and Mistral AI CEO Arthur Mensch to reinforce his message that AI is now a national priority.

— Chan reported from London.
Yahoo
Meta Platforms (META) Bets Big on AGI With $10 Billion Push and New AI Dream Team
Meta Platforms, Inc. (NASDAQ:META) is one of the 10 AI Stocks on Wall Street's Radar. On June 10, Bloomberg News reported that the company's CEO, Mark Zuckerberg, is setting up a team of experts to achieve what is known as 'artificial general intelligence' (AGI), or machines that can match or surpass human capabilities.

Citing sources, the report revealed that the new AI team is being assembled alongside a reported investment of over $10 billion in Scale AI, and that Scale AI founder Alexandr Wang is expected to join the group once a deal is done. Zuckerberg is reportedly planning to personally recruit around 50 people, including a new head of AI research for the AGI team. The decision follows the performance and reception of Meta's latest large language model, Llama 4, the report stated.

While we acknowledge the potential of META as an investment, we believe certain AI stocks offer greater upside potential and carry less downside risk. If you're looking for an extremely undervalued AI stock that also stands to benefit significantly from Trump-era tariffs and the onshoring trend, see our free report on the best short-term AI stock.

Disclosure: None.