
Elon Musk-led group proposes buying OpenAI for $97.4 billion. OpenAI CEO says 'no thank you'
A group of investors led by Elon Musk is offering about $97.4 billion to buy OpenAI, escalating a legal dispute with the artificial intelligence company that Musk helped found.
Musk and his own AI startup, xAI, and a consortium of investment firms want to take control of the ChatGPT maker and revert it to its original charitable mission as a nonprofit research lab, according to Musk's attorney Marc Toberoff.
OpenAI CEO Sam Altman quickly rejected the deal on Musk's social platform X, saying, 'no thank you but we will buy Twitter for $9.74 billion if you want.'
Musk bought Twitter, now called X, for $44 billion in 2022.
Musk and Altman, who together helped start OpenAI in 2015 and later competed over who should lead it, have been in a long-running feud over the startup's direction since Musk resigned from its board in 2018.
Musk, an early OpenAI investor and board member, sued the company last year, first in a California state court and later in federal court, alleging it had betrayed its founding aims as a nonprofit research lab benefiting the public good. Musk had invested about $45 million in the startup from its founding until 2018, Toberoff has said.
Musk and OpenAI lawyers faced off in a California federal court last week as a judge weighed Musk's request for a court order that would block the ChatGPT maker from converting itself to a for-profit company.
U.S. District Judge Yvonne Gonzalez Rogers hasn't yet ruled on Musk's request but in the courtroom said it was a 'stretch' for Musk to claim he will be irreparably harmed if she doesn't intervene to stop OpenAI from moving forward with its planned for-profit transition.
But the judge also raised concerns about OpenAI and its relationship with business partner Microsoft and said she wouldn't stop the case from moving to trial as soon as next year so a jury can decide.
'It is plausible that what Mr. Musk is saying is true. We'll find out. He'll sit on the stand,' she said.
Along with Musk and xAI, others backing the bid announced Monday include Baron Capital Group, Valor Management, Atreides Management, Vy Fund, Emanuel Capital Management and Eight Partners VC.
Toberoff said in a statement that if Altman and OpenAI's current board 'are intent on becoming a fully for-profit corporation, it is vital that the charity be fairly compensated for what its leadership is taking away from it: control over the most transformative technology of our time.'
Musk's attorney also shared a letter he sent in early January to the attorneys general of California and Delaware.
'As both your offices must ensure any such transactional process relating to OpenAI's charitable assets provides at least fair market value to protect the public's beneficial interest, we assume you will provide a process for competitive bidding to actually determine that fair market value,' Toberoff wrote, asking for more information on the terms and timing of that bidding process.

Related Articles
11 startups from YC Demo Day that investors are talking about
At Y Combinator's Spring 2025 Demo Day on Wednesday, nearly every presenting startup had something to do with AI — they're either developing AI agents or creating tools to facilitate their development. Indeed, several founders seem to be taking a page from successful AI startups: about half a dozen were presenting variations of 'Cursor for X.' For example, Den is building a 'Cursor for knowledge workers,' and Vessence is on its way to making a 'Cursor for lawyers.' It wasn't all about AI, though. We noticed several startups working on robotics, which seems to be having a bit of a revival at the moment. Below are some of the startups that caught both investors' and our attention.

Anvil
What it does: SEO for LLMs
Why it's a fave: How people search for content is changing, with folks using AI tools like ChatGPT, Gemini and Perplexity to find content. Understandably, brands need to find a way to increase their visibility on these platforms. Anvil claims it helps brands measure, optimize and increase their presence on these AI tools.

Atum
What it does: Builds 3D chips
Why it's a fave: Transistors aren't getting smaller as fast as they used to, so Atum's founders propose that the best way to put more transistors on a chip, and therefore increase processing power, is to stack them in three dimensions. Investors told me that Atum's vision is so revolutionary that the company has a chance to become the next NVIDIA.

Auctor
What it does: Automates enterprise software implementation
Why it's a fave: The startup says prominent software vendors like SAP, ServiceNow, AWS and Box have already reached out to use Auctor's solution themselves and potentially for help with integrating software at customer sites.

Cactus
What it does: AI copilot for solopreneurs
Why it's a fave: Cactus says people who run businesses all by themselves are often too busy to pursue new opportunities. The startup says its AI bot can take some of the load off by answering calls and accepting payments on your behalf.

Den
What it does: Cursor for enterprise knowledge workers
Why it's a fave: Investors told me this is one of the hottest companies in the batch. Den promises its AI agents can replace Slack and Notion, enabling a company's employees to interact and share information with software tailored to each enterprise's specific needs.

Eloquent
What it does: Automates customer operations with AI
Why it's a fave: Eloquent says its AI bots can help customers of financial services companies do things like automatically unfreeze bank accounts or add drivers to car insurance policies. In other words, Eloquent promises an end to long waits for human customer service. The startup claims financial companies can deploy its AI near instantly without needing to involve internal engineering teams. Eloquent has already raised 'a large seed round,' Tugce Bulut, the startup's co-founder and CEO, said on the TBPN podcast.

LLM Data Company
What it does: Tooling for evaluation and reinforcement learning
Why it's a fave: Evaluating all the new AI tools for quality is snowballing into a big, difficult problem. LLM Data Company says it can help with its own LLMs that can evaluate the quality of an AI agent, and it's already working with customers including Perplexity.

Scalar Field
What it does: AI-powered Bloomberg terminal
Why it's a fave: 'Terminals are dashboards and not thinking tools,' says Amandeep Singh, co-founder of Scalar Field. While the startup's AI agents won't 'think' for you, it claims they can manipulate financial data with more flexibility than existing financial tools.

Sygaldry
What it does: Quantum-accelerated AI servers to speed up AI training and inference
Why it's a fave: While a fully functioning quantum computer may still be years away, the industry has been making progress. What caught my eye about Sygaldry is that its co-founder and CEO is Chad Rigetti, who founded his previous company and took it public via a SPAC in 2021.

Vybe
What it does: Vibe coding to build applications
Why it's a fave: Investors who saw a demo of Vybe building apps told me it can create all sorts of cool tools. One person I talked to even called it a 'clear winner' of the batch.
From English to Automotive Class, Teachers Assign Projects to Combat AI Cheating
This article was originally published in EdSurge.

Kids aren't as sneaky as they think they are. They do try, as Holly Distefano has seen in her middle school English language arts classes. When she poses a question to her seventh graders over her school's learning platform and watches the live responses roll in, there are times when too many are suspiciously similar. That's when she knows students are using an artificial intelligence tool to write an answer.

'I really think that they have become so accustomed to it, they lack confidence in their own writing,' says Distefano, who teaches in Texas. 'In addition to just so much pressure on them to be successful, to get good grades, really a lot is expected of them.'

Distefano is sympathetic — but still expects better from her students. 'I've shown them examples of what AI is — it's not real,' she says. 'It's like margarine to me.'

Educators have been trying to curb AI-assisted cheating since ChatGPT exploded onto the scene. It's a formidable challenge. For instance, there's a corner of TikTok reserved for tech influencers who rack up thousands of views and likes teaching students how to most effectively use AI programs to generate their essays, including step-by-step instructions on bypassing AI detectors. And searches for software that purports to 'humanize' AI-generated content spiked in the fall, according to Google Trends data, then fell sharply before hitting the peak of their popularity around the end of April. While the overall proportion of students who say they've cheated hasn't fluctuated much in recent years, students also say generative AI is making academic dishonesty easier.

But there may be a solution on the horizon, one that will help ensure students have to put more effort into their schoolwork than entering a prompt into a large language model. Teachers are transitioning away from question-and-answer assignments or straightforward essays in favor of projects. It's not especially high-tech or even particularly ingenious. Yet proponents say it's a strategy that pushes students to focus on problem-solving while instructing them on how to use AI ethically.

During this past school year, Distefano says, her students' use of AI to cheat on their assignments reached new heights. She's spent more time coming up with ways to stop or slow their ability to plug questions and assignments into an AI generator, including by giving out hard-copy work. It used to mainly be a problem with take-home assignments, but Distefano has increasingly seen students use AI during class. Kids have long been astute at getting around whatever firewalls schools put on computers, and their desire to circumvent AI blockers is no different.

Between schoolwork, sports, clubs and everything else middle schoolers are juggling, Distefano can see why they're tempted by the allure of a shortcut. But she worries about what her students are missing out on when they avoid the struggle that comes with learning to write. 'To get a student to write is challenging, but the more we do it, the better we get,' she says. 'But if we're bypassing that step, we're never going to get that confidence. The downfall is they're not getting that experience, not getting that feeling of, 'This is something I did.''

Distefano is not alone in trying to beat back the onslaught of AI cheating. Blue books, which college students use to complete exams by hand, have had a resurgence as professors try to eliminate the risk of AI intervention, reports The Wall Street Journal.

Richard Savage, the superintendent of California Online Public Schools, says AI cheating is not a major issue among his district's students. But Savage says it's a simple matter for teachers to identify when students do turn to AI to complete their homework. If a student does well in class but fails their thrice-yearly 'diagnostic exams,' that's a clear sign of cheating. It would also be tough for students to fake their way through live, biweekly progress meetings with their teachers, he adds.

Savage says educators in his district will spend the summer working on making their lesson plans 'AI-proof.' 'AI is always changing, so we're always going to have to modify what we do,' he says. 'We're all learning this together. The key for me is not to be AI-averse, not to think of AI as the enemy, but think of it as a tool.'

Doing that requires teachers to work a little differently. Leslie Eaves, program director for project-based learning at the Southern Regional Education Board, has been devising solutions for educators like Distefano and Savage. Eaves authored the board's guidelines for AI use in K-12 education, released earlier this year. Rather than exile AI, the report recommends that teachers use AI to enhance classroom activities that challenge students to think more deeply and critically about the problems they're presented with. It also outlines what students need to become what Eaves calls 'ethical and effective users' of artificial intelligence.

'The way that happens is through creating more cognitively demanding assignments, constantly thinking in our own practice, 'In what way am I encouraging students to think?'' she says. 'We do have to be more creative in our practice, to try and do some new things to incorporate more student discourse, collaborative hands-on assignments, peer review and editing, as a way to trick them into learning because they have to read someone else's work.'

In an English class lesson on 'The Odyssey,' Eaves offers as an example, students could focus on reading and discussion, use pen and paper to sketch out the plot structure, and use AI to create an outline for an essay based on their work, before moving on to peer-editing their papers.

Eaves says the teachers she's working with to take a project-based approach to their lesson plans aren't panicking about AI but rather seem excited about the possibilities. And it's not only English teachers who are looking to shift their instruction so that AI is less a tool for cheating and more a tool that helps students solve problems. She recounts that an automotive teacher realized he had to change his teaching strategy because when his students adopted AI, they 'stopped thinking.'

'So he had to reshuffle his plan so kids were re-designing an engine for use in racing, [figuring out] how to upscale an engine in a race car,' Eaves says. 'AI gave you a starting point — now what can we do with it?'

When it comes to getting through to students on AI ethics, Savage says the messaging should be a combination of digital citizenship and the practical ways that using AI to cheat will stunt students' opportunities. Students with an eye on college, for example, give up the opportunity to demonstrate their skills and hurt their competitiveness for college admissions and scholarships when they turn over their homework to AI.

Making the shift to more project-based classrooms will be a heavy lift for educators, he says, but districts will have to change, because generative AI is here to stay. 'The important thing is we don't have the answers. I'm not going to pretend I do,' Savage says. 'I know what we can do, when we can get there, and then it'll probably change. The answer is having an open mind and being willing to think about the issue and change and adapt.'
Federal workers are still required to fill out DOGE 'five things' email despite Musk being long gone
Despite Elon Musk's exit, federal employees still have to send an email explaining five things that they did in the past seven days, Axios reported.

Musk left his job at the helm of the Department of Government Efficiency at the end of last month and subsequently turned his rage against President Donald Trump. During his tenure, Musk concocted the idea of forcing federal employees to send the emails in a post on X, saying that failing to respond more than once could result in their firing.

At the time, some cabinet-level officials in the Trump administration, such as FBI Director Kash Patel and Director of National Intelligence Tulsi Gabbard, pushed back on the emails. Axios reported that White House Chief of Staff Susie Wiles was surprised by the policy. 'To use a phrase Susie might use, she was fit to be tied at Musk,' one official said.

But despite Musk's exit, federal government employees at agencies like the Office of Personnel Management and NOAA are encouraged to send out the emails. 'We're told to send it every Monday before midnight,' one employee at Social Security told Axios. Despite this, the employee said, 'I have never gotten a response from anyone.' White House spokeswoman Liz Huston responded by saying, 'Commissioner Bisignano is streamlining the Social Security Administration to deliver more efficient service for American taxpayers.'

OPM spokeswoman McLaurine Pinover said she submits her emails weekly. 'This practice is vital to maintain accountability and transparency in employee contributions,' she told Axios. 'It's an easy way to share my work with leadership.'

Despite Musk almost completely decimating the Consumer Financial Protection Bureau, one employee there said they had not been told to stop sending the emails, but they stopped doing so anyway. 'Got tired of saying I hadn't accomplished anything because we haven't been given any work,' they said.

White House spokesman Harrison Fields defended the emails too. 'The mission of eliminating waste, fraud, and abuse is a part of the DNA of the federal government and will continue under the direction of the President, his cabinet, and agency heads to enhance government efficiency and prioritize responsible stewardship of taxpayer dollars,' he said.

Trump tapped Musk, a major 2024 campaign donor and booster of the president on social media, to lead DOGE, which was named after an internet meme dog. During that time, Musk slashed numerous government agencies such as the US Agency for International Development and the CFPB while reducing the workforces that manage the nation's national park system and at the Department of Health and Human Services.

Earlier this month, Musk turned his ire toward Trump's signature 'One Big, Beautiful Bill' domestic spending legislation, calling it an 'outrageous, pork-filled abomination.' Musk then went a step further and said that Trump was named in files related to the late financier and trafficker of underage girls Jeffrey Epstein, calling for his impeachment. Earlier this week, Musk apologized for some of his remarks.