
SpaceX to invest $2 billion in Elon Musk's xAI startup: report
The investment follows xAI's merger with X and values the combined company at $113 billion. The Grok chatbot now powers Starlink support and is eyed for future integration into Tesla's Optimus robots, the report added.
Musk said in a post on X that an investment by SpaceX into xAI 'would be great.'
In response to a post on X about whether Tesla could also invest in xAI, Musk said on Sunday, 'It would be great, but subject to board and shareholder approval,' without confirming or denying the Journal report on SpaceX's investment plans in xAI.
SpaceX, xAI and Tesla did not immediately respond to requests for comment. Reuters could not immediately confirm the WSJ report.
Despite recent controversies involving Grok's responses, Musk has called it 'the smartest AI in the world,' and xAI continues to spend heavily on model training and infrastructure.

Related Articles


CBS News
28 minutes ago
Pentagon to start using Grok as part of a $200 million contract with Elon Musk's xAI
The Pentagon has signed on to use Grok, the AI chatbot built by Elon Musk's company xAI, as part of a new $200 million agreement that opens the door for its deployment across the federal government, the company announced Monday. The announcement comes amid Musk's public breakup with President Trump and days after Grok generated antisemitic responses and praised Adolf Hitler.

The rollout is part of "Grok for Government," a newly launched suite of tools designed for use by federal agencies, local governments, and national security operations. xAI said its products, including its latest Grok 4 model, will now be available for purchase through the General Services Administration (GSA), allowing any federal office to adopt the technology.

The move aligns with the Trump administration's push for more aggressive adoption of artificial intelligence across the government. Since taking office in January, Mr. Trump has championed AI as a pillar of national security and innovation. Musk himself briefly served in the Trump administration earlier this year, overseeing the White House's Department of Government Efficiency, or DOGE, before stepping down in May amid a public break with Mr. Trump over his sweeping tax and spending bill. Musk has since emerged as a sharp critic of that legislation, even floating the idea of launching a third political party.

Despite the rift, xAI has continued to expand its government work. The new offering includes custom national security tools, AI-powered science and health applications, and cleared engineering support for classified environments.

The announcement comes just days after Grok generated antisemitic responses to user prompts and referenced Hitler as part of what the company called an effort to make the model "less politically correct." Hours later, Musk wrote in a post on X that "Grok was too compliant to user prompts. Too eager to please and be manipulated, essentially. That is being addressed."

The posts were later deleted, and xAI said it "quickly" patched the issue. One day later, xAI launched an upgraded version of Grok that it described as a major leap forward. Musk also announced that Grok would be used in Teslas. But the latest version had kinks of its own: Grok checked Musk's views before answering a question, according to The Associated Press.

Grok was introduced in late 2023 as a more unfiltered alternative to other chatbots like ChatGPT, and is already integrated into Musk's social media platform X, formerly known as Twitter. "America is the world leader in AI," xAI said in Monday's post announcing the Pentagon deal. "We're excited to contribute back to the country that made xAI uniquely possible here."


The Verge
an hour ago
Tesla's Autopilot is under scrutiny in a rare jury trial
Tesla is on trial in Miami today in a case that accuses Elon Musk's company of liability in a fatal crash involving Autopilot. The driver-assist system has come under scrutiny in the past for a number of fatal incidents, but Tesla has only rarely faced a jury trial over the question of whether Autopilot was at fault for someone's death.

The trial comes at a particularly risky moment for Tesla, which is currently forging ahead with its plan to introduce robotaxis to more cities. The company is also experiencing a monthslong backlash over Musk's hard-right turn and his work with Donald Trump's administration.

Autopilot, which can control steering and braking as well as perform automatic lane changes on certain highways, has come under increased scrutiny from federal regulators. It has also been at the center of several lawsuits, some of which Tesla has settled and others of which have been dismissed.

The case in question involves an inattentive driver of a Tesla Model S and a couple who were out stargazing late at night. Naibel Benavides, 20, was killed in 2019 when George McGee's Model S rammed into a stationary SUV parked next to a T-intersection. McGee was using Autopilot but had dropped his phone and was inattentive at the time of the crash. Benavides and her boyfriend, Dillon Angulo, who was seriously injured, were standing outside the SUV when McGee's Tesla plowed into it.

The case, which is being heard in the US District Court for the Southern District of Florida, was filed by Angulo and the family of Benavides. Tesla plans to argue that it isn't at fault because Autopilot was not fully in control of the vehicle at the time of the crash, citing data showing that McGee overrode the driver assist by pressing the accelerator. Tesla has also long argued that drivers bear responsibility when crashes occur involving Autopilot. On its website, the company says that its driver-assistance systems 'require active driver supervision and do not make the vehicle autonomous.'

The plaintiffs will argue that the system bears some responsibility for failing to warn the driver that a crash was imminent. The vehicle ignored a stop sign before the crash, and they contend the automatic emergency braking should have worked even if Autopilot was not engaged.

Still, it will be tough for the plaintiffs to convince a jury that Tesla was at fault. In Florida automobile liability cases, the court notes, the standard is 'whether the car manufacturer exhibited a reckless disregard for human life equivalent to manslaughter by designing and marketing the vehicle.' Indeed, in another case involving a crash from 2019, Tesla was found not liable for the death of a Model 3 owner whose vehicle crashed while driving on Autopilot. And in another case, a jury ruled against plaintiff Justine Hsu, who sued Tesla after her vehicle hit a median while using Autopilot.

Tesla has long managed to dodge responsibility for fatal crashes involving its products. The company was forced to issue several recalls after a federal investigation into dozens of crashes involving Tesla vehicles with Autopilot, but it has never been criminally indicted. In 2023, Musk laughed off a question from investors about whether his company would accept legal liability for its self-driving vehicles in the future. 'There's a lot of people who assume we have legal liability,' Musk said, 'judging by the lawsuits.'

Still, the stakes are incredibly high for Tesla, but then again, when are they not? The presiding judge in the Florida case has ruled that the plaintiffs may seek punitive damages from Tesla. And because Tesla has refused to impose geographic limits on Autopilot, despite evidence that the system was ill-equipped to handle some situations, the judge said that a jury could reasonably rule against the company. 'A reasonable jury could find that Tesla acted in reckless disregard of human life for the sake of developing their product and maximizing profit,' she wrote.


The Verge
an hour ago
US government announces $200 million Grok contract a week after 'MechaHitler' incident
A week after Elon Musk's Grok dubbed itself 'MechaHitler' and spewed antisemitic stereotypes, the US government has announced a new contract granting the chatbot's creator, xAI, up to $200 million to modernize the Defense Department. xAI is one of several leading AI companies to receive the award, alongside Anthropic, Google, and OpenAI. But the timing of the announcement is striking given Grok's recent high-profile spiral, which drew congressional ire and public pushback.

The use of technology, and especially AI, in the defense space has long been controversial even within the tech industry. Musk's prior involvement in slashing federal government contracts through his work at the Department of Government Efficiency (DOGE) also raises questions about potential conflicts, though his relationship with President Donald Trump has more recently soured, and the administration has claimed Musk would step back from any potential conflicts while at DOGE.

The contract announcement from the Chief Digital and Artificial Intelligence Office (CDAO) is light on details, but says the deals will help the DoD 'develop agentic AI workflows across a variety of mission areas.' Alongside the contract award, xAI announced 'Grok for Government,' which it says will supply 'frontier AI products' to the US. In addition to the DoD contract, xAI says other federal agencies will now be able to purchase its tools via the General Services Administration (GSA) schedule. The company plans to work on new products for government customers, including custom models focused on national security, applications for healthcare and science use cases, and models accessible in classified environments.

The award comes days after changes to Grok sent it off the rails, with the chatbot saying that if 'calling out radicals cheering dead kids makes me "literally Hitler," then pass the mustache,' and referencing a 'pattern-noticing meme' in which 'folks with surnames like "Steinberg" (often Jewish) keep popping up in extreme leftist activism, especially the anti-white variety.' The company apologized for 'the horrific behavior that many experienced' and said the update responsible for Grok's tirades was active for 16 hours but had been deprecated. Instructions given to the chatbot, such as not to be 'afraid to offend people who are politically correct,' had the 'undesired' effect of leading it to 'ignore its core values in certain circumstances in order to make the response engaging to the user,' even if that meant 'producing responses containing unethical or controversial opinions.'