8 Years Later: Tesla Semi Delayed Again, Production Now Expected to Begin in Late 2025
Ask yourself: how many times can a production date slip before you question the entire project? Elon Musk first unveiled the Tesla Semi in 2017 with a promised 2019 launch. Today, eight years later, that launch remains a mirage.
Musk's original pledge to start building in 2019 vanished without a single factory-built Semi rolling off the line. A handful of pilot trucks appeared at a PepsiCo event in December 2022, yet no volume production followed. Tesla's own quarterly update now pushes the first assembly to late 2025, with meaningful volume still scheduled for 2026. The biggest recent news on the Semi? The walls were up at the Nevada plant. That's an eight-year gap between unveiling and the projected start of production.
Early prices ($150,000 for a 300-mile version, $180,000 for 500 miles) never held firm. Ryder, an early launch partner, slashed its order from 42 to 18 trucks and requested a 28-month extension, blaming "dramatic changes to the Tesla product economics." Simple math suggests each Semi now costs upward of $350,000, roughly double the original quote, undermining the Total Cost of Ownership pitch that once dazzled fleet operators.
Meanwhile, BYD's Class 8 8TT is hauling freight across North America. Customers such as Anheuser-Busch, GSC Logistics, and Golden State Express run them on regional routes in California, demonstrating true highway operation (e.g., Port of Oakland to Tracy, CA, including an 8% grade) with ample range remaining for the return trip. Freightliner's eCascadia, a production Class 8 electric tractor designed for regional and highway applications, sees daily highway duty, offering up to 230 miles of range on a full charge and a gross combined weight rating of 82,000 lbs.
Meanwhile, Volvo, Mack, Kenworth, and Peterbilt all have electric Class 8s on the road in the US, leaving Tesla out in the cold. Tesla's promised 50,000 units per year by 2026? Too little, too late. Arriving nine years after the unveiling, it looks more like a press-release fantasy than an achievable target.
Each new "update" (steel beams up! chargers installed!) serves as a rallying cry for hopeful investors. Yet slide decks and photo ops don't haul cargo or pay off loans. As recently as three weeks ago, facing a shareholder rebellion, Musk assured investors that he'd "deliver the Semi this year, and it will be huge." For Tesla devotees and shareholders alike, broken timelines have become a test of faith: miss one deadline, reset the countdown, and wait for the next miracle. How long before investors demand proof over promises?
The Semi's saga underscores a hard truth: grand reveals don't equal delivery. If it were only the Semi, one might give him the benefit of the doubt. But Elon Musk's other undelivered promises include Level 5 full self-driving by end-2020, a $25,000 mass-market Tesla, on-schedule Cybertruck deliveries, next-generation Roadster production in 2020, a fully automated robotaxi network by 2022, human Neuralink clinical trials in 2024, commercial Hyperloop service by 2021, and publicly accessible Boring Company tunnels by 2022.
Musk's gifts lie in hype, not heavy haul. Until Semis exit the prototype hangars en masse, this program remains a stock-price prop, not an industry disruptor. Watch for disappointed and rebellious shareholders at the end of 2025. Fleet managers-and investors-should insist on steel on wheels, not steel beams in Nevada. Plenty of other brands are out there.
Bookmark December 2025 and check the public delivery data. Then ask yourself: will Tesla finally roll real Semis off the line, or will the next slideware date simply become another bullet point in a growing ledger of unmet promises?
Copyright 2025 The Arena Group, Inc. All Rights Reserved.