With Sales Plummeting, Tesla Cancels Key Cybertruck Component

Yahoo · 09-05-2025
Tesla has officially abandoned its plan to sell a $16,000 range extender for the Cybertruck, Electrek reports — the latest in the automaker's struggles with the incredibly controversial vehicle.
This week, customers who paid a $2,000 reservation fee for the extender received emails from Tesla stating it was no longer selling the add-on, but that it would fully refund their deposits. Electrek reached out to the automaker and confirmed the decision.
Per the reporting, the cancellation comes a month after Tesla quietly pulled the range extender from its website's vehicle configurator. No official reason for the cancellation has been given.
With a base range of 320 to 350 miles on a single charge, depending on the configuration, the Cybertruck launched with a far shorter range than the 500-plus miles that CEO Elon Musk once promised. The range extender, a battery pack designed to be installed in the pickup bed, was supposed to make up for that failure.
However, since the accessory was announced around the time Cybertruck deliveries began in late 2023, Tesla has waffled on how much of a boost the add-on would give. It first said the pack would extend the Cybertruck's range to 470 miles, but later walked that figure back to 445 miles when it delayed the release date to mid-2025.
Symbolically, by canning the add-on entirely, Tesla has admitted defeat on ever fulfilling its original pledge.
In any case, the extender was always a questionable option. On top of costing as much as a used car, it was going to weigh a whopping 600 pounds and take up a full third of the pickup bed once installed.
But the dumped add-on is only the tip of Tesla's troubles with the Cybertruck, which has sold fewer than 50,000 units since launch — a far cry from Musk's boast that it would easily move 250,000 per year. In reality, Cybertruck sales have continued to plummet, with only 6,400 of the electric pickups sold in the first quarter of this year. Tesla's inventory of unsold Cybertrucks, meanwhile, has reportedly piled up to a record high of over 10,000 units.
Buyers have plenty of reasons to stay away. Fraught with quality concerns, the Cybertruck has faced eight separate recalls, with the latest one — issued because its glued-on body panels could fly off — affecting nearly all of the vehicles it had ever sold.
The Cybertruck's image has also been irreversibly tarnished by its close association with Musk, who has become deeply reviled for his far-right politics and his role in the Trump administration.
While Tesla recently announced a cheaper version of the Cybertruck with a slightly longer range and far fewer features, it seems unlikely that the bid will revive sales. Tesla clearly didn't have much faith in the range extender doing well, and with reports that it's slowing down Cybertruck production, the automaker may finally be seeing the writing on the wall.
More on Tesla: Tesla's Sales Are Somehow Continuing to Fall

Related Articles

Elon Musk says Google has the best shot at being the leader in AI

Business Insider · 2 minutes ago

xAI CEO Elon Musk paid a rare compliment to his rival Google on Wednesday.

"Outside of real-world AI, Google has the biggest compute (and data) advantage for now, so currently has the highest probability of being the leader," Musk wrote in an X post. But that "may change in a few years," he added. "For the foreseeable future, the major AI companies will continue to prosper, as will xAI. There is just so much to do!" Musk wrote.

Musk and Google did not respond to requests for comment from Business Insider.

Google has long been a mainstay in AI. In 2017, Google Research released a paper titled "Attention Is All You Need." This seminal work introduced the world to the Transformer, the architecture that powers large language models like ChatGPT. The search giant has also invested in AI startups such as Anthropic and Safe Superintelligence. Google owns a 14% stake in Anthropic, a startup founded by former OpenAI employees, per legal filings obtained by The New York Times.

Google has continued to make hefty investments in AI. During its second-quarter earnings call last month, the company said it was raising its capital expenditures by $10 billion this year, to $85 billion. Google's CEO, Sundar Pichai, said the increase would help meet growing demand for chips and Google's AI products.

Last year, Google DeepMind's CEO, Demis Hassabis, was one of three researchers awarded the Nobel Prize in chemistry. Hassabis and his colleague John Jumper were recognized for Google DeepMind's work on protein-structure prediction.

Musk's comments come as he continues to escalate his feud with Sam Altman and OpenAI. On Monday, Musk said xAI would take "immediate legal action" against Apple for what he said was Apple's bias toward OpenAI in its App Store rankings. A spokesperson for Apple told Business Insider on Tuesday that the App Store is "designed to be fair and free of bias."

Musk and Altman cofounded OpenAI in 2015, but their relationship soured after Musk left its board in 2018. Last year, Musk sued OpenAI, accusing the ChatGPT maker of violating its nonprofit mission when it partnered with Microsoft. Musk launched his own AI startup, xAI, in July 2023 and debuted its chatbot, Grok, later that year. xAI raised over $12 billion across its Series A, B, and C funding rounds in 2024 and was valued at a reported $50 billion.

Last month, Musk said on X that his EV company, Tesla, will ask shareholders to vote on whether to invest in xAI. He did not specify when the vote would take place. "It's not up to me. If it was up to me, Tesla would have invested in xAI long ago," Musk wrote on X on July 13.

Is Baby Grok really for kids? X's AI past says otherwise

Yahoo · an hour ago

Elon Musk's xAI unveils plans for Baby Grok, a kid-friendly AI chatbot, amid questions over safety, trust, and Grok's controversial past.

Elon Musk's xAI is reportedly developing 'Baby Grok', a child-friendly version of its Grok AI chatbot. Early reports suggest it could serve as an educational assistant for children, answering questions and guiding learning in a safe, age-appropriate manner. But creating a truly trustworthy AI for kids isn't just about censoring bad words; it requires rebuilding the AI's training and safety systems from the ground up, ensuring it can protect young users from harmful, biased, or misleading content.

With AI now present in classrooms, homes, and even children's toys, the stakes are high. If Baby Grok succeeds, it could become a valuable tool for modern education. If it fails, it could become another high-profile example of AI safety gone wrong. So, how exactly would Baby Grok work? What challenges will it face, and can Musk's team really make it safe enough for kids? Let's dig into the details.

Why is Elon Musk building Baby Grok now?

Musk's AI company, xAI, launched Grok in 2023 as a bold alternative to ChatGPT, integrating it directly into X (formerly Twitter) for premium subscribers. Grok quickly made headlines for its humor, quick wit, and intentionally 'edgy' tone, a style that won over some fans but also drew sharp criticism when the AI produced inappropriate or offensive responses to user prompts.

The pivot toward Baby Grok mirrors a broader industry shift toward AI in education. Tech giants like OpenAI, Google, and Meta are racing to build AI-driven tutoring systems, interactive study aids, and personalized learning companions. The global AI-in-education market is projected to grow from $3.79 billion in 2022 to $20.54 billion in 2027, a CAGR of 45.6%.
For Musk, a 'safe-mode' version of Grok could serve multiple purposes: appealing to parents who are wary of unfiltered AI, offering schools a controlled digital learning assistant, and easing concerns from regulators pushing for stricter AI safety laws. At the same time, it could shield xAI from the brand damage that unmoderated, free-form AI can cause, something Grok's initial rollout demonstrated all too well.

What does Baby Grok promise, and how would it work?

Baby Grok is designed to be a smart, child-friendly learning assistant, not just a filtered version of Grok. Its key goals include:

- Translating complex topics into age-appropriate explanations: presenting subjects like math, science, or history in ways that match a child's comprehension level.
- Avoiding profanity, explicit content, and violence: going beyond simple word filters to ensure context-sensitive awareness that keeps conversations safe and respectful.
- Offering interactive learning games and storytelling: games and stories can make learning fun while supporting skill development, an approach shown to improve engagement and emotional growth in children.
- Encouraging curiosity without overwhelming: providing thoughtful, manageable responses that spark interest without cognitive overload.

Creating this experience takes more than flipping a 'safe mode' switch; it requires filtered training data, child-centered design, and continuous oversight to ensure Baby Grok is both safe and genuinely educational.

How would Baby Grok actually keep conversations kid-friendly?
Musk hasn't released technical specifications yet, but AI safety researchers stress that a truly safe children's AI requires a blend of technology, human oversight, and continuous updates:

- Custom training dataset: Rather than being trained on open internet data, which often includes adult, violent, or misleading content, Baby Grok would need a highly curated, education-first dataset (such as National Geographic Kids, BBC Bitesize, or NASA's Climate Kids).
- Real-time content filtering: Outputs should pass through advanced filters similar to Microsoft's Content Safety API or Google SafeSearch, which automatically block inappropriate language, explicit imagery, or unsafe suggestions.
- Human moderation: Since no filter is flawless, platforms like Roblox and Discord rely on moderation teams to review flagged content. Baby Grok would need the same level of vigilance.
- Adaptive safeguards: Online risks evolve fast. Ongoing updates guided by child psychologists, researchers, and organizations like eSafety are essential to keeping the AI responsive to emerging threats.

Importantly, a 2025 benchmark study introduced MinorBench, a tool specifically designed to assess language models' compliance with child safety safeguards, and it revealed significant variability even among top AI systems like Grok in how well they refuse unsafe or inappropriate queries.

Can an edgy adult AI be remade for children?

Musk's challenge is credibility. Grok's 'uncensored' style was originally marketed as a feature for adults, a stark contrast to the thoughtful tone parents expect from child-friendly tools. Platforms like YouTube Kids demonstrate how even strict filters can fail, and with generative AI the risk is amplified: every response is created in real time, not pulled from a pre-approved library. Researchers at the Center for Humane Technology emphasize that without deep safety integration, child-facing AI can still reproduce bias, misinformation, or harmful advice.
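To make the layered approach concrete, here is a minimal, purely illustrative Python sketch of the pattern researchers describe: a hard automated filter first, with uncertain outputs escalated to a human review queue rather than shown to the child. This is not xAI's implementation; the blocklist, the uncertainty heuristic, and all names are hypothetical stand-ins for real classifiers and moderation tooling.

```python
# Illustrative sketch of a layered kid-safety pipeline (hypothetical,
# not xAI's design): automated filtering first, human review for
# uncertain cases, and only clearly safe replies passed through.

from dataclasses import dataclass, field

# Toy stand-in for a real content-safety classifier's blocked categories.
BLOCKLIST = {"violence", "gambling"}

@dataclass
class SafetyPipeline:
    review_queue: list = field(default_factory=list)  # held for human moderators

    def check(self, reply: str) -> str:
        words = set(reply.lower().split())
        if words & BLOCKLIST:
            return "blocked"           # hard filter: never shown to the child
        if "?" in reply and len(reply) > 200:
            # Toy "uncertainty" heuristic: long, open-ended replies get a
            # second look from a human before delivery.
            self.review_queue.append(reply)
            return "needs_review"
        return "allowed"

pipeline = SafetyPipeline()
print(pipeline.check("Dinosaurs lived millions of years ago."))  # allowed
print(pipeline.check("Stories about violence"))                  # blocked
```

In a production system the blocklist check would be a moderation API call and the length heuristic a calibrated confidence score, but the control flow — block, escalate, or allow — is the part the article's sources emphasize.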
A recent framework introduced in 2025, 'LLMs and Childhood Safety', underscores this risk by proposing protective measures for language models used with children, pointing out that current systems often lack the standards needed to reliably prevent harmful outputs, even when they're designed for younger audiences.

Why is building AI for kids more complex than it looks?

Developing AI for children involves meeting stricter legal and ethical standards that go well beyond typical AI design. First, there are data privacy protections: tools like Baby Grok must comply with stringent child-specific laws such as COPPA in the U.S. and GDPR-K in Europe, which demand explicit parental consent and careful handling of minors' data. Equally important is bias reduction, ensuring the AI avoids reinforcing harmful stereotypes and treats all children fairly, regardless of background. Then there's the need for educational accuracy, meaning the AI's responses must be grounded in trusted sources like Britannica Kids or Scholastic Learn at Home. A 2025 study published in Nature Humanities and Social Sciences Communications reinforces that ethical design, transparency, and verified educational content are critical for AI systems targeting children. The challenge is not just building a smart tool but earning lasting trust from parents, educators, and regulators alike.

What's next for Baby Grok?

If xAI follows its usual rollout strategy, Baby Grok may begin in a closed beta reserved for Premium+ X subscribers. The company could then expand through integrations with classroom platforms or learning devices, partnerships with educational publishers, and features such as parent dashboards that offer conversation logs and usage controls. But success will largely depend on early safety testing.
A single viral misstep, such as a child receiving harmful advice, could severely damage its reputation before it's widely adopted. A 2025 study on AI educational tools emphasizes the importance of establishing strong safety guardrails from the start, including prompt engineering, human-in-the-loop moderation, and continuous evaluation to ensure content remains curriculum-aligned and age-appropriate.

Can Baby Grok handle the weight of childhood?

Baby Grok could redefine AI in education if it balances innovation with strong safety and transparency. But Musk's history of pushing fast, sometimes at the cost of polish, raises questions.

- Baby Grok represents xAI's bold move into child-focused AI amid rising demand for educational tech.
- Building a safe AI for kids isn't easy; it requires more than filters, involving retraining, oversight, and ethical design.
- Musk's credibility is under scrutiny, especially given Grok's edgy origins and controversial responses.
- Past platform failures show the risks of repurposing adult AI for children without deep safeguards.
- Trust will be the ultimate test, from parents, educators, and regulators alike.

The key issue isn't just 'Can Baby Grok be built?' but 'Will parents trust it?'. If xAI meets the highest safety benchmarks and proves its reliability through independent audits, it could set a new standard for kid-friendly AI. If not, it risks becoming yet another cautionary tale in the history of AI development.

This story is made with AI assistance and human editing.

SpaceX shows off huge size of Super Heavy rocket's new grid fins

Digital Trends · 2 hours ago

SpaceX is preparing for the 10th test flight of the Starship, the world's most powerful rocket, comprising the upper-stage Starship spacecraft and the first-stage Super Heavy booster. The Elon Musk-led spaceflight company on Wednesday showed off the design of a new grid fin for the Super Heavy rocket. And with a surface area about 50% larger than the previous design, it's massive.

To emphasize the point, SpaceX shared a photo showing an engineer standing on one of the new grid fins:

"The first grid fin for the next generation Super Heavy booster. The redesigned grid fins are 50% larger and higher strength, moving from four fins to three for vehicle control while enabling the booster to descend at higher angles of attack." — SpaceX (@SpaceX) August 13, 2025

The Starship's 10th test flight is expected to take place this month, and the redesigned grid fins will likely debut in that mission. SpaceX said that besides being larger, the new grid fins, which act as aerodynamic control surfaces through small adjustments during flight, are also stronger. The new design means the 233-foot-tall (71-meter) booster will be fitted with three grid fins instead of four, enabling it to descend at higher angles of attack as it comes in to land back at the launch tower shortly after deploying the Starship spacecraft to orbit. The fins will also be used for the spectacular 'catch' maneuver, in which the tower secures the booster just above the ground on its return.

The fins have also been moved further down the booster from their previous position near the top. SpaceX said the new location will reduce the heat they receive from the Starship spacecraft's engines when they fire up during stage separation, lowering the risk of damage.

SpaceX is planning to launch the Starship from its Starbase site in Boca Chica, Texas, before the end of this month.
NASA will be watching the proceedings carefully, as it wants to use the Starship — alongside its own SLS rocket — for crew and cargo missions to the moon as part of the Artemis program. NASA has already inked a deal with SpaceX to use a modified version of the Starship spacecraft to land two astronauts on the lunar surface in the Artemis III mission, currently set for 2027. But whether that target date holds remains to be seen.
