FAA says SpaceX must wait to launch Starship on 9th flight test from Texas: Here's why
Billionaire Elon Musk had made it known he hoped his commercial spaceflight company could launch the 400-foot rocket much sooner than federal regulators are apparently willing to allow. In a recent announcement, the Federal Aviation Administration, which licenses commercial rocket launches, said approval for Starship's next flight test is pending the completion of an investigation into the vehicle's previous explosion.
The Starship launch vehicle, which has been under development for years, has exploded twice in its first two demonstrations of 2025. While an investigation into the first fiery mishap in January has concluded, the FAA and SpaceX have yet to put the final touches on a similar inquiry into the most recent flight in March.
The delay comes as Musk, a close adviser of President Donald Trump, looks for SpaceX to significantly ramp up testing of a vehicle that is due to play a major role in upcoming U.S. spaceflight missions.
The FAA announced May 15 that it had approved license modifications officially granting SpaceX's request to increase the number of Starship launches from Starbase in South Texas to 25 per year.
"However, SpaceX may not launch until the FAA either closes the Starship Flight 8 mishap investigation or makes a return to flight determination," the agency said in a May 15 statement. "The FAA is reviewing the mishap report SpaceX submitted on May 14."
The two mishaps of 2025 prompted government officials in the United Kingdom to send a letter to the U.S. State Department, requesting that the next flight's trajectory be changed to protect British territories in the Caribbean, ProPublica reported.
The mishaps are part of the reason the FAA also announced it is expanding the size of aircraft and maritime hazard areas in the U.S. and other countries for the next flight test. The decision also reflects SpaceX's plan to reuse, for the first time, a Super Heavy booster rocket that has launched before.
Musk had hinted on Tuesday, May 13, that SpaceX could launch its Starship "next week" on its ninth flight test from the company's Starbase in Boca Chica near Brownsville, Texas.
Maritime warnings over the Gulf of Mexico, renamed by the U.S. government as the Gulf of America, even suggested the launch was being targeted for Wednesday, May 21. An updated advisory now indicates a launch is planned as early as Tuesday, May 27.
SpaceX, though, hasn't officially announced a target launch date while it awaits a green light from the FAA.
SpaceX is developing Starship to be a fully reusable transportation system, meaning both the Super Heavy booster and the Starship spacecraft can return to the ground for additional missions. In the years ahead, Starship is intended to carry both cargo and humans to Earth's orbit and deeper into the cosmos.
NASA's lunar exploration plans, which appear to be in jeopardy under Trump's proposed budget, call for Artemis III astronauts aboard the Orion capsule to board the Starship while in lunar orbit for a ride to the moon's surface.
But Musk is more preoccupied with Starship reaching Mars – potentially, he has claimed, by the end of 2026. Under his vision, human expeditions aboard the Starship could then follow in the years after the first uncrewed spacecraft reaches the Red Planet.
Starship is regarded as the largest and most powerful launch vehicle ever developed.
At more than 400 feet in total height, Starship towers over SpaceX's famous Falcon 9 rocket – one of the world's most active – which stands at nearly 230 feet.
The launch vehicle is composed of a 232-foot Super Heavy booster rocket and a 171-foot upper stage spacecraft.
Super Heavy alone is powered by 33 of SpaceX's Raptor engines. The upper stage, also called Starship or simply Ship, is powered by six Raptor engines and is the portion of the vehicle that will ultimately travel in orbit.
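The two stage heights are consistent with the total quoted above; a quick check of the arithmetic:

$$232\ \text{ft (Super Heavy)} + 171\ \text{ft (Ship)} = 403\ \text{ft} > 400\ \text{ft}$$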
The first two flight tests of 2025 ended in dramatic explosions that sent cascades of fiery debris streaking across the sky. In both cases, the upper stage, the vehicle where astronauts and cargo would ride, came apart mere minutes into its flight instead of landing as planned in the Indian Ocean.
SpaceX is working with the FAA to investigate both mishaps, the most recent of which occurred March 6.
In a similar investigation into the first explosion of the year on Jan. 16, SpaceX concluded the mishap was due to a series of propellant leaks and fires in the aft section of the vehicle that caused "all but one of Starship's engines to execute controlled shut down sequences." The failure led to a loss of communication and ultimately caused the vehicle to trigger its own self-destruct system.
"With a test like this, success comes from what we learn, and today's flight will help us improve Starship's reliability,' the company said after the March 6 mishap. 'We will conduct a thorough investigation, in coordination with the FAA, and implement corrective actions to make improvements on future Starship flight tests.'
While Starship has reached space, it has yet to reach Earth's orbit in any of its eight flight tests – instead traveling at a lower altitude on a suborbital trajectory.
The vehicle has not exploded on every flight, though. In three tests between June and November 2024, Starship flew halfway around the world before reentering Earth's atmosphere and splashing down as planned in the Indian Ocean – critical proof that its basic design is functional.
Eric Lagatta is the Space Connect reporter for the USA TODAY Network. Reach him at elagatta@gannett.com
This article originally appeared on Corpus Christi Caller Times: SpaceX will have to wait to launch Starship again from Texas: Here's why
Related Articles


Atlantic
This Year Will Be the Turning Point for AI College
A college senior returning to classes this fall has spent nearly their entire undergraduate career under the shadow—or in the embrace—of generative AI. ChatGPT first launched in November 2022, when that student was a freshman. As a department chair at Washington University in St. Louis, I witnessed the chaos it unleashed on campus. Students weren't sure what AI could do, or which uses were appropriate. Faculty were blindsided by how effectively ChatGPT could write papers and do homework. College, it seemed to those of us who teach it, was about to be transformed. But nobody thought it would happen this quickly.

Three years later, the AI transformation is just about complete. By the spring of 2024, almost two-thirds of Harvard undergrads were drawing on the tool at least once a week. In a British survey of full-time undergraduates from December, 92 percent reported using AI in some fashion. Forty percent agreed that 'content created by generative AI would get a good grade in my subject,' and nearly one in five admitted that they've tested that idea directly, by using AI to complete their assignments. Such numbers will only rise in the year ahead. 'I cannot think that in this day and age that there is a student who is not using it,' Vasilis Theoharakis, a strategic-marketing professor at the Cranfield School of Management who has done research on AI in the classroom, told me. That's what I'm seeing in the classes that I teach and hearing from the students at my school: The technology is no longer just a curiosity or a way to cheat; it is a habit, as ubiquitous on campus as eating processed foods or scrolling social media. In the coming fall semester, this new reality will be undeniable. Higher education has been changed forever in the span of a single undergraduate career.

'It can pretty much do everything,' says Harrison Lieber, a WashU senior majoring in economics and computer science (who took a class I taught on AI last term). As a college student, he told me, he has mostly inhabited a world with ChatGPT. For those in his position, the many moral questions that AI provokes—for example, whether it is exploitative, or anti-intellectual, or ecologically unsound—take a back seat to the simple truth of its utility. Lieber characterized the matter as pragmatic above all else: Students don't want to cheat; they certainly don't want to erode the value of an education that may be costing them or their family a small fortune. But if you have seven assignments due in five days, and AI could speed up the work by tenfold for the cost of a large pizza, what are you meant to do?

In spring 2023, I spoke with a WashU student whose paper had been flagged by one of the generally unreliable AI detectors that universities have used to stem the tide of cheating. He told me that he'd run his text through grammar-checking software and asked ChatGPT to improve some sentences, and that he'd done this to make time for other activities that he preferred. 'Sometimes I want to play basketball,' he said. 'Sometimes I want to work out.' His attitude might have been common among large-language-model users during that first, explosive year of AI college: If a computer helps me with my paper, then I'll have more time for other stuff. That appeal persists in 2025, but as these tools have taken over in the dorms, the motivations of their users have diversified. For Lieber, AI's allure seems more about the promise of achievement than efficiency.
As with most students who are accepted to and graduate from an elite university, he and his classmates have been striving their whole lives. As Lieber put it, if a course won't have 'a tangible impact on my ability to get a good job,' then 'it's not worth putting a lot of my time into.' This approach to education, coupled with a 'dismal' outlook for postgraduate employment, justifies an ever more ferocious focus on accomplishment. Lieber is pursuing a minor in film and media studies. He has also started a profitable business while in school. Still, he had to network hard to land a good job after graduation. (He is working in risk management.)

Da'Juantay Wynter, another rising senior at WashU who has never seen a full semester without AI, told me he always writes his own essays but feels okay about using ChatGPT to summarize readings, especially if he is in a rush. And like the other students I spoke with, he's often in a rush. Wynter is a double major in educational studies and American-culture studies; he has also served as president of the Association of Black Students, and been a member of a student union and various other campus committees. Those roles sometimes feel more urgent than his classwork, he explained. If he does not attend to them, events won't take place. 'I really want to polish up all my skills and intellect during college,' he said. Even as he knows that AI can't do the work as well, or in a way that will help him learn, 'it's always in the back of my mind: Well, AI can get this done in five seconds.'

Another member of his class, Omar Abdelmoity, serves on the university's Academic Integrity Board, the body that adjudicates cases of cheating, with AI or otherwise. In almost every case of AI cheating he's seen, Abdelmoity told me, students really did have the time to write the paper in question—they just got stressed or preoccupied by other things, and turned to AI because it works and it is available. Students also feel the strain of soaring expectations. For those who want to go to medical school, as Abdelmoity does, even getting a 4.0 GPA and solid MCAT scores can seem insufficient for admission to the best programs. Whether or not this is realistic, students have internalized the message that they should be racking up more achievements and experience: putting in clinical hours, publishing research papers, and leading clubs, for example. In response, they seek ways to 'time shift,' Abdelmoity said, so they can fit more in. And that's at an elite private university, he continued, where the pressure is high but so is the privilege. At a state school, a student might be more likely to work multiple jobs and take care of their family. Those ordinary demands may encourage AI use even more. In the end, Abdelmoity said, academic-integrity boards such as the one he sits on can only do so much. For students who have access to AI, an education is what you make of it.

If the AI takeover of higher ed is nearly complete, plenty of professors are oblivious. It isn't that they fail to understand the nature of the threat to classroom practice. But my recent interviews with colleagues have led me to believe that, on the whole, faculty simply fail to grasp the immediacy of the problem. Many seem unaware of how utterly normal AI has become for students. For them, the coming year could provide a painful revelation.
Some professors I spoke with have been taking modest steps in self-defense: They're abandoning online and take-home assignments, hoping to retain the purity of their coursework. Kerri Tobin, an associate professor of education at Louisiana State University, told me that she is making undergrads do a lot more handwritten, in-class writing—a sentiment I heard many times this summer. The in-class exam, and its associated blue book, is also on the rise. And Abdelmoity reported that the grading in his natural-science courses has already been rejiggered, deemphasizing homework and making tests count for more. These adjustments might be helpful, but they also risk alienating students. Being forced to write out essays in longhand could make college feel even more old-fashioned than it did before, and less connected to contemporary life.

Other professors believe that moral appeals may still have teeth. Annabel Rothschild, an assistant professor of computer science at Bard College, said she's found that blanket rules and prohibitions have been less effective than a personal address and appeal to social responsibility. Rothschild is particularly concerned about the environmental harms of AI, and she reports that students have responded to discussions about those risks. The fact that she's a scientist who understands the technology gives her message greater credibility. It also helps that she teaches at a small college with a focus on the arts.

Today's seniors entered college at the tail end of the coronavirus pandemic, a crisis that once seemed likely to produce its own transformation of higher ed. The sudden switch to Zoom classes in 2020 revealed, over time, just how outmoded the standard lecture had become; it also showed that, if forced by circumstance, colleges could turn on a dime. But COVID led to little lasting change in the college classroom. Some of the students I spoke with said the response to AI has been meager too. They wondered why faculty weren't doing more to adjust teaching practices to match the fundamental changes wrought by new technologies—and potentially improve the learning experience in the process. Lieber said that he wants to learn to make arguments and communicate complex ideas, as he does in his film minor. But he also wonders why more courses can't assess those skills through classroom discussion (which is hard to fake) instead of written essays or research papers (which may be completed with AI). 'People go to a discussion-based class, and 80 percent of the class doesn't participate in discussion,' he said.

The truth is that many professors would like to make this change but simply can't. A lot of us might want to judge students on the merits of their participation in class, but we've been discouraged from doing so out of fear that such evaluations will be deemed arbitrary and inequitable—and that students and their parents might complain. When professors take class participation into account, they do so carefully: Students tend to be graded on whether they show up or on the number of times they speak in class, rather than the quality of what they say. Erin McGlothlin, the vice dean of undergraduate affairs in WashU's College of Arts & Sciences, told me this stems from the belief that grading rubrics should be crystal clear in spelling out how class discussion is evaluated. For professors, this approach avoids the risk of any conflicts related to accommodating students' mental health or politics, or to bureaucratic matters.
But it also makes the modern classroom more vulnerable to the incursion of AI. If what a student says in person can't be assessed rigorously, then what they type on their computer—perhaps with automated help—will matter all the more.

Like the other members of his class, Lieber did experience a bit of college life before ChatGPT appeared. Even then, he said, at the very start of his freshman year, he felt alienated from some of his introductory classes. 'I would think to myself, What the hell am I doing, sitting watching this professor give the same lecture that he has given every year for the last 30 years?' But he knew the answer even then: He was there to subsidize that professor's research. At America's research universities, teaching is a secondary job activity, at times neglected by faculty who want to devote as much time as possible to writing grants, running labs, and publishing academic papers. The classroom experience was suffering even before AI came onto the scene.

Now professors face their own temptations from AI, which can enable them to get more work done, and faster, just as it does for students. I've heard from colleagues who admit to using AI-generated recommendation letters and course syllabi. Others clearly use AI to write up their research. And still more are eager to discuss the wholesome-seeming ways they have been putting the technology to use—by simulating interactions with historical authors, for example, or launching minors in applied AI. But students seem to want a deeper sort of classroom innovation. They're not looking for gimmicks—such as courses that use AI only to make boring topics seem more current. Students like Lieber, who sees his college education as a means of setting himself up for his career, are demanding something more. Instead of being required to take tests and write in-class essays, they want to do more project-based learning—with assignments that 'emulate the real world,' as Lieber put it. But designing courses of this kind, which resist AI shortcuts, would require professors to undertake new and time-consuming labor themselves.

That assignment comes at the worst possible time. Universities have been under systematic attack since President Donald Trump took office in January. Funding for research has been cut, canceled, disrupted, or stymied for months. Labs have laid off workers. Degree programs have cut doctoral admissions. Multi-center research projects have been put on hold. The 'college experience' that Americans have pursued for generations may soon be over.

The existence of these stressors puts higher ed at greater risk from AI. Now professors find themselves with even more demands than they anticipated and fewer ways to get them done. The best, and perhaps the only, way out of AI's college takeover would be to embark on a redesign of classroom practice. But with so many other things to worry about, who has the time? In this way, professors face the same challenge as their students in the year ahead: A college education will be what they make of it.
Yahoo
Here's the forecast for the Tesla share price
The Tesla (NASDAQ:TSLA) share price is very volatile for a mega-cap stock. The company has long excited investors with its futuristic ambitions, from fully autonomous vehicles to humanoid robots, and even operations on Mars. However, analysts are increasingly cautious about the valuation despite the bold promises.

The price target
The average 12-month price target from 37 Wall Street analysts is currently $307.23. That's compared to a share price of $335.58 as I write. Forecasts vary widely, with a high of $500 and a low of just $19. The consensus suggests that the stock's overvalued; however, it's around fair value if we exclude GLJ Research's incredibly bearish take.

But Tesla's valuation metrics should make investors think twice. Its forward price-to-earnings (P/E) ratio's an insane 198.87 times. That's more than 1,000% higher than the consumer discretionary sector median. Other indicators — including enterprise value-to-EBIT, price-to-sales, and price-to-cash flow — are also clearly elevated.

Optimism baked in
Much of the optimism baked into the share price concerns Tesla's ambitions beyond electric vehicles (EVs). As we know, CEO Elon Musk has repeatedly suggested that Robotaxis and the humanoid robot Optimus could transform the company's future and justify a far higher valuation. However, timelines for both are relatively vague. And there have been a few disappointments of late. The full rollout of Robotaxis has faced several delays, and full regulatory approval for autonomous vehicles remains a significant hurdle. Optimus, meanwhile, still appears far from commercialisation.

Even if these technologies prove viable, questions remain over consumer adoption, pricing power, and competitive threats from other automakers and tech firms. In the case of robotaxis, infrastructure, insurance frameworks, and city-level policy are all really important, but none of these are within Tesla's direct control. As for Optimus, while the demo videos have drawn headlines, the path from prototype to scalable, revenue-generating product is uncertain. Investors may be underestimating how long it could take before either platform contributes materially to Tesla's earnings.

The bottom line
Investors have been here before. Some argue that the firm has a track record of delivering against the odds, particularly in scaling its EV operations. Yet recent numbers suggest growth may be slowing. What's more, the consensus forecasts show earnings per share (EPS) falling 30% in 2025. While there will likely be a rebound in later years, this is a considerable drop. Analysts anticipate EPS growth of 82% by 2028, though such long-range estimates are inherently uncertain. There also aren't many analysts forecasting through to 2028.

This puts investors in a challenging position. While the long-term vision's exciting, today's share price is heavily reliant on future breakthroughs rather than current performance. For those who believe in Tesla's ability to disrupt multiple industries, the current price might still make sense. However, I believe it's still a rather speculative investment. And that's simply because those valuation figures are incredibly hard to justify. As much as I like the brand, I don't think investors should consider the stock right now.
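The arithmetic behind those figures is easy to sanity-check. Below is a minimal Python sketch; the share price, price target, forward P/E and EPS changes come from the article, while the sector-median P/E and the normalized EPS base are assumptions backed out of its wording (the "more than 1,000% higher" claim, and the 82% read as cumulative growth from the depressed 2025 base), not sourced data:

```python
# Back-of-envelope checks on the valuation figures quoted in the article.

price = 335.58        # TSLA share price at the time of writing
avg_target = 307.23   # average 12-month analyst price target

# Implied 12-month move if the stock converges on the consensus target.
implied_move = avg_target / price - 1
print(f"Implied move to consensus target: {implied_move:+.1%}")  # about -8.4%

forward_pe = 198.87   # forward price-to-earnings ratio
premium = 10.0        # "more than 1,000% higher" = more than 10x above the median

# Sector-median forward P/E implied by the ">1,000% higher" claim (assumption).
implied_median = forward_pe / (1 + premium)
print(f"Implied sector-median forward P/E: below {implied_median:.1f}")  # ~18.1

# Consensus EPS path: -30% in 2025, then +82% by 2028, reading the 82%
# as cumulative growth from the depressed 2025 base (assumption).
eps_base = 1.00                    # normalized pre-drop EPS (assumption)
eps_2025 = eps_base * (1 - 0.30)   # 0.70
eps_2028 = eps_2025 * (1 + 0.82)   # about 1.27
print(f"Normalized EPS: 2025 = {eps_2025:.2f}, 2028 = {eps_2028:.2f}")
```

On that reading, the consensus target implies roughly 8% downside from the quoted share price, and even after the forecast rebound, normalized EPS would end 2028 only about 27% above its pre-drop level.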
The post Here's the forecast for the Tesla share price appeared first on The Motley Fool UK.

More reading
5 Stocks For Trying To Build Wealth After 50
One Top Growth Stock from the Motley Fool

James Fox has no position in any of the shares mentioned. The Motley Fool UK has recommended Tesla. Views expressed on the companies mentioned in this article are those of the writer and therefore may differ from the official recommendations we make in our subscription services such as Share Advisor, Hidden Winners and Pro. Here at The Motley Fool we believe that considering a diverse range of insights makes us better investors. © Motley Fool UK 2025
Yahoo
Cat soap operas and babies trapped in space: the ‘AI slop' taking over YouTube
Babies trapped in space, zombie football stars and cat soap operas: welcome to YouTube in the era of AI video. Nearly one in 10 of the fastest growing YouTube channels globally are showing AI-generated content only, as breakthroughs in the technology spur a flood of artificial content.

Guardian analysis of data from the analytics firm Playboard shows that out of the top 100 fastest growing channels in July this year, nine were showing purely AI-generated content. The offerings include channels featuring bizarre narratives such as a baby crawling into a pre-launch space rocket, an undead Cristiano Ronaldo and melodramas featuring humanised cats. AI video generation has surged amid the release of powerful tools such as Google's Veo 3 and Elon Musk's Grok Imagine.

The channels have millions of subscribers in total, including 1.6 million for the space-stranded infant and 3.9 million for Super Cat League, which features human-like cats having affairs and, in one of many bizarre scenes, the felines shooting down and dismembering an eagle.

Many of these videos qualify as 'AI slop', which refers to low-quality, mass-produced content that is surreal, uncanny or simply grotesque. But some contain a brief, rudimentary plot – in a sign of the growing sophistication of AI-generated content.

YouTube has tried to stem the slop deluge by blocking the sharing of advertising revenue with channels that post repetitive and 'inauthentic' content – a policy targeted at AI content. 'All content uploaded to YouTube is subject to our community guidelines – regardless of how it's generated,' said a spokesperson for YouTube, which is owned by Google's parent company, Alphabet. After being contacted by the Guardian about the channels – which included channels in the fastest growing list for June – YouTube said it had removed three of them from the platform and blocked a further two from receiving advertising income. It did not specify which channels had been sanctioned.

One expert said AI video generators herald the next wave of internet 'enshittification', a term coined in 2022 by the British-Canadian author Cory Doctorow to describe the decline in quality of users' online experiences as platforms prioritise profit over offering high-quality content.

'AI slop is flooding the internet with content that essentially is garbage,' said Dr Akhil Bhardwaj, an associate professor at the University of Bath's school of management. 'This enshittification is ruining online communities on Pinterest, competing for revenue with artists on Spotify and flooding YouTube with poor quality content.'

'One way for social media companies to regulate AI slop is to ensure that it cannot be monetised, thus stripping away the incentive for generating it.'

Ryan Broderick, the author of the popular Garbage Day newsletter on internet culture, is scathing about the impact of AI video, writing last week that YouTube has become a 'dumping ground for disturbing, soulless AI shorts'. Instagram's Reels video feature is also flooded with AI content. On the platform, a video of various celebrities' heads attached to animal bodies has gained 3.7m views, starring the 'Rophant' (Dwayne Johnson and an elephant) and 'Emilla' (Eminem on a gorilla).

On TikTok, many AI-generated videos have gone viral, including a video of Abraham Lincoln vlogging his ill-fated trip to the opera and cats competing in an Olympic diving event. However, the Lincoln and cat Olympic videos are more in the spirit of the internet's pre-slop era of playful wit.
Instagram and TikTok said they require all realistic AI content to be labelled. Videos suspected to contain AI from these channels were cross-checked with the deepfake detection service provider Reality Defender.

The channels featuring AI videos for July are:
Super Cat League (3.9 million subscribers)
বজল মিয়া 767k (2 million subscribers – this account has since been closed)
LSB POWER GAMING (1.7 million subscribers)
Amite Now Here (1.4 million subscribers)
Starway (2.8 million subscribers)
AmyyRoblox (2.4 million subscribers)
Again Raz Vai (1.8 million subscribers)
Cuentos Facinantes (4.8 million subscribers)
MIRANHAINSANO (4.9 million subscribers)