
Firefly's Alpha rocket, Lockheed satellite fall into ocean after failed launch
About two minutes after liftoff from Vandenberg Space Force Base in California, the engine nozzle on Alpha's upper stage broke off in space moments after the stage separated from the rocket's core booster as designed, "substantially reducing the engine's thrust," Firefly said.
While typical rocket mishaps trigger explosions or on-board shutdown commands, Alpha's upper stage continued firing for several minutes and climbed to an altitude of 320 km (200 miles). But it failed to reach its intended orbit and was pulled back into the atmosphere by Earth's gravity.
"The stage and payload have now safely impacted the Pacific Ocean in a cleared zone north of Antarctica," Firefly said.
During Alpha's ascent, a company livestream of the flight showed several pieces of debris flying off the rocket. Moments before, the on-screen altitude and speed data had disappeared, earlier in the flight than is typical for such broadcasts.
With Tuesday's events, four of six Alpha flights since 2021 have failed. Firefly, which achieved a breakthrough moon landing in March, is vying with a handful of similar upstarts seeking to build a launch business in a market dominated by Elon Musk's SpaceX.
The Alpha mission was the first under an agreement between Firefly and Lockheed for up to 25 flights through 2029. The satellite, called the LM 400 Technology Demonstrator, was self-funded by Lockheed as an effort to test technologies that Lockheed plans to sell to other customers, such as the Pentagon.
"Navigating risk and going fast are part of these self-funded demonstrations," a Lockheed spokesman said, adding that the satellite's production yielded insights that will benefit future customers.
About an hour after the launch, Firefly posted on X that a mishap put "the vehicle in a lower than planned orbit," then deleted that statement. The company said it was working with Lockheed, the U.S. Space Force and Federal Aviation Administration to determine the root cause of the failure.
With Alpha, Firefly hopes to fulfill demand from the U.S. Defense Department for launching national security payloads into space, particularly under tight timelines. The company had a successful launch in 2023 in a Space Force mission to demonstrate rapid-launch capabilities.
Founded in 2014, Austin-based Firefly went bankrupt in 2017, changed ownership amid U.S. national security concerns in 2022, ousted its CEO over an inappropriate relationship in 2024, then landed on the moon on its first try in March.
(Reporting by Joey Roulette; Editing by Mark Porter, Alexandra Hudson and Cynthia Osterman)
Related Articles


Nvidia says no 'backdoors' in chips as China questions security
BEIJING: Nvidia chips do not contain "backdoors" allowing remote access, the US tech giant has said, after Beijing summoned company representatives to discuss "serious security issues".

The California-based company is a world-leading producer of AI semiconductors, and this month became the first company to hit US$4 trillion in market value. But it has become entangled in trade tensions between China and the United States, and Washington effectively restricts which chips Nvidia can export to China on national security grounds.

"Cybersecurity is critically important to us. Nvidia does not have 'backdoors' in our chips that would give anyone a remote way to access or control them," Nvidia said in a statement Thursday (July 31).

A key issue has been Chinese access to the "H20" - a less powerful version of Nvidia's AI processing units that the company developed specifically for export to China. Nvidia said this month it would resume H20 sales to China after Washington pledged to remove licensing curbs that had halted exports.

But the tech giant still faces obstacles - US lawmakers have proposed plans to require Nvidia and other manufacturers of advanced AI chips to include built-in location tracking capabilities.

Beijing's top internet regulator said Thursday it had summoned Nvidia representatives to discuss recently discovered "serious security issues" involving the H20. The Cyberspace Administration of China said it had asked Nvidia to "explain the security risks of vulnerabilities and backdoors in its H20 chips sold to China and submit relevant supporting materials".

China is aiming to reduce reliance on foreign tech by promoting Huawei's domestically developed 910C chip as an alternative to the H20, said Jost Wubbeke of the Sinolytics consultancy. "From that perspective, the US decision to allow renewed exports of the H20 to China could be seen as counterproductive, as it might tempt Chinese hyperscalers to revert to the H20, potentially undermining momentum behind the 910C and other domestic alternatives," he said.

Other hurdles to Nvidia's operations in China are the sputtering economy, beset by a years-long property sector crisis, and heightened trade headwinds under US President Donald Trump. CEO Jensen Huang said during a visit to Beijing this month that the company remained committed to serving local customers, adding that he had been assured during talks with top Chinese officials that the country was "open and stable". - AFP


US tech companies skewered over false tsunami information
When a massive 8.8 magnitude earthquake struck off Russia's Pacific coast on Tuesday, one core worry immediately emerged: a life-threatening tsunami. Around the Pacific Ocean, weather authorities leapt into action, modelling the threat and releasing warnings and advisories to prepare their communities for what could be a horrific night.

As the news unfolded, residents of Hawaii, Japan and North America's West Coast jumped onto their devices to seek evacuation plans and safety guidance. Online, some got their information from artificial intelligence chatbots. And in the moment of potential crisis, a few of those newly prevalent tools appear to have badly bungled the critical task at hand.

Grok, the chatbot made by Elon Musk's Bay Area-based xAI and embedded in the social media site X, repeatedly told the site's users that Hawaii's tsunami warning had been cancelled when it actually hadn't, incorrectly citing sources. Social media users reported similar problems with Google Search's AI overviews after receiving inaccurate information about authorities' safety warnings in Hawaii and elsewhere.

Thankfully, the tsunami danger quickly subsided on Tuesday night and Wednesday morning without major damage. Still, the issues speak to the growing role of AI tools in people's information diets – above other search results, Google often places AI overviews that compile information from around the Internet, and Grok, the chatbot, automatically responds to users that tag "@Grok" on X posts – and to the tools' potentially dangerous fallibility.

On Tuesday evening, Grok wrote that the National Weather Service's Pacific Tsunami Warning Center had cancelled Hawaii's tsunami warning at 3.23pm local time "after data showed no significant waves. No impacts occurred – Hawaii is safe." This wasn't true. Authorities had issued the tsunami warning at 2.43pm, and waves didn't reach the islands until later in the evening. In reality, the Honolulu-based weather authority kept its warnings active until after 10.30pm, when it downgraded the guidance to an advisory. (Grok wasn't wrong everywhere; just four minutes after its "3.23pm" claim, it accurately told another user that "evacuations are ongoing" and that warnings were active for Hawaii and parts of Alaska.)

X users blasted the chatbot for the mistakes, with one calling AI a "disaster for real time events." Another user tagged Musk and included several screenshots of Grok's errors, writing that the chatbot had "spread life-endangering misinformation during the tsunami event." A critic of Google – who prompted the search tool to show an AI overview by adding "+ai" to their search – called the text that showed up "dangerously wrong."

X did not respond to SFGATE's request for comment. Google, which dubs high-stakes queries "Your Money or Your Life," said in a May document explaining the AI overviews that such queries give the tool "an even higher bar for showing supporting information from reliable and trustworthy sources." Google spokesperson Ned Adriance wrote, "Search quickly and automatically updates based on fresh web content and the latest information. In this case, Search now shows a dedicated alert feature that provides up-to-date information for people searching about this topic."

("Tsunami" searches on Wednesday morning by this reporter yielded a small "advisory" symbol and then, once the advisory was cancelled in the Bay Area, just a selection of news stories.) Grok, in reply to one of the posters complaining about its errors, wrote, "We'll improve accuracy." – SFGate, San Francisco/Tribune News Service


Did a lab of AI 'scientists' design a possible Covid-19 treatment?
What does a scientist look like? A bright-eyed intellectual donning a white lab coat and goggles? Often, yes. But these days, a scientist could also take the form of a stack of computer servers softly humming away in an air-conditioned building.

That is the idea behind a 'Virtual Lab' of artificial intelligence scientists, created by a group of researchers from the San Francisco-based Chan Zuckerberg Biohub and Stanford University and tasked with doing original research on a potential treatment for COVID-19.

'You can imagine each researcher having their own team of AI scientist(s) that can be their assistants,' said James Zou, a professor and computer scientist at Stanford University who co-led the study. 'It's quite versatile … I'm super excited that the Virtual Lab could be an accelerator for many types of science.'

The AI scientists held meetings, wrote code, and made (virtual) biological models before proposing a slate of molecules to help treat recent COVID variants. After testing the Virtual Lab's suggestions in the real lab, the scientists found two molecules that might serve as a potential COVID treatment, as they describe in a paper published Tuesday in the journal Nature. While the potential treatment has a long way to go before becoming medicine, the (human) researchers say their model of creating a group of AI scientists could help accelerate discoveries across the scientific world.

Scientific discovery often relies on groups of experts coming together to workshop ideas from different angles and solve a problem together. This can produce results that shift the scientific world – the work that led to the 2024 Nobel Prize in Chemistry involved dozens of scientists in fields from biology to computer science. But access to that depth of connection can be hard to come by, argue Zou and his colleagues. So Zou wondered if there was a way to imitate those conversations between real-world researchers, but with AI.

While some individual AI systems are already about as good as humans at answering some scientific questions, few people have experimented with putting those AIs in conversation with each other. To test the idea, the team created a Virtual Lab of AI scientists and gave it a thorny, open-ended problem: creating antibody treatments for recent strains of COVID-19. Antibodies can help treat the disease but are made less effective every time the virus evolves into a new variant, so quickly developing new antibodies could help keep treatments up to date.

The Virtual Lab was run by an AI Principal Investigator, which, after getting the assignment, assembled a team of AI experts to collaborate on the task. The human researchers armed the AI experts with software to help them do their jobs, such as a protein-modelling tool for an AI biologist. Together, the AI lab held group meetings to come up with ideas, then individual meetings to accomplish specific tasks. The AI team settled on a path to propose treatments – opting to create nanobodies, the antibody's smaller cousin. The group proposed potential treatments, then wrote code and built computer models to test those treatments and improve their designs.

'One of the benefits of the virtual lab is that their meetings are much more efficient than our human meetings,' said Zou, noting that the meetings are over in a matter of minutes and several can be run at the same time. 'They can actually run a lot of meetings and run these meetings in parallel so they don't get tired.'
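The article does not include the team's code, but the meeting structure it describes – a principal-investigator agent that convenes specialist agents for group discussion, with a critic challenging the group's claims and a transcript kept for human review – maps naturally onto a simple agent loop. Below is a minimal, hypothetical sketch of that pattern in Python; the agent names, the chat() stand-in for a language-model call, and the meeting logic are illustrative assumptions, not the study's actual implementation.

```python
# Hypothetical sketch of the multi-agent "lab meeting" pattern described above.
# None of these names come from the study; chat() is a stand-in for whatever
# language-model call you have available.

from dataclasses import dataclass, field

def chat(system_prompt: str, transcript: list[str]) -> str:
    """Stand-in for a language-model call. Replace the body with a real
    API call; here it just echoes, so the script runs end to end."""
    return f"responds to round {len(transcript)} of the discussion"

@dataclass
class Agent:
    role: str            # e.g. "AI biologist"
    system_prompt: str   # instructions defining the specialist

    def speak(self, transcript: list[str]) -> str:
        return f"[{self.role}] " + chat(self.system_prompt, transcript)

@dataclass
class VirtualLab:
    pi: Agent            # principal-investigator agent
    experts: list[Agent]
    critic: Agent        # questions the group's assertions each round
    transcript: list[str] = field(default_factory=list)  # auditable log

    def group_meeting(self, agenda: str, rounds: int = 2) -> list[str]:
        """One group meeting: the PI sets the agenda, each expert speaks,
        the critic challenges, and the PI summarises. Keeping the full
        transcript lets humans audit the reasoning afterwards."""
        self.transcript.append(f"PI agenda: {agenda}")
        for _ in range(rounds):
            for agent in [*self.experts, self.critic]:
                self.transcript.append(agent.speak(self.transcript))
        self.transcript.append(self.pi.speak(self.transcript))  # summary
        return self.transcript

lab = VirtualLab(
    pi=Agent("PI", "Principal investigator, coordinates the team"),
    experts=[Agent("Biologist", "AI biologist, designs nanobody candidates"),
             Agent("Engineer", "AI computationalist, writes modelling code")],
    critic=Agent("Critic", "Scientific critic, challenges weak claims"),
)
for line in lab.group_meeting("Design nanobodies for recent COVID variants"):
    print(line)
```

Because each meeting is just a loop over stateless model calls, many such meetings can run concurrently – which is the property Zou credits for the system's speed.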
As a testament to this speed, while it took the researchers months to set up the Virtual Lab, it took the Virtual Lab only two days to propose 92 candidate COVID-19 treatments. Of these, two seemed particularly promising at attaching themselves to COVID-19 proteins in the lab, meaning they could serve as potential treatments.

Importantly, while many AI systems provide answers without explaining how they got there, the Virtual Lab kept a transcript of all of its conversations. This allowed the human researchers to understand the logic behind the AI scientists' decisions. 'That was very encouraging to us,' said John Pak, a biochemist and staff scientist at the Chan Zuckerberg Biohub who co-led the study. 'As a researcher, you can always be kind of hesitant to incorporate (AI) into your daily routine, but with the virtual lab and the AI agents, it felt pretty natural to interact with.'

Samuel Rodrigues, an AI researcher who was not involved in the study, called the research 'a very exciting advance' over email. Rodrigues, CEO of FutureHouse, a San Francisco-based company building AI to automate scientific research, described the approach of multiple AI scientists as 'very visionary' and 'extremely important' for incorporating AI into science. While he noted that the system would likely have to be tweaked to do other tasks, he argued that was a minor limitation. 'Overall, we are impressed by and are very big fans of this work,' he said.

The scientists agree that to create more informed AI experts, future users could arm them with better tools and training, but argue that the system is already quite versatile. Even so, they admit the Virtual Lab has its limits. AI systems can sometimes make up facts based on erroneous or incomplete data, as when an early version of Google's AI overview suggested putting glue into pizza sauce or eating a rock a day. To minimise these sorts of gaffes, the team included an AI scientific critic in the Virtual Lab to question the assertions of the rest of the group, and often had the lab run several meetings on the same question to see if they arrived at similar conclusions.

Ultimately, the Virtual Lab still relied on a human expert to guide the AI, check its work, and test its assertions in real life. The researchers also noted that while the nanobodies may be responsive in a petri dish, human bodies are far more complicated, so using these molecules as a treatment would require far more testing before scientists knew whether the nanobodies would actually work in people.

Despite these limitations, both Zou and Pak argue that the Virtual Lab offers a valuable tool for research across fields. 'We're really focused on exploratory research that could – in the hands of others – be useful,' said Pak. 'I'm kind of excited about testing this out with different scientific questions … I'm looking forward to trying it out with other projects that we have going on in the lab.' – The Mercury News/Tribune News Service