The U.S. is building a fuel depot in space
The high-altitude endeavor, undertaken by the orbital servicing enterprise Astroscale U.S., is slated for the summer of 2026, the company announced this week. The Department of Defense-funded mission will see Astroscale's 660-pound craft refuel a satellite with the propellant hydrazine, maneuver to a fueling depot to top off its own tanks, and then refuel another asset. (The Space Force hasn't yet revealed all of the assets involved.)
It will be the first time a Space Force craft is refueled in orbit. Such a fuel shuttle could keep missions in space longer and spare spacecraft from cutting operations short when thruster propellant runs low. It's a novel type of full-service gas station.
"This changes fundamentally how we do things in space," Ian Thomas, Astroscale U.S.' Refueler Program Manager, told Mashable.
After launching, the refueler will travel to a region called geostationary orbit, a unique place around Earth where spacecraft orbit at the same rate Earth rotates — meaning they stay locked in the same position relative to our planet. There, Astroscale's craft will carefully approach its first Space Force satellite target, called Tetra-5, and transfer fuel. The refueler will then thrust away and inspect the scene with a specialized camera to ensure no valuable fuel is leaking. Then, the refueler will fly to a nearby fuel depot, or gas station, attach, and pull fuel from the depot before traveling to its second refueling target.
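The "locked in place" property of geostationary orbit falls directly out of Kepler's third law: an orbital period equal to one sidereal day pins the altitude. A quick sketch of that calculation (standard constants, nothing from the article):

```python
import math

MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
SIDEREAL_DAY = 86164.1      # one Earth rotation relative to the stars, s
EARTH_RADIUS_KM = 6378.0    # Earth's equatorial radius, km

# Kepler's third law solved for the semi-major axis:
# a = (mu * T^2 / (4 * pi^2))^(1/3)
semi_major_axis_m = (MU_EARTH * SIDEREAL_DAY**2 / (4 * math.pi**2)) ** (1 / 3)
altitude_km = semi_major_axis_m / 1000 - EARTH_RADIUS_KM

print(f"Geostationary altitude: {altitude_km:,.0f} km")  # ~35,786 km
```

Every spacecraft parked at that one altitude above the equator circles Earth exactly once per rotation, which is why GEO is such crowded, valuable real estate — and a natural place to put a gas station.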
"This changes fundamentally how we do things in space."
"The point of the mission is to make sure all the different parts are viable and work," Thomas explained. "You have a fuel depot, a client, and us."
How Astroscale's refueler, "ASP-R," will approach and refuel spacecraft in orbit around Earth. Credit: Astroscale U.S.
For an outer space operation, while certainly not simple, it's relatively efficient once the refueler arrives at a spacecraft running on empty. "It is definitely longer than refueling your car but it's something that can be done in a matter of hours," Thomas said.
You've probably noticed that most spacecraft, whether satellites or NASA deep space probes, are fitted with solar panels. These are invaluable, as they provide power to a craft's computer systems, cameras, and beyond. But they can't provide propellant to move and reorient craft, avoid high-speed space junk, or keep a satellite from naturally getting dragged into Earth's atmosphere. That's why refueling is vital.
"The paradigm we had doesn't hold up anymore."
If a spacecraft can be refueled, engineers can design missions that aren't limited by fuel. The revolutionary, $10 billion James Webb Space Telescope, for example, has finite fuel, and its mission (while still lengthy) is limited to some 20 years.
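That fuel-limited lifetime can be made concrete with the Tsiolkovsky rocket equation: a fixed propellant load buys a fixed total delta-v, which stationkeeping steadily spends down. The numbers below are illustrative assumptions (a hypothetical satellite, a typical hydrazine specific impulse, a rough GEO stationkeeping budget), not figures from the mission:

```python
import math

G0 = 9.80665            # standard gravity, m/s^2
ISP_HYDRAZINE = 220.0   # typical hydrazine monopropellant Isp, s (assumed)

# Illustrative spacecraft, not from the article:
dry_mass_kg = 1000.0        # satellite mass without propellant
propellant_kg = 100.0       # hydrazine on board at launch
stationkeeping_m_s = 50.0   # rough GEO stationkeeping delta-v budget per year

# Tsiolkovsky rocket equation: total delta-v the propellant can supply
total_dv = ISP_HYDRAZINE * G0 * math.log(
    (dry_mass_kg + propellant_kg) / dry_mass_kg
)
years_of_life = total_dv / stationkeeping_m_s

print(f"Usable delta-v: {total_dv:.0f} m/s, "
      f"roughly {years_of_life:.1f} years of stationkeeping")
```

Under these assumptions the satellite gets only about four years of stationkeeping before the tank runs dry; topping it off in orbit resets that clock without launching a replacement.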
"The paradigm we had doesn't hold up any more," Thomas emphasized.
An artist's conception of Astroscale's refueler orbiting Earth. Credit: Astroscale U.S.
This isn't Astroscale's first orbital rodeo. In a separate mission intended to deorbit large pieces of space debris (called Active Debris Removal by Astroscale-Japan), the company has already flown close to a large rocket stage to test proximity maneuvering and reconnaissance; next, in 2028, an Astroscale spacecraft will use a robotic arm to bring the 36-foot-long spent rocket stage down toward Earth.
But before then, the company may prove that running a fuel depot in Earth's orbit isn't just feasible; it could redefine how expensive orbiting spacecraft — whether used for national security, communications, or science — operate in space.
"If you run out of fuel, you run out of life," Thomas said.