NASA Plows Forward, Testing A Moon Rocket That The White House Is Trying To Cancel


Yahoo · 5 hours ago

Going to the Moon was always viewed as a near-impossible feat, so much so that a large contingent of skeptics still believes one of humanity's greatest achievements simply didn't happen, despite the mountain of evidence to the contrary. Now NASA's return to the lunar surface is being made even more difficult by the Trump administration. Despite the White House's desire to cancel the Space Launch System, the Artemis program's primary launch vehicle, NASA continues to test the rocket, most recently on Thursday.
NASA tested new SLS hardware twice over the last week for Moon missions that might never happen, Ars Technica reports. Last Friday, NASA test-fired a hydrogen-fueled RS-25 engine in Mississippi of the type that would power the SLS core stage for Artemis V in 2030, the program's third lunar landing. Thursday's test in Utah featured a new solid rocket booster that would debut as part of the SLS Block 2 rocket for Artemis IX in 2034. The additional 8,900 pounds of payload capacity provided by Block 2 would be used to supply the fledgling moonbase and support a permanent human presence on the Moon.
The booster test at Northrop Grumman's site in Promontory, Utah, went awry during the burn. Livestream footage appears to show the exhaust burning away part of the booster until it disintegrated just above the nozzle. The failure wasn't contained to the test stand, either, as flames spread to the surrounding hills.
The most significant selling point of the SLS will likely be its downfall. Recycling components from the Space Shuttle to reach the Moon was pitched as cost-effective, but the dream created more headaches than it saved dollars. It became even more problematic after Elon Musk bankrolled Trump's campaign for a second term while claiming, with zero evidence to back up the posturing, that a SpaceX lunar program could be faster and cheaper.
While Musk believes the idea of returning to the Moon is dumb, the cancellation of SLS wouldn't necessarily be Artemis' demise, as its contracts would likely be doled out to a commercial launch provider. This is all unfolding while NASA faces a massive $6 billion budget cut, which will force the agency to reduce its staff by 32%. Employees are already being encouraged to take buyouts or retire early, like their colleagues at other government agencies.
Read the original article on Jalopnik.


Related Articles

OpenAI turns to Google's AI chips to power its products, source says

Yahoo · 38 minutes ago

(Reuters) - OpenAI has recently begun renting Google's artificial intelligence chips to power ChatGPT and its other products, a source close to the matter told Reuters on Friday.

The ChatGPT maker is one of the largest purchasers of Nvidia's graphics processing units (GPUs), using the AI chips to train models and also for inference computing, a process in which an AI model uses its trained knowledge to make predictions or decisions based on new information.

OpenAI planned to add Google Cloud service to meet its growing needs for computing capacity, Reuters had exclusively reported earlier this month, marking a surprising collaboration between two prominent competitors in the AI sector. For Google, the deal comes as it expands external availability of its in-house tensor processing units (TPUs), which were historically reserved for internal use. That has helped Google win customers including Big Tech player Apple as well as startups like Anthropic and Safe Superintelligence, two competitors of the ChatGPT maker founded by former OpenAI leaders.

The move to rent Google's TPUs marks the first time OpenAI has used non-Nvidia chips in a meaningful way, and it shows the Sam Altman-led company shifting away from relying on backer Microsoft's data centers. It could also boost TPUs as a cheaper alternative to Nvidia's GPUs, according to The Information, which reported the development earlier. OpenAI hopes the TPUs, which it rents through Google Cloud, will help lower the cost of inference, according to the report.

However, Google, an OpenAI competitor in the AI race, is not renting its most powerful TPUs to its rival, The Information said, citing a Google Cloud employee. Google declined to comment, while OpenAI did not immediately respond to Reuters when contacted.

Google's addition of OpenAI to its customer list shows how the tech giant has capitalized on its in-house AI technology, from hardware to software, to accelerate the growth of its cloud business.
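For readers new to the training-versus-inference distinction drawn above, the split is easy to see in miniature. Below is an illustrative Python sketch only, with a toy one-parameter model standing in for an LLM running on GPUs or TPUs; nothing here reflects OpenAI's or Google's actual stacks.

    def train(examples):
        # Training: fit a parameter to known (input, output) pairs.
        return sum(y / x for x, y in examples) / len(examples)

    def infer(w, new_input):
        # Inference: apply the already-trained parameter to new data.
        return w * new_input

    w = train([(1.0, 2.1), (2.0, 3.9), (3.0, 6.0)])  # learn y is roughly 2x
    print(infer(w, 10.0))  # use trained knowledge on an unseen input

Training is the expensive fitting step; inference is the comparatively cheap per-request step, which is why lowering its cost per query matters so much at ChatGPT's scale.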

AMD Keeps Building Momentum In AI, With Plenty Of Work Still To Do

Forbes · 40 minutes ago

At the AMD Advancing AI event in San Jose earlier this month, CEO Lisa Su and her staff showcased the company's progress across many different facets of AI. They had plenty to announce in both hardware and software, including significant performance gains for GPUs, ongoing advances in the ROCm development platform and the forthcoming introduction of rack-scale infrastructure. There were also many references to trust and strong relationships with customers and partners, which I liked, and a lot of emphasis on open hardware and an open development ecosystem, which I think is less of a clear winner for AMD, as I'll explain later. Overall, I think the event was important for showing how AMD is moving the ball down the field for customers and developers.

Under Su, AMD's M.O. is to have clear, ambitious plans and execute against them. Her 'say/do' ratio is high. The company does what it says it will do. This is exactly what it must continue doing to whittle away at Nvidia's dominance in the datacenter AI GPU market. What I saw at the Advancing AI event raised my confidence from last year — although there are a few gaps that need to be addressed. (Note: AMD is an advisory client of my firm, Moor Insights & Strategy.)

AMD's AI Market Opportunity And Full-Stack Strategy

When she took the stage, Su established the context for AMD's announcements by describing the staggering growth that is the backdrop for today's AI chip market. So far, AMD's bullish projections for the growth of the AI chip market have turned out to be accurate.

This segment of the chip industry is looking at a TAM of half a trillion dollars by 2028, with the whole AI accelerator market increasing at a 60% CAGR. The AI inference sub-segment — where AMD competes on better footing with Nvidia — is enjoying an 80% CAGR. People thought the market numbers AMD cited last year were too high, but not so. This is the world we're living in. For the record, I never doubted the TAM numbers last year.
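As a quick sanity check on that compounding math, here is a minimal Python sketch. The 2025 base of roughly $120 billion is my own back-of-envelope assumption, chosen so the numbers land near the stated 2028 figure; it is not a number AMD reported.

    def project(base_billions, cagr, years):
        # Compound annual growth: value after `years` at rate `cagr`.
        return base_billions * (1 + cagr) ** years

    # Hypothetical ~$120B accelerator market in 2025 growing at a 60% CAGR:
    for year in range(2025, 2029):
        value = project(120, 0.60, year - 2025)
        print(year, f"${value:,.0f}B")
    # 2028 comes out near $492B, in line with the half-trillion-dollar TAM.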
AMD is carving out a bigger place in this world for itself. As Su pointed out, its Instinct GPUs are used by seven of the 10 largest AI companies, and they drive AI for Microsoft Office, Facebook, Zoom, Netflix, Uber, Salesforce and SAP. Its EPYC server CPUs continue to put up record market share (40% last quarter), and it has built out a full stack — partly through smart acquisitions — to support its AI ambitions. I would point in particular to the ZT Systems acquisition and the introduction of the Pensando DPU and the Pollara NIC.

GPUs are at the heart of datacenter AI, and AMD's new MI350 series was in the spotlight at this event. Although these chips were slated to ship in Q3, Su said that production shipments had in fact started earlier in June, with partners on track to launch platforms and public cloud instances in Q3. There were cheers from the crowd when they heard that the MI350 delivers a 4x performance improvement over the prior generation. AMD says that its high-end MI355X GPU outperforms the Nvidia B200 to the tune of 1.6x memory, 2.2x compute throughput and 40% more tokens per dollar. (Testing by my company Signal65 showed that the MI355X running DeepSeek-R1 produced up to 1.5x higher throughput than the B200.) To put it in a different perspective, a single MI355X can run a 520-billion-parameter model.

And I wasn't surprised when Su and others onstage looked ahead to even better performance — maybe 10x better — projected for the MI400 series and beyond. That puts us into the dreamland of an individual GPU running a trillion-parameter model.

By the way, AMD has not forgotten for one second that it is a CPU company. The EPYC Venice processor scheduled to hit the market in 2026 should be better at absolutely everything — 256 high-performance cores, 70% more compute performance than the current generation and so on. EPYC's rapid gains in datacenter market share over the past few years are no accident, and at this point all the company needs to do for CPUs is hold steady on its current up-and-to-the-right trajectory. I am hopeful that Signal65 will get a crack at testing the claims the company made at the event.

This level of performance is needed in the era of agentic AI and a landscape of many competing and complementary AI models. Su predicts — and I agree — that there will be hundreds of thousands of specialized AI models in the coming years. This is especially true for enterprises, which will have smaller models focused on areas like CRM, ERP, SCM, HCM, legal, finance and so on. To support this, AMD talked at the event about its plan to sustain an annual cadence of Instinct accelerators, adding a new generation every year. Easy to say, hard to do — though, again, AMD has a high say/do ratio these days.

AMD's 2026 Rack-Scale Platform And Current Software Advances

On the hardware side, the biggest announcement was the forthcoming Helios rack-scale GPU product that AMD plans to deliver in 2026. This is a big deal, and I want to emphasize how difficult it is to bring together high-performing CPUs (EPYC Venice), GPUs (MI400) and networking chips (next-gen Pensando Vulcano NICs) in a liquid-cooled rack. It's also an excellent way to take on Nvidia, which makes a mint off of its own rack-scale offerings for AI. At the event, Su said she believes that Helios will be the new industry standard when it launches next year, and she cited a string of specs and performance numbers to back that up. It's good to see AMD provide a roadmap this far out, though it also had to after Nvidia did the same at its GTC event earlier this year.

On the software side, Vamsi Boppana, senior vice president of the Artificial Intelligence Group at AMD, started off by announcing the arrival of ROCm 7, the latest version of the company's open source software platform for GPUs. Again, big improvements come with each generation — in this case, a 3.5x gain in inference performance compared to ROCm 6. Boppana stressed the very high cadence of updates for AMD software, with new features being released every two weeks. He also talked about the benefits of distributed inference, which allows the two steps of inference to be tasked to separate GPU pools, further speeding up the process. Finally, he announced — to a chorus of cheers — the AMD Developer Cloud, which makes AMD GPUs accessible from anywhere so developers can use them to test-drive their ideas.
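The distributed inference Boppana described splits inference's two phases, commonly the prompt-processing (prefill) step and the token-generation (decode) step, across separate pools of hardware. Here is a minimal, library-agnostic Python sketch of that pattern; the pool and function names are illustrative stand-ins, not ROCm APIs.

    # Sketch of disaggregated ("distributed") inference: prefill and decode
    # run on separate worker pools, standing in for separate GPU pools.
    from concurrent.futures import ThreadPoolExecutor

    prefill_pool = ThreadPoolExecutor(max_workers=4)  # one pool of workers
    decode_pool = ThreadPoolExecutor(max_workers=8)   # a separate pool

    def prefill(prompt: str) -> dict:
        # Phase 1: run the full prompt through the model once,
        # producing cached state (the KV cache in a real LLM).
        return {"prompt": prompt, "kv_cache": f"<cache:{len(prompt)} chars>"}

    def decode(state: dict, max_tokens: int = 4) -> str:
        # Phase 2: generate output tokens one at a time from cached state.
        return " ".join(f"tok{i}" for i in range(max_tokens))

    def serve(prompt: str) -> str:
        state = prefill_pool.submit(prefill, prompt).result()
        return decode_pool.submit(decode, state).result()

    print(serve("Why split inference into two phases?"))

Because prefill is compute-heavy and decode is memory-bandwidth-heavy, giving each phase its own pool lets each set of GPUs be sized and scheduled for the work it does best.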
Last year, Meta had kind things to say about ROCm, and I was impressed, because Meta is the hardest 'grader' next to Microsoft. This year, I heard companies talking about both training and inference, and again I'm impressed. (More on that below.) It was also great getting some time with Anush Elangovan, vice president for AI software at AMD, for a video I shot with him. Elangovan is very hardcore, which is exactly what AMD needs. Real grinders. Nightly code drops.

What's Working Well For AMD In AI

So that's (most of) what was new at AMD Advancing AI. In the next three sections, I want to talk about the good, the needs-improvement and the yet-to-be-determined aspects of what I heard during the event. Let's start with the good things that jumped out at me.

What Didn't Work For Me At Advancing AI

While overall I thought Advancing AI was a win for AMD, there were two areas where I thought the company missed the mark — one by omission, one by commission.

The Jury Is Out On Some Elements Of AMD's AI Strategy

In some areas, I suspect that AMD is doing okay or will be doing okay soon — but I'm just not sure. I can't imagine that any of the following items has completely escaped AMD's attention, but I would recommend that the company address them candidly so that customers know what to expect and can maintain high confidence in what AMD is delivering.

What Comes Next In AMD's AI Development

It is very difficult to engineer cutting-edge semiconductors — let alone rack-scale systems and all the attendant software — on the steady cadence that AMD is maintaining. So kudos to Su and everyone else at the company who's making that happen. But my confidence (and Wall Street's) would rise if AMD provided more granularity about what it's doing, starting with datacenter GPU forecasts. Clearly, AMD doesn't need to compete with Nvidia on every single thing to be successful. But it would be well served to fill in some of the gaps in its story to better speak to the comprehensive ecosystem it's creating.

Having spent plenty of time working inside companies on both the OEM and semiconductor sides, I do understand the difficulties AMD faces in providing that kind of clarity. The process of landing design wins can be lumpy, and a few of the non-AMD speakers at Advancing AI mentioned that the company is engaged in the 'bake-offs' that are inevitable in that process. Meanwhile, we're left to wonder what might be holding things back, other than AMD's institutional conservatism — the healthy reticence of engineers not to make any claims until they're sure of the win. That said, with Nvidia's B200s sold out for the next year, you'd think that AMD should be able to sell every wafer it makes, right? So are AMD's yields not good enough yet? Or are hyperscalers having their own problems scaling and deploying? Is there some other gating item? I'd love to know.

Please don't take any of my questions the wrong way, because AMD is doing some amazing things, and I walked away from the Advancing AI event impressed with the company's progress. At the show, Su was forthright about describing the pace of this AI revolution we're living in — 'unlike anything we've seen in modern computing, anything we've seen in our careers, and frankly, anything we've seen in our lifetime.' I'll keep looking for answers to my nagging questions, and I'm eager to see how the competition between AMD and Nvidia plays out over the next two years and beyond. Meanwhile, AMD moved the ball down the field at its event, and I look forward to seeing where it is headed.

Data Defense Agents For People

Forbes · an hour ago

In a world where AI agents are everywhere, how do we ensure that people still have agency? One idea that's surfacing, albeit in a somewhat vague way, is similar to the concept of a service dog or emotional support animal: a person would have a dedicated personal AI entity that works as their guardian angel in a world of peril.

Think about trying to navigate all of the AI stuff coming your way as a human: all of the scams, all of the drama of other people's communications, not to mention government and business messaging churned out in automated ways.

'Consumers are out there trying to navigate a really complex marketplace, and as AI is injected into the marketplace by many companies, it's probably going to become even harder for consumers to understand if they're getting a good deal, to understand the different options out there when they're making a purchase,' said Ginny Fahs of Consumer Reports in a recent panel aimed at an idea very much like this, the idea of personal defense AI. 'And so an AI that is loyal to the consumer, loyal to us as individuals, first and foremost, is really going to be essential for building trust in these AI systems, and for … migrating to a more authentic economy.'

Fahs was among a set of expert panelists at Imagination in Action in April, and I found this to be one of the more compelling talks, not least because of past interviews I've seen in the last two years. Take the data rights advocate who famously coined the term 'idatity' to talk about the intersection of personal data and technology.

Anyway, my colleague Sandy Pentland moderated this group discussion, which covered a lot of thoughts on just how this kind of AI advocacy would work.

'There was a need to reform laws to keep up, to have electronic signatures, electronic contracts, automated transactions,' said panelist Dazza Greenwood of the Internet age, relating that to today's efforts. 'And I helped to write those laws as a young lawyer and technologist.'

Panelist Amir Sarhangi spoke about the value of trust and familiarity with a person's AI advocate. 'Having that trust being established there, and having the ability to know who the agent is and who the enterprise is, becomes very important,' he said.

'Part of it is this general problem of, how do you make sure that agents don't break laws, introduce unexpected liabilities, and (that they) represent the authentic interest of the consumer, and (that they can) actually be loyal, by design?' said panelist Tobin South, who got his PhD at MIT.

How It Might Work

Panelists also discussed some of the procedural elements of such technology.

'In collaboration with the OpenID Foundation, who kind of leads all the standards and protocols keeping our internet safe, we are pushing forward standards that can help make agents safe and reliable in this kind of new digital age,' South said.

Fahs talked about something her organization developed called a 'permission slip.' 'You could go to a company through the agent, and the agent would say to the company, 'please delete this person's data,' or 'please opt out of the sale of this person's data,'' she said. 'It was a version of an agentic interaction that predated the explosion of AI, but where we really were getting an authorization from a user for a specific purpose to help them manage their data, and then going out to a company and managing that transaction, and then reporting back to the customer on how it went.'
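The permission-slip flow Fahs describes — a user authorizes one narrow task, the agent carries it out against a company, then reports back — could be modeled along these lines. This is a hypothetical Python sketch; the class, field, and function names are invented for illustration and are not Consumer Reports' implementation.

    # Hypothetical "permission slip": a scoped, user-authorized request
    # that an agent executes on the user's behalf and reports back on.
    from dataclasses import dataclass

    @dataclass
    class PermissionSlip:
        user_id: str
        company: str
        action: str        # e.g. "delete_data" or "opt_out_of_sale"
        authorized: bool   # explicit user authorization for this purpose

    def execute_slip(slip: PermissionSlip) -> str:
        if not slip.authorized:
            return "No authorization granted: nothing sent."
        # A real system would call the company's privacy-request endpoint;
        # here we simulate the round trip.
        request = f"Please {slip.action.replace('_', ' ')} for user {slip.user_id}."
        response = f"{slip.company}: request received"  # simulated reply
        return f"Sent {request!r} -> reported back: {response!r}"

    slip = PermissionSlip("u-123", "ExampleCo", "delete_data", authorized=True)
    print(execute_slip(slip))

The key design point is scope: the authorization covers one action at one company, so the agent never holds a blanket grant over the user's data.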
On privacy, Greenwood discussed how systems would deal with laws like California's CCPA, which he called a 'mini-GDPR,' and encouraged people to use the term 'fiduciary' to describe the agent's responsibilities to the user.

Sarhangi talked about the history of building KYA. 'One of the things we started talking about is KYA, which is 'know your agent,' and 'know your agent' really is about understanding who's behind the agent,' he said. 'These agents will have wallets, basically on the internet, so you know what transactions are being conducted by the agent. And that's really powerful, because when they do something that's not good, then you have a good way of understanding what the history of that agent has been, and that will go as part of their … reputation.'
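Sarhangi's 'know your agent' idea — an agent tied to an identifiable operator, with a wallet whose transaction history feeds a reputation — could look something like the following. This is a speculative Python sketch; every name in it is invented for illustration.

    # Speculative sketch of KYA ("know your agent"): a wallet records an
    # agent's transactions, and a naive reputation score is derived from
    # that history.
    from dataclasses import dataclass, field

    @dataclass
    class AgentWallet:
        agent_id: str
        operator: str  # who is behind the agent
        transactions: list = field(default_factory=list)

        def record(self, description: str, ok: bool) -> None:
            self.transactions.append({"what": description, "ok": ok})

        def reputation(self) -> float:
            # Naive score: fraction of transactions that went well.
            if not self.transactions:
                return 0.0
            return sum(t["ok"] for t in self.transactions) / len(self.transactions)

    wallet = AgentWallet("agent-42", operator="ExampleCo")
    wallet.record("opted user out of data sale", ok=True)
    wallet.record("attempted purchase over budget", ok=False)
    print(f"{wallet.agent_id} reputation: {wallet.reputation():.2f}")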
Crowdsourcing Consumer Information

Another aspect that came up is the ability of agents to pool their people's experiences and share them, automating word of mouth.

'A really key type of a thing I'm excited about is what Consumer Reports does without thinking about it,' said Pentland, 'which is compiling all the experiences of all your millions of members to know that 'these blenders are good' and 'those blenders are bad,' and 'don't buy that' and 'you don't trust that dude over there.' So once an agent is representing you, you can begin doing this automatically, where all the agents sort of talk about how these blenders are no good, right?'

Fahs agreed. 'I can so casually mention to my AI agent, 'oh, this purchase, I don't like that one feature,'' she said. 'And if that agent has a memory, and has the ability to coordinate and communicate with other agents, that becomes kind of known in the network, and it means that future consumers can purchase better, or future consumers have more awareness of that feature.'

South added some thoughts on data tools. 'There are many really cool cryptographic tools you can build to make the sharing of data really safe, right?' he said. 'You don't need to trust Google to just own all your data and promise not to do anything wrong with it. There are real security tools you can build into this, and we're seeing this explosion right now.'

South also mentioned NANDA, a protocol being developed by people like my colleague Ramesh Raskar at MIT. NANDA is a way to build a decentralized Internet with AI, and it seems likely to blossom into one of the supporting pillars of tomorrow's global interface.

Agents And Agency

The panel also talked about some of the logistics, for instance: how will the agent really know what you want?

'You want the user to feel like they can provide very, very fine-grained permissions, but you also don't want to be bugging them all the time saying, 'Do I have permission for this? Do I have permission for that?'' Fahs said. 'And so … what the interface is to articulate those preferences, and to, even, as the agent, have real awareness of the consumer's intent, and where that can be extended, and where there really does need to be special additional permission granted, I think is a challenge that product managers and designers and many of us are going to be trying to thread the needle on.'

'One of the things that current LLMs don't do very well is recognize what a specific person wants,' Pentland added. 'In other words, values alignment for a specific person. It can do it for groups of people, sort of with big interviews, but an agent like this really wants to represent me, not necessarily you, or you. And I think one of the most interesting problems there is, how do we do that?'

'Finally, we have the tools that (resemble) something like fiduciary loyal agents,' Greenwood said.

'There's an expression going around Stanford, which is: the limiting factor on AI is context: not the size of the window, but your ability to structure information, to feed it to the AI, both for understanding consumers, but to also do preference solicitation,' South said. 'If you want the agent to act on your behalf, or an AI to do things you actually want, you need to extract that information somehow, and so both as individuals, making your data available to AI systems, but also as an organization, structuring information so that AIs can know how to work with your systems.'

The Race Toward Personal Advocacy

I think all of this is very necessary right now, in 2025, as we try to really integrate AI into our lives. This is happening, it seems, pretty much in real time, so this is the time to ask the questions, to find the answers, and to build the solutions.
