Here's how Palmer Luckey's Anduril wants to beat General Atomics for the US Air Force's next big bet

Yahoo, 19 May 2025

Anduril is competing with General Atomics for the US Air Force's drone wingman program.
The startup says it's designed its drone, Fury, with commercial parts like a business jet engine.
The Air Force has cited the project as a way to bring "affordable mass" to its aerial missions.
Anduril Industries has revealed new details on how it plans to keep costs down for the US Air Force as it competes with defense heavyweight General Atomics for the drone wingman program.
The defense startup, cofounded by Palmer Luckey, was featured in a CBS "60 Minutes" segment on Sunday. During the segment, Anduril's CEO, Brian Schimpf, said the firm designed its AI-powered fighter jet, Fury, to be built from commercial parts to make manufacturing easier.
"We tried to eliminate really every bottleneck we could find around what makes an aircraft hard to produce," said Schimpf.
Schimpf said the Fury's designers, for example, chose to go with a commercial business jet engine instead of a military one.
The War Zone reported in 2023 that the Fury was designed with a Williams International FJ44-4M turbofan engine, which is popular in light business jets such as those in the Cessna Citation series. Anduril didn't say in the Sunday CBS segment whether the Fury still uses the same engine.
Schimpf also said that the Fury avoids "very exquisite, big aircraft landing gear" in favor of a simpler model.
"We designed it so that it can be built in any machine shop in America," he said of the landing gear.
"We've designed nearly every part of this that can be made in hundreds of different places within the US from lots of different suppliers," Schimpf added.
The Fury, designated YFQ-44A by the Air Force, is Anduril's bid to win the Pentagon's Collaborative Combat Aircraft contract, which seeks to build large autonomous or semi-autonomous drones that can fly in tandem with piloted advanced fighter jets for Next Generation Air Dominance.
The service wants these new aircraft to be much cheaper than regular fighter jets. Gen. David Allvin, the Air Force Chief of Staff, said in November that the purpose of the drone wingman program was to bring "affordable mass" to aerial missions.
It's a priority that reflects mounting concerns in the US that the American military could run out of weapons and ammo in a matter of weeks or even days if it were to go to war with a rival such as China.
Now, the Air Force says the drone wingman program is a core part of its mandate to recalibrate itself for near-peer conflict.
Frank Kendall, who served as Air Force Secretary until January, said he'd accelerated plans to develop Collaborative Combat Aircraft when analyses showed the drones would "change air warfare in some very fundamental ways."
Anduril was one of two contractors selected to be the drone project's lead in April 2024, meaning it already beat Boeing, Lockheed Martin, and Northrop Grumman to reach this phase of development.
General Atomics, which manufactures the MQ-9 Reaper and MQ-1 Predator, has also billed its offering — the XQ-67A — as a "low-cost, modular" uncrewed system.
Both companies' prototypes were shown on May 1 at California's Beale Air Force Base, which Allvin said would be the home site for initial testing and assessments. The Air Force is expected to make early selection decisions in fiscal year 2026, which starts in October.
Anduril and General Atomics did not respond to comment requests sent outside regular business hours by Business Insider.
Read the original article on Business Insider


Related Articles

Is Flawed AI Distorting Executive Judgment? — What Leaders Must Do

Forbes


As AI embeds deeper into leadership workflows, a subtle form of decision drift is taking hold. Not because the tools are flawed but because we stop questioning them. Their polish is seductive. Their speed, persuasive. But when language replaces thought, clarity no longer guarantees correctness.

In May 2025, the Chicago Sun-Times published an AI-generated summer reading list. The summaries were articulate. The titles sounded plausible. But only five of the fifteen books were real. The rest? Entirely made up: fictional authors, fabricated plots, polished prose built on nothing. It sounded smart. It wasn't. That's the risk.

Now imagine an executive team building its strategy on the same kind of output. It's not fiction anymore. It's a leadership risk. And it's happening already. Quietly. Imperceptibly. In organizations where clarity once meant confidence and strategy was something you trusted. Not just in made-up book titles but in the growing gap between what sounds clear and what's actually correct.

Large language models aren't fact-checkers. They're pattern matchers. They generate language based on probability, not precision. What sounds coherent may not be correct. The result is a stream of outputs that look strategic but rest on shaky ground.

This isn't a call to abandon AI. But it is a call to re-anchor how we use it. To ensure leaders stay accountable. To ensure AI stays a tool, not a crutch.
I'm not saying AI shouldn't inform decisions. But it must be paired with human intuition, sense-making and real dialogue. The more confident the language, the more likely it is to go unquestioned.

Model collapse is no longer theoretical. It's already happening. It begins when models are trained on outputs from other models or, worse, on their own recycled content. Over time, distortions multiply. Edge cases vanish. Rare insights decay. Feedback loops breed repetition. Sameness. False certainty.

As The Register warned, general-purpose AI may already be declining in quality, not in tone but in substance. What remains looks fluent. But it says less. That's just the mechanical part.

The deeper concern is how this affects leaders. When models feed on synthetic data and leaders feed on those outputs, what you get isn't insight. It's reflection. Strategy becomes a mirror, not a map.

And we're not just talking bias or hallucinations. As copyright restrictions tighten and human-created content slows, the pool of original data shrinks. What's left is synthetic material recycled over and over. More polish. Less spark. According to researchers at Epoch, high-quality training data could be exhausted between 2026 and 2032. When that happens, models won't be learning from the best of what we know. They'll be learning from echoes.

Developers are trying to slow this collapse. Many already are, by protecting non-AI data sources, refining synthetic inputs and strengthening governance. But the impending collapse signals something deeper. A reminder that the future of intelligence must remain blended: human plus machine, not machine alone. Intuitive, grounded and real.

Psychologists like Kahneman and Tversky warned us long ago about the framing trap: the way a question is asked shapes the answer.
A 20 percent chance of failure feels different than an 80 percent chance of success, even if it's the same data. AI makes this trap faster and more dangerous. Because now, the frame itself is machine-generated. A biased prompt. A skewed training set. A hallucinated answer. And suddenly, a strategy is shaped by a version of reality that never existed.

Ask AI to model a workforce reduction plan. If the prompt centers on financials, the reply may omit morale, long-term hiring costs or reputational damage. The numbers work. The human cost disappears.

AI doesn't interrupt. It doesn't question. It reflects. If a leader seeks validation, AI will offer it. The tone will align. The logic will sound smooth. But real insight rarely feels that easy. That's the risk: not that AI is wrong, but that it's too easily accepted as right.

When leaders stop questioning and teams stop challenging, AI becomes a mirror. It reinforces assumptions. It amplifies bias. It removes friction. That's how decision drift begins. Dialogue becomes output. Judgment becomes approval. Teams fall quiet. Cultures that once celebrated debate grow obedient. And something more vital begins to erode: intuition. The human instinct for context. The sense of timing. The inner voice that says something's off. It all gets buried beneath synthetic certainty.

To stop flawed decisions from quietly passing through AI-assisted workflows, every leader should pause and ask hard questions. AI-generated content is already shaping board decks, culture statements and draft policies. In fast-paced settings, it's tempting to treat that output as good enough. But when persuasive language gets mistaken for sound judgment, it doesn't stay in draft mode. It becomes action. Garbage in. Polished out. Then passed as policy.

This isn't about intent. It's about erosion. Quiet erosion in systems that reward speed, efficiency and ease over thoughtfulness. And then there's the flattery trap.
Ask AI to summarize a plan or validate a strategy, and it often echoes the assumptions behind the prompt. The result? A flawed idea wrapped in confidence. No tension. No resistance. Just affirmation. That's how good decisions fail: quietly, smoothly and without a single raised hand in the room.

Leadership isn't about having all the answers. It's about staying close to what's real and creating space for others to do the same. The deeper risk of AI isn't just in false outputs. It's in the cultural drift that happens when human judgment fades. Questions stop. Dialogue thins. Dissent vanishes.

Leaders must protect what AI can't replicate: the ability to sense what's missing. To hear what's not said. To pause before acting. To stay with complexity. AI can generate content. But it can't generate wisdom.

The solution isn't less AI. It's better leadership. Leaders who use AI not as final word but as provocateur. As friction. As spark. In fact, human-generated content will only grow in value. Craft will matter more than code. What we'll need most is original thought, deep conversation and meaning-making, not regurgitated text that sounds sharp but says nothing new. Because when it comes to decisions that shape people, culture and strategy, only human judgment connects the dots that data can't see.

In the end, strategy isn't what you write. It's what you see. And to see clearly in the age of AI, you'll need more than a prompt. You'll need presence. You'll need discernment. Neither can be AI-trained. Neither can be outsourced.

‘60 Minutes' Staff Almost Quit ‘En Masse' Over Trump Suit

Yahoo


Lesley Stahl revealed that she and her fellow 60 Minutes correspondents came close to quitting 'en masse' after their boss left the show with a dire warning about Donald Trump. The 33-year 60 Minutes veteran admitted she was 'angry' with Paramount head Shari Redstone on the Friday episode of The New Yorker Radio Hour.

'It is a frivolous lawsuit,' Stahl said of Trump's $20 billion legal action against CBS News. When host and New Yorker editor David Remnick asked Stahl whether she was 'angry' with Redstone, Stahl admitted, 'Yes, I think I am. I think I am.'

Stahl also offered a theory for why Trump pursued the lawsuit against CBS News, in which he accuses 60 Minutes of 'deceptively editing' Kamala Harris' interview to make her look better, in the first place. 'What is really behind it, in a nutshell, is to chill us,' Stahl said. 'There aren't any damages. He accused us of editing Kamala Harris in a way to help her win the election. But he won the election.'

Settling the lawsuit would pave the way for Paramount's planned merger with Skydance Media, which would reportedly result in a $530 million personal payout for Redstone, and which has to be approved by Trump's FCC officials. Paramount offered Trump $15 million to settle the lawsuit this week, but the president turned it down, citing 'mental anguish' over the Harris interview. He now wants $25 million and an apology to put his complaint to rest.

The attempt to settle with Trump over the interview, which staffers have insisted was edited according to the show's usual standards and was not politically motivated, has caused internal tension at the network, culminating in the shock exits of 60 Minutes executive producer Bill Owens and CBS News President Wendy McMahon. Stahl said Owens' resignation 'was one of those punches where you almost can't breathe,' calling Owens and McMahon 'barriers' between 'us and the corporation.' Those barriers were tested even before Trump's lawsuit, Stahl recalled Friday.
As for what 60 Minutes will be like once out of Redstone's hands at Paramount, Stahl said she's 'Pollyannaish' that Skydance will 'hold the freedom of the press up as a beacon, that they understand the importance of allowing us to be independent and do our jobs.' 'I'm expecting that. I'm hoping that, I want that, I'm praying for that,' Stahl said. 'And I have no reason to think that won't happen.'

Remnick asked Stahl to consider what happens if it doesn't, and what it would take for her to follow Owens and McMahon out of the CBS News door. 'It depends,' she said. 'You ask me where my line is. I'm not sure. I don't think I can express what it is, but there is a line. Of course there is a line.'

Stahl said that Owens resigning was one of those 'lines,' and that she and her fellow correspondents actually considered quitting 'en masse.' But their outgoing boss talked them out of it.

'It is hard' to 'have a news organization told by a corporation, "Do this, do that with your story, change this, change that. Don't run that piece,"' Stahl explained, recounting what it was like to 'quietly resist' Redstone's complaints about 60 Minutes' Gaza coverage. 'The message came down through the line, through Wendy McMahon to Bill,' Stahl said, which she found 'very disconcerting.' 'It steps on the First Amendment. It steps on the freedom of the press. It makes me question whether any corporation should own a news operation,' she continued.

Winners and Losers: Energy Stocks Soared and Healthcare Crashed in May

Business Insider


May was a month to remember for the U.S. stock market as the benchmark S&P 500 index posted a gain of 6%, its best May since 1990. But, as always, there were winners and losers among equities.

The big winners among U.S. stocks during May were energy and technology stocks that are helping to power the artificial intelligence (AI) revolution. Specifically, NRG Energy (NRG) saw its share price rise 42% in the month, and Constellation Energy (CEG) was close behind with a 37% gain. Both companies power AI data centers through cleaner energy sources such as natural gas.

Other big winners in May were previously downtrodden technology stocks that are also associated with AI. These include data storage firm Seagate Technology (STX), whose share price increased 37% and outpaced AI chipmaker Nvidia (NVDA). Super Micro Computer (SMCI), which makes AI servers that run Nvidia microchips, also had a big month, with its stock running 26% higher.

Healthcare Loses Out

On the flip side, healthcare was the worst-performing sector of the market in May. The declines were led by insurer UnitedHealth Group (UNH), whose share price fell 27% after the company slashed its full-year guidance. Also dragging healthcare lower was pharmaceutical giant Eli Lilly (LLY), whose stock dropped 18% after the Trump administration said it wants prescription drug prices lower. Other healthcare stocks that took a drubbing in May include retail pharmacy chain CVS Health (CVS) and healthcare insurer Humana (HUM).

The lone bright spot among healthcare stocks was Insulet (PODD), whose share price vaulted 29% higher on strong financial results. The stock has been on an upswing since the U.S. Food and Drug Administration (FDA) approved its insulin delivery system for Type 2 diabetes last summer.

Is LLY Stock a Buy?

The stock of Eli Lilly has a consensus Strong Buy recommendation among 18 Wall Street analysts.
That rating is based on 16 Buys, one Hold, and one Sell issued in the last 12 months. The average LLY price target of $1,003.14 implies 34.82% upside from current levels.
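The upside figure is simple arithmetic on the price target. As a sketch (the function names are ours, and the implied share price is back-calculated from the article's two numbers rather than quoted anywhere):

```python
def implied_upside(current_price: float, price_target: float) -> float:
    """Percentage upside implied by an analyst price target."""
    return (price_target / current_price - 1) * 100

def implied_current_price(price_target: float, upside_pct: float) -> float:
    """Back out the share price the upside figure was computed against."""
    return price_target / (1 + upside_pct / 100)

# Figures from the article: average LLY target $1,003.14, 34.82% upside.
target = 1003.14
price = implied_current_price(target, 34.82)
print(f"Implied current price: ${price:.2f}")               # ≈ $744.06
print(f"Upside check: {implied_upside(price, target):.2f}%")  # 34.82%
```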
