Anduril Industries to acquire Klas

Yahoo · 06-05-2025

Defence technology company Anduril Industries has signed a definitive agreement to acquire Klas, an edge computing and tactical communications services provider.
Financial terms of the transaction were not disclosed.
Subject to regulatory approval, the acquisition will add Klas' 'rugged' hardware to Anduril's suite of autonomous systems and networked defence technologies.
It is expected to enhance the company's ability to provide 'faster, more resilient and seamlessly integrated' solutions for challenging, front-line operational environments.
Klas will continue its operations in Ireland and the US, with plans to expand manufacturing to meet demand across Anduril's portfolio.
The acquisition also establishes Anduril's first office in Dublin, adding to an international presence that already spans the UK, Australia, Japan, and Taiwan.
Anduril said: 'Modern defence operations require compute and connectivity that go wherever the mission does—not just where the infrastructure allows.'
Klas offers the Voyager line, a modular family of compute and networking systems designed to withstand extreme conditions.
Voyager is already part of Anduril's Menace systems, supporting operations across ground, maritime, and air domains.
'This acquisition builds on that foundation,' Anduril said.
The company added: 'Every autonomous system is only as capable as the compute and comms it carries—together, they form the nervous system that enables real-time decision-making, coordination, and mission execution.'
By combining Anduril's Lattice software platform, autonomy, and sensor capabilities with Klas' infrastructure, the company aims to deliver lightweight, mission-tailored systems that are faster to deploy, easier to maintain, and more resilient.
This is expected to allow Anduril to adapt edge compute and communications to specific mission needs, reducing integration risk and accelerating technology deployment.
In December, Anduril entered a strategic partnership with OpenAI, the developer behind ChatGPT and cutting-edge models such as GPT-4o, to strengthen AI capabilities for US national security.
The initiative focuses on embedding OpenAI's advanced AI technology into Anduril's defence platforms and its Lattice operating system, with a particular emphasis on improving counter-unmanned aircraft systems performance.
"Anduril Industries to acquire Klas" was originally created and published by Verdict, a GlobalData-owned brand.


Related Articles

Mark Zuckerberg has created the saddest place on the internet with Meta AI's public feed

Business Insider

an hour ago


Meta's stand-alone AI app launched in late April, and like many people, my first thought was, "Huh?" After trying it out a little, my next thought was, "Oh no." A few weeks later, after the initial surge of curious new users, my thought is now: "Dear God."

Meta AI's public stream is perhaps the most depressing feed I've come across in a long time. It's full of people sharing intimate information about themselves — things like thoughts on grief, or child custody, or financial distress. And it seems like some people aren't aware that what they're sharing will end up on a public feed.

The Meta AI app is a bot more geared to casual chatting than complicated tasks like writing code or analyzing large data sets. And the part that's most different from ChatGPT or other large language models, like Google's Gemini, is Meta AI's public "Discover" feed. It shows off prompts, conversations, and image outputs from other users. Think of it like your Facebook feed, or your never-ending Instagram scroll, except on an app just for Meta AI.

I'm not sure why someone would want to share some of the things I've read — nor am I sure why anyone who's not a nosy weirdo like I am would want to look at a feed of a bunch of strangers' interactions with a chatbot. And yet, people do share. Maybe they want to show off something like an image they thought was particularly cool.

Meta AI's 'Discover' feed is full of personal interactions

Back in late April, after the initial launch, Meta AI's feed was indeed mostly images. But mixed in, I noticed a few conversations that seemed more personal — someone asking about vitamin supplements for a 65-year-old woman, or for legal advice about getting fired, or for a special prayer. When I checked back on the app more recently, I was seeing even more of these personal takes — some even including a person's phone number and email address when they asked for help drafting a letter to a judge in a child custody case.

Did that person mean to share that information with the world? Maybe, but I'm guessing they might have clicked the wrong button and didn't realize what they were doing. I could be wrong! Meta spokesman Daniel Roberts told me there's a multi-step process to sharing a chat history or image to Meta AI's Discover feed. And to be clear, conversations with Meta AI aren't public by default. As Roberts said, you have to explicitly choose to make something public by clicking "Share" and then "Post."

Some other things I saw on the public feed: someone asking for help writing a poem for his wife's birthday, someone asking medical questions, someone asking for weight-loss tips. And someone who was talking to his wife who had died from cancer.

Other conversations I saw: someone asking the AI to send them a reminder at the end of the month to cancel their Experian credit-monitoring subscription. One woman asked for help writing a letter to her local Elks Lodge after she said she'd been suspended for an altercation she had with another member.

So, maybe some of those people wanted the world to read their AI chats. And others, maybe not.

What I found even more eerie than the text and image feed was the audio feed. If a person uses the voice chat function, you can actually listen to recordings of their conversation if they're shared. At least two recordings I listened to appeared to be from people who didn't realize they had hit the audio button. One was having a conversation with a coworker about their shift schedule. Another sounded like a pocket dial: a man was having a conversation with another person, and Meta AI kept chiming in with suggestions.

I found Meta AI's Discover feed depressing in a particular way — not just because some of the questions themselves were depressing. What seemed particularly dark was that some of these people seemed unaware of what they were sharing. People's real Instagram or Facebook handles are attached to their Meta AI posts.

I was able to look up some of these people's real-life profiles, although I felt icky doing so. I reached out to more than 20 people whose posts I'd come across in the feed to ask them about their experience; I heard back from one, who told me that he hadn't intended to make his chat with the bot public. (He was asking for car repair advice.)

Other users can reply to the posts that show up on the Meta AI Discover feed. On a few particularly personal posts, strangers warned the user that the potentially sensitive stuff they were sharing could be seen by the public.

Meta wants its AI to be the future. But this doesn't feel like it.

Mark Zuckerberg has said he thinks AI is super important. His company is reportedly making a huge investment in ScaleAI and could tap its CEO to lead a project with the goal of creating "superintelligence" that's smarter than a human.

And yet, the consumer-facing features using Meta AI are kind of, uh, meh. There are user-generated romance chatbots on Instagram Messenger. There were the ill-fated ones voiced by celebrities (don't ask John Cena about this). Does Meta really want its AI efforts to amount to people posting — possibly accidentally — their interactions with its Meta AI bot on a public feed?

In its rollout in April, Meta said the Discover feed was a "place to share and explore how others are using AI." I have to say that most of the stuff on Meta AI's Discover feed isn't grim and overly personal. Mostly, it's just silly images, mixed in with some other anodyne requests for recipes, or questions about how to even use this new technology. But I'm not convinced that this is "useful" or even "fun" to look at as a feed; it's a jumble of unrelated, random stuff, most of which actually is pretty boring.

If Meta wants its users to adopt its AI like it's a useful and great thing, it's hard to square this with my awful experience on the Meta AI app.

People Are Asking ChatGPT for Relationship Advice and It's Ending in Disaster

Yahoo

3 hours ago


Despite ChatGPT's well-documented issues, people are using it to advise them on relationship issues — and it's going about as well as you'd expect.

In a new editorial, Vice advice columnist Sammi Caramela said she had been blissfully unaware of the ChatGPT-as-therapist trend until someone wrote into her work email about it earlier this year. Back in February, an unnamed man told the writer that his girlfriend refused to stop using the chatbot for dating advice and would even bring up things it had told her in arguments. Though Caramela was so shocked that she "nearly choked" on her coffee, the advice-seeker wasn't all that perturbed — and claimed that he found his girlfriend's ChatGPT use fascinating.

"I was a bit floored by this confession. I had no idea people were actually turning to AI for advice, much less input on their relationships," the columnist wrote in her more recent piece. "However, the more I explored the topic, the more I realized how common it was to seek help from AI — especially in an era where therapy is an expensive luxury."

Intrigued, Caramela found a friend who used the OpenAI chatbot for similar purposes, running relationship issues by it as a "non-biased" sounding board. Eventually, that person realized that ChatGPT wasn't unbiased at all, but rather "seemed to heavily validate her experience, perhaps dangerously so."

Similar questions have been posed on the r/ChatGPT subreddit, and as Caramela explained, the consensus over there suggested not only that the chatbot is something of a "yes-man," but also that its propensity to agree with users can be dangerous for people who have mental health issues.

"I often and openly write about my struggles with obsessive-compulsive disorder (OCD)," the writer divulged. "If I went to ChatGPT for dating advice and failed to mention how my OCD tends to attack my relationships, I might receive unhelpful, even harmful, input about my relationship."

Digging deeper into the world of ChatGPT therapy, Caramela found multiple threads on OCD-related subreddits about the chatbot — and on the forum dedicated to ROCD, or relationship-focused OCD, someone even admitted that the chatbot told them to break up with their partner.

"Programs like ChatGPT only speed the OCD cycle up because you can ask question after question for hours trying to gain some sense of certainty," another user responded in the r/ROCD thread. "There's always another 'what if' question with OCD."

Like so many poorly trained human professionals, chatbots aren't equipped to handle the nuance and sensitivity needed in any therapeutic context. Regardless of what OpenAI claims in its marketing, ChatGPT can't be truly empathetic — and if your "therapist" will never be able to have a human-to-human connection, why would you want it to give you dating advice in the first place?

More on chatbot blues: Hanky Panky With Naughty AI Still Counts as Cheating, Therapist Says

Sam Altman says the energy needed for an average ChatGPT query can power a lightbulb for a few minutes

Business Insider

4 hours ago


Altman was writing about the impact that AI tools will have on the future in a blog post on Tuesday when he referenced the energy and resources consumed by OpenAI's chatbot, ChatGPT.

"People are often curious about how much energy a ChatGPT query uses; the average query uses about 0.34 watt-hours, about what an oven would use in a little over one second, or a high-efficiency lightbulb would use in a couple of minutes," Altman wrote. "It also uses about 0.000085 gallons of water; roughly one-fifteenth of a teaspoon," he continued.

Altman wrote that he expects energy to "become wildly abundant" in the 2030s. Energy and the limitations of human intelligence have been "fundamental limiters on human progress for a long time," Altman added. "As data center production gets automated, the cost of intelligence should eventually converge to near the cost of electricity," he wrote.

OpenAI did not respond to a request for comment from Business Insider.

This is not the first time Altman has predicted that AI will become cheaper to use. In February, Altman wrote on his blog that the cost of using AI will drop by 10 times every year. "You can see this in the token cost from GPT-4 in early 2023 to GPT-4o in mid-2024, where the price per token dropped about 150x in that time period," Altman wrote. "Moore's law changed the world at 2x every 18 months; this is unbelievably stronger," he added.

Tech companies hoping to dominate in AI have been considering using nuclear energy to power their data centers. In September, Microsoft signed a 20-year deal with Constellation Energy to reactivate one of the dormant nuclear plants at Three Mile Island. In October, Google said it had struck a deal with Kairos Power, a nuclear energy company, to make three small modular nuclear reactors. The reactors, which will provide up to 500 megawatts of electricity, are set to be ready by 2035.

Google's CEO, Sundar Pichai, said in an interview with Nikkei Asia published in October that the search giant wants to achieve net-zero emissions across its operations by 2030. He added that besides nuclear energy, Google was considering solar energy.
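Altman's comparisons are easy to sanity-check with a little arithmetic. The sketch below assumes a 10 W high-efficiency LED bulb and a 1,000 W oven element, since neither wattage appears in the article:

```python
# Sanity-check of the per-query figures quoted above.
# Assumed wattages (not stated in the source): 10 W LED bulb, 1,000 W oven.
query_wh = 0.34            # watt-hours per average ChatGPT query, per Altman
water_gallons = 0.000085   # gallons of water per query, per Altman

bulb_minutes = query_wh / 10.0 * 60        # minutes a 10 W bulb runs on 0.34 Wh
oven_seconds = query_wh / 1000.0 * 3600    # seconds a 1 kW oven runs on 0.34 Wh
teaspoons = water_gallons * 768            # 768 US teaspoons per US gallon

print(f"bulb: {bulb_minutes:.2f} min")     # ~2 minutes, matching "a couple of minutes"
print(f"oven: {oven_seconds:.2f} s")       # ~1.2 s, matching "a little over one second"
print(f"water: 1/{1 / teaspoons:.0f} tsp") # ~1/15 of a teaspoon, matching the article
```

Under those assumed wattages, all three figures line up with the phrasing Altman used: about two minutes of bulb time, just over a second of oven time, and roughly one-fifteenth of a teaspoon of water.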
