Ukrainian soldier says ground robots are great for attacks because they carry far more explosive power than flying drones
Ukrainian soldiers are using ground robots to blow up Russian troops and equipment. These machines can carry a far heavier explosive payload than aerial drones.
Operators control these uncrewed ground vehicles, or UGVs, remotely. If they go unspotted, the UGVs can travel right up to Russian positions and detonate, while Ukraine's soldiers stay safely away from the action. They function much like flying drones, but because they don't have to get airborne, the systems can pack a greater punch.
Oleksandr Yabchanka, the head of robotic systems for Ukraine's Da Vinci Wolves Battalion, told Business Insider that Ukraine's soldiers attach bombs and explosives to ground robots, "turning that system into a kamikaze one."
The tactic mirrors how aerial drones have been used throughout Russia's invasion of Ukraine, flying into targets to explode or dropping grenades on them.
"A crucial difference between aerial and on-the-ground unmanned systems is the mass that they can carry," Yabchanka said. He said Ukraine needs to "always be one step, half a step ahead of the enemy in terms of the powers of destruction." That's where these ground drones come into play.
Packing a much bigger punch
He said that while the biggest aerial drones can carry mines that weigh 22 pounds each, the smallest ground robots that he works with can take more than 48 pounds. On average, they can carry much more.
He said that just a few hours before he spoke with Business Insider, his unit sent a ground robot carrying 66 pounds of explosives into a basement held by Russia, where it eliminated Russian infantry.
Ukraine's soldiers work with a host of drone types to carry out a wide range of work. There are small airborne drones for tactical action ranging from reconnaissance to strike, and larger aerial uncrewed systems used to hit targets inside Russian territory. There are also naval drones that target ships in the Black Sea, and then there are the UGVs, which can deal damage and carry out missions like casualty evacuation.
Yabchanka said the growing ground robot technology allows Ukrainian troops to massively amplify their firepower without putting more soldiers in harm's way. That's key when they're fighting at a disadvantage against Russia's much larger military.
He said that roughly 80% of Russians killed in battle are killed by uncrewed systems, with artillery accounting for most of the remaining 20%; at the start of the war, that ratio was reversed. Aerial drones are responsible for most of the drone kills because of how prolific they are.
More ground robots could mean a major firepower boost for Ukrainian forces. "Imagine how much more powerful we could be if we could bring twice as much explosives to the front line as we do now," Yabchanka said.
Unlike quadcopters, this technology isn't yet widely available to all units, but where it is in use, UGVs are evacuating wounded soldiers, firing into Russian positions with mounted weapons, carrying gear, laying mines, exploding inside enemy positions, spying on the Russians, and more.
An uncrewed arms race
This is a technology that Russia is developing too. The question, Yabchanka said, is who will do it faster.
Both sides of the war are working to advance this technology. The dynamic echoes the aerial drone race Ukraine and Russia are locked in, with each side developing new drones and counter-drone measures to defeat the other's tech while rushing to produce as many drones as possible.
Yabchanka said Ukraine and its partners need to constantly innovate to keep coming out with new ground robot upgrades and improvements to other military technology.
It's something that requires consistent innovation, as "what was up to date and relevant half a year ago is not up to date and relevant anymore," he said.
He said the systems are developing so fast that they are being upgraded on the front lines themselves, with soldiers sometimes making tweaks on the spot or calling the manufacturer directly to request changes and upgrades for future drones.
Calling on Ukraine's partners
Yabchanka called for much greater European involvement in making this type of technology, saying that "whatever is required on our end is at your service."
Europe, like the US, has given Ukraine billions of dollars in military aid, but Ukraine is making more and more of its own weaponry as it looks to innovate faster, build weaponry designed for a fight with Russia, and overcome shortages in Western aid caused by delays and political debate.
Ukraine has become a pioneer in the development of certain types of weaponry, and European leaders and defense ministers have acknowledged that there are lessons Europe's defense industries can learn from Ukraine, particularly on drones, as they warn Russia could attack their countries.
Yabchanka said that Europe also has "more resources than Russia," making deepening cooperation a win-win.
He urged European industry and leaders to get on board. "The manufacturers, developers, military personnel all stand ready for cooperation. Just come along; we'll deliver training and whatever else is necessary."