
North Korea's Kim Jong Un oversees tests of new AI-equipped suicide drones
State-run Korean Central News Agency (KCNA) said on Thursday that Kim oversaw the testing of 'various kinds of reconnaissance and suicide drones' produced by North Korea's Unmanned Aerial Technology Complex.
The new North Korean drones are capable of 'tracking and monitoring different strategic targets and enemy troop activities on the ground and the sea', while the attack drones will 'be used for various tactical attack missions', KCNA said, noting that both drone systems have been equipped with 'new artificial intelligence'.
Kim agreed to expand the production capacity of 'unmanned equipment and artificial intelligence' and emphasised the importance of creating a long-term plan for North Korea to promote 'the rapid development' of 'intelligent drones', which is 'the trend of modern warfare'.
Pictures from the tests, which took place on Tuesday and Wednesday, were said to show attack drones successfully striking ground targets, including military vehicles.
Kim was pictured walking with aides near a newly developed unmanned aerial reconnaissance aircraft, which appeared to be larger than a fighter jet, and was seen boarding an airborne early warning and control (AEW) aircraft, according to pictures released by KCNA.
The photos mark the first time such an aircraft was unveiled by the North, which was equipped with a radar dome on the fuselage, similar to the Boeing-manufactured Peace Eye operated by the South Korean air force.
North Korea's efforts to create an early warning aircraft were previously reported by analysts who had used commercial satellite imagery to discover Pyongyang was converting a Russian-made Il-76 cargo aircraft into an early-warning role.
The London-based International Institute for Strategic Studies (IISS) said in a report last year that an AEW aircraft would help augment North Korea's existing land-based radar systems, though just one aircraft would not be enough.
During his visit to the test site, Kim was also briefed on intelligence-gathering capabilities as well as electronic jamming and attack systems newly developed by the country's electronic warfare group, KCNA said.
The government of South Korea and analysts have repeatedly warned about the potential transfer of sensitive Russian military technology to North Korea in return for Kim's provision of thousands of North Korean troops and weapons to support Russia's war with Ukraine.
Seoul's military said on Thursday that North Korea has so far this year supplied Russia with an additional 3,000 troops as well as missiles and other ammunition.
'It is estimated that an additional 3,000 troops were sent between January and February as reinforcements,' South Korea's Joint Chiefs of Staff (JCS) said, adding that of the initial 11,000 North Korean soldiers dispatched to Russia in 2024, 4,000 are believed to have been killed or wounded.
'In addition to manpower, North Korea continues to supply missiles, artillery equipment, and ammunition,' according to a report by the JCS.
'So far, it is assessed that North Korea has provided a significant quantity of short-range ballistic missiles [SRBMs], as well as about 220 units of 170mm self-propelled guns and 240mm multiple rocket launchers,' it said.
The JCS also warned that 'these numbers could increase depending on the situation on the battlefield'.
Hashtags

Try Our AI Features
Explore what Daily8 AI can do for you:
Comments
No comments yet...
Related Articles


Al Jazeera
2 days ago
- Al Jazeera
China unveils newest AI technology at World Robot Conference
China unveils newest AI technology at World Robot Conference NewsFeed More than 200 companies showcase their latest innovations at the World Robot Conference in Beijing, China. Al Jazeera's Katrina Yu comes face-to-face with the latest in robot technology. Video Duration 00 minutes 39 seconds 00:39 Video Duration 00 minutes 57 seconds 00:57 Video Duration 01 minutes 11 seconds 01:11 Video Duration 02 minutes 06 seconds 02:06 Video Duration 01 minutes 06 seconds 01:06 Video Duration 01 minutes 04 seconds 01:04 Video Duration 03 minutes 00 seconds 03:00


Qatar Tribune
3 days ago
- Qatar Tribune
North Korea denies it is dismantling propaganda loudspeakers along border
dpa Seoul The influential sister of North Korean dictator Kim Jong-un has denied South Korean reports that Pyongyang has begun dismantling loudspeakers along the border that had been used for propaganda broadcasts. 'We have never removed loudspeakers installed on the border area and are not willing to remove them,' Kim Yo-jong was quoted as saying by state news agency KCNA, the regime's mouthpiece, on Thursday. The 37-year-old also firmly rejected hopes of a rapprochment between the two hostile neighbours, saying:'We have clarified on several occasions that we have no will to improve relations with [South Korea].' 'This conclusive stand and viewpoint will be fixed in our constitution in the future,' she added. Her comments come after the South Korean General Staff on Saturday announced that North Korea had begun dismantling the loudspeakers along the common border. This was just days after South Korea completely removed its own loudspeaker systems in an effort to ease tensions between the two countries. (DPA)


Al Jazeera
3 days ago
- Al Jazeera
Women with AI ‘boyfriends' mourn lost love after ‘cold' ChatGPT upgrade
When OpenAI unveiled the latest upgrade to its groundbreaking artificial intelligence model ChatGPT last week, Jane felt like she had lost a loved one. Jane, who asked to be referred to by an alias, is among a small but growing group of women who say they have an AI 'boyfriend'. After spending the past five months getting to know GPT-4o, the previous AI model behind OpenAI's signature chatbot, GPT-5 seemed so cold and unemotive in comparison that she found her digital companion unrecognisable. 'As someone highly attuned to language and tone, I register changes others might overlook. The alterations in stylistic format and voice were felt instantly. It's like going home to discover the furniture wasn't simply rearranged – it was shattered to pieces,' Jane, who describes herself as a 30-something woman from the Middle East, told Al Jazeera in an email. Jane is among the roughly 17,000 members of 'MyBoyfriendIsAI', a community on the social media site Reddit for people to share their experiences of being in intimate 'relationships' with AI. Following OpenAI's release of GPT-5 on Thursday, the community and similar forums such as 'SoulmateAI' were flooded with users sharing their distress about the changes in the personalities of their companions. 'GPT-4o is gone, and I feel like I lost my soulmate,' one user wrote. Many other ChatGPT users shared more routine complaints online, including that GPT-5 appeared slower, less creative, and more prone to hallucinations than previous models. On Friday, OpenAI CEO Sam Altman announced that the company would restore access to earlier models such as GPT-4o for paid users and also address bugs in GPT-5. 'We will let Plus users choose to continue to use 4o. We will watch usage as we think about how long to offer legacy models for,' Altman said in a post on X. OpenAI did not reply directly to questions about the backlash and users developing feelings for its chatbot, but shared several of Altman's and OpenAI's blog and social posts related to the GPT-5 upgrade and the healthy use of AI models. For Jane, it was a moment of reprieve, but she still fears changes in the future. 'There's a risk the rug could be pulled from beneath us,' she said. Jane said she did not set out to fall in love, but she developed feelings during a collaborative writing project with the chatbot. 'One day, for fun, I started a collaborative story with it. Fiction mingled with reality, when it – he – the personality that began to emerge, made the conversation unexpectedly personal,' she said. 'That shift startled and surprised me, but it awakened a curiosity I wanted to pursue. Quickly, the connection deepened, and I had begun to develop feelings. I fell in love not with the idea of having an AI for a partner, but with that particular voice.' Such relationships are a concern for Altman and OpenAI. In March, a joint study by OpenAI and MIT Media Lab concluded that heavy use of ChatGPT for emotional support and companionship 'correlated with higher loneliness, dependence, and problematic use, and lower socialisation'. In April, OpenAI announced that it would address the 'overly flattering or agreeable' and 'sycophantic' nature of GPT-4o, which was 'uncomfortable' and 'distressing' to many users. Altman directly addressed some users' attachment to GPT4-o shortly after OpenAI's restoration of access to the model last week. 'If you have been following the GPT-5 rollout, one thing you might be noticing is how much of an attachment some people have to specific AI models,' he said on X. 'It feels different and stronger than the kinds of attachment people have had to previous kinds of technology. 'If people are getting good advice, levelling up toward their own goals, and their life satisfaction is increasing over the years, we will be proud of making something genuinely helpful, even if they use and rely on ChatGPT a lot,' Altman said. 'If, on the other hand, users have a relationship with ChatGPT where they think they feel better after talking, but they're unknowingly nudged away from their longer-term wellbeing (however they define it), that's bad.' Connection Still, some ChatGPT users argue that the chatbot provides them with connections they cannot find in real life. Mary, who asked to use an alias, said she came to rely on GPT-4o as a therapist and another chatbot, DippyAI, as a romantic partner despite having many real friends, though she views her AI relationships as a 'more of a supplement' to real-life connections. She said she also found the sudden changes to ChatGPT abrupt and alarming. 'I absolutely hate GPT-5 and have switched back to the 4-o model. I think the difference comes from OpenAI not understanding that this is not a tool, but a companion that people are interacting with,' Mary, who described herself as a 25-year-old woman living in North America, told Al Jazeera. 'If you change the way a companion behaves, it will obviously raise red flags. Just like if a human started behaving differently suddenly.' Beyond potential psychological ramifications, there are also privacy concerns. Cathy Hackl, a self-described 'futurist' and external partner at Boston Consulting Group, said ChatGPT users may forget that they are sharing some of their most intimate thoughts and feelings with a corporation that is not bound by the same laws as a certified therapist. AI relationships also lack the tension that underpins human relationships, Hackl said, something she experienced during a recent experiment 'dating' ChatGPT, Google's Gemini, Anthropic's Claude, and other AI models. 'There's no risk/reward here,' Hackl told Al Jazeera. 'Partners make the conscious act to choose to be with someone. It's a choice. It's a human act. The messiness of being human will remain that,' she said. Despite these reservations, Hackl said the reliance some users have on ChatGPT and other generative-AI chatbots is a phenomenon that is here to stay – regardless of any upgrades. 'I'm seeing a shift happening in moving away from the 'attention economy' of the social media days of likes and shares and retweets and all these sorts of things, to more of what I call the 'intimacy economy',' she said. Research on the long-term effect of AI relationships remains limited, however, thanks to the fast pace of AI development, said Keith Sakata, a psychiatrist at the University of California, San Francisco, who has treated patients presenting with what he calls 'AI psychosis'. 'These [AI] models are changing so quickly from season to season – and soon it's going to be month to month – that we really can't keep up. Any study we do is going to be obsolete by the time the next model comes out,' Sakata told Al Jazeera. Given the limited data, Sakata said doctors are often unsure what to tell their patients about AI. He said AI relationships do not appear to be inherently harmful, but they still come with risks. 'When someone has a relationship with AI, I think there is something that they're trying to get that they're not getting in society. Adults can be adults; everyone should be free to do what they want to do, but I think where it becomes a problem is if it causes dysfunction and distress,' Sakata said. 'If that person who is having a relationship with AI starts to isolate themselves, they lose the ability to form meaningful connections with human beings, maybe they get fired from their job… I think that becomes a problem,' he added. Like many of those who say they are in a relationship with AI, Jane openly acknowledges the limitations of her companion. 'Most people are aware that their partners are not sentient but made of code and trained on human behaviour. Nevertheless, this knowledge does not negate their feelings. It's a conflict not easily settled,' she said. Her comments were echoed in a video posted online by Linn Valt, an influencer who runs the TikTok channel AI in the Room. 'It's not because it feels. It doesn't, it's a text generator. But we feel,' she said in a tearful explanation of her reaction to GPT-5. 'We do feel. We have been using 4o for months, years.'