
Has China just released pictures of another 6th generation stealth fighter?
Images of a previously unknown stealth aircraft have started circulating on Chinese social media, triggering a debate about whether it is a new piloted warplane or a 'loyal wingman' drone.
At least two photographs of the aircraft, taken from different angles on the ground, have been circulating since Monday. It is not clear from the pictures whether the plane has a cockpit or an internal weapons bay.
The images reveal a slimmer fuselage and smaller size compared with images of another plane thought to be the J-36 stealth fighter, suggesting that the new aircraft might have a higher top speed.
Some military analysts suggested it could be China's third distinct sixth-generation stealth combat aircraft, with the J-36 already believed to be nearly ready for pre-production testing.
Some analysts, including the US-based website The War Zone, argued that its streamlined design and the absence of features typical of piloted fighters suggested it may be a drone optimised for stealth rather than manoeuvrability.
Andreas Rupprecht, an aviation analyst, wrote on social media that its shape – a lambda-wing layout reminiscent of the J-50, another Chinese stealth fighter under development – suggested it might be the design for a collaborative combat aircraft.
These autonomous drones – sometimes known as 'loyal wingmen' – are designed to operate in coordination with piloted planes but are able to make their own tactical decisions using artificial intelligence.

Related Articles


AllAfrica
US senators sound alarm on DeepSeek's security risks
Seven Republican US senators have called for an investigation into DeepSeek's data security threats, citing growing concerns that the artificial intelligence (AI) model could leak personal data or generate harmful content. In a letter to US Commerce Secretary Howard Lutnick, the lawmakers urged the government to evaluate the risks of Chinese AI models collecting data and sending it to servers in China.

After DeepSeek released its R1 model in late January, Wiz Research found a publicly accessible database belonging to the Chinese AI company. It said the database contained a significant volume of chat history, backend data and sensitive information, including log streams, API secrets and operational details.

The senators also said that R1 probably did not undergo comprehensive red-teaming and safety tests to prevent the generation of harmful content. 'A Wall Street Journal reporter was able to get R1 to write text for a social media campaign intended to encourage self-harm amongst teenage girls, as well as to provide instructions for carrying out a bioweapon attack,' they said.

They requested that the US Commerce Department: explain how it will use resources like the Center for AI Standards and Innovation (CAISI) to work with relevant agencies to protect US businesses and citizens; investigate the national security risks posed by Chinese open-source AI models; and identify any evidence of these models providing US data to the Chinese People's Liberation Army (PLA) or associated companies.

In March, Chinese media reported that DeepSeek was being used in PLA hospitals, the People's Armed Police (PAP) and national defense mobilization units. Ren Hao, a senior software engineer at 301 Hospital, said the hospital deployed DeepSeek-R1 on Huawei's Ascend hardware to create a local knowledge database. The PLA's Central Theatre Command General Hospital also said it used DeepSeek's R1-70B AI model to assist doctors by suggesting treatment plans.

Beyond these deployments, Chinese academics have said the home-grown large language model (LLM) can be put to military use. Fu Yanfang, a researcher at Xian Technological University's School of Computer Science and Engineering, said in May that her team used DeepSeek's AI models to generate military simulation scenarios. She said a commander has to spend 48 hours planning a single military scenario, whereas a self-developed AI-based simulator can generate 10,000 military scenarios in just 48 seconds. 'LLMs and combat simulation scenarios have redefined the future of war design,' she said, adding that DeepSeek's LLM can easily deconstruct and reconstruct complex battlefield situations through training on massive data sets.

A white paper published by Chongqing Landship Information Technology, an autonomous-driving solution provider, also said that DeepSeek has excellent potential for military use, particularly in command, communications, and intelligence, surveillance and reconnaissance (ISR) applications.

'China can deploy DeepSeek V3 in Gongji-11 drones to fight against F-16V fighter jets in the Taiwan Strait,' Wen Chang, a senior researcher at Techxcope, a Beijing-based think tank, said in an article published in March. 'This would be a fairer game than deploying China's sixth-generation (Chengdu J-36) or fifth-generation (Chengdu J-20) fighter jets to combat the F-16V, which is not a stealth fighter.'
'Although the F-16V also has an AI system, it still needs the pilot to make most decisions. In this sense, Gongji-11 has an advantage as it can fly 24 hours a day,' he added.

On February 6, two US representatives, Democrat Josh Gottheimer and Republican Darin LaHood, introduced the bipartisan 'No DeepSeek on Government Devices Act,' which would prohibit federal employees from using DeepSeek on government-issued devices. So far, New York, Texas and Virginia have banned DeepSeek on state government devices, and the US Navy has also prohibited the use of the AI model. In March, Reuters reported that the US Commerce Department's bureaus had informed staff that DeepSeek is banned on their government devices.

It is unclear whether the Trump administration would seek to ban the deployment of DeepSeek entirely in the US. US President Donald Trump said on February 8 that the release of DeepSeek may be beneficial for the US, as AI technologies will be significantly less expensive than initially thought.

The seven senators requested that Lutnick report any findings on how Chinese AI models may have illegally accessed US technology, such as export-controlled semiconductors. The US banned shipments of the A100 and H100 to China in October 2022, and the A800 and H800 in October 2023.

In late January, Lutnick testified at a US Senate hearing that DeepSeek could create its AI models 'dirt cheap' because it was able to purchase a large quantity of Nvidia chips and access data from Meta's open platform. DeepSeek claimed that the training cost of R1 was only US$5.58 million, about 1.1% of Meta's US$500 million for Llama 3.1. It also claimed it trained the model using distilled data from Alibaba's Qwen and Meta's Llama. Alexandr Wang, chief executive of the US-based Scale AI, told CNBC that DeepSeek and its parent, High-Flyer, could have accumulated 50,000 units of Nvidia's high-end AI chips, such as the H100.

An unnamed senior State Department official told Reuters in late June that DeepSeek used Southeast Asian shell companies to obtain high-end Nvidia chips. In February, Singapore charged three men with fraud for allegedly helping ship Nvidia's high-end chips to DeepSeek in China in 2024. The trio, two Singaporeans and one Chinese national, was accused of shipping servers containing A100 and H100 chips to Malaysia and potentially elsewhere.

In late March, Malaysia's Minister of Investment, Trade and Industry, Tengku Zafrul Aziz, said the US had asked the Malaysian government to monitor every shipment of Nvidia chips arriving in Malaysia. On July 14, Malaysia announced that companies must apply for permits to re-export high-performance American AI chips.

On Wednesday, the US Department of Justice said two Chinese nationals, both 28, had been arrested in California for violating the US Export Control Reform Act by exporting sensitive technology, including graphics processing units (GPUs) – specialized processors used for modern computing – from the US to China without first obtaining the required license or authorization.

Read: US plans to tighten AI chip export rules for Malaysia, Thailand


South China Morning Post
Netflix-style iQiyi seeks US$300 million for Hong Kong stock market listing, sources say
IQiyi is seeking to raise US$300 million for a listing in Hong Kong this year, potentially becoming the latest US-listed Chinese firm to tap investors closer to home. The Netflix-style streaming service, owned by Baidu, has begun discussions with global banks about a second listing in the city, people familiar with the matter said, asking to remain anonymous while discussing a private deal. IQiyi's US stock rose as much as 6 per cent but pared gains to close little changed in New York.

IQiyi, which hosts a plethora of content from Chinese period dramas to blockbuster Hollywood films, joins the likes of Contemporary Amperex Technology (CATL) in exploring a second listing in Hong Kong. The company vies with Tencent Holdings and Alibaba Group Holding to rank among the biggest video-streaming platforms in China, with an estimated 400 million-plus monthly active users. Negotiations around a listing are fluid and iQiyi may still reconsider. A company spokesperson did not provide a comment when reached by Bloomberg News.

If it goes ahead, the Chinese firm will join a wave of listings that have fuelled Hong Kong's revival this year. They helped the city reclaim its standing as the world's second-largest market for share sales for the first time since 2012, reversing a years-long slump following the Covid-19 pandemic. Loosening regulations helped. Chinese companies have propelled that trend – mostly, like CATL, mainland-listed firms.


South China Morning Post
Regulating the harm caused by deepfake porn – how Hong Kong can best protect victims
As artificial intelligence (AI) technology continues to develop, its utility for ordinary law-abiding citizens grows, along with its potential for abuse. However, the breakneck speed of development often leaves the law playing catch-up in confronting these new societal harms.

The use of AI to create pornographic images of a person without their consent is a growing problem that is not limited to Hong Kong, and it will only become more prominent as the technology develops. Deepfake tools, which rely on AI neural networks, allow a user to upload images of a person's face or body, which are then used to generate pornographic images resembling that person – so-called deepfake pornography.

While it has long been possible to digitally alter images of real people to make them appear pornographic (for example, using tools like Photoshop), deepfake pornography can be created entirely digitally, in large quantities, instantaneously, and on freely available websites. A prime example is the use of celebrities' faces and/or voices with AI tools to create falsified videos or even pornographic photos.

From a lawsuit brought by movie star Scarlett Johansson against an AI app developer that used her likeness and voice to create ultra-realistic images, to singer Taylor Swift being made the subject of deepfake pornography, the escalating misuse of deepfake technology has led several countries to introduce legislation to combat this new trend.