UK launches StormShroud drones that will fly ahead of Brit fighter jets & jam enemy air defences to 'stand up to Putin'


The Sun | 03-05-2025

ROBOT drones that can blind Russian radars will fly with RAF jets to protect them on missions.
A new fleet of StormShroud drones will use electronic warfare jammers to 'knock out enemy air defences', the MoD revealed.
The AI-powered drones – which are made in west Wales and Southampton – have been battle tested in Ukraine.
The UK fleet - which is thought to number about 24 - will be armed with world-class BriteStorm signal jammers that can dazzle enemy radars.
They are designed to spearhead air attacks – flying ahead of the fighter jets to clear a safe path through air defence missiles.
Downing Street said: "The drones offer a step change in capability by using a high-tech BriteStorm signal jammer to disrupt enemy radar at long ranges, protecting our aircraft and pilots."
They added: "In revolutionary new tactics, the drones support aircraft like Typhoon and F35 Lightning, by confusing enemy radars and allowing combat aircraft to attack targets unseen.
"This means for the first time, the RAF will benefit from high-end electronic warfare without needing crew to man it, freeing them up for other vital frontline missions."
RAF chief Sir Rich Knighton said: 'This is a seminal moment for the RAF to maintain our advantage in air combat and national security.'
The AI-powered drones will act autonomously, detecting and disrupting enemy air defence networks.
Sir Rich added: 'Autonomous collaborative platforms will revolutionise how we conduct a range of missions, from intelligence gathering to strike and logistical support.'
The Storm Shroud is based on a Tekever AR3 drone fitted with an 'electronic warfare payload' made by Leonardo UK in Luton.
The RAF said: 'Storm Shroud will support RAF F-35B Lightning and Typhoon pilots by blinding enemy radars, which increases the survivability and operational effectiveness of our crewed aircraft.'
The UK has emerged as a world leader in drone technology.
Our drone market generated around £94.66 million in 2023 but is predicted to reach as much as £1.5 billion by 2031.
Drones are currently used for an array of tasks including surveying buildings, decommissioning nuclear installations, inspecting dangerous structures and providing emergency searches.
The Royal Mail is even trialling drone delivery in remote areas.
While 84 per cent of budding drone businesses have fewer than 50 employees, analysts Drone Industry Insights said: 'The drone market in the UK is among the top in the world, with significant potential for growth.
'The economy in the UK as a whole offers a unique opportunity for the drone industry to thrive, and it has witnessed significant advancements in the use of drones in recent years, particularly in healthcare and delivery.'
Meanwhile, a probe by The Sun on Sunday revealed that combat drones have now become the dominant killer on the battlefields of Ukraine.
Up to 80 per cent of casualties on both sides of the conflict have been caused by what are known by the military as 'unmanned air systems'.
The figures from Western officials mean that eight out of ten soldiers being killed and wounded in the conflict are being targeted remotely by devices piloted by young soldiers often miles behind the front line.


Related Articles

7 Ways AI Written Articles & Essays are Detected in 2025

Geeky Gadgets

14 minutes ago



What if everything you thought was written by a human wasn't? Imagine reading a heartfelt article, an insightful essay, or even a persuasive business proposal, only to discover it was crafted by artificial intelligence. As AI-generated content becomes more polished and pervasive, the line between human and machine authorship is blurring faster than ever. Yet while AI can mimic human expression, it leaves behind subtle traces: patterns, quirks, and inconsistencies that reveal its true origins. In this deep dive, we'll explore seven techniques to uncover these hidden markers, empowering you to spot AI content with precision and confidence. From analyzing linguistic patterns to using advanced detection tools, these strategies go beyond the surface, offering insights into the mechanics of AI creation. Words at Scale explains how to identify unnatural phrasing, dissect metadata for clues, and even combine methods for a more reliable evaluation. Whether you're a journalist safeguarding credibility, an educator combating plagiarism, or simply a curious reader, these techniques will sharpen your ability to navigate an increasingly AI-driven world. By the end, you might just see the digital landscape, and the content within it, in an entirely new light.

AI Content Detection Tips

1. Examine Linguistic Patterns and Inconsistencies
AI-generated text often exhibits subtle linguistic patterns that differ from human writing. By analyzing sentence structures, word choices, and stylistic consistency, you can uncover anomalies that suggest machine involvement. For example:
- Overuse of specific phrases or words: AI models may repeatedly use certain terms due to their training data.
- Unnatural repetition of sentence structures: sentences may follow a rigid format, lacking the variety typical of human writing.
- Limited variation in tone or sentence length: AI-generated content often lacks the dynamic flow of human expression.
These irregularities can serve as clear indicators of AI authorship, especially when compared to the natural fluidity of human writing.

2. Identify Unnatural Phrasing and Repetition
AI systems frequently struggle to replicate the nuanced tone and rhythm of human language. This can result in awkward phrasing or repetitive sentence patterns. Key signs to look for include:
- Overly formal or robotic expressions: AI-generated text may lack the casual or conversational tone of human writing.
- Repetitive sentence structures: similar sentence patterns may appear throughout the text, reducing its natural flow.
- Absence of creativity: content may feel mechanical, lacking the subtlety and originality of human thought.
Spotting these characteristics can help you determine whether the content was generated by a machine (a short code sketch of these repetition and variation checks follows section 3 below).

3. Assess Content Coherence and Logical Flow
While AI-generated content may appear coherent at first glance, closer inspection often reveals inconsistencies. Humans naturally connect ideas in a meaningful way, whereas AI-generated material may include:
- Abrupt topic shifts: sudden changes in subject matter can disrupt the narrative.
- Logical gaps: arguments or reasoning may lack depth or fail to connect logically.
- Disjointed content: the overall structure may feel fragmented or incomplete.
Evaluating the logical flow and coherence of a piece can provide valuable clues about its origin.
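As a concrete illustration of the repetition and sentence-length checks in sections 1 and 2, the short Python sketch below counts how often the most frequent three-word phrase repeats and how much sentence lengths vary. It is not the Words at Scale tooling or any published detector; the function name and scoring are assumptions chosen purely for demonstration, and the signals it produces are rough hints rather than verdicts.

```python
import re
from collections import Counter
from statistics import mean, pstdev

def repetition_and_burstiness(text: str) -> dict:
    """Return simple repetition and sentence-length-variation signals."""
    # Naive sentence split on ., ! or ? followed by whitespace.
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    words = re.findall(r"[a-z']+", text.lower())

    # Signal 1: how often the most frequent three-word phrase repeats.
    trigrams = Counter(zip(words, words[1:], words[2:]))
    top_trigram_repeats = max(trigrams.values(), default=0)

    # Signal 2: relative spread of sentence lengths ("burstiness").
    # Human prose tends to mix short and long sentences more than
    # uniformly flat machine output does.
    lengths = [len(s.split()) for s in sentences] or [0]
    spread = pstdev(lengths) / (mean(lengths) or 1)

    return {
        "sentences": len(sentences),
        "top_trigram_repeats": top_trigram_repeats,
        "sentence_length_spread": round(spread, 2),
    }

if __name__ == "__main__":
    sample = (
        "The report is clear. The report is clear. The report is clear. "
        "Each point follows the same pattern. Each point follows the same pattern."
    )
    print(repetition_and_burstiness(sample))
```

On the deliberately repetitive sample above, the top trigram repeats three times while sentence lengths barely vary; human prose usually scores differently, though short or formulaic human writing can trip the same heuristic, which is why the article recommends combining several signals.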
Video: 7 AI-Generated Content Detection Techniques Explained (YouTube).

4. Investigate Metadata and Timestamps
Metadata embedded in digital files can offer critical insights into the content's creation. By examining details such as creation dates, modification histories, and file properties, you can uncover potential red flags. For instance:
- Unusually short creation times: automated generation often results in rapid content creation.
- Incomplete or missing metadata: gaps in metadata fields may indicate machine involvement.
- Inconsistent timestamps: timeframes that don't align with typical human writing patterns can be revealing.
These discrepancies can point to AI involvement and provide a deeper understanding of the content's origins.
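To make the metadata check in section 4 concrete, here is a minimal Python sketch that reads the only timestamps a filesystem reliably exposes (a creation-like time where available, plus last modification) and flags an implausibly short gap. The file name, the 60-second window, and the "suspiciously_fast" label are illustrative assumptions; richer metadata such as DOCX or PDF author and editing-time fields would need a format-specific parser, and no single signal is proof on its own.

```python
from datetime import datetime, timezone
from pathlib import Path

def timestamp_report(path: str, min_plausible_seconds: int = 60) -> dict:
    """Compare a file's creation-like timestamp with its last modification."""
    stat = Path(path).stat()
    # st_birthtime exists on macOS/BSD; fall back to st_ctime elsewhere
    # (on Linux st_ctime is the inode change time, so treat it loosely).
    created_ts = getattr(stat, "st_birthtime", stat.st_ctime)
    created = datetime.fromtimestamp(created_ts, tz=timezone.utc)
    modified = datetime.fromtimestamp(stat.st_mtime, tz=timezone.utc)
    elapsed = (modified - created).total_seconds()
    return {
        "created": created.isoformat(),
        "modified": modified.isoformat(),
        "seconds_between": elapsed,
        # A very short edit window on a long document is one of the red
        # flags listed above; on its own it proves nothing.
        "suspiciously_fast": 0 <= elapsed < min_plausible_seconds,
    }

if __name__ == "__main__":
    # "essay.txt" is a placeholder path used purely for illustration.
    print(timestamp_report("essay.txt"))
```

Pairing a result like this with the linguistic signals from the earlier sketch is exactly the kind of combination that technique 7 below recommends.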
5. Cross-Check Sources for Originality
AI-generated content often relies heavily on existing material, sometimes bordering on plagiarism. Cross-referencing the text with known sources can help you identify:
- Plagiarized sections: direct copies of publicly available information may indicate AI authorship.
- Lack of unique insights: content that fails to provide original thought or analysis may be machine-generated.
- Over-reliance on a single source: heavy dependence on one dataset or reference can suggest automated creation.
If the content mirrors existing material without adding value, it is likely AI-generated.

6. Use Specialized AI Detection Tools
Advanced tools designed to detect AI-generated content can significantly enhance your detection efforts. These tools analyze text for patterns and markers unique to machine-generated writing. Common features include:
- Perplexity analysis: measures how predictable the text is, with AI-generated content often being more predictable.
- Burstiness analysis: evaluates variability in sentence structure and word choice, which is often limited in AI-generated text.
- Linguistic marker identification: algorithms identify specific patterns indicative of AI authorship.
Incorporating these tools into your workflow can improve both accuracy and efficiency.

7. Combine Multiple Detection Strategies
No single method is foolproof, but combining several approaches can yield more reliable results. For example:
- Pair linguistic analysis with metadata verification: this combination can reveal both stylistic and technical anomalies.
- Cross-reference sources while using AI detection tools: this ensures a comprehensive evaluation of the content.
- Stay updated on AI advancements: regularly refining your strategies helps you keep pace with evolving technologies.
A multi-faceted approach ensures you're better equipped to identify even the most sophisticated AI-generated content.

Applications of AI Detection Techniques
The ability to detect AI-generated content has practical applications across various industries:
- Journalism: verifying the authenticity of news articles to maintain credibility and trust.
- Academia: ensuring the originality of research papers and preventing plagiarism in scholarly work.
- Business: evaluating the authenticity of marketing materials and customer communications to uphold brand integrity.
- Education: assessing the originality of student submissions to maintain academic standards.
By implementing these techniques, you can safeguard trust and credibility in an era of rapidly advancing AI capabilities.

Staying Ahead in the AI Era
As AI-generated content becomes more sophisticated and widespread, the ability to detect it is essential for maintaining authenticity and trust. By using these seven techniques, you can confidently differentiate between human and machine-generated material. Whether you're a journalist, educator, business professional, or researcher, these strategies provide practical tools to navigate the challenges posed by AI-driven content creation. Staying informed and proactive will ensure you remain prepared to address the complexities of this evolving landscape.

Media Credit: WordsAtScale
Filed Under: AI, Guides

Data bill opposed by Sir Elton John and Dua Lipa finally passes

BBC News

24 minutes ago



A bill which sparked an extraordinary stand-off between some of the UK's most high-profile artists - and their backers in the House of Lords - has finally been passed.

Peers wanted an amendment to the drably-titled Data (Use and Access) Bill which would have forced tech companies to declare their use of copyright material when training AI models. Without it, they argued, tech firms would be given free rein to help themselves to UK content without paying for it, and then train their AI products to mimic it, putting human artists out of work. That would be "committing theft, thievery on a high scale", Sir Elton John told the BBC. He was one of a number of household names from the UK creative industries, including Sir Paul McCartney and Dua Lipa, to oppose the change.

But the government refused the amendment. It says it is already carrying out a separate consultation around copyright and it wants to wait for the outcome of that. In addition, there are plans for a separate AI bill. Critics of the peers' proposal say it would stifle the AI industry and result in the UK getting left behind in this lucrative and booming sector.

All of this left the bill in limbo, ping-ponging between the Houses of Commons and Lords for a month. But it has now finally been passed, without the amendment, and will become law once royal assent is given.

"We can only do so much here. I believe we've done it. It's up to the Government and the other place (the Commons) now to listen," said one composer and broadcaster in the Lords.

The government has welcomed the passing of the wide-ranging bill. "This Bill is about using data to grow the economy and improve people's lives, from health to infrastructure, and we can now get on with the job of doing that," a Department for Science, Innovation and Technology (DSIT) spokesperson said.

More than AI

Caught in the crossfire of this row were other useful proposals contained within the bill, including:
- New rules on the rights of bereaved parents to access their children's data if they die
- Changes to allow NHS trusts to share patient data more easily
- A 3D underground map of the UK's pipes and cables, aimed at improving the efficiency of roadworks by minimising the possibility of them being accidentally dug up.

"So this is good news for NHS workers and the police who will be freed from over a million hours of time spent doing admin, bereaved parents who will be supported to get the answers they deserve, and people who will be kept safer online thanks to new offences for deepfake abuse," DSIT said.

But even though the Lords have decided they had made their point on AI, the argument has not gone away. Those who fought the battle have not changed their minds. Baroness Kidron, a film maker who led the charge for the amendment, told me the passing of the bill was "a pyrrhic victory at best" for the government, meaning it would lose more than it gained. The cost, she argues, is the giving away of UK assets, in the form of creative content, to largely US-based AI firms.

There are many who remain defiant, and they believe strongly that the UK's £124bn creative industry is under threat if the government doesn't actively engage with their demands. Owen Meredith, chief executive of the News Media Association, which supported the Lords, said the bill sent a "clear message" to the government "that Parliament, and the UK's 2.4 million creative workers, will fight tirelessly to ensure our world-renowned copyright law is enforced".

"We keep being told that AI will change everything, which, I'm afraid, means that we will discuss this during debates on every bill," said Baroness Dido Harding in the House of Lords, recorded in Hansard. "We will prevail in the end."

Meta sues app-maker as part of crack down on 'nudifying'

BBC News

an hour ago



Meta has taken legal action against a company which ran ads on its platforms promoting so-called "nudify" apps, which typically use artificial intelligence (AI) to create fake nude images of people without their consent.

It has sued the firm behind CrushAI apps to stop it posting ads altogether, following a cat-and-mouse battle to remove them over a series of months. In January, the blog FakedUp found 8,010 instances of ads from CrushAI promoting nudifying apps on Meta's Facebook and Instagram platforms.

"This legal action underscores both the seriousness with which we take this abuse and our commitment to doing all we can to protect our community from it," Meta said in a blog post. "We'll continue to take the necessary steps - which could include legal action - against those who abuse our platforms like this."

The growth of generative AI has led to a surge in "nudifying" apps in recent years. It has become such a pervasive issue that in April the children's commissioner for England called on the government to introduce legislation to ban them. It is illegal to create or possess AI-generated sexual content featuring children.

Deepfakes and scams

Meta said it had also made another change recently in a bid to deal with the wider problem of "nudify" apps online, by sharing information with other tech firms. "Since we started sharing this information at the end of March, we've provided more than 3,800 unique URLs to participating tech companies," it said.

The firm accepted it had an issue with companies avoiding its rules to deploy adverts without its knowledge, such as creating new domain names to replace banned ones. It said it had developed new technology designed to identify such ads, even if they didn't include nudity.

Nudify apps are just the latest example of AI being used to create problematic content on social media. One concern is the use of AI to create deepfakes - highly realistic images or videos of celebrities - to scam or mislead people. In June, Meta's Oversight Board criticised a decision to leave up a Facebook post showing an AI-manipulated video of a person who appeared to be Brazilian football legend Ronaldo Nazário.

Meta has previously attempted to combat scammers who fraudulently use celebrities in adverts by the use of facial recognition technology. It also requires political advertisers to declare the use of AI, because of fears around the impact of deepfakes on elections.
