China claims its FPV drone algorithm can beat best pilots in aerobatic manoeuvres
Chinese researchers have allegedly developed a new algorithm that lets first-person-view (FPV) drones perform autonomous aerobatic manoeuvres better than veteran human operators can during flight missions.
Inspired by sparrowhawks, falcons, and bats, the new artificial intelligence (AI) upgrade can facilitate high-speed, high-risk movements that usually only the most skilled human pilots can execute.
Traditionally, FPV drone performance has been improved by upgrading hardware (stronger motors, better sensors). In this latest work, however, the scientists focused on smarter software, using advanced motion planning and decision-making algorithms to unlock next-level performance.
"This biological wisdom—transforming 'high-risk manoeuvres' into 'high-survival rewards'—holds the key to redefining traditional drone flight paradigms," said Gao Fei, an associate professor at Zhejiang University and one of the scientists associated with the project, while speaking to the South China Morning Post.
According to the team, the new FPV AI has two main components. The first is a motion-intent translator that converts a pilot's desired manoeuvre (e.g., a flip or loop) into specific, executable drone commands.
The second is a risk-reward evaluator that analyzes the trade-offs between safety (avoiding obstacles), efficiency (energy use), and performance (aerobatics). Essentially, these give the drone the 'brains' to understand what it needs to do.
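The team's code has not been released, but a minimal Python sketch can make the two-part design concrete. Everything below is hypothetical: the manoeuvre library, the command format, the function names, and the scoring weights are illustrative assumptions, not the researchers' actual implementation.

    # Minimal sketch (not the published system): how a motion-intent
    # translator and a risk-reward evaluator might fit together.
    # All names, commands, and weights here are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class AttitudeCommand:
        """One step of a manoeuvre: body rates (rad/s) and normalised thrust."""
        roll_rate: float
        pitch_rate: float
        thrust: float  # 0.0 to 1.0

    # Hypothetical intent library: maps a pilot's high-level manoeuvre
    # to a coarse command sequence a low-level controller could track.
    INTENT_LIBRARY = {
        "flip": [
            AttitudeCommand(12.0, 0.0, 0.9),  # punch up and start rotating
            AttitudeCommand(12.0, 0.0, 0.2),  # coast through the inverted phase
            AttitudeCommand(0.0, 0.0, 0.7),   # level off and recover altitude
        ],
        "loop": [
            AttitudeCommand(0.0, 10.0, 0.95),
            AttitudeCommand(0.0, 10.0, 0.3),
            AttitudeCommand(0.0, 0.0, 0.7),
        ],
    }

    def translate_intent(manoeuvre: str) -> list[AttitudeCommand]:
        """Motion-intent translation: desired manoeuvre -> executable commands."""
        if manoeuvre not in INTENT_LIBRARY:
            raise ValueError(f"unknown manoeuvre: {manoeuvre!r}")
        return INTENT_LIBRARY[manoeuvre]

    def risk_reward_score(clearance_m: float, energy_j: float, style: float,
                          w_safety: float = 1.0, w_energy: float = 0.5,
                          w_style: float = 1.0) -> float:
        """Risk-reward evaluation: trade off safety, efficiency and performance.

        Higher is better: flying close to obstacles is penalised sharply,
        energy use mildly, and well-executed aerobatics (style in [0, 1])
        are rewarded.
        """
        safety_penalty = w_safety / max(clearance_m, 0.05)  # grows near obstacles
        return w_style * style - safety_penalty - w_energy * energy_j / 100.0

    if __name__ == "__main__":
        plan = translate_intent("flip")
        score = risk_reward_score(clearance_m=2.0, energy_j=40.0, style=0.8)
        print(f"{len(plan)} command steps, candidate score = {score:.2f}")

In the real system, an evaluator like this would score many candidate trajectories per second against an onboard map; the toy version only shows how a single scalar score can encode the safety-efficiency-performance trade-off the researchers describe.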
"Conventional methods focus on enhancing hardware performance, employing more powerful motors or higher-precision sensors, yet few have systematically investigated the rationality of manoeuvre-planning itself," explained Gao.
"We have demonstrated that quadrotor drones can unlock more sophisticated flight manoeuvres and graceful postures solely through intelligent algorithms alone—no hardware upgrades required," he added.
The algorithm also enables the drone to perform tasks efficiently and avoid crashing, all without needing GPS or remote human input. What's more impressive is that the team's claims about its new drone brains are not just theoretical.
https://www.youtube.com/watch?v=GTifvVZBNWs&pp=ygUOZnB2IGRyb25lIHJhY2U%3D
During the ensuing trials, the drone flew through tight obstacle courses, even inverted, with hummingbird-like agility. In a one-on-one test, it beat a professional human pilot with a 100% success rate versus the pilot's 12.5%.
"We believe that aerobatic flight can enhance a vehicle's adaptability and flexibility in complex environments, improving its performance in practical applications," Gao said.
The AI-enabled FPV drone also executed smoother, tighter manoeuvres with less recovery time between stunts. This level of autonomous aerial acrobatics has significant implications well beyond recreational racing.
It could, for example, have military applications, enabling drones to be far more agile in urban or forested environments, evading detection and striking with precision. Search and rescue operations would also benefit greatly, with drones navigating tight or dangerous spaces such as collapsed buildings or volcanic vents. The technology could also be used in filmmaking and space missions.
"Examples include deploying probes near volcanic eruption vents, rapidly exploring narrow gaps in collapsed buildings during disaster rescue, and enabling spacecraft to navigate space debris," the team observed. "It could also significantly benefit aerial cinematography by allowing drones to autonomously capture smooth, stable footage," the researchers added.
However, the new system is not foolproof. For example, the algorithm currently requires a pre-mapped environment to plan paths. It also cannot yet support drone swarms or adapt in real time to unknown terrain. The scientists are hopeful that further research will open up "new possibilities for quadrotor drones to perform aggressive missions in extreme environments."
