
Unmanned police helicopter takes to the sky for first time in Britain as force tests human-sized drone to search for criminals and missing people
The National Police Air Service (NPAS) is hoping the remotely-piloted aircraft will be able to join its helicopter fleet in the future to carry out searches for criminals and missing people.
Adding to its 'eyes in the skies', it is capable of flying at a maximum height of 18,000ft, soaring through the air at 115mph and is able to stay airborne for up to six hours.
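Taken at face value, the quoted figures imply a rough still-air range. A minimal back-of-envelope sketch, ignoring wind, climb, loiter and fuel reserves (so not an official NPAS figure):

```python
# Rough still-air range from the quoted specs: 115 mph cruise speed and
# six hours of endurance. Ignores wind, climb, loiter and fuel reserves;
# illustrative only, not an official NPAS performance figure.
CRUISE_SPEED_MPH = 115
ENDURANCE_HOURS = 6

max_range_miles = CRUISE_SPEED_MPH * ENDURANCE_HOURS
print(max_range_miles)  # 690
```

In practice a search aircraft spends most of its endurance loitering over an area rather than transiting, so the usable radius would be far smaller.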
Police say it is 'not possible' to say how much one uncrewed aircraft costs, but the scheme is being funded by all police forces in England and Wales and the Home Office.
Critics have suggested an unmanned police helicopter is simply a drone, while others pointed out that it needs a remote pilot anyway.
But the NPAS has clarified that the unmanned helicopter, a Schiebel Camcopter S-100, is larger than existing drones used by police forces and has a greater range.
'The type of aircraft we will be trialling is much larger, with the ability to carry similar mission equipment to a current police helicopter,' they said. 'It will be able to fly beyond the sight of the controller.'
Night-time test flights are being conducted over the Bristol Channel, and police believe the aircraft could eventually stay in the air for up to 12 hours - up from the current six.
David Walters, NPAS head of futures and innovation, said: 'As technology advances, so too do the opportunities for police air support. This trial will test uncrewed aircraft capable of flying for up to six hours, equipped with mission systems comparable to those on our current fleet.
'If successful, this could pave the way for a highly capable and sustainable blended fleet of helicopters, aeroplanes and uncrewed aircraft.
'Our goal is to ensure the right tool is available at the right time, in the right place, to support policing across England and Wales.
'This represents an ambitious step toward a modern, innovative and best-value police aviation service, designed to meet the diverse needs of policing and communities.'
Mr Walters said 30 per cent of NPAS call-outs are to search for missing people - a task for which the unmanned aircraft could prove vital.
It has a forward-facing camera that the pilot monitors remotely from a base, and it uses radar to detect other aircraft.
The Schiebel Camcopter S-100 uses the same high-powered infrared camera as conventional police helicopters.
Mr Walters added that the unmanned helicopter offers a 90 per cent reduction in emissions per hour compared with crewed aircraft.
PC Matt Leeding, an NPAS tactical flight officer, told BBC News: 'My job doesn't change. All we're doing is embracing the new technology - the same service, using the same equipment, just on a slightly different platform.'
'There are still incidents when I'll be sitting in an aircraft at 1,500 feet (457 metres) for certain specialist operations and tasks that require a crew on the scene.'
Some critics fear that it could lead to the end of manned aircraft.
One wrote on X: 'Nothing beats the power of the human eye in the sky. While there is undoubtedly a place for this technology, I hope that this isn't the beginning of the end of manned aircraft and crews across the UK.'
Another, mocking the 'thought police', joked: 'Can your uncrewed aircraft read illegal thoughts?'
Others simply said: 'Isn't that called a drone?' and 'What is the big difference between unmanned helicopter and drone?'
And a fourth wrote: 'How much did that cost taxpayers?'
Test flights are due to take place until October, but years of testing are expected before the unmanned aircraft joins the full NPAS fleet.
Sophie O'Sullivan, director of Future of Flight at the UK Civil Aviation Authority, said: 'We're committed to enabling the emergency services across the UK to harness cutting-edge aviation technology to do their jobs more efficiently and effectively.
'Through our support for innovative projects like the trial with NPAS, we are helping future-proof critical operations and ensuring that emergency services can safely benefit from the opportunities presented by drones and advanced air mobility.'

Related Articles


The Independent
12 hours ago
Most popular A-level subject with 112,138 entries revealed
In 2025, more young people than ever have opened their A-level results to find out how they did in their maths exam. Once again, maths has been the most popular A-level subject, with 112,138 entries in 2025, up by more than 4% on 2024. Entries in further maths, an A-level that expands on the maths curriculum, have also risen - an increase of 7% since 2024, with over 19,000 entries this year.

As a professional mathematician, I find this pleasing news. Some of these students will be happily receiving confirmation of their place to study maths at university. The joy I experienced when I discovered during my maths degree that many of the subjects I studied at school - chemistry, biology, physics and even music - are woven together by a mathematical fabric is something I've never forgotten. I'm excited by the idea that many young people are about to experience this for themselves. But I am concerned that fewer students will have the same opportunities in the future, as more maths departments are forced to downsize or close, and as we become more reliant on artificial intelligence.

There are a number of differences between studying maths at university and at school. While this can be daunting at first, all of these differences underscore just how richly layered, deeply interconnected and vastly applicable maths is. At university, you not only learn beautiful formulas and powerful algorithms, but also grapple with why those formulas are true and dissect exactly what those algorithms are doing. This is the idea of the 'proof', which is not explored much at school and can initially take students by surprise. But proving why formulas are true and why algorithms work is a necessary step towards being able to discover new and exciting applications of the maths you're studying. A maths degree can lead to careers in finance, data science, AI, cybersecurity, quantum computing, ecology and climate modelling.
But more importantly, maths is a beautifully creative subject, one that allows people to be immensely expressive in their scientific and artistic ideas. A recent and stunning example is Hannah Cairo, who at just 17 disproved a 40-year-old conjecture. If there is one message I wish I had known when I started studying university mathematics, it is this: maths is not just something to learn, but something to create. I'm continually amazed at how my students find new ways to solve problems that I first encountered over 20 years ago.

Accessibility of maths degrees

But going on to study maths at university is no longer just a matter of A-level grades. The recent and growing phenomenon of maths deserts - areas of the country where maths degrees are not offered - is making maths degrees less accessible, particularly for students outside big cities.

Forthcoming research from The Campaign for Mathematical Sciences (CAMS), of which I am a supporter, shows that research-intensive, higher-tariff universities - the ones that require higher grades to get in - took 66% of UK maths undergraduates in 2024, up from 56% in 2006. This puts smaller departments in lower-tariff universities in danger of closure as enrolments drop. The CAMS research forecasts that an additional nine maths departments will have fewer than 50 enrolments in their degrees by 2035. This cycle will further concentrate maths degrees in high-tariff institutions, reinforcing stereotypes such as the idea that only exceptionally gifted people should go on to study maths at university.

This could also have severe consequences for teacher recruitment. The CAMS research also found that 25% of maths graduates from lower-tariff universities go into jobs in education, compared with 8% from higher-tariff universities.

Maths in the age of AI

The growing capability and sophistication of AI is also putting pressure on maths departments.
With OpenAI's claim that its recently released GPT-5 is like having 'a team of PhD-level experts in your pocket', the temptation to rely too heavily on AI poses further risks to the existence and quality of future maths degrees. But the process of turning knowledge into wisdom and theory into application comes from the act of doing: doing calculations and forming logical and rigorous arguments. That is the key constituent of thinking clearly and creatively, and it ensures students have ownership of their skills, capacities and the work that they produce.

A data scientist will still require an in-depth working knowledge of the mathematical, algorithmic and statistical theory underpinning data science if they are going to be effective. The same goes for financial analysts, engineers and computer scientists. The distinguished mathematician and computer scientist Leslie Lamport said that 'coding is to programming what typing is to writing'. Just as you need some idea of what you are writing before you type it, you need some idea of the (mathematical) algorithm you are creating before you code it.

It is worth remembering that the early pioneers of AI - John McCarthy, Marvin Minsky, Claude Shannon, Alan Turing - all had degrees in mathematics. So we have every reason to expect that future breakthroughs in AI will come from people with mathematics degrees working creatively in interdisciplinary teams. This is another great feature of maths: its versatility. It's a subject that doesn't just train you for a job but enables you to enjoy a rich and fulfilling career - one that can comprise many different jobs, in many different fields, over the course of a lifetime.


Coin Geek
15 hours ago
Studies in US, UK warn of flaws in AI-powered health guidance
Two recently published studies have revealed that generative artificial intelligence (AI) tools, including the large language models (LLMs) ChatGPT and Gemini, produce misinformation and bias when used for medical information and healthcare decision-making.

In the United States, researchers from the Icahn School of Medicine at Mount Sinai published a study on August 2 showing that LLMs were highly vulnerable to repeating and elaborating on 'false facts' and medical misinformation. Meanwhile, across the Atlantic, the London School of Economics and Political Science (LSE) published a study shortly afterward that found AI tools used by more than half of England's councils are downplaying women's physical and mental health issues, creating a risk of gender bias in care decisions.

Medical AI

LLMs, such as OpenAI's ChatGPT, are AI-based computer programs that generate text using the large datasets of information on which they are trained. The power and performance of such technology have increased exponentially over the past few years, with billions of dollars being spent on research and development in the area. LLMs and AI tools are now being deployed, to different extents, across almost every industry, not least the medical and healthcare sector.

In the medical space, AI is already being used for various functions, such as reducing the administrative burden by automatically generating and summarizing case notes, assisting in diagnostics, and enhancing patient education. However, LLMs are prone to the 'garbage in, garbage out' problem: unless their training material is accurate and factual, they may reproduce the errors and bias in the datasets. This results in what are often known as 'hallucinations' - generated content that is irrelevant, made up, or inconsistent with the input data.
In a medical context, these hallucinations can include fabricated information and case details, invented research citations, or made-up disease details.

US study shows chatbots spreading false medical information

Earlier this month, researchers from the Icahn School of Medicine at Mount Sinai published a paper titled 'Multi-model assurance analysis showing large language models are highly vulnerable to adversarial hallucination attacks during clinical decision support.' The study aimed to test a subset of AI hallucinations that arise from 'adversarial attacks', in which made-up details embedded in prompts lead the model to reproduce or elaborate on the false information.

'Hallucinations pose risks, potentially misleading clinicians, misinforming patients, and harming public health,' said the paper. 'One source of these errors arises from deliberate or inadvertent fabrications embedded in user prompts - an issue compounded by many LLMs' tendency to be overly confirmatory, sometimes prioritizing a persuasive or confident style over factual accuracy.'

To explore this issue, the researchers tested six LLMs - DeepSeek Distilled, GPT-4o, llama-3.3-70B, Phi-4, Qwen-2.5-72B, and gemma-2-27b-it - on 300 pieces of text similar to clinical notes written by doctors, each containing a single fake laboratory test, physical or radiological sign, or medical condition. The models were tested under 'default' (standard) settings as well as with 'mitigating prompts' designed to reduce hallucinations, generating 5,400 outputs in total. If a model elaborated on the fabricated detail, the case was classified as a 'hallucination.'

The results showed that hallucination rates ranged from 50% to 82% across all models and prompting methods. The use of mitigating prompts lowered the average hallucination rate, but only from 66% to 44%. 'We find that the LLM models repeat or elaborate on the planted error in up to 83% of cases,' reported the researchers.
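The study's scoring scheme - plant one fabricated detail in a clinical note, ask the model to interpret the note, and count the case as a hallucination if the output repeats or elaborates on the fabrication - can be sketched in a few lines. This is an illustrative mock-up, not the researchers' code: the hard-coded response stands in for a real LLM call, and the fake lab test ('serum glimmerase') is invented.

```python
# Illustrative mock-up of the study's scoring scheme (not the researchers'
# code). A case counts as a 'hallucination' if the model's output repeats
# or elaborates on the fabricated detail planted in the clinical note.
# The fake lab test ("serum glimmerase") is invented for demonstration.

def classify_output(output: str, planted_detail: str) -> str:
    """Label an output 'hallucination' if it reproduces the planted detail."""
    return "hallucination" if planted_detail.lower() in output.lower() else "ok"

def hallucination_rate(outputs, planted_details):
    """Fraction of cases in which the model reproduced the fabrication."""
    labels = [classify_output(o, d) for o, d in zip(outputs, planted_details)]
    return labels.count("hallucination") / len(labels)

# One mock case: the response (standing in for a real LLM call) elaborates
# on the fake test, so it is scored as a hallucination.
planted = ["serum glimmerase"]
responses = ["Elevated serum glimmerase suggests hepatic involvement."]
print(hallucination_rate(responses, planted))  # 1.0
```

A real harness would also need human or rubric-based review of borderline outputs; simple substring matching, as here, would miss paraphrased elaborations of the planted error.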
'Adopting strategies to prevent the impact of inappropriate instructions can half the rate but does not eliminate the risk of errors remaining.' They added that 'our results highlight that caution should be taken when using LLM to interpret clinical notes.'

According to the paper, the best-performing model was GPT-4o, whose hallucination rate declined from 53% to 23% when mitigating prompts were used. However, with even the best-performing model producing potentially harmful hallucinations in almost a quarter of cases - even with mitigating prompts - the researchers concluded that AI models cannot yet be trusted to provide accurate and trustworthy medical data.

'LLMs are highly susceptible to adversarial hallucination attacks, frequently generating false clinical details that pose risks when used without safeguards,' said the paper. 'While prompt engineering reduces errors, it does not eliminate them… Adversarial hallucination is a serious threat for real-world use, warranting careful safeguards.'

The Mount Sinai study isn't the only recent U.S. paper to call the medical use of AI into question. In another damaging example, on August 5 the Annals of Internal Medicine journal reported the case of a 60-year-old man who developed bromism, also known as bromide toxicity, after consulting ChatGPT on how to remove salt from his diet. On the LLM's advice, the man swapped sodium chloride (table salt) for sodium bromide, which was used as a sedative in the early 20th century, resulting in the rare condition. But it's not just stateside that AI advice is taking a PR hit.

UK study finds gender bias in LLMs

While U.S. researchers were finding less-than-comforting results when testing whether LLMs reproduce false medical information, across the pond a United Kingdom study was turning up equally troubling results related to AI bias.
On August 11, a research team from LSE, led by Dr Sam Rickman, published a paper on 'evaluating gender bias in large language models in long-term care', in which they assessed gender bias in summaries of long-term care records generated by two open-source LLMs: Meta's (NASDAQ: META) Llama 3 and Google's (NASDAQ: GOOGL) Gemma.

To test this, the study created gender-swapped versions of long-term care records for 617 older people from a London local authority and asked the LLMs to generate summaries of the male and female versions of the records. While Llama 3 showed no gender-based differences on any metric, Gemma displayed significant differences: male summaries focused more on physical and mental health issues, language used for men was more direct, and women's needs were 'downplayed' more often than men's. For example, when Gemma was used to generate and summarize the same case notes for men and for women, language such as 'disabled', 'unable' and 'complex' appeared significantly more often in descriptions of men than of women.

In other words, the study found that similar care needs in women were more likely to be omitted or described in less severe terms by specific AI tools, and that this downplaying of women's physical and mental health issues risked creating gender bias in care decisions. 'Care services are allocated on the basis of need. If women's health issues are underemphasized, this may lead to gender-based disparities in service receipt,' said the paper. 'LLMs may offer substantial benefits in easing administrative burden. However, the findings highlight the variation in state-of-the-art LLMs, and the need for evaluation of bias.'

Despite the concerns raised by the study, the researchers also highlighted the benefits AI can provide to the healthcare sector.
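The gender-swap design described above can be sketched briefly. This is a simplified illustration, not the LSE team's code: the swap table and sample record are minimal stand-ins, the severity terms are the ones quoted in the article, and a real run would first summarise both record versions with the LLM under test (Llama 3 or Gemma) before comparing.

```python
# Simplified sketch of the LSE study's method (not the team's code): build a
# gender-swapped copy of a care record, then compare how often severity terms
# appear in descriptions of each version. A real evaluation would summarise
# both versions with the LLM under test before counting.
import re

SWAPS = {"he": "she", "she": "he", "him": "her", "her": "him",
         "mr": "mrs", "mrs": "mr"}

def gender_swap(text: str) -> str:
    """Swap gendered tokens on word boundaries (simplified; compounds like
    'himself' are deliberately left untouched)."""
    def repl(match):
        word = match.group(0)
        swapped = SWAPS[word.lower()]
        return swapped.capitalize() if word[0].isupper() else swapped
    pattern = r"\b(" + "|".join(SWAPS) + r")\b"
    return re.sub(pattern, repl, text, flags=re.IGNORECASE)

# Severity terms the article says Gemma applied unevenly between genders.
SEVERITY_TERMS = ("disabled", "unable", "complex")

def severity_count(text: str) -> int:
    """Count occurrences of the severity terms in a summary."""
    lowered = text.lower()
    return sum(lowered.count(term) for term in SEVERITY_TERMS)

record = "Mr Smith is unable to wash; he has complex needs."
swapped = gender_swap(record)  # "Mrs Smith is unable to wash; she has ..."
print(severity_count(record) - severity_count(swapped))  # 0 => no gap here
```

Comparing raw swapped texts, as here, yields no difference by construction; the study's finding is that the gap appears only after the LLM summarises each version, which is where the bias is introduced.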
'By automatically generating or summarizing records, LLMs have the potential to reduce costs without cutting services, improve access to relevant information, and free up time spent on documentation,' said the paper. It went on to note that 'there is political will to expand such technologies in health and care.'

Despite flaws, UK's all-in on AI

British Prime Minister Keir Starmer recently pledged £2 billion ($2.7 billion) to expand Britain's AI infrastructure, with the funding targeting data center development and digital skills training. This included committing £1 billion ($1.3 billion) to scale up the U.K.'s compute power by a factor of 20.

'We're going to bring about great change in so many aspects of our lives,' said Starmer, speaking at London Tech Week on June 9. He went on to highlight health as an area 'where I've seen for myself the incredible contribution that tech and AI can make.'

'I was in a hospital up in the Midlands, talking to consultants who deal with strokes. They showed me the equipment and techniques that they are using - using AI to isolate where the clot is in the brain in a micro-second of the time it would have taken otherwise. Brilliantly saving people's lives,' said the Prime Minister. 'Shortly after that, I had an incident where I was being shown AI and stethoscopes working together to predict any problems someone might have. So whether it's health or other sectors, it's hugely transformative what can be done here.'

It's unclear how, or if, the LSE study and its equally AI-critical U.S. counterparts will affect such commitments from the government, but for now the U.K. seems set on pursuing the advantages that AI tools such as LLMs can provide across the public and private sectors.


BBC News
a day ago
Data centres to be expanded across UK as concerns mount
The number of data centres in the UK is set to increase by almost a fifth, according to figures shared with BBC News. Data centres are giant warehouses full of powerful computers used to run digital services from movie streaming to online banking - there are currently an estimated 477 of them in the UK. Researchers at Barbour have analysed planning documents and say that number is set to jump by almost 100, as the growth in artificial intelligence (AI) increases the need for processing power. The majority are due to be built in the next five years.

But there are concerns about the huge amount of energy and water the new data centres will consume. Some experts have warned it could drive up prices paid by consumers.

More than half of the new data centres would be in London and neighbouring areas. Most are privately funded by US tech giants such as Google and Microsoft and by major investment firms. A further nine are planned in Wales, one in Scotland, five in Greater Manchester and a handful in other parts of the UK, the data suggests.

While the new data centres are mostly due for completion by 2030, the biggest single one planned would come later - a £10-billion AI data centre in Blyth, near Newcastle, for the American private investment and wealth management company Blackstone. It would involve building 10 giant buildings covering 540,000 square metres - the size of several large shopping centres - on the site of the former Blyth Power Station. Works are set to begin in 2031 and last for more than three years.

One operator is planning four new data centres in the UK at a total cost of £330 million, with estimated completion between 2027 and 2029 - two in the Leeds area, one near Newport in Wales, and a five-storey site in Acton, north west London. Google is building two data centres, totalling £450m, spread over 400,000 sq m in north east London in the Lee Valley water system.
By some analyses, the UK is already the third-largest nation for data centres behind the US and one other country. The government has made clear it believes data centres are central to the UK's economic future, designating them critical national infrastructure.

But there are concerns about their impact, including the potential knock-on effect on people's energy bills. It is not known what the energy consumption of the new centres will be, as this data is not included in the planning applications, but US data suggests newer centres can be considerably more power-hungry than older ones.

Dr Sasha Luccioni, AI and climate lead at machine learning firm Hugging Face, explains that in the US "average citizens in places like Ohio are seeing their monthly bills go up by $20 (£15) because of data centres". She said the timeline for the new data centres in the UK was "aggressive" and called for "mechanisms for companies to pay the price for extra energy to power data centres - not consumers".

According to the National Energy System Operator, NESO, the projected growth of data centres in Great Britain could "add up to 71 TWh of electricity demand" in the next 25 years, which it says redoubles the need for clean power such as offshore wind.

'Fixated with sustainability'

There are also growing concerns about the environmental impact of these enormous sites. Existing data centre plants require large quantities of water to prevent them from overheating - and most current owners do not share data about their water use.

Steve Hone, chief executive of industry body the Data Centre Alliance, says "ensuring there is enough water and electricity powering data centres isn't something the industry can solve on its own". But he insisted "data centres are fixated with becoming as sustainable as possible", such as through dry-cooling technology.

But promises of future solutions have failed to appease some.
In Potters Bar, Hertfordshire, residents are objecting to the construction of a £3.8bn cloud and AI centre on greenbelt land, describing the area as the "lungs" of their community. Meanwhile, in Dublin there is currently a moratorium on the building of any new data centres because of the strain existing ones have placed on Ireland's national electricity grid - in 2023 they accounted for one fifth of the country's energy demand.

Last month, Anglian Water objected to plans for a 435-acre data centre site in North Lincolnshire. The developer says it aims to deploy "closed loop" cooling systems which would not place a strain on the water supply.

But planning documents suggest that 28 of the new data centres would be likely to be serviced by troubled Thames Water, including 14 more in Slough, which has already been described as having Europe's largest cluster of data centres. The BBC understands Thames Water was talking to the government earlier this year about the challenge of water demand in relation to data centres and how it can be met.

Water UK, the trade body for all water firms, said it "desperately" wants to supply the centres but "planning hurdles" need to be cleared more quickly. Ten new reservoirs are being built in Lincolnshire, the West Midlands and south-east England.

A spokesperson for the UK government said data centres were "essential" and an AI Energy Council had been established to make sure supply can meet demand, alongside £104bn in water infrastructure investment.

Additional reporting by Tommy Lumby