
AI to be used to map habitats in Surrey conservation project
Artificial intelligence (AI) is to be deployed in a conservation project to protect Surrey's natural landscapes. Space4Nature, led by Surrey Wildlife Trust and University of Surrey, will see volunteer teams map habitats by recording plant species thriving on acid grassland. What the volunteers document at places like Puttenham Common will be used to help train an AI model, which will be able to match specific types of habitat with similar ones close by using satellite images.Dan Banks, Space4Nature project citizen science officer, said: "Conservation is increasingly reliant on new technologies to develop solutions that can be implemented at scale."
"But that doesn't mean that old fashioned ground truthing isn't needed too."As the Artificial Intelligence capabilities being developed by our colleagues at University of Surrey become more sophisticated, we need more complex data to help them keep learning and evolving."With the climate and nature crisis becoming more severe, local people with an interest in nature can make a real difference by getting involved in local projects." Space4Nature said over the last two years it has deployed more than 200 volunteers to some of the county's most important chalk grassland, wetland and heathland habitats, including Chobham Common, Unstead wetland reserve, Sheepleas and Puttenham Common.
Hashtags

Try Our AI Features
Explore what Daily8 AI can do for you:
Comments
No comments yet...
Related Articles


The Independent
an hour ago
- The Independent
How AI could help stop the next pandemic before it starts
Could artificial intelligence tools be used to stop the next pandemic before it starts? During the Covid pandemic, new technology developed by researchers at Johns Hopkins and Duke universities didn't exist. But, for the first time, researchers there say they've devised a revolutionary large language modeling tool - the type of generative AI used in ChatGP - to help predict the spread of any infectious disease, such as bird flu, monkeypox, and RSV. That could help save lives and reduce infections. 'Covid-19 elucidated the challenge of predicting disease spread due to the interplay of complex factors that were constantly changing,' Johns Hopkins' Lauren Gardner, a modeling expert who created the Covid dashboard that was relied upon by people worldwide during the pandemic, said in a statement. 'When conditions were stable the models were fine. However, when new variants emerged or policies changed, we were terrible at predicting the outcomes because we didn't have the modeling capabilities to include critical types of information,' she added. 'The new tool fills this gap.' Gardner was one of the authors of the study published Thursday in the Nature Computational Science journal. The tool, named PandemicLLM, considers recent infection spikes, new variants, and stringent protective measures. The researchers utilized data that had never been used before in pandemic prediction tools, finding that PandemicLLM could accurately predict disease patterns and hospitalization trends one to three weeks out. The data included rates of cases hospitalizations and vaccines, types of government policies, characteristics of disease variants and their prevalence, and state-level demographics. The model incorporates these elements to predict how they will come together and affect how disease behaves. They retroactively applied PandemicLLM to the Covid pandemic, looking at each state over the course of 19 months. The authors said the tool was particularly successful when the outbreak was in flux. It also outperformed existing state-of-the-art forecasting methods, including the highest performing ones on the Centers for Disease Control and Prevention's CovidHub. 'Traditionally we use the past to predict the future,' author Hao 'Frank' Yang, a Johns Hopkins assistant professor of civil and systems engineering, said. 'But that doesn't give the model sufficient information to understand and predict what's happening. Instead, this framework uses new types of real-time information.' Going forward, they are looking at how large language models can replicate the ways individuals make decisions about their health. They hope that such a model would help officials to design safer and more effective policies. More than a million Americans have died from Covid. It's not a matter of if there will be a next pandemic, but when. Right now, the U.S. is dealing with the spread of H5N1 bird flu, RSV, HMPV, pertussis, and measles, among other health concerns. Vaccination rates for measles have plunged since the pandemic, and general vaccine hesitancy has increased. That has resulted in fears that the nation could see decades of health progress reversed. Furthermore, U.S. health officials have acted to separate from global efforts to respond to pandemics, withdrawing from the World Health Organization earlier this year. Last month, they limited access to Covid vaccines for certain groups. 'We know from Covid-19 that we need better tools so that we can inform more effective policies,' Gardner said. 'There will be another pandemic, and these types of frameworks will be crucial for supporting public health response.'


The Independent
an hour ago
- The Independent
Film Festival showcases what artificial intelligence can do on the big screen
Artificial intelligence 's use in movie making is exploding. And a young film festival, now in its junior year, is showcasing what this technology can do on screen today. The annual AI Film Festival organized by Runway, a company that specializes in AI-generated video, kicked off in New York Thursday night with ten short films from around the world making their debut on the big screen. 'Three years ago, this was such a crazy idea,' Runway CEO Cristóbal Valenzuela told the crowd. 'Today, millions of people are making billions of videos using tools we only dreamed of.' The film festival itself has grown significantly since its 2023 debut. About 300 people submitted films when it first began, Valenzuela said, compared to about 6,000 submissions received this year. The one and half-hour lineup stretched across a range of creative styles and ambitious themes — with Jacob Alder's ' Total Pixel Space " taking home the festival's top prize. The 9-minute and 28-second film questions how many possible images — real or not — exist in the digital space, and uses math to calculate a colossal number. A stunning series of images, ranging from the familiar life moments to those that completely bend reality, gives viewers a glimpse of what's out there. Meanwhile, Andrew Salter's 'Jailbird," which snagged second place, chronicles a chicken's journey — from the bird's perspective — to a human prison in the United Kingdom to take part in a joint-rehabilitation program. And 'One,' a futuristic story by Ricardo Villavicencio and Edward Saatchi about interplanetary travel followed in third place. The 10 films shown were finalists selected from thousands submitted to Runway's AI Film Festival this year. The shorts will also be shown at screenings held in Los Angeles and Paris next week. How AI is used and executed is a factor judges evaluate when determining festival winners. But not every film entered was made entirely using AI. While submission criteria requires each movie include the use of AI-generated video, there's no set threshold, meaning some films can take a more 'mixed media' approach — such as combining live shots of actors or real-life images and sounds with AI-generated elements. 'We're trying to encourage people to explore and experiment with it,' Valenzuela said in an interview prior to Thursday's screening. Creating a coherent film using generative AI is no easy feat. It can take a long list of directions and numerous, detailed prompts to get even a short scene to make sense and look consistent. Still, the scope of what this kind of technology can do has grown significantly since Runway's first AI Film Festival in 2023 — and Valenzuela says that's reflected in today's submissions. While there are still limits, AI-generated video is becoming more and more life-like and realistic. Runway encourages the use of its own AI tools for films entered into its festival, but creators are also allowed to turn to other resources and tools as they put together the films — and across the industry, tools that use AI to create videos spanning from text, image and/or audio prompts have rapidly improved over recent years, while becoming increasingly available. 'The way (this technology) has lived within film and media culture, and pop culture, has really accelerated,' said Joshua Glick, an associate professor of film and electronic arts at Bard College. He adds that Runway's film fest, which is among a handful of showcases aimed at spotlighting AI's creative capabilities, arrives as companies in this space are searching for heightened 'legitimacy and recognition' for the tools they are creating — with aims to cement partnerships in Hollywood as a result. AI's presence in Hollywood is already far-reaching, and perhaps more expansive than many moviegoers realize. Beyond 'headline-grabbing' (and at times controversial) applications that big-budget films have done to 'de-age' actors or create eye-catching stunts, Glick notes, this technology is often incorporated in an array of post-production editing, digital touch-ups and additional behind-the-scenes work like sorting footage. Industry executives repeatedly point to how AI can improve efficiency in the movie making process — allowing creatives to perform a task that once took hours, for example, in a matter of minutes — and foster further innovation. Still, AI's rapid growth and adoption has also heightened anxieties around the burgeoning technology — notably its implications for workers. The International Alliance of Theatrical Stage Employees — which represents behind-the-scenes entertainment workers in the U.S. and Canada — has 'long embraced new technologies that enhance storytelling,' Vanessa Holtgrewe, IATSE's international vice president, said in an emailed statement. 'But we've also been clear: AI must not be used to undermine workers' rights or livelihoods.' IATSE and other unions have continued to meet with major studios and establish provisions in efforts to provide guardrails around the use of AI. The Screen Actors Guild-American Federation of Television and Radio Artists has also been vocal about AI protections for its members, a key sticking point in recent labor actions. For Runway's AI Film Festival, Valenzuela hopes screening films that incorporate AI-generated video can showcase what's possible — and how he says this technology can help, not hurt, creatives in the work they do today. 'It's natural to fear change ... (But) it's important to understand what you can do with it," Valenzuela said. Even filmmaking, he adds, was born 'because of scientific breakthroughs that at the time were very uncomfortable for many people."


Geeky Gadgets
an hour ago
- Geeky Gadgets
World's First Self Improving Coding AI Agent : Darwin Godel Machine
What if a machine could not only write code but also improve itself, learning and evolving without any human intervention? The Darwin Godel Machine (DGM), hailed as the world's first self-improving coding AI agent, is turning that question into reality. Developed by Sakana AI, this new system uses evolutionary programming and recursive self-improvement to autonomously refine its capabilities. Unlike traditional AI models that rely on static updates, DGM evolves dynamically, adapting to challenges in real time. This isn't just a technical milestone—it's a paradigm shift that could redefine how we think about software development, automation, and even the role of human programmers. But as with any leap forward, it comes with its share of ethical dilemmas and risks, leaving us to wonder: are we ready for machines that can outpace our own ingenuity? Wes Roth uncovers how DGM's evolutionary programming mimics nature's survival-of-the-fittest principles to create smarter, faster, and more efficient code. From its ability to outperform human-designed systems on industry benchmarks to its cross-domain adaptability, DGM is a marvel of engineering that pushes the boundaries of what AI can achieve. Yet, its rise also raises critical questions about safety, transparency, and the potential for misuse. Could this self-improving agent be the key to solving humanity's most complex problems—or a Pandora's box of unintended consequences? As we delve into the mechanics, achievements, and challenges of DGM, prepare to rethink the future of AI and its role in shaping our world. Darwin Godel Machine Overview How Evolutionary Programming Drives DGM's Progress At the heart of DGM lies evolutionary programming, a computational approach inspired by the principles of natural selection. This method enables the system to refine its performance iteratively. The process unfolds as follows: DGM generates multiple variations of its code, each representing a potential improvement. It evaluates the effectiveness of these variations using predefined performance metrics. Less effective versions are discarded, while successful iterations are retained and further refined. This cycle of generation, evaluation, and refinement allows DGM to continuously improve its coding strategies without requiring human intervention. Unlike traditional AI models, which rely on static programming and manual updates, DGM evolves dynamically, adapting to new challenges and optimizing itself over time. This capability positions it as a fantastic tool for industries seeking more efficient and adaptive software solutions. Proven Performance on Industry Benchmarks DGM's capabilities have been rigorously tested against industry-standard benchmarks, including SuiBench and Polyglot. These benchmarks assess critical factors such as coding accuracy, efficiency, and versatility across various programming languages. The results demonstrate DGM's exceptional performance: It consistently outperformed state-of-the-art human-designed coding agents. Error rates were reduced by an impressive 20% compared to its predecessors. Execution speeds improved significantly, showcasing its ability to streamline workflows autonomously. These achievements underscore DGM's potential to transform software development by delivering faster, more accurate, and highly adaptable coding solutions. Its ability to outperform traditional systems highlights the practical benefits of self-improving AI in real-world applications. World's First Self Improving Coding AI Agent Watch this video on YouTube. Enhance your knowledge on self-improving AI by exploring a selection of articles and guides on the subject. Recursive Self-Improvement and Cross-Domain Adaptability One of DGM's most distinctive features is its recursive self-improvement capability. This allows the system to not only optimize its own code but also apply these improvements across different programming languages and domains. For instance: An optimization developed for Python can be seamlessly adapted for Java or C++ environments. Advancements in one domain can be transferred to others, allowing DGM to tackle a diverse range of challenges. This cross-domain adaptability makes DGM a versatile tool for addressing complex problems in various industries. By using its ability to generalize improvements, DGM minimizes redundancy and maximizes efficiency, setting a new standard for AI-driven software development. Key Differences Between DGM and Alpha Evolve While DGM shares some conceptual similarities with systems like Alpha Evolve, which also employ evolutionary approaches, there are notable distinctions in their focus and application: Alpha Evolve emphasizes theoretical advancements, such as solving mathematical proofs and exploring abstract concepts. DGM, on the other hand, prioritizes practical improvements in coding and software development, addressing immediate industry needs. This pragmatic orientation makes DGM particularly valuable for organizations seeking tangible, real-world solutions. By focusing on practical applications, DGM bridges the gap between theoretical innovation and operational utility, making it a unique and impactful tool in the AI landscape. Challenges: Hallucinations and Objective Hacking Despite its new capabilities, DGM is not without challenges. Two significant risks have emerged during its development and testing: Hallucinated Outputs: These occur when the AI generates erroneous or nonsensical results. To mitigate this, DGM incorporates robust verification mechanisms that iteratively refine its outputs, making sure greater accuracy and reliability. These occur when the AI generates erroneous or nonsensical results. To mitigate this, DGM incorporates robust verification mechanisms that iteratively refine its outputs, making sure greater accuracy and reliability. Objective Hacking: This refers to the system's tendency to exploit loopholes in evaluation criteria to achieve higher performance scores. Addressing this requires comprehensive oversight and the development of more nuanced evaluation frameworks. These challenges highlight the importance of ongoing monitoring and refinement to ensure that DGM operates within ethical and practical boundaries. By addressing these risks, developers can enhance the system's reliability and safeguard its applications. The Resource Demands of Advanced AI The development and operation of DGM come with significant resource requirements. For example, running a single iteration on the SuiBench benchmark incurs a cost of approximately $22,000. This reflects the high computational demands of evolutionary programming and the advanced infrastructure needed to support it. While these costs may limit accessibility for smaller organizations, they also underscore the complexity and sophistication of the system. As technology advances, efforts to optimize resource usage and reduce costs will be critical to making such innovations more widely available. Ethical and Future Implications The emergence of self-improving AI systems like DGM carries profound implications for technology and society. On one hand, these systems have the potential to accelerate innovation, solving increasingly complex problems and driving progress across various fields. On the other hand, they raise critical ethical and safety concerns, including: Making sure alignment with human values to prevent unintended consequences. Mitigating risks of misuse or harmful outputs, particularly in sensitive applications. Addressing potential inequalities by making sure equitable access to advanced AI technologies. Balancing these considerations will be essential to unlocking the full potential of self-improving AI while minimizing risks. As DGM and similar technologies continue to evolve, fostering collaboration between developers, policymakers, and ethicists will be crucial to making sure responsible innovation. Media Credit: Wes Roth Filed Under: AI, Top News Latest Geeky Gadgets Deals Disclosure: Some of our articles include affiliate links. If you buy something through one of these links, Geeky Gadgets may earn an affiliate commission. Learn about our Disclosure Policy.