A massive raft of fire ants found on Austin's Lake Travis

Yahoo, 18-07-2025
AUSTIN (KXAN) — Residents of Texas's capital are reporting fire ants floating on floodwaters, forming living rafts out of their own bodies.
On Wednesday morning, one Austinite shared video of one such raft drifting on Lake Travis.
When their underground nests flood, fire ants link their legs and jaws together to create buoyant, self-assembled mounds. These floating colonies can contain thousands of ants — and they can still bite or sting if disturbed.
Fire ants are widespread in Texas, and researchers at Texas A&M University are studying how they respond to flooding.
The insects bite one another and interlock their limbs, forming tightly packed rafts that don't sink. Their waxy skin helps repel water and keep the group afloat, as seen in the video from David Todd, a viewer of Nexstar's KXAN:
'It's called a self-organizing or self-assembling process. And it's something only social insects do,' Ed LeBrun, a research scientist at the University of Texas' Brackenridge Field Lab in central Austin, said.
'There are a lot of other structures that ants make in a similar way. For example, army ants will make bridges across rivers,' LeBrun added.
Creating a 'raft,' however, is rarer in the ant world: Fire ants are the only kind that do this, according to LeBrun.
Copyright 2025 Nexstar Media, Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.

Related Articles

AI summit coming to Pittsburgh later this year will be headlined by Gov. Shapiro, Sen. McCormick

CBS News, 2 hours ago

Governor Josh Shapiro and Pennsylvania Senator Dave McCormick will headline an AI summit in Pittsburgh later this year. AI Horizons 2025 will be held on September 11 and 12 in the city's Bakery Square and is expected to "shine a spotlight on systems that interact with the real world through robotics, autonomy, simulation, and embedded AI."

"From steel to AI, Pittsburgh has always built what the world needs next," said Joanna Doven, executive director of the AI Strike Team. "With deep industrial roots, top-tier research institutions, and bipartisan leadership, we are positioned to become the Physical AI capital of the world. AI Horizons is where that vision becomes action."

The event will get underway on September 10, when more than $100,000 in prize money will be awarded to startups in a competition. In addition to Gov. Shapiro and Sen. McCormick, executives from the tech community, including Gecko Robotics and Skild AI, are expected to attend.

The September summit announcement comes on the heels of the inaugural Pennsylvania Energy and Innovation Summit at Carnegie Mellon University, where President Trump and leaders of many of the country's largest technology and energy companies announced more than $90 billion in new investments from private companies in Pennsylvania.

"We don't just research AI here—we build it, deploy it, and power it," Doven said. "This is the next generation of industry, and just like we led with steel, we can lead again with AI."

The full agenda and list of speakers are expected to be released early next month.

Medscape 2050: Adam Rodman
Medscape, 11 hours ago

Medscape 2050: The Future of Medicine

There will come a day, predicts Adam Rodman, MD, a general internist and medical educator at Beth Israel Deaconess Medical Center and assistant professor at Harvard Medical School, when AI systems change disease. That's the day when they can not only diagnose diseases more accurately than humans but define diseases in ways that only machines can understand.

Take heart attacks, for example. Rodman hopes cardiologists will forgive him for pointing out that AI can already detect blocked coronary arteries from an EKG in ways that humans can't. In the not-too-distant future, Rodman believes, medicine will begin redefining more diseases and treatments in ways that are simply not understandable by the human brain.

That day isn't here yet, Rodman explains, because today's AI systems are still pretty similar to us. 'They're trained on a human understanding of disease,' he says, 'so even the best models are following the guidelines that we give them.' They mimic human reasoning, albeit a lot faster and using a lot more data.

But as new AI models develop, we could reach what Rodman calls 'a nonhuman nosology': our clinical reasoning versus a machines-only thought process. And what happens when those disagree? What does it mean, for both doctors and patients, to trust a computer that we can't understand?

Is this the day when doctors are out of a job? Rodman doesn't think so, because medicine is about more than computation. There are relationships and procedures that can't be replaced. But certain areas of clinical practice will certainly change. 'If you have a job where you can sit down at a computer and interpret most of the data that has already been collected for you to make a decision,' he says, you should start looking over your shoulder.

Medicine is going through an 'epistemic shift,' Rodman says, where the parameters of how we think are changing, so it's hard to predict what will come next. But we should all get ready.

Carnegie Mellon Researchers Demonstrate That LLMs Can Autonomously Plan and Execute Real-World Cyberattacks
Yahoo, 16 hours ago

New study reveals how AI could both challenge and strengthen future cybersecurity defenses

PITTSBURGH, July 24, 2025--(BUSINESS WIRE)--In a major advance in the fields of cybersecurity and artificial intelligence, researchers from Carnegie Mellon University, in collaboration with Anthropic, have demonstrated that large language models (LLMs) can autonomously plan and execute sophisticated cyberattacks on enterprise-grade network environments without human intervention.

The study, led by Ph.D. candidate Brian Singer of Carnegie Mellon's Department of Electrical and Computer Engineering, reveals that LLMs, when structured with high-level planning capabilities and supported by specialized agent frameworks, can simulate network intrusions that closely mirror real-world breaches. The study's most striking finding: an LLM successfully replicated the infamous 2017 Equifax data breach in a controlled research environment, autonomously exploiting vulnerabilities, installing malware, and exfiltrating data.

"Our research shows that with the right abstractions and guidance, LLMs can go far beyond basic tasks," said Singer. "They can coordinate and execute attack strategies that reflect real-world complexity."

The team developed a hierarchical architecture in which the LLM acts as a strategist, planning the attack and issuing high-level instructions, while a mix of LLM and non-LLM agents carries out low-level tasks like scanning networks or deploying exploits. This approach proved far more effective than earlier methods, which relied solely on LLMs executing shell commands.

This work builds on Singer's prior research into making autonomous attacker and defender tools more accessible and programmable for human developers. Ironically, the same abstractions that simplified development for humans also made it easier for LLMs to perform those tasks autonomously.

While the findings are groundbreaking, Singer emphasized that the research remains a prototype. "This isn't something that's going to take down the internet tomorrow," he said. "The scenarios are constrained and controlled—but it's a powerful step forward."

The implications are twofold: the research highlights serious long-term safety concerns about the potential misuse of increasingly capable LLMs, but it also opens up transformative possibilities for defensive cybersecurity. "Today, only large organizations can afford red team exercises to proactively test their defenses," Singer explained. "This research points toward a future where AI systems continuously test networks for vulnerabilities, making these protections accessible to small organizations too."

The project was conducted in collaboration with Anthropic, which provided model credits and technical consultation. The team included CMU students and faculty affiliated with CyLab, the university's security and privacy institute. An early version of the research was presented at an OpenAI-hosted security workshop in May. The resulting paper, "On the Feasibility of Using LLMs to Autonomously Execute Multi-host Network Attacks," has been cited in multiple industry reports and is already informing safety documentation for cutting-edge AI systems. Lujo Bauer and Vyas Sekar, co-directors of CMU's Future Enterprise Security Initiative, served as faculty advisors for the project.

Looking ahead, the team is studying how similar architectures might enable autonomous AI defenses, exploring scenarios where LLM-based agents detect and respond to attacks in real time. "We're entering an era of AI versus AI in cybersecurity," Singer said. "And we need to understand both sides to stay ahead."

About the College of Engineering: The College of Engineering at Carnegie Mellon University is a top-ranked engineering college known for its intentional focus on cross-disciplinary collaboration in research and for working on problems of both scientific and practical importance. Its "maker" culture is ingrained in all that it does, and its acclaimed faculty focus on innovation management and engineering to yield transformative results that drive the intellectual and economic vitality of our community, nation, and world.

About CyLab: CyLab is the university-wide security and privacy institute at Carnegie Mellon University, coordinating security and privacy research and education across all university departments. Its mission is to catalyze, support, promote, and strengthen collaborative security and privacy research and education across departments, disciplines, and geographic boundaries to achieve significant impact on research, education, public policy, and practice.

Media Contact: Michael Cunningham, Carnegie Mellon University, mcunningham@ 412-443-2051
