
The 28 Biggest U.S. Cities Are Sinking, Satellites Say. Here's Why
A new study using recent satellite data finds that all cities in the United States with populations of more than 600,000 are sinking — and Houston, Texas, the most.
America's most populous cities are finding themselves on increasingly unstable ground, with satellite data revealing that 28 major U.S. cities are sinking, according to new research. About 34 million people, roughly 12% of the U.S. population, are potentially affected. The main culprit is the extraction of water from underground aquifers, which can compact as they are drained.
The research, published in Nature Cities, revealed that metropolises including New York, Dallas-Fort Worth, Houston and Seattle have urban areas that are sinking by between 2 and 10 millimeters per year.
At least 20% of the urban area in each of the 28 cities is sinking, the data suggests, and the problem is far more extensive in most of them: 25 of the 28 cities have at least 65% of their urban area sinking.
A new study using recent satellite data finds that all cities in the United States with populations of more than 600,000 are sinking to one degree or another.
The 28 cities are Austin, Boston, Charlotte, Chicago, Columbus, Dallas, Denver, Detroit, El Paso, Fort Worth, Houston, Indianapolis, Jacksonville, Las Vegas, Los Angeles, Memphis, Nashville, New York, Oklahoma City, Philadelphia, Phoenix, Portland, San Antonio, San Diego, San Francisco, San Jose, Seattle and Washington, D.C.
The subsidence puts more than 29,000 buildings in high and very high damage-risk areas, the authors say, with even a slight downward shift of the land threatening the structural integrity of buildings, roads, bridges and dams.
The research is based on radar measurements from the Sentinel-1 constellation of satellites, which use their synthetic aperture radar instruments to measure land-elevation changes in all U.S. cities with populations exceeding 600,000. The data was used to create high-resolution maps of subsidence in the 28 most populous U.S. cities. The same technology is often used to monitor changes in sea ice, oil spills and land use.
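To illustrate how such maps are derived at the pixel level: repeat satellite passes yield a displacement time series for every patch of ground, and the local subsidence rate is essentially the slope of a line fitted through that series. Below is a minimal, hypothetical sketch in Python; the data are invented, and the study's actual InSAR processing also has to handle atmospheric noise, phase unwrapping and reference calibration.

```python
import numpy as np

def subsidence_rate_mm_per_year(times_years, displacement_mm):
    """Least-squares linear trend through a displacement time series;
    a negative slope means the ground is sinking."""
    slope, _intercept = np.polyfit(times_years, displacement_mm, deg=1)
    return slope

# Hypothetical pixel: ~6 years of passes every 12 days,
# sinking at ~4 mm/yr with measurement noise added.
rng = np.random.default_rng(0)
t = np.arange(0, 6, 12 / 365)                # observation epochs, in years
d = -4.0 * t + rng.normal(0, 1.5, t.size)    # vertical displacement, in mm

print(f"estimated rate: {subsidence_rate_mm_per_year(t, d):+.1f} mm/yr")
```

Repeating such a fit for millions of radar pixels, then gridding the slopes, yields the kind of city-wide velocity map the study reports.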
"A lot of small changes will build up over time, magnifying weak spots within urban systems and heighten flood risks," said Leonard Ohenhen, lead author and a former graduate student at Virginia Tech's Earth Observation and Innovation Lab.
The fastest-sinking city is Houston, where more than 12% of the city's area is subsiding by more than 10 millimeters (about two-fifths of an inch) per year, with some localized spots sinking as much as 5 centimeters (2 inches) per year. Two other Texas cities, Fort Worth and Dallas, are not far behind. Localized fast-sinking zones elsewhere include areas around New York's LaGuardia Airport and parts of Las Vegas, Washington, D.C. and San Francisco.
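A quick unit check on the figures above, using 25.4 millimeters to the inch:

```python
MM_PER_INCH = 25.4

# Rates mentioned in the article: 2 mm/yr, 10 mm/yr and 5 cm/yr (50 mm/yr)
for mm_per_year in (2, 10, 50):
    print(f"{mm_per_year} mm/yr = {mm_per_year / MM_PER_INCH:.2f} in/yr")
```

This confirms that 10 millimeters per year is roughly two-fifths of an inch and 5 centimeters per year is about 2 inches.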
At the other end of the scale, New York, Chicago, Seattle, Denver [and five other cities] are sinking by the smallest amounts, at about 2 millimeters per year.
Much of New York City is slowly sinking, with some hot spots, especially around LaGuardia Airport and parts of Jamaica Bay and Staten Island.
In terms of population, eight cities (New York, Chicago, Los Angeles, Phoenix, Houston, Philadelphia, San Antonio and Dallas) account for more than 60% of the people living on sinking land. These same eight cities have had more than 90 significant floods since 2000, the authors state, with lowering topography one of the contributing causes.
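As a rough illustration of the exposure arithmetic behind a figure like that: given a gridded population layer and a map of vertical land velocity, the population on sinking land is the sum of people in cells whose rate crosses a chosen threshold. The sketch below uses invented numbers and is not the study's method, just the general idea.

```python
import numpy as np

def exposed_population(population_grid, rate_grid, threshold_mm_yr=-2.0):
    """People living in cells that sink faster than the threshold
    (rates are vertical velocities, so sinking is negative)."""
    sinking = rate_grid <= threshold_mm_yr
    return int(population_grid[sinking].sum())

# Hypothetical 3x3 city tile: residents per cell and velocity in mm/yr
pop = np.array([[120, 300,  80],
                [ 50, 400, 220],
                [ 10,  90, 150]]) * 1_000
rate = np.array([[-4.1, -2.5,  0.3],
                 [-1.0, -3.2, -2.1],
                 [ 0.8, -0.5, -6.0]])

print(f"{exposed_population(pop, rate):,} people on sinking land")
```

Summing such counts city by city, then ranking them, is how a handful of large metros can come to dominate the national exposure total.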
The cause in 80% of cases, according to the researchers, is the extraction of groundwater, with fast-growing cities drawing freshwater from aquifers faster than the aquifers can be replenished. In Texas, the problem is exacerbated by the pumping of oil and gas, the paper says. "As cities continue to grow, we will see more cities expand into subsiding regions," said Ohenhen. "Over time, this subsidence can produce stresses on infrastructure that will go past their safety limit."
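The physical link between pumping and sinking can be approximated with a standard one-dimensional relation from the land-subsidence literature, in which compaction is roughly the decline in aquifer head multiplied by a skeletal storage coefficient (Δb ≈ S_sk · Δh). The toy calculation below uses purely illustrative values, not numbers from the study.

```python
def compaction_mm_per_year(head_decline_m_per_year, skeletal_storage_coeff):
    """One-dimensional aquifer-system compaction estimate,
    delta_b = S_sk * delta_h, converted from meters to millimeters.
    Both inputs are illustrative, not values from the paper."""
    return skeletal_storage_coeff * head_decline_m_per_year * 1_000

# Illustrative: 2 m/yr of head decline and an inelastic S_sk of 0.003
print(f"~{compaction_mm_per_year(2.0, 3e-3):.0f} mm/yr of compaction")
```

At a few millimeters per year, decades of such compaction add up to the kind of cumulative sinking the study maps.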
The authors suggest managing groundwater extraction more carefully, building more resilient structures and infrastructure, and expanding monitoring. "As opposed to just saying it's a problem, we can respond, address, mitigate, adapt," said Ohenhen. "We have to move to solutions."
In 2023, scientists reported that New York City's more than one million buildings — and relentless construction — may be causing the city to subside, exacerbating the risk of future flooding.
Wishing you clear skies and wide eyes.