The Upsetting Truth About What Wildfire Smoke Does to Your Body

Gizmodo · 3 hours ago

Much of Canada is ablaze again: more than 200 active wildfires have consumed roughly 10,000 square miles (26,000 square kilometers) since January, the Canadian Interagency Forest Fire Centre reported Thursday, June 5.
The fires have pumped massive amounts of smoke across the Canada-U.S. border, degrading air quality as far south as Florida, according to the National Oceanic and Atmospheric Administration (NOAA). As climate change lengthens and intensifies wildfire season in many regions across the world, understanding the dangers of smoke exposure is increasingly important. A wave of new research paints a complex picture of how wildfire smoke affects the body, linking it to startling health outcomes that go far beyond the respiratory system.
'There is an urgent need for research to fully understand the health impacts of wildfire smoke to raise awareness among public and health professionals, as well as to support the development of effective regulations to mitigate the impacts,' Yaguang Wei, assistant professor of environmental medicine at Mount Sinai's Icahn School of Medicine, recently told the Harvard Gazette.
Wei is the lead author of a new study, published in May in the journal Epidemiology, which found that wildfire smoke can damage the lungs and heart for up to three months after the fire is out. He and his colleagues linked this 'medium-term' exposure to increased risks of various cardiorespiratory conditions, including heart disease, stroke, high blood pressure, pneumonia, chronic lung disease, and asthma.
'Even brief exposures from smaller fires that last only a few days can lead to long-lasting health effects,' Wei told the Harvard Gazette.
Infectious fumes
Wildfire smoke is a mixture of gases, air pollutants, water vapor, and fine particulate matter (PM2.5), according to the Environmental Protection Agency (EPA). It contains significant levels of toxic compounds such as polycyclic aromatic hydrocarbons (PAHs) and volatile organic compounds (VOCs), some of which are known carcinogens. Recent studies even suggest that wildfire smoke carries microbial and fungal pathogens.
One such study, published in the ISME Journal in 2021, noted that 80% of microbes found in wildfire smoke samples were still viable. While it's still unclear how these organisms survive the extremely high temperatures in wildfires, researchers do have an idea of how they get into the smoke in the first place. George Thompson, a professor of medicine at the University of California, Davis, who was not involved in the study, told Gizmodo that wildfires pull pathogens from the surrounding soil and vegetation as they draw in air.
'The good news is, most of those bacteria and fungi really don't cause infections [in healthy individuals],' Thompson said. 'We're most concerned for our patients whose immune systems have been impacted already,' such as those receiving chemotherapy or recovering from trauma, he added.
A 2023 study, however, found evidence to suggest that wildfire smoke could raise infection risk among the general population. The findings, published in The Lancet Planetary Health, associated California wildfires with an 18% to 22% increase in invasive fungal infections such as valley fever. Thompson pointed out that the study was based on a large hospital dataset, which is 'a great start,' but further research will need to corroborate this link.
The brain on fire
The most hazardous component of wildfire smoke is not pathogens, but PM2.5. These minuscule particles penetrate deep inside the lungs and wreak havoc on the respiratory system. Previous research has shown that the tiniest, ultrafine particles can pass from the lungs directly into the bloodstream. This can damage blood vessels and trigger harmful inflammation and oxidative stress in various organs, including the brain.
Multiple studies have associated wildfire smoke exposure with incidence of dementia. Last year, research published in JAMA Neurology analyzed health data from more than 1.2 million Southern Californians aged 60 and older, and found a significant link between long-term exposure to wildfire-related PM2.5 and a heightened risk of dementia.
Specifically, every 1 microgram per cubic meter increase in the three-year average of wildfire PM2.5 raised the odds of a dementia diagnosis by 18%. In comparison, the same increase in PM2.5 from non-wildfire sources was linked to only a 1% greater risk of developing dementia.
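To put those per-microgram figures in perspective, here is a minimal sketch of how such odds ratios are commonly compounded over larger exposure differences. It assumes the reported values come from a standard log-linear (logistic) dose-response model, and the 3 micrograms per cubic meter difference used below is a hypothetical illustration, not a number from the study.

```python
# Minimal sketch: compounding the per-microgram odds ratios cited above
# (1.18 per 1 ug/m3 of wildfire PM2.5, 1.01 per 1 ug/m3 of non-wildfire
# PM2.5). Assumes a log-linear dose-response, the usual interpretation of
# logistic-regression odds ratios; the 3 ug/m3 exposure difference is a
# hypothetical example, not a value reported in the study.

def compounded_odds_ratio(or_per_unit: float, delta_ug_m3: float) -> float:
    """Odds ratio for a delta_ug_m3 change, given the per-microgram OR."""
    return or_per_unit ** delta_ug_m3

delta = 3.0  # hypothetical increase in the 3-year average PM2.5 (ug/m3)
print(f"Wildfire PM2.5 (+{delta} ug/m3):     OR ~ {compounded_odds_ratio(1.18, delta):.2f}")  # ~1.64
print(f"Non-wildfire PM2.5 (+{delta} ug/m3): OR ~ {compounded_odds_ratio(1.01, delta):.2f}")  # ~1.03
```

Under that assumption, the same few-microgram bump in long-term exposure compounds to roughly a 64% increase in the odds of a dementia diagnosis when the particles come from wildfires, versus about 3% otherwise.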
'I was expecting for us to see an association between wildfire smoke exposure and dementia,' lead author Holly Elser, an epidemiologist and resident physician in neurology at the University of Pennsylvania, told the Los Angeles Times in 2024. 'But the fact we see so much stronger of an association for wildfire as compared to non-wildfire smoke exposure was kind of surprising.'
Psychological fallout
Other studies have linked wildfire smoke to adverse psychological outcomes. Research published in JAMA Network Open in April analyzed data on wildfire PM2.5 levels and mental health-related emergency department visits throughout California between July and December 2020, the state's worst wildfire season on record. The study found that wildfire smoke correlated with a significant spike in mental health emergency department visits for up to seven days post-exposure.
'Our study suggests that—in addition to the trauma a wildfire can induce—smoke itself may play a direct role in worsening mental health conditions like depression, anxiety, and mood disorders,' co-author Kari Nadeau, a physician-scientist at the Harvard T.H. Chan School of Public Health, said in a university statement.
Questions remain
All of this research demonstrates that wildfire smoke is more than just a respiratory hazard. But experts are still in the early stages of unraveling its complex health impacts—particularly in terms of mental health, Angela Yao, a senior scientist with the Environmental Health Services at the B.C. Centre for Disease Control in Canada, told Gizmodo.
Many questions remain unanswered, she said. For example, 'How do you disentangle the impact of smoke from the impact of the fire itself?' Future studies will need to investigate these confounding factors. But, 'even with the current evidence that we have—it already gives us confidence that we should take a lot of action,' she added.
To protect yourself from the hazards of wildfire smoke, Yao recommended limiting the length and intensity of outdoor activity. 'The harder you breathe, the more smoke you inhale,' she said. If you must go outside, wearing an N95 or P100 respirator can reduce your smoke exposure, according to the EPA.
Keep windows and doors shut to keep indoor air clean. It's also important to make sure your home's HVAC system is running properly, Yao added. If you don't have central air filtration, you can purchase a portable air cleaner or build your own from a furnace filter and a box fan.
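For a rough sense of whether a portable or DIY cleaner is sized to a given room, the sketch below converts a fan's clean air delivery rate into air changes per hour; the 200 CFM rating and the room dimensions are hypothetical placeholders, not figures from Yao or the EPA.

```python
# Minimal sketch: estimating air changes per hour (ACH) for a portable or
# DIY box-fan air cleaner. The clean air delivery rate (CADR) and the room
# dimensions below are assumptions for illustration only; check the rating
# of your own fan and filter combination.

CFM_TO_M3_PER_H = 1.699  # 1 cubic foot per minute is about 1.699 m^3/hour

def air_changes_per_hour(cadr_cfm: float, floor_area_m2: float, ceiling_height_m: float) -> float:
    """ACH = clean air delivered per hour divided by room volume."""
    cadr_m3_per_h = cadr_cfm * CFM_TO_M3_PER_H
    room_volume_m3 = floor_area_m2 * ceiling_height_m
    return cadr_m3_per_h / room_volume_m3

# Hypothetical example: a 200 CFM cleaner in a 20 m^2 bedroom with 2.4 m ceilings
print(f"Estimated air changes per hour: {air_changes_per_hour(200, 20, 2.4):.1f}")  # ~7
```

More air changes per hour means smoke particles are filtered out of the room faster, which is why a cleaner that seems modest for a large living room can still do real work in a small bedroom.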
As wildfire seasons grow longer and more severe, taking steps to protect yourself and your family from smoke has never been more critical. Experts still have a long way to go toward fully understanding the risks of wildfire smoke exposure, but one thing is clear: these hazards aren't going away any time soon.
