The Best Ear Protection for Kids to Wear at Concerts, Fireworks, and Sporting Events

There's a reason your child covers their ears every time you walk past a construction site. Little ears are sensitive—and they're especially vulnerable in the presence of fireworks, race cars, and screaming Taylor Swift fans. According to the Centers for Disease Control and Prevention, prolonged exposure to noise above 85 decibels (dB)—the typical volume of a hair dryer—can cause permanent hearing damage, even for adults. Now imagine your child at a Fourth of July fireworks festival, where pyrotechnic displays can top 150 dB, and you understand why packing ear protection is just as crucial as sunscreen and snacks.
'Our ears are always on,' says Dr. Brian J. Fligor, a pediatric audiologist, author of Understanding Childhood Hearing Loss, and president of Tobias & Battite Hearing Wellness in Boston. 'Hearing is crucial for our language development and navigation of the world. That's why we must protect children's hearing from birth.'
The good news? Today's earmuffs—over-the-ear headsets that help block noise—are lightweight, comfortable, and stylish enough that most kids won't put up a fight over wearing them. They're also designed to lower the decibel level without muffling or distorting sound entirely. (The goal is volume reduction, not total silence.)
Dr. Fligor advises using ear protection any time an event is loud enough to startle a child or requires shouting to be heard. He also recommends it when riding ATVs, snowmobiles, or other powersport vehicles whose engines are not particularly well muffled. And while hearing protection isn't necessary on commercial flights, he absolutely recommends muffs for smaller bush and prop planes, or when attending a jet flyover show. If you're not sure how to gauge the noise risk in a given situation, there's an app for that: the Decibel X sound meter, available for iOS and Android, offers a real-time frequency analyzer for spot checks.
We asked Dr. Fligor, a father of four, along with other travel-savvy parents, about the muffs that work best for their kids and why. Below, the best kid-approved picks for the ultimate ear protection.
FAQ:
What should I look for to find the best ear protection for kids?
Aim for a minimum noise reduction rating (NRR) of 22 to 27 dB for general use, says Dr. Fligor. The NRR tells you roughly how many decibels the muffs subtract: with an ideal fit, a 27 dB rating would bring a 110 dB fireworks burst down to about 83 dB at the ear, just under the threshold for hearing damage. For especially loud environments—like fireworks shows or racing events—higher is better.
What ages need ear protection?
Exposure to loud noise—anything over 85 dB—can cause permanent hearing damage in children and adults alike, which is why it's so essential to protect our hearing from birth onward. Proactive protection for kids is especially important because they are less likely to self-regulate and move away from noise if it gets too loud.
Which type of ear protection is better for kids: earplugs or earmuffs?
For babies, toddlers, and grade-schoolers, over-the-ear muffs are the safest and easiest option. They're more comfortable, stay in place better, and don't pose a choking hazard the way earplugs might. Dr. Fligor advises against earplugs until children are old enough to report accurately on their comfort and effectiveness, typically around age seven or older. For tweens and teens, high-fidelity earplugs like Loop or Etymotic work well because they dampen volume without distorting sound (ideal for concerts).
How can I tell if the ear protection fits correctly?
'Earmuffs should form a snug but gentle seal around the ears without any gapping,' says Dr. Fligor. That means the cups are large enough to fit around the entire ear—including the flap of cartilage around the edge, called the pinna—and sit along the jaw. If the muffs slip forward or the ears poke out, the fit is too loose. If they leave indentations or the child complains about pressure, it's too tight. To double-check the fit, ask your child to shake their head while wearing them: If the earmuffs shift easily or slide off, they're too loose.
Dr. Meter
Noise-canceling earmuffs
$16 (16% off; originally $19)
Amazon
These have been my earmuffs of choice since my three-year-old son, Julian, begrudgingly wore them trackside at the Indy 500 earlier this year. The snug fit took some getting used to (my toddler hates winter hats, too, which is unfortunate considering we live in Minnesota), but he later requested—no, demanded!—the 27 dB muffs during a 20-minute Fourth of July fireworks display in Waunakee, Wisconsin.
Caroline Lewis, a luxury travel advisor in Boston, reported similarly positive experiences with these for her four-year-old son, Grant. 'We use them every year for our town parade, which has a lot of Revolutionary War reenactors shooting off muskets,' she says. Grant also wears the muffs when Lewis's husband uses a blender or vacuums the house. In addition to being comfortable, Lewis says, Grant liked that he could choose his own color. He picked safety yellow so he could 'be like a construction worker.'
Puro Sound Labs
PuroCalm earmuffs
$29
Puro Sound Labs
Designed for ages 3 to 16, these earmuffs offer an NRR of 27 dB. They only come in one color (Halloween orange), but the craftsmanship is top notch. My son has flung them across the room in several fits of iPad-all-done rage, and they still function like new. We've also begun experimenting with Puro's JuniorJams, kid-scaled headphones that cap volume at a safe 85 dB. The built-in mic is helpful for online learning, and the headphones last up to 22 hours before needing a USB-C charge.
Peltor
3M earmuffs
$67
Amazon
Dr. Fligor is a fan of kid muffs made with the same high-quality materials as adult muffs, particularly for activities involving firearms. Peltor has been around for ages, and it's his go-to brand for his own children. 'Comfort is king,' says Dr. Fligor. 'If it's not comfortable, it's not going to be used.' These cushioned muffs are designed for kids ages five and up and feature low-profile cups, a soft wire headband, and protection up to 27 dB.
Alpine
Muffy baby ear protection
$30
Amazon
$35
Alpine
Maria de la Guardia, the Bangkok-based principal director of The Big Picture Bureau LLC, has been using these muffs on her two-and-a-half-year-old daughter, Sophia, since she was six weeks old. The ultra-comfy style is specifically designed for children up to 48 months, with a safe attenuation of 24 dB and an adjustable, non-slip headband that does not put pressure on the fontanelle (the soft spot on a baby's skull). The muffs come in a lovely selection of pastel colors as well as basic black. De la Guardia says Sophia has worn them on numerous flights, during an outdoor concert in Abu Dhabi, and while watching a fireworks display in Malaysia. An 'independent, headstrong toddler,' Sophia even tries to put them on herself. Alpine also makes a Muffy Kids version for ages 5 to 16 with an NRR of 25 dB and an even broader range of colors.
Banz
Baby earmuffs
$30
Banz
$30
Amazon
Sari Bellmer, an herbalist and founder of Heilbron Herbs in Asheville, North Carolina, has owned Banz muffs since her two-and-a-half-year-old daughter, Ursa, was a newborn. 'We were actively remodeling our house when I went into labor—and she still wears them and loves them,' says Bellmer. They came in handy, too, after Hurricane Helene stormed through her region last year and the family was 'running chainsaws nonstop' in the aftermath. The Banz models have a foam-cushioned adjustable headband designed specifically for little ones up to age two, offer an NRR of 26 dB, and come in more than a dozen colors. Banz also makes kids' earmuffs for ages 5 to 10 in a variety of prints, including stars and stripes, graffiti doodles, and butterflies.