Massage Guns Are a Fitness Staple—but Do They Actually Deliver?

Yahoo | 08-07-2025
You'd be hard-pressed to walk into a gym without seeing someone trying to fix their sore muscles. One guy's foam rolling his calves like his life depends on it, someone else is aggressively stretching their shoulders with a resistance band, and in the back, yep, that's a guy mid-cold plunge looking like he regrets all of his life choices.
From high-tech massage tools to compression boots that look like they belong in a sci-fi movie, the recovery market is overflowing with gadgets that promise to beat soreness and speed up your bounce-back. But just because your favorite gym bro swears by it doesn't mean it actually works—or is worth your hard-earned cash. Case in point: massage guns. A new study suggests they might be more hype than help.
The study, published in The Journal of Strength and Conditioning Research, followed 20 healthy, active adults as they performed an intense biceps workout with their non-dominant arm, designed to create soreness and muscle damage. The participants were then split into two groups: one that used a massage gun and one that received no recovery treatment at all. The massage gun group used the tool for five minutes right after the workout, then again 24 and 48 hours later.
For the next week, the researchers tracked how sore each participant's arms felt, how much their range of motion changed, how swollen their muscles were, how strong they stayed, and how well their muscles fired.
The results? Everyone got sore, and everyone recovered to the same extent, whether they used the massage gun or not. That means the massage guns didn't reduce soreness, improve strength, or speed up recovery any more than doing nothing at all.
This study challenges the widely held belief that massage guns are an effective tool for muscle recovery and soreness relief. While they might feel good in the moment (which is a perfectly valid reason to use them), the findings suggest they don't actually help you bounce back any faster after a tough workout.
Massage Guns Are a Fitness Staple—but Do They Actually Deliver? first appeared on Men's Journal on Jul 7, 2025

Related Articles

Laird Superfood Expands Distribution with New Costco Availability Across Key U.S. Regions

Yahoo

Strategic Product Expansion Targets New Markets and Growing Demand for Clean, Adaptogenic Beverages

BOULDER, Colo., Aug. 13, 2025 /PRNewswire/ -- Laird Superfood, Inc. (NYSE: LSF), a leader in functional coffee, creamers, and superfood products made with simple, minimally processed ingredients, is proud to announce expanded availability of its fan-favorite products at Costco locations across key U.S. regions. This milestone marks a significant step forward in the company's mission to make clean, functional, and delicious superfood products more accessible nationwide.

Effective July 2025, Sweet and Creamy Superfood Creamer is now available every day in Costco warehouses throughout Los Angeles, San Diego, Hawaii, Arizona, and Colorado, and across the Southeast: North Carolina, South Carolina, Tennessee, Alabama, Mississippi, Georgia, Florida, and Puerto Rico. This top-selling, dairy-free creamer is crafted from real coconut milk, a natural source of MCTs, and offers a smooth, subtly sweet taste that turns everyday coffee into an experience your body craves.

Perform Superfood Coffee, designed to fuel sustained energy and mental focus, is now available at Costco for a limited time in Los Angeles and Hawaii. With a proprietary blend of functional mushrooms and premium, organic, high-altitude coffee, Perform Coffee supports energy and cognitive performance with every sip.

"Our retail expansion at Costco represents a key growth milestone for Laird Superfood," said Jason Vieth, CEO of Laird Superfood. "We're thrilled to bring our clean, functional products to even more Costco members, especially in regions where demand for functional, better-for-you beverages is booming."

Laird Superfood continues to deepen its retail presence through strategic partnerships that align with its values of health, sustainability, and innovation. These new placements further solidify the company's position as a go-to brand in the functional food and beverage space. To find the nearest Costco carrying Laird Superfood products, check your local warehouse for availability.

About Laird Superfood®: Laird Superfood is a minimally processed food brand dedicated to fueling active lifestyles with superfood products that support energy, endurance, and overall well-being. Founded in 2015 by world-renowned big wave surfer Laird Hamilton, the brand was born from his personal mission to find a better morning routine that could improve and sustain his performance while out catching waves. Alongside his wife, former professional beach volleyball legend, bestselling author, and fitness icon Gabby Reece, the brand has expanded from superfood creamers to instant lattes, coffees, bars, prebiotic daily greens, and more. Laird Superfood is committed to offering simple ingredients and minimally processed foods that can help fuel people from sunrise to sunset.

PRESS CONTACT: Alafair Hall, alafair@

SOURCE Laird Superfood

Her 6-Year-Old Son Told Her He Wanted to Die. So She Built an AI Company to Save Him

Gizmodo

The burgeoning world of AI-powered mental health support is a minefield. From chatbots giving dangerously incorrect medical advice to AI companions encouraging self-harm, the headlines are filled with cautionary tales. High-profile apps like and Replika have faced backlash for harmful and inappropriate responses, and academic studies have raised alarms. Two recent studies from Stanford University and Cornell University found that AI chatbots often stigmatize conditions such as alcohol dependence and schizophrenia, respond 'inappropriately' to certain common and 'encourage clients' delusional thinking.' They warned about the risk of over-reliance on AI without human oversight.

But against that backdrop, Hafeezah Muhammad, a Black woman, is building something different. And she's doing it for reasons that are painfully personal.

'In October of 2020, my son, who was six, came to me and told me that he wanted to kill himself,' she recounts, her voice still carrying the weight of that moment. 'My heart broke. I didn't see it coming.' At the time, she was an executive at a national mental health company, someone who knew the system inside and out. Yet she still couldn't get her son, who has a disability and is on Medicaid, into care. 'Only 30% or less of providers even accept Medicaid,' she explains. 'More than 50% of kids in the U.S. now come from multicultural households, and there weren't solutions for us.' She says she was terrified, embarrassed, and worried about the stigma of a child struggling. So she built the thing she couldn't find.

Today, Muhammad is the founder and CEO of Backpack Healthcare, a Maryland-based provider that has served more than 4,000 pediatric patients, most of them on Medicaid. It's a company staking its future on the radical idea that technology can support mental health without replacing the human touch.

On paper, Backpack sounds like many other telehealth startups. In reality, its approach to AI is deliberately pragmatic, focusing on 'boring' but impactful applications that empower human therapists. An algorithm pairs kids with the best possible therapist on the first try (91% of patients stick with their first match). AI also drafts treatment plans and session notes, giving clinicians back hours they used to lose to paperwork. 'Our providers were spending more than 20 hours a week on administrative tasks,' Muhammad explains. 'But they are the editors.' This human-in-the-loop approach is central to Backpack's philosophy.

The most critical differentiator for Backpack lies in its robust ethical guardrails. Its 24/7 AI care companion is represented by 'Zipp,' a friendly cartoon character, a deliberate choice to avoid the dangerous 'illusion of empathy' seen in other chatbots. 'We wanted to make it clear this is a tool, not a human,' Muhammad says. Investor Nans Rivat of Pace Healthcare Capital calls this the trap of 'LLM empathy,' where users 'forget that you're talking to a tool at the end of the day.' He points to cases like where a lack of these guardrails led to 'tragic' outcomes.

Muhammad is also adamant about data privacy. She explains that individual patient data is never shared without explicit, signed consent. However, the company does use aggregated, anonymized data to report on trends, like how quickly a group of patients was scheduled for care, to its partners. More importantly, Backpack uses its internal data to improve clinical outcomes. By tracking metrics like anxiety or depression levels, the system can flag a patient who might need a higher level of care, ensuring the technology serves to get kids better, faster.

Crucially, Backpack's system also includes an immediate crisis detection protocol. If a child types a phrase indicating suicidal ideation, the chatbot instantly replies with crisis hotline numbers and instructions to call 911. Simultaneously, an 'immediate distress message' is sent to Backpack's human crisis response team, who reach out directly to the family. 'We're not trying to replace a therapist,' Rivat says. 'We're adding a tool that didn't exist before, with safety built in.'

Beyond its ethical tech, Backpack is also tackling the national therapist shortage. Unlike doctors, therapists traditionally have to pay out of pocket for the expensive supervision hours required to get licensed. To combat this, Backpack launched its own two-year, paid residency program that covers those costs, creating a pipeline of dedicated, well-trained therapists. More than 500 people apply each year, and the program boasts an impressive 75% retention rate.

In 2021, then-U.S. Surgeon General Dr. Vivek H. Murthy called mental health 'the defining public health issue of our time,' referring to the crisis plaguing young people. Muhammad doesn't dodge the criticism that AI could make things worse. 'Either someone else will build this tech without the right guardrails, or I can, as a mom, make sure it's done right,' she says. Her son is now 11, thriving, and serves as Backpack's 'Chief Child Innovator.'

'If we do our job right, they don't need us forever,' Muhammad says. 'We give them the tools now, so they grow into resilient adults. It's like teaching them to ride a bike. You learn it once, and it becomes part of who you are.'

The quiet ban that could change how AI talks to you

Fast Company

As AI chatbots become ubiquitous, states are looking to put up guardrails around AI and mental health before it's too late. With millions of people turning to AI for advice, chatbots have begun posing as free, instant therapists, a phenomenon that remains almost completely unregulated. In the vacuum of regulation on AI, states are stepping in to quickly erect guardrails where the federal government hasn't.

Earlier this month, Illinois Governor JB Pritzker signed a bill into law that limits the use of AI in therapy services. The bill, the Wellness and Oversight for Psychological Resources Act, blocks the use of AI to 'provide mental health and therapeutic decision-making,' while still allowing licensed mental health professionals to employ AI for administrative tasks like note-taking.

The risks inherent in non-human algorithms doling out mental health guidance are myriad, from encouraging recovering addicts to have a 'small hit of meth' to engaging young users so successfully that they withdraw from their peers. One recent study found that nearly a third of teens find conversations with AI as satisfying or more satisfying than real-life interactions with friends.

States pick up the slack, again

In Illinois, the new law is designed to 'protect patients from unregulated and unqualified AI products, while also protecting the jobs of Illinois' thousands of qualified behavioral health providers,' according to the Illinois Department of Financial & Professional Regulation (IDFPR), which coordinated with lawmakers on the legislation. 'The people of Illinois deserve quality healthcare from real, qualified professionals and not computer programs that pull information from all corners of the internet to generate responses that harm patients,' IDFPR Secretary Mario Treto, Jr. said. Violations of the law can result in a $10,000 fine.

Illinois has a history of successfully regulating new technologies. The state's Biometric Information Privacy Act (BIPA), which governs the use of facial recognition and other biometric systems for Illinois residents, has tripped up many tech companies accustomed to operating with regulatory impunity. That includes Meta, a company that's now all-in on AI, including chatbots like the ones that recently exposed chats some users believed to be private in an open feed.

Earlier this year, Nevada enacted its own set of new regulations on the use of AI in mental health services, blocking AI chatbots from representing themselves as 'capable of or qualified to provide mental or behavioral health care.' The law also prevents schools from using AI to act as a counselor, social worker, or psychologist, or from performing other duties related to the mental health of students. Utah added its own restrictions around the mental health applications of AI chatbots this year as well, though its regulations don't go as far as those in Illinois or Nevada.

The risks are serious

In February, the American Psychological Association met with U.S. regulators to discuss the dangers of AI chatbots pretending to be therapists. The group presented its concerns to an FTC panel, citing a case last year of a 14-year-old in Florida who died by suicide after becoming obsessed with a chatbot made by the company. 'They are actually using algorithms that are antithetical to what a trained clinician would do,' APA Chief Executive Arthur C. Evans Jr. told The New York Times. 'Our concern is that more and more people are going to be harmed. People are going to be misled, and will misunderstand what good psychological care is.'

We're still learning more about those risks. A recent study out of Stanford found that chatbots marketing themselves for therapy often stigmatized users dealing with serious mental health issues and issued responses that could be inappropriate or even dangerous. 'LLM-based systems are being used as companions, confidants, and therapists, and some people see real benefits,' co-author and Stanford assistant professor Nick Haber said. 'But we find significant risks, and I think it's important to lay out the more safety-critical aspects of therapy and to talk about some of these fundamental differences.'
