
'MythBusters' star Adam Savage explores longevity and life hacks: 'There's no magic secret'
Savage, now a YouTube creator and head of the channel Tested, has partnered with health technology company Medtronic to discuss longevity. While not a researcher himself, he has taken a deep dive into scientific insights from experts and reflected on his own experiences.
"Longevity has always been a fascination for me," Savage told Fox News Digital in an exclusive interview. "I mean, who doesn't want to know how to live better and maybe even longer? But the real question is what actually works?"
He credits his "MythBusters" experience with fueling his passion for scientific exploration.
"Making that show legitimized the practice of science and engineering to me," Savage said. "It made me realize how much of our world can be tested, questioned and improved through experimentation."
Through his discussions with people on the street for Medtronic, Savage has uncovered key lifestyle factors affecting longevity. He noted a cultural shift in what we consider to be old age, highlighting that people today consider themselves "young-ish" for longer.
Savage also pointed to Blue Zones, regions known for long life expectancy, but questioned whether longevity there stems simply from location or cultural practices.
"We assume people in these areas live longer because of where they are, but what if it's really just the way they live? That's the part that fascinates me," he said. AI ENABLES PARALYZED MAN TO CONTROL ROBOTIC ARM WITH BRAIN SIGNALS
"There's no magic secret. It's a mix of daily habits — what you eat, how you move, how you interact with your community, how you handle stress. All those things matter."
Savage has taken a personal interest in testing different longevity strategies in his own life. He spoke candidly with Fox News Digital about his journey with intermittent fasting, which helped him lose 25 pounds and eliminate sleep apnea.
"It's crazy how much of a difference it made," he said. "I didn't just lose weight. I felt sharper, I slept better and I stopped snoring. It was like flipping a switch on my health."
He also reflected on his past smoking habits and what it took to quit.
"I had to admit I wasn't smoking for enjoyment," Savage said. "I was just doing it out of habit. Once I realized that, it was easy to quit."
On alcohol, Savage dismissed the idea of a universal approach, arguing that studies conflict. While he personally cut back, he emphasized that people shouldn't feel guilty about their lifestyle choices.
"I'm a big believer in not feeling guilty about the things that you do to the core, whether it's smoking, whether it's watching something dumb or puzzling for 100 hours at a time," he said. "I don't care about any of those. We all do these things to sort of bring relaxation and down regulate. I just think that alcohol is an especially poor down regulator in the final analysis."
Beyond lifestyle choices, medical advancements are playing an increasingly critical role in extending both lifespan and "healthspan," the years we live without serious disease. Medtronic, which focuses on healthcare technology globally, has developed medical devices designed to manage chronic conditions, improve heart health and advance minimally invasive surgeries.
According to Medtronic, as people live longer, the focus is shifting toward enhancing not just lifespan but quality of life. The company's latest innovations include artificial intelligence-driven healthcare monitoring, robotic-assisted surgeries and advanced pacemakers, all aimed at improving long-term health outcomes.
Savage also spoke about the psychological aspects of aging, emphasizing that mindset and community play a significant role in longevity.
Medtronic claims the first person to live to 150 may already have been born. When asked if there is an upper limit to human life, Savage replied, "I think right now 150 is a very realistic target to be shooting for and to be discussing."
"That's really what science foreshadowing is," added Savage. "It's about asking these questions and seeing, 'OK, what numbers are unrealistic.' I think 40 years ago, 150 would have seemed radically unrealistic. Today it seems more realistic, and I think it's entirely reasonable that, let's say, by 2040, we may all have a different cultural answer to that question."

Related Articles


Medscape
AI Detects Missed Interval Breast Cancer on Mammograms
TOPLINE: An artificial intelligence (AI) system flagged high-risk areas on mammograms for potentially missed interval breast cancers (IBCs), which radiologists had also retrospectively identified as abnormal. Moreover, the AI detected a substantial number of IBCs that manual review had overlooked.

METHODOLOGY: Researchers conducted a retrospective analysis of 119 IBC screening mammograms of women (mean age, 57.3 years) with high breast density (Breast Imaging Reporting and Data System [BI-RADS] c/d, 63.0%), using data retrieved from the Cancer Registries of Eastern Switzerland and Grisons-Glarus databases. A recorded tumour was classified as an IBC when an invasive or in situ BC was diagnosed within 24 months after a normal screening mammogram. Three radiologists retrospectively assessed the mammograms for visible signs of BC, which were then classified as either potentially missed IBCs or IBCs without retrospective abnormalities, on the basis of radiologists' consensus conference recommendations. An AI system generated two scores (on a scale of 0 to 100): a case score reflecting the likelihood that the mammogram currently harbours cancer and a risk score estimating the probability of a BC diagnosis within 2 years.

TAKEAWAY: Radiologists classified 68.9% of IBCs as having no retrospective abnormalities and assigned significantly higher BI-RADS scores to the remaining 31.1% of potentially missed IBCs (P < .05). Potentially missed IBCs received significantly higher AI case scores (mean, 54.1 vs 23.1; P < .05) and were more often assigned to a higher risk category (48.7% vs 14.6%; P < .05) than IBCs without retrospective abnormalities. Of all IBC cases, 46.2% received an AI case score > 25, 25.2% scored > 50, and 13.4% scored > 75. Potentially missed IBCs spanned a wide range of case and risk scores, whereas IBCs without retrospective abnormalities clustered at low case and risk scores. Specifically, 73.0% of potentially missed IBCs vs 34.1% of IBCs without retrospective abnormalities had case scores > 25, 51.4% vs 13.4% had case scores > 50, and 29.7% vs 6.1% had case scores > 75.

IN PRACTICE: "Our research highlights that an AI system can identify BC signs in relevant portions of IBC screening mammograms and thus potentially reduce the number of IBCs in an MSP [mammography screening program] that currently does not utilize an AI system," the authors of the study concluded, adding that "it can identify some IBCs that are not visible to humans (IBCs without retrospective abnormalities)."

SOURCE: The study was led by Jonas Subelack, Chair of Health Economics, Policy and Management, School of Medicine, University of St. Gallen, St. Gallen, Switzerland. It was published online on August 4, 2025, in European Radiology.

LIMITATIONS: The retrospective study design inherently limited causal conclusions. Without access to diagnostic mammograms or the detailed position of the BC, the researchers could not evaluate whether AI-marked lesions corresponded to later detected BCs.

DISCLOSURES: This research was funded by the Cancer League of Eastern Switzerland. One author reported receiving consulting and speaker fees from iCAD. This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication.
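As a rough illustration of the threshold breakdown reported above, the short sketch below groups AI case scores (on the study's 0-to-100 scale) by the > 25, > 50 and > 75 cutoffs. It is a minimal sketch only: the scores, names and printed percentages are hypothetical and are not the study's software or data.

# Illustrative sketch only: invented case scores grouped by the cutoffs
# reported in the summary (> 25, > 50, > 75); not the study's software.
from collections import Counter

THRESHOLDS = [(75, "> 75"), (50, "> 50"), (25, "> 25")]

def bucket(case_score: float) -> str:
    # Return the highest cutoff the score exceeds, else "<= 25".
    for cutoff, label in THRESHOLDS:
        if case_score > cutoff:
            return label
    return "<= 25"

# Hypothetical case scores for a handful of screening mammograms.
example_scores = [12.0, 28.5, 54.1, 81.3, 23.1]
counts = Counter(bucket(s) for s in example_scores)
for label in ("> 75", "> 50", "> 25", "<= 25"):
    n = counts.get(label, 0)
    print(f"case score {label}: {n} of {len(example_scores)} ({100.0 * n / len(example_scores):.1f}%)")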


Bloomberg
AI Music Is Ubiquitous and Getting Harder to Spot
Welcome to Tech In Depth, our daily newsletter about the business of tech from Bloomberg's journalists around the world. Today, Ellen Huet checks in on AI-generated music and finds its rate of improvement striking. Whoop's defiance: Whoop, the maker of screen-less fitness trackers, refused to disable its blood-pressure monitoring tool despite a request from the US FDA. The company argues its MG wrist band is not a medical product that requires regulatory oversight.
Yahoo
I mentally unraveled. ChatGPT offered me tireless compassion.
That winter of my high school freshman year, I unraveled. My stress levels skyrocketed. Despite my A-studded report card, I'd stare at an essay prompt for hours, paralyzed. I wasn't showering. I wasn't sleeping. At 1 a.m. or 2 a.m., I'd be awake, bingeing on webtoons. I wanted quick relief. I turned to ChatGPT.

If you had asked me two years ago whether I would use artificial intelligence for emotional support, I would have looked at you like you were an idiot. But, over time, I often found that the only place where I could open up was AI. It has helped me deal with myself in my darkest moments, which shouldn't have been true. But it is.

That's why, even though I wouldn't recommend using ChatGPT specifically for mental health due to privacy concerns, I have come to think that AI has the potential to be a mental support for teens like me, who don't feel comfortable talking to our friends or parents about our mental health.

I still remember the time my sister practically begged my South Korean mother for a therapist; my mother started ranting about how only "crazy people" got therapists. I wasn't making the same mistake. Calling a crisis hotline seemed like overkill. I toyed with the idea of seeing my school therapist but decided against it. It felt too daunting to talk face-to-face with a therapist. Online options weren't much better.

I was desperate. What the heck? I finally thought. ChatGPT can answer back, kinda like a therapist. Maybe I should try that out.

'You don't have to justify feeling this way'

So I wrote to ChatGPT, an act that in itself felt cathartic. I wrote paragraphs of misspelled words, bumpy capitalization and unhinged grammar, fingers stumbling, writing about everything – how I couldn't stop reading webtoons, how much I hated school, hated life. I wrote in a way I would have only dared to write to a chatbot.

In response, ChatGPT was tirelessly compassionate. 'I'm sorry you're dealing with that,' it'd start, and just seeing those words made me feel as if a weight had been lifted from my shoulders.

Soon, I even told ChatGPT how sometimes I was scared of my dad because of his biting sarcasm – something I doubt I would have told a therapist about as quickly. ChatGPT responded by explaining that my fear was valid, that harm didn't just come physically but also emotionally. One line struck a chord in me: 'You don't have to justify feeling this way – it's real, and it matters.'

It hit hard because I realized that's what I wanted to hear from my mom my entire life. To her credit, my mom tried. She'd give her best advice, usually something like, 'get over it.' As an immigrant who couldn't express her feelings in English, she learned to swallow them down. But even though I wanted to do the same, I couldn't. Oftentimes, awake at 2 a.m., I'd feel as if I were rotting.

Yet somehow, the first thing to show me emotional intelligence wasn't a person – it was a chatbot. 'Thank you,' I remember writing to ChatGPT. 'I feel a lot calmer now.'

Sometimes the best option is the one that's available

Of course, there are critics who worry that turning to chatbots for emotional support might foster obsession and even exacerbate mental health issues.
Honestly? I don't think artificial intelligence should be a replacement for real mental support systems. But the fear of using AI misses the bigger picture: Many teens don't have access to a "safe place." In March, President Donald Trump revoked $11.4 billion in funding for mental health and addiction treatment. By July, his administration had shut down a suicide hotline for LGBTQ+ youth, leaving countless teens stranded.

According to Dr. Jessica Schleider, associate professor at Northwestern University, about 80% of teens with moderate to severe mental health conditions aren't able to get treatment. The reasons vary, but many reflect my own – not feeling our parents would take us seriously, worrying about stigma or cost.

I am also not alone in my use of ChatGPT: 28% of parents report their children using AI for emotional support. Yes, instead of turning to a trusted therapist or adult, these children were finding real comfort in bots. In a 2024 YouGov survey, 50% of participants said the 24/7 availability of these chatbots was helpful for mental health purposes.

However questionable, sometimes the best option is to turn to the only resource for teens that is available: artificial intelligence. I know for a fact that it's helped me. I can only hope it can help others.

If you or someone you know needs mental health resources and support, please call, text or chat with the 988 Suicide & Crisis Lifeline, or visit its website, for 24/7 access to free and confidential services.

Elizabeth Koo is a student at the Kinkaid School in Houston with a passion for storytelling and a keen interest in culture, technology and education. This article originally appeared on USA TODAY: ChatGPT for therapy? It was my best option – and it worked | Opinion