Giroir Commends Formation of Congressional BIOTech Caucus

Former Senior Trump Health Official Urges Focus on Innovative 'Upstream' Approaches that Disrupt Current Paradigm of Costly Chronic Care
ATLANTA, July 9, 2025 /PRNewswire/ -- Brett P. Giroir, M.D., former Assistant Secretary of Health and Acting FDA Commissioner in the first Trump Administration, praised the recently announced formation of the bipartisan BIOTech Caucus and noted the critical importance of its mission to engage with sector leaders and to build awareness and bioliteracy among Members of Congress.
"I applaud Reps. Chrissy Houlahan (D-PA) and Stephanie Bice (R-OK) for seizing the initiative to form the BIOTech Caucus. With new FDA reform initiatives and focused efforts in Congress, innovative biotech companies will be better able to shatter treatment paradigms that have become too comfortable and too profitable for Big Pharma to change," said Giroir, CEO of Altesa BioSciences.
"This is precisely why I chose to lead a small innovative biotech company after completing my US government service," he continued. "The federal government can play a key role to help incentivize bio-entrepreneurship and leverage capital markets for the benefit of U.S. biotech companies poised to provide seismic, beneficial impacts to patients, taxpayers and the public-at-large."
Chronic Obstructive Pulmonary Disease (COPD) treatment as paradigm shift example:
Giroir pointed to the transformation needed in the treatment of COPD, a condition that negatively impacts the health and well-being of 17 million Americans and nearly 500 million people globally. COPD, the Altesa CEO noted, costs the U.S. approximately $50 billion annually in health care spending and is predicted to become the world's leading cause of death in 15 years. "Until we can eliminate the underlying causes of COPD, namely smoking and air pollution, patients deserve better than costly downstream immune-modifying injections that only help a minority of COPD patients avoid a minority of flare-ups," he continued.
Instead, he said, patients must be empowered to better care for their disease by using and integrating wearable technologies, at-home diagnostics, artificial intelligence, and specific treatments for the number one cause of exacerbations: respiratory virus infections. "Diagnostics coupled with effective oral medicines have transformed our treatment of flu, COVID, Hepatitis C, and HIV. If we can identify respiratory viruses early and treat them early, there is a good chance that we can markedly and cost-effectively reduce exacerbations of COPD, asthma, and other lung conditions," the Altesa BioSciences CEO said.
Giroir pointed to Altesa BioSciences collaborator Sensifai Health as emblematic of the needed emphasis on AI and wearable technology to facilitate upstream treatments for people with COPD. "The entire rationale of 'upstream' disease treatment is to intervene before health crises occur," Giroir pointed out. "Sensifai's objectives align perfectly with our mission to deliver transformative respiratory therapeutics at a time when they can be most effective." He noted that a peer-reviewed study published last week in The Lancet Digital Health details how Sensifai's AI platform is the world's first wearable-powered system to predict acute inflammation with 90% sensitivity.
To encourage paradigm change, Giroir urged payers like Medicare and Medicaid to evaluate innovative care models that compare costly chronic downstream therapies against commonsense upstream approaches like AI-assisted wearables, vaccination, preemptive treatment of viral infections, exercise, digital coaching, and Vitamin D supplementation.
"If such commonsense approaches proved effective – and I believe they will – the lives of COPD patients would be forever changed, and the U.S. health care system would save tens of billions of dollars annually," Giroir concluded. "These are the types of conversations Congress needs to hear, and I look forward to working in a positive, constructive manner with the BIOTech Caucus to help detail and explain the 'upstream' diagnostic and treatment changes we can no longer afford to ignore."
Download study: https://www.thelancet.com/journals/landig/article/PIIS2589-7500(25)00068-8/fulltext
About Altesa BioSciences, Inc.
Altesa BioSciences is a clinical-stage pharmaceutical company dedicated to developing new treatments for age-old threats to human health: high-consequence viral infections. These infections are particularly severe in vulnerable people, including those with chronic health conditions, like lung diseases, as well as the elderly and many people in underserved communities.
About Sensifai Health Inc.
Sensifai Health is a Canadian-Israeli Preemptive Health startup at the forefront of bioconvergence. Its AI-powered platform continuously analyzes data from wearable biometric sensors to deliver early alerts of systemic inflammation before symptoms appear. By identifying silent immune signals in vulnerable individuals, Sensifai enables timely intervention that helps prevent critical health events, reduce hospitalizations, and improve long-term outcomes.
Contact:
Media Inquiries: Mia Heck
Cellular (210) 284-0388
[email protected]
View original content to download multimedia: https://www.prnewswire.com/news-releases/giroir-commends-formation-of-congressional-biotech-caucus-302501433.html
SOURCE Altesa BioSciences, Inc.

Related Articles

NFL reveals reasons why smelling salts are banned in 2025

Yahoo, 40 minutes ago

The 2025 season will introduce some new rules and technology to the NFL, but one change in particular has upset one of the league's best tight ends. The San Francisco 49ers' George Kittle interrupted an interview on NFL Network to voice his displeasure over a new rule banning smelling salts.

"I honestly just came up here to air a grievance," Kittle said during an episode of "Inside Training Camp Live" featuring teammate Fred Warner. "Our team got a memo today that smelling salts and ammonia packets were made illegal in the NFL. And I've been distraught all day."

Kittle was especially upset because he uses smelling salts often during NFL games. "I'm an every drive guy," Kittle said. "I considered retirement. We got to figure out a middle ground here guys. Somebody help me out, somebody come up with a good idea... I miss those already."

While the NFL did change its smelling salt policy, it did not actually ban the substances outright. USA TODAY Sports obtained the league memo, and here's what the NFL told teams.

NFL statement on smelling salts

A league official confirmed to USA TODAY Sports that teams are now prohibited from providing players smelling salts during games. The official declined to be identified because of the sensitivity of the matter. The memo that the NFL sent to teams read:

"In 2024, the FDA issued a warning to companies that produce commercially available ammonia inhalants (AIs), as well as to consumers about the purchase and use of AIs, regarding the lack of evidence supporting the safety or efficacy of AIs marketed for improving mental alertness or boosting energy. The FDA noted potential negative effects from AI use. AIs also have the potential to mask certain neurologic signs and symptoms, including some potential signs of concussion.

As a result, the NFL Head, Neck, and Spine Committee recommended prohibiting the use of AIs for any purpose during play in the NFL. In light of this information, effective for the 2025 NFL season, clubs are prohibited from providing or supplying ammonia in any form at NFL games. For clarity, 'ammonia' refers to ammonia capsules, inhalers, ammonia in a cup, and any form of 'smelling salts.' This prohibition applies to all club personnel (including but not limited to team physicians, athletic trainers, strength and conditioning coaches and coaches or other personnel). The prohibition applies through the entirety of all NFL games, including during all pregame activities and halftime, and applies on the sideline and in stadium locker rooms."

However, while NFL teams are no longer allowed to distribute smelling salts, their players are still allowed to use them, as the NFLPA clarified Wednesday. "We were not notified of this club policy change before the memo was sent out," the NFLPA wrote in a message to its players, per ESPN. "To clarify, this policy does not prohibit player use of these substances, but rather it restricts clubs from providing or supplying them in any form. The NFL has confirmed this to us."

Why do NFL players use smelling salts?

NFL players have used smelling salts for years. Some players claim it provides them with a pick-me-up or makes them more alert. "The ammonia wakes you up, opens your eyes," DeMarcus Lawrence explained to ESPN in 2017. "You'll be on the bench, you start to get a little tired and you got to wake your body up, and that's what that little ammonia does for you."

However, medical opinions vary about whether smelling salts provide those effects or are simply a placebo. Concerns also exist about the ammonia capsules masking severe injuries. That's why many boxing organizations have banned smelling salts, and also why the NFL has elected to bar its clubs from distributing them ahead of the 2025 season.

USA TODAY Sports' Jacob Camenker also contributed to this report. This article originally appeared on USA TODAY: Why the NFL is banning smelling salts for 2025 season

ChatGPT As Your Bedside Companion: Can It Deliver Compassion, Commitment, And Care?

Forbes, an hour ago

During the GPT-5 launch this week, Sam Altman, CEO of OpenAI, invited a cancer patient and her husband to the stage. She shared how, after receiving her biopsy report, she turned to ChatGPT for help. The AI instantly decoded the dense medical terminology, interpreted the findings, and outlined possible next steps. That moment of clarity gave her a renewed sense of control over her care. Altman noted that health is one of the top reasons consumers use ChatGPT, saying it "empowers you to be more in control of your healthcare journey."

Around the world, patients are turning to AI chatbots like ChatGPT and Claude to better understand their diagnoses and take a more active role in managing their health. In hospitals, both patients and clinicians sometimes use these AI tools informally to verify information. At medical conferences, some healthcare professionals admit to carrying a "second phone" dedicated solely to AI queries. Without accessing any private patient data, they use it to validate their assessments, much like patients seeking a digital "second opinion" alongside their physician's advice. Even during leisure activities like hiking or camping, parents often rely on AI chatbots such as ChatGPT or Claude for quick guidance on everyday concerns like treating insect bites or skin reactions in their children. This raises an important question:

Can AI Companions Like ChatGPT, Claude, and Others Offer the Same Promise, Comfort, Commitment, and Care as Some Humans?

As AI tools become more integrated into patient management, their potential to provide emotional support alongside clinical care is rapidly evolving. These chatbots can be especially helpful in alleviating anxiety caused by uncertainty, whether it's about a diagnosis, a prognosis, or simply reassurance regarding potential next steps in medical or personal decisions.

Given the ongoing stressors of disease management, advanced AI companions like ChatGPT and Claude can play an important role by providing timely, 24/7 reassurance, clear guidance, and emotional support. Notably, some studies suggest that AI responses can be perceived as even more compassionate and reassuring than those from humans. Loneliness is another pervasive issue in healthcare. Emerging research suggests that social chatbots can reduce loneliness and social anxiety, underscoring their potential as complementary tools in mental health care. These advanced AI models help bridge gaps in information access, emotional reassurance, and patient engagement, offering clear answers, confidence, comfort, and a digital second opinion, particularly valuable when human resources are limited.

Mustafa Suleyman, CEO of Microsoft AI, has articulated a vision for AI companions that evolve over time and transform our lives by providing calm and comfort. He describes an AI "companion that sees what you see online and hears what you hear, personalized to you. Imagine the overload you carry quietly, subtly diminishing. Imagine clarity. Imagine calm." While there are many reasons AI is increasingly used in healthcare, a key question remains:

Why Are Healthcare Stakeholders Increasingly Turning to AI?

Healthcare providers are increasingly adopting AI companions because they fill critical gaps in care delivery. Their constant availability and scalability enhance patient experience and outcomes by offering emotional support, cognitive clarity, and trusted advice whenever patients need it most. While AI companions are not new, today's technology delivers measurable benefits in patient care. For example, Woebot, an AI mental health chatbot, demonstrated reductions in anxiety and depression symptoms within just two weeks.

OpenAI's current investment in HealthBench to promote health and well-being further demonstrates ChatGPT's promise, commitment, and potential to help even more patients. These advances illustrate how AI tools can effectively complement traditional healthcare by improving patient well-being through consistent reassurance and engagement. So, what's holding back wider reliance on chatbots?

The Hindrance: Why We Can't Fully Rely on AI Chatbot Companions

Despite rapid advancements, AI companions are far from flawless, especially in healthcare, where the margin for error is razor thin. Large language models (LLMs) like ChatGPT and Claude are trained on vast datasets that may harbor hidden biases, potentially misleading vulnerable patient populations. Even with impressive capabilities, ChatGPT can still hallucinate or provide factually incorrect information, posing real risks if patients substitute AI guidance for professional medical advice. While future versions may improve reliability, current models are not suited for unsupervised clinical use.

Sometimes, AI-generated recommendations may conflict with physicians' advice, which can undermine trust and disrupt the patient-clinician relationship. There is also a risk of patients forming deep emotional bonds with AI, leading to over-dependence and blurred boundaries between digital and human interaction. As LinkedIn cofounder Reid Hoffman put it in Business Insider, "I don't think any AI tool today is capable of being a friend," adding, "I think if it's pretending to be a friend, you're actually harming the person in so doing." For now, AI companions should be regarded as valuable complements to human expertise, empathy, and accountability, not replacements.

A Balanced, Safe Framework: Maximizing Benefit, Minimizing Risk

To harness AI companions' full potential while minimizing risks, a robust framework is essential. This begins with data transparency and governance: models must be trained on inclusive, high-quality datasets designed to reduce demographic bias and errors. Clinical alignment is critical; AI systems should be trained on evidence-based protocols and guidelines, with a clear distinction between educational information and personalized medical advice. Reliability and ethical safeguards are vital, including break prompts during extended interactions, guidance directing users to seek human support when needed, and transparent communication about AI's limitations. Above all, AI should complement human clinicians, acting as a navigator or translator to encourage and facilitate open dialogue between patients and their healthcare providers.

Executive Call to Action

In today's digital age, patients inevitably turn to the internet, and increasingly to AI chatbots like ChatGPT and Claude, for answers and reassurance. Attempts to restrict this behavior are neither practical nor beneficial. Executive physician advisors and healthcare leaders are therefore responsible for embracing this reality by providing structured, transparent, and integrated pathways that guide patients in using these powerful tools wisely. It is critical that healthcare systems are equipped with frameworks ensuring AI complements clinical care rather than confuses or replaces it. Where AI capabilities fall short, these gaps must be bridged with human expertise and ethical oversight. Innovation should never come at the expense of patient safety, trust, or quality of care. By proactively shaping AI deployment in healthcare, stakeholders can empower patients with reliable information, foster meaningful clinician-patient dialogue, and ultimately improve outcomes in this new era of AI-driven medicine.

Atlanta police officer dies after shooting near CDC headquarters

Yahoo, 2 hours ago

A police officer has died from injuries sustained while responding to a shooting outside the headquarters of the US Centers for Disease Control and Prevention (CDC) in Atlanta. The incident, which took place on Friday near Emory University, involved a "single shooter" who is now dead, the Atlanta police department said. It said the officer, David Rose, had been taken to hospital and later died from his injuries. No civilian was wounded in the incident.

The motive is unclear, but US media, citing an unnamed law-enforcement official, reported a theory that the gunman believed he was sick as a result of a coronavirus vaccine. Media reports also suggested the man's father had called police on the day of the shooting believing his son was suicidal.

CDC Director Susan Monarez said the centre was "heartbroken" by the attack. "DeKalb County police, CDC security, and Emory University responded immediately and decisively, helping to prevent further harm to our staff and community," she wrote in a post on X.

In a press briefing on Friday, police said they became aware of a report of an active shooter at around 16:50 local time (21:50 BST) near the CDC campus. Officers from multiple agencies responded. Emory University posted at the time on social media: "Active shooter on Emory Atlanta Campus at Emory Point CVS. RUN, HIDE, FIGHT."

The CDC campus received multiple rounds of gunfire into buildings. Police said they found the shooter "struck by gunfire" but could not specify whether that was from law enforcement or self-inflicted. Media outlets have reported that CDC employees have been asked to work remotely on Monday.
