
Restless legs syndrome: A common sleep disorder you may never have heard of
Karla Dzienkowski's daughter was 11 when she started coming into her mom's room at night saying she couldn't fall asleep because of a stabbing feeling in her legs. She had to walk to make it stop.
The preteen became cranky and tired. Her grades started to slip, and she even fell asleep on a bench during a family trip to an amusement park, Dzienkowski said.
It took three years, but Dzienkowski's family finally got an explanation for the girl's condition: restless legs syndrome.
One study estimates that 4% to 29% of adults in Western industrialized countries have restless legs syndrome. Yet too few people recognize the condition in themselves, and many doctors don't know how to manage it properly, said Dzienkowski, a nurse who is executive director of the Restless Legs Syndrome Foundation.
Here is what experts want you to know about restless legs syndrome.
'Restless legs syndrome is a neurological disorder that is characterized by a need to move that is oftentimes associated with an uncomfortable feeling,' said Dr. John Winkelman, chief of the sleep disorders clinical research program at Massachusetts General Hospital and professor of psychiatry at Harvard Medical School.
The uncomfortable feeling — described as crawling, aching, tingling or throbbing — is often in the legs and sometimes the arms, he added.
Restlessness frequently happens when people with the condition are sitting or lying down, and it is relieved with movement, Winkelman said.
Symptoms are more likely to occur when a person is at rest, most often at night, and because the syndrome interferes with sleep, it is classified as a sleep disorder, Winkelman said.
In moderate to severe cases, people experience restless legs syndrome several times a week, and in the most extreme cases, symptoms can delay sleep for several hours, said Dr. Brian Koo, associate professor of neurology at Yale School of Medicine and director of the Yale Center for Restless Legs Syndrome.
Two factors play a strong role in who gets restless legs syndrome: genetics and iron levels.
Restless legs syndrome often runs in families, and genetic markers account for about 20% of the ability to predict who will develop it, Winkelman said.
People with an iron deficiency are also more likely to develop restless legs syndrome, including those who are pregnant, on dialysis, menstruating, anemic or vegetarian, Winkelman said.
Those on selective serotonin reuptake inhibitor antidepressants may also be vulnerable to restless legs syndrome, he added.
The condition is twice as common in women as in men and much more common as people get older, Winkelman said.
However, as Dzienkowski learned, children can have restless legs syndrome, too.
To treat restless legs syndrome, a good first step is to look at what might be making the condition worse, Winkelman said.
Alcohol, certain medications and simple sugars may contribute to symptoms, Koo said.
If iron is low — or even borderline low — oral iron supplements or intravenous iron infusions may help, Winkelman added.
Dzienkowski also recommends having a 'bag of tricks' to manage symptoms, such as hot or cold packs, massages, walks or some mind-stimulating activity.
'For some reason … if you keep your mind engaged, it helps to keep symptoms at bay,' she said.
There are medications that help if lifestyle changes and iron supplementation don't work.
Many doctors will start with a class of drugs called alpha2-delta ligands, such as gabapentin or pregabalin, Koo said.
For a long time, dopamine agonists were the first-line medications, but they are now prescribed infrequently because they can worsen restless legs syndrome over time, Winkelman added.
For the most severe cases, low-dose, long-acting opioids are used, Koo said.
If you feel discomfort at rest that compels you to move your legs, particularly if it disturbs your sleep, talk to a doctor, Dzienkowski said.
Not all medical professionals are well versed in restless legs syndrome, so asking for a referral to a sleep specialist may be helpful, she said. You should also get your lab work done, especially an iron panel with ferritin, a blood test that looks at how much iron your body has and how available it is for use, Dzienkowski said.
'The sooner you do it, the better, because you're just delaying diagnosis and treatment, which can be detrimental to your life,' she said. 'You don't realize that that sleepiness that you're feeling at work or the crankiness or you're not wanting to get out and do things could be the RLS bleeding into your daytime. … At least go have that conversation.'