StoneGate Senior Living Selects WellSky® Long-Term Care to Enhance Resident Care and Improve Efficiency


Business Wire | 26-04-2025

OVERLAND PARK, Kan.--(BUSINESS WIRE)-- WellSky, a global health and community care technology and services company, today announced that StoneGate Senior Living, a premier provider of senior care services, has selected WellSky as its electronic health record (EHR) and revenue cycle technology partner – a strategic decision that reflects StoneGate's commitment to adopting innovative technology that supports outstanding care.
StoneGate Senior Living operates a network of more than 30 skilled nursing and rehabilitation centers, assisted living communities, and memory care facilities in Texas, Oklahoma, and Colorado. The organization was already using the WellSky CarePort Intake solution to connect with hospital partners and manage referrals but sought a more modern, innovative EHR solution. They ultimately chose WellSky Long-Term Care, a comprehensive EHR and revenue cycle management platform.
'As a trusted long-term care provider known for a commitment to innovation, staff satisfaction, and growth, we wanted to work with a company willing to stay on the leading edge of the technology needs of long-term and post-acute care,' said John Paul Taylor, CEO of StoneGate Senior Living. 'The team at WellSky is willing to bring our processes and workflows together with their innovative technology to make sure the solution works for us.'
With WellSky, the StoneGate team will be better equipped to make data-driven decisions, reduce administrative burdens, and improve care coordination, leading to better outcomes for residents and greater operational efficiency. StoneGate will also help to shape the evolution of the WellSky Long-Term Care solution by providing regular feedback to the WellSky team and partnering on feature developments, including new workflows powered by WellSky SkySense AI, a suite of Artificial Intelligence-based tools.
WellSky solutions for long-term care connect critical touchpoints before, during, and after a resident's stay. Clinical and revenue cycle capabilities are integrated to streamline operations, reduce complexity, and improve outcomes. WellSky solutions are purpose-built to meet an organization's unique needs and to help long-term care facilities like StoneGate enhance their current processes.
'This collaboration highlights how forward-thinking providers like StoneGate and technology innovators like WellSky can work hand-in-hand to meet the needs of residents, care teams, and leaders across the industry,' said WellSky chairman and CEO Bill Miller. 'StoneGate's commitment to excellence aligns perfectly with the WellSky mission to deliver world-class technology solutions. With our platform, StoneGate will not only address current challenges but also set a new standard for innovation in long-term care.'
To learn more about WellSky Long-Term Care, visit wellsky.com.
About WellSky®
WellSky is one of America's largest and most innovative healthcare technology companies leading the movement for intelligent, coordinated care. Our proven software, analytics, and services power better outcomes and lower costs for stakeholders across the health and community care continuum. In today's value-based care environment, WellSky helps providers, payers, health systems, and community organizations scale processes, improve collaboration for growth, harness the power of data analytics, and achieve better outcomes by further connecting clinical and social care. WellSky serves more than 20,000 client sites — including the largest hospital systems, blood banks, cell therapy labs, home health and hospice franchises, post-acute providers, government agencies, and human services organizations. Informed by more than 40 years of providing software and expertise, WellSky anticipates clients' needs and innovates relentlessly to build healthy, thriving communities. For more information, visit wellsky.com.
About StoneGate Senior Living
StoneGate Senior Living provides support services to senior living and care properties that offer skilled health care, assisted living, memory support and independent living at locations in Texas, Oklahoma, and Colorado. Founded and led by a team of senior living industry veterans, StoneGate understands that careful attention to customer expectations is vital to the success of a senior living and care community. We understand that services must gratify the wants, needs and expectations of diverse customers. StoneGate-supported properties are designed to provide a comfortable atmosphere with all the qualities of home and more. Thoughtfully planned care and services foster enhanced well-being for residents and important peace of mind for their families.


Related Articles

Will Trump's policies kill Massachusetts' life sciences leadership?

Boston Globe

an hour ago


Although the industry is centered in eastern Massachusetts, there's a statewide benefit from all the tax dollars those businesses and workers pay.

In all, Massachusetts organizations — including universities, research institutes, and hospitals — received $3.5 billion in funding from the National Institutes of Health. Massachusetts-headquartered companies raised $3.26 billion in venture capital funding. Among all drugs in the development pipeline in the United States, 15 percent were being made by companies headquartered in Massachusetts.

But actions taken by President Trump and his administration — cutting funding for scientific research and universities, flirting with tariffs, fanning skepticism about vaccines — threaten to devastate the ecosystem. Today, the industry is at a precipice, and uncertainty abounds. Some companies are already feeling the pinch of terminated federal grants, while others are anxious about what might come. Taken together, Trump's policies could force some companies and scientists to take their money, talents, and products overseas.

Christopher Locher, CEO of Lowell-based Versatope Therapeutics, which develops a platform to deliver vaccines and therapeutics, said he worries the Greater Boston life sciences ecosystem is 'being flushed down the toilet.'

Trump's funding cuts are already having a large impact on some local companies. Part of the problem is that the Trump administration isn't only cutting funding; it is also picking which technologies to fund — in some cases apparently based on politics more than science.

Take flu vaccines. The Trump administration recently announced a $500 million campaign to fund the development of a universal flu vaccine, which doesn't require annual updates, using technology already being worked on. But simultaneously, the administration cut funding for other work on a universal flu vaccine. Versatope Therapeutics got $14 million in NIH funding and spent five years developing a universal flu vaccine. It had approval from the US Food and Drug Administration to begin clinical trials when Trump terminated the contract's remaining $8 million, with the reason given being 'convenience,' Locher said.

Company executives say decisions by Trump officials to disinvest in vaccine-related technology — and concerns about whether the government will approve new technology — mean it's nearly impossible to find private investment funding to replace lost federal dollars. 'We're faced with bankruptcy in the very near future,' Locher said.

Ironically, given Trump's stated commitment to bringing businesses back to the United States, one potential option Locher is eyeing is opening a subsidiary abroad. Conducting clinical trials would be cheaper in another country, whether in Europe, Australia, or China, Locher said, and some countries are offering financial incentives to American companies to relocate.

Companies also face a potential workforce brain drain. MassBio officials said China has less rigorous — but faster — safety and research protocols than the US. Australia allows a faster timeline for clinical trials. If regulatory approval of medicines is held up because the FDA is understaffed, companies may seek European regulatory approval instead. The loss of talent to foreign countries will be compounded if the pipeline of local university graduates dries up.
One draw for life sciences companies to Boston/Cambridge is the presence of elite schools like Harvard and MIT, with their potential for faculty collaboration and skilled graduates.

Chip Clark, CEO at Vibrant Biomedicines in Cambridge, said cuts to university research funding both 'shrink the pipeline of great ideas' that form the basis for many biotech startups and translate to fewer available scientists. Clark said the administration's policies 'seem like a deliberate attempt to try to cede scientific leadership to Europe and Japan and Korea and China. ... They will be delighted to capitalize on our talent, technology, and investment capital to make their robust biotech sectors grow and ultimately compete successfully against the US industry.'

Don Ingber, founding director of the Wyss Institute for Biologically Inspired Engineering at Harvard University, said he has postdocs with US visas applying for jobs in Europe, and others who were accepted to work at Harvard but are going elsewhere. 'The fact that places like Harvard and MIT and American universities are magnets for the best and brightest from around the world is what's driven our technology economy and certainly the Boston/Cambridge ecosystem,' Ingber said. 'With this uncertainty, I fear we'll lose a generation.' Ingber, who was forced to stop work on two government-funded projects on drugs designed to prevent injury from radiation exposure, compared administration policies to 'eating seed corn' needed to grow crops.

Trump's vendetta will undermine one of the most vibrant state economies in the country and set back American science by years. And it's not just eastern Massachusetts that will pay a price; the entire country will. As Ingber noted, it might take years to see the impact of medicines or technologies that aren't developed because of these shortsighted cuts.

Editorials represent the views of the Boston Globe Editorial Board.

I'm a Gen Zer who landed a 6-figure job at Morgan Stanley before graduation. Here's what the process was like — and why you should refresh a surprisingly important part of your résumé.

Business Insider

an hour ago


This as-told-to essay is based on a conversation with Sara Thomas, 22, a 2025 graduate from the University of Chicago and incoming investment banking analyst at Morgan Stanley. Business Insider's recent "Path to Wall Street" series highlighted how finance careers continue to attract young talent, despite the industry's long hours and demanding entry-level roles. Entry-level bankers typically earn about $110,000 a year, not including bonuses. This interview has been edited for length and clarity.

I had barely decided on banking as a career choice when I had to start preparing for interviews. My experience was similar to most stories I've heard about the banking world: the recruiting process starts early. Before submitting my internship applications, I spoke to about five people at each major bank — mostly recent UChicago grads and people the university's career advancement program connected me with — so they would recognize my name when they saw my résumé. Cold LinkedIn messaging didn't work very well for me. Those introductions are often necessary to get an interview.

At most banks, I interviewed for multiple rounds, including calls focused on my personality and technical skills and a two-hour "super day." The whole process lasted about two weeks. Then, by the spring of my sophomore year, I was done. I landed an internship at Morgan Stanley, and I knew my full-time job would be set as long as I got a return offer. My prior internships had been at Bain Capital and the Chicago-based firm Ariel Investments.

The only other career trajectory I considered in college was academia. I studied economics and considered getting involved with economics research, but I realized it wasn't for me. I just work better in a faster-paced finance environment. In my free time at school, I tried to focus on clubs and internships that would keep me close to startups and entrepreneurship, so I joined a venture capital fund on campus. I was also involved in a campus group for women and minorities interested in finance and investing. Those opportunities really helped me build my industry chops.

I would tell any college student hoping to land a Wall Street or Silicon Valley job: be decisive. Even if investment banking or consulting isn't your lifelong passion, plan to commit to a career path sooner rather than later, so that you can give yourself as much time as possible to prepare, network, and do really well in the interviews.

And, for the application process, students need to be careful with what they put on their résumé — recruiters pay a lot of attention to the "skills and interests" section at the bottom. Don't say that you're a mountain climber if you have never climbed a mountain, because people will ask you about your hobbies, and you need to be able to genuinely talk about them. Of course, your credentials matter, but I've found that recruiters are most interested in meeting well-rounded people.

Job applications and postgrad life make me anxious sometimes. I've leaned on my friends a lot: Some aren't going into finance, others have gone through recruiting alongside me. I'm also grateful for my family — my Morgan Stanley role will be in San Francisco, and it will be the first time I've moved far from home, since I grew up in Chicago. Exercise has always been a coping strategy for me, too. I'm trying to build healthy habits and make sure I don't isolate myself. Those are the main ways I'm dealing with stress.

Long, long term, my biggest goal is to be able to work for myself in some way. That could mean working at a big company that gives me a lot of autonomy to make decisions or starting my own company, if I am brave enough to do that. For now, I'm excited to keep building my career and learning from my coworkers. Having engaging conversations with people you've never met can be the most challenging part of the finance industry, but it's also the most rewarding.

The Ardent Belief That Artificial General Intelligence Will Bring Us Infinite Einsteins

Forbes

2 hours ago


In today's column, I examine an AI conjecture known as the infinite Einsteins. The deal is this. By attaining artificial general intelligence (AGI) and artificial superintelligence (ASI), the resulting AI will allegedly provide us with an infinite number of AI-based Einsteins. We could then have Einstein-level intelligence massively available 24/7. The possibilities seem incredible and enormously uplifting. Let's talk about it.

This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

There is a great deal of research going on to further advance AI. The general goal is to either reach artificial general intelligence (AGI) or maybe even the outstretched possibility of achieving artificial superintelligence (ASI). AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of AI, AGI, and ASI, see my analysis at the link here.

AI insiders are pretty much divided into two major camps right now about the impacts of reaching AGI or ASI. One camp consists of the AI doomers. They are predicting that AGI or ASI will seek to wipe out humanity. Some refer to this as 'P(doom),' which means the probability of doom, or that AI zonks us entirely, also known as the existential risk of AI.

The other camp entails the so-called AI accelerationists. They tend to contend that advanced AI, namely AGI or ASI, is going to solve humanity's problems. Cure cancer, yes indeed. Overcome world hunger, absolutely. We will see immense economic gains, liberating people from the drudgery of daily toils. AI will work hand-in-hand with humans. This benevolent AI is not going to usurp humanity. AI of this kind will be the last invention humans have ever made, but that's good in the sense that AI will invent things we never could have envisioned.

No one can say for sure which camp is right and which one is wrong. This is yet another polarizing aspect of our contemporary times. For my in-depth analysis of the two camps, see the link here.

Let's momentarily put aside the attainment of ASI and focus solely on the potential achievement of AGI, just for the sake of this discussion. No worries -- I'll bring ASI back into the big picture in the concluding remarks.

Imagine that we can arrive at AGI. Since AGI is going to be on par with human intelligence, and since Einstein was a human and made use of his human intelligence, the logical conclusion is that AGI would be able to exhibit the intelligence of the likes of Einstein. The AGI isn't especially simulating Einstein per se. Instead, the belief is that AGI would contain the equivalent of Einstein-level intelligence.

If AGI can exhibit the intelligence of Einstein, the next logical assumption is that this Einstein-like capacity could be replicated within the AGI. We already know that even contemporary generative AI allows for the simulation of multiple personas, see my analysis at the link here. With sufficient hardware, the number of personas can be in the millions and billions, see my coverage at the link here. In short, we could have a vast number of Einstein-like intelligence instances running at the same time.
You can quibble that this is not the same as saying that the number of Einsteins would be infinite. It would not be an infinite count. There would be some limits involved. Nonetheless, the infinite Einsteins as a catchphrase rolls off the tongue and sounds better than saying the millions or billions of Einsteins. We'll let the use of the word 'infinite' slide in this case and agree that it means a quite large number of instances.

Einstein was known for his brilliance when it comes to physics. I bring this up because it is noteworthy that he wasn't known for expertise in biology, medicine, law, or other domains. When someone refers to Einstein, they are essentially emphasizing his expertise in physics, not in other realms.

Would an AGI that provides infinite Einsteins then be considered to have heightened expertise solely in physics? To clarify, that would certainly be an amazing aspect. No doubt about it. On the other hand, those infinite Einsteins would presumably have little to offer in other realms such as biology, medicine, etc. Just trying to establish the likely boundaries involved.

Imagine this disconcerting scenario. We have infinite Einsteins via AGI. People assume that those Einsteins are brilliant in all domains. The AGI via this capacity stipulates a seeming breakthrough in medicine. But the reality is that this is beyond the purview of the infinite Einsteins and turns out to be incorrect. We might be so enamored with the infinite Einsteins that we fall into the mental trap that anything the AGI emits via that capacity is aboveboard and completely meritorious.

Some people use the word 'Einstein' more generally and tend to suggest it means being ingenious on an all-around basis. For them, the infinite Einsteins consist of innumerable AI-based geniuses of all kinds. How AGI would model this capacity is debatable. We don't yet know how AGI will work. An open question is whether other forms of intelligence such as emotional intelligence (EQ) get wrapped into the infinite Einsteins. Are we strictly considering book knowledge and straight-ahead intelligence, or shall we toss in all manner of intelligence including the kitchen sink?

There is no debate about the clear fact that Einstein made mistakes and was not perfect. He was at times unsure of his theories and proposals and had to correct his work. Historical reviews point out that he at first rejected quantum mechanics and vowed that God does not play dice, a diss against the budding field of quantum theory. A seemingly big miss.

Would an AGI that allows for infinite Einsteins carry over those same imperfections of the modeled human? This is an important point. Envision that we are making use of AGI and the infinite Einsteins to explore new frontiers in physics. Will we know if those Einsteins are making mistakes? Perhaps they opt to reject ideas that are worthy of pursuit. It seems doubtful that we would seek to pursue those rejected ideas, simply due to the powerful assumption that all those Einsteins can't be wrong.

Further compounding the issue would be the matter of AI hallucinations. You've undoubtedly heard or read about so-called AI hallucinations. The precept is that sometimes AI generates confabulations, false statements that appear to be true. A troubling facet is that we aren't yet sure when this occurs, nor how to prevent it, and ferreting out AI hallucinations can be problematic (see my extensive explanation at the link here).
There is a double-whammy about those infinite Einsteins. By themselves, they presumably would at times make mistakes and be imperfect. They also would be subject to AI hallucinations. The danger is that we would be relying on the aura of those infinite Einsteins as though they are perfect and unquestionably right.

Would all the AGI-devised infinite Einsteins be in utter agreement with each other? You might claim that the infinite Einsteins would of necessity be of a like mind and ergo would all agree with each other. They lean in the same direction. Anything they say would be an expression of the collective wisdom of the infinite Einsteins. The downside there is that if the infinite Einsteins are acting like lemmings, this seems to increase the chances of any mistakes being given an overabundance of confidence.

Think of it this way. The infinite Einsteins tell us in unison that there's a hidden particle at the core of all physics. Any human trying to disagree is facing quite a gauntlet since an infinite set of Einsteins has made a firm declaration. A healthy principle of science is supposed to be the use of scientific discourse and debate. Would those infinite Einsteins be dogmatic or be willing to engage in open-ended human debates and inquiries?

Let's consider that maybe the infinite Einsteins would not be in utter agreement with each other. Perhaps they would engage in scientific debate among themselves. This might be preferred since it could lead to creative ideas and thinking outside the box. The dilemma is: what do we do when the infinite Einsteins tell us they cannot agree? Do we have them vote and, based on a tally, decide that whatever is stated seems to be majority-favored? How might the intellectual battle amid infinite Einsteins be suitably settled?

A belief that there will be infinite Einsteins ought to open our eyes to the possibility that there would also be infinite Isaac Newtons, Aristotles, and so on. There isn't any particular reason to restrict AGI to just Einsteins. Things might seem to get out of hand. All these infinite geniuses become mired in disagreements across the board. Who are we to believe? Maybe the Einsteins are convinced by some other personas that up is down and down is up. Endless arguments could consume tons of precious computing cycles.

We must also acknowledge that evildoers of historical note could also be part of the infinite series. There could be infinite Genghis Khans, Joseph Stalins, and the like. Might they undercut the infinite Einsteins? Efforts to ensure that AI aligns with contemporary human values are a vital consideration, and numerous pathways are currently being explored; see my discussion at the link here. The hope is that we can stave off the infinite evildoers within AGI.

Einstein had grave concerns about the use of atomic weapons. It was his handiwork that aided in the development of the atomic bomb. He found himself mired in concern at what he had helped bring to fruition. An AGI with infinite Einsteins might discover the most wonderful of new inventions. The odds are that those discoveries could be used for the good of humankind or to harm humankind. It is a quandary whether we want those infinite Einsteins to share what they uncover widely and in an unfettered way.

Here's the deal. Would we restrict access to the infinite Einsteins so that evildoers could not use the capacity to devise destructive possibilities? That's a lot easier to say than it is to put into implementation.
For my coverage of the challenges facing AI safety and security, see the link here.

Governments would certainly jockey to use the infinite Einsteins for purposes of gaining geopolitical power. A nation that wanted to get on the map as a superpower could readily launch into the top sphere by having the infinite Einsteins provide it with a discovery that it alone would be aware of and could exploit. The national and international ramifications would be of great consequence; see my discussion at the link here.

I promised at the start of this discussion to eventually bring artificial superintelligence into the matter at hand. The reason that ASI deserves a carve-out is that anything we have to say about ASI is purely blue sky. AGI is at least based on exhibiting intelligence of the kind that we already know and see. True ASI is something that extends beyond our mental reach since it is superintelligence.

Let's assume that ASI would not only include infinite Einsteins but would also go far beyond Einstein-level thinking to super Einstein thresholds. With AGI, we might have a solid chance of controlling the infinite Einsteins. Maybe. In the case of ASI, all bets are off. The ASI would be able to run circles around us. Whatever the ASI decides to do with the infinite super Einsteins is likely beyond our control.

Congrats, you've now been introduced to the infinite Einsteins conjecture.

Let's end for now with a famous quote from Einstein: 'Two things are infinite: the universe and human stupidity, and I'm not sure about the universe.' This highlights that whether we could suitably harness an AGI containing infinite Einsteins depends upon human acumen and human stupidity. Hopefully, our better half prevails.
