New bill would prevent evictions during extreme weather in Pennsylvania

Yahoo · 28-05-2025

PENNSYLVANIA (WTAJ) — A state representative has introduced legislation that would require judges to halt eviction orders during times of extreme weather.
In a memo to lawmakers, Representative Greg Scott argues that there needs to be something that prevents residents from being placed in immense danger during times that are already stressful.
'Currently, there is nothing in state law to stop an eviction from proceeding during a blizzard, a hurricane, or other extreme weather event. While landlords have a prerogative to evict their tenants for nonpayment of rent, there should be considerations for extreme weather conditions,' Scott wrote.
The proposed legislation would require a judge to halt the execution of an eviction order on a day-to-day basis when extreme weather is ongoing for those on federal assistance programs.
'This legislation will protect the most vulnerable tenants from being put in an even more precarious situation,' according to Scott's memo.
This bill has not yet been formally introduced; once it is, its progress can be tracked through the Pennsylvania General Assembly.
Copyright 2025 Nexstar Media, Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.


Related Articles

Behind the Curtain: The scariest AI reality

Axios · 21 minutes ago

The wildest, scariest, indisputable truth about AI's large language models is that the companies building them don't know exactly why or how they work.

Sit with that for a moment. The most powerful companies, racing to build the most powerful superhuman intelligence capabilities — ones they readily admit occasionally go rogue to make things up, or even threaten their users — don't know why their machines do what they do.

Why it matters: With the companies pouring hundreds of billions of dollars into willing superhuman intelligence into quick existence, and Washington doing nothing to slow or police them, it seems worth dissecting this Great Unknown.

None of the AI companies dispute this. They marvel at the mystery — and muse about it publicly. They're working feverishly to better understand it. They argue you don't need to fully understand a technology to tame or trust it.

Two years ago, Axios managing editor for tech Scott Rosenberg wrote a story, "AI's scariest mystery," saying it's common knowledge among AI developers that they can't always explain or predict their systems' behavior. That's more true than ever.

Yet there's no sign that the government, the companies or the general public will demand any deeper understanding — or scrutiny — of a technology with capabilities beyond human understanding. They're convinced the race to beat China to the most advanced LLMs warrants the risk of the Great Unknown.

The House, despite knowing so little about AI, tucked language into President Trump's "Big, Beautiful Bill" that would prohibit states and localities from enacting any AI regulations for 10 years. The Senate is considering limitations on the provision. Neither the AI companies nor Congress knows how powerful AI will be a year from now, much less a decade from now.

The big picture: Our purpose with this column isn't to be alarmist, or "doomers." It's to clinically explain why the inner workings of superhuman intelligence models are a black box, even to the technology's creators. We'll also show, in their own words, how CEOs and founders of the largest AI companies all agree it's a black box.

Let's start with a basic overview of how LLMs work, to better explain the Great Unknown:

LLMs — including OpenAI's ChatGPT, Anthropic's Claude and Google's Gemini — aren't traditional software systems following clear, human-written instructions, like Microsoft Word. Word does precisely what it's engineered to do. Instead, LLMs are massive neural networks — like a brain — that ingest massive amounts of information (much of the internet) to learn to generate answers. The engineers know what they're setting in motion, and what data sources they draw on. But the LLM's size — the sheer inhuman number of variables in each choice of "best next word" it makes — means even the experts can't explain exactly why it chooses to say anything in particular.

We asked ChatGPT to explain this (and a human at OpenAI confirmed its accuracy): "We can observe what an LLM outputs, but the process by which it decides on a response is largely opaque. As OpenAI's researchers bluntly put it, 'we have not yet developed human-understandable explanations for why the model generates particular outputs.'"

"In fact," ChatGPT continued, "OpenAI admitted that when they tweaked their model architecture in GPT-4, 'more research is needed' to understand why certain versions started hallucinating more than earlier versions — a surprising, unintended behavior even its creators couldn't fully diagnose."

Anthropic — which just released Claude 4, the latest model of its LLM, with great fanfare — admitted it was unsure why Claude, when given access to fictional emails during safety testing, threatened to blackmail an engineer over a supposed extramarital affair. This was part of responsible safety testing — but Anthropic can't fully explain the irresponsible action.

Again, sit with that: The company doesn't know why its machine went rogue and malicious. And, in truth, the creators don't really know how smart or independent the LLMs could grow. Anthropic even said Claude 4 is powerful enough to pose a greater risk of being used to develop nuclear or chemical weapons.

OpenAI's Sam Altman and others toss around the tame word "interpretability" to describe the challenge. "We certainly have not solved interpretability," Altman told a summit in Geneva last year. What Altman and others mean is that they can't interpret the why: Why are LLMs doing what they're doing?

Anthropic CEO Dario Amodei, in an April essay called "The Urgency of Interpretability," warned: "People outside the field are often surprised and alarmed to learn that we do not understand how our own AI creations work. They are right to be concerned: this lack of understanding is essentially unprecedented in the history of technology." Amodei called this a serious risk to humanity — yet his company keeps boasting of more powerful models nearing superhuman capabilities.

Anthropic has been studying the interpretability issue for years, and Amodei has been vocal in warning that it's important to solve. In a statement for this story, Anthropic said: "Understanding how AI works is an urgent issue to solve. It's core to deploying safe AI models and unlocking [AI's] full potential in accelerating scientific discovery and technological development. We have a dedicated research team focused on solving this issue, and they've made significant strides in moving the industry's understanding of the inner workings of AI forward. It's crucial we understand how AI works before it radically transforms our global economy and everyday lives." (Read a paper Anthropic published last year, "Mapping the Mind of a Large Language Model.")

Elon Musk has warned for years that AI presents a civilizational risk — he literally thinks it could destroy humanity, and has said as much. Yet Musk is pouring billions into his own LLM, called Grok. "I think AI is a significant existential threat," Musk said in Riyadh, Saudi Arabia, last fall, putting the chance "that it goes bad" at 10%-20%.

Reality check: Apple published a paper last week, "The Illusion of Thinking," concluding that even the most advanced AI reasoning models don't really "think," and can fail when stress-tested. The study found that state-of-the-art models (OpenAI's o3-mini, DeepSeek R1 and Anthropic's Claude 3.7 Sonnet) still fail to develop generalizable problem-solving capabilities, with accuracy ultimately collapsing to zero "beyond certain complexities."

But a new report by AI researchers, including former OpenAI employees, called "AI 2027," explains how the Great Unknown could, in theory, turn catastrophic in less than two years. The report is long and often too technical for casual readers to fully grasp. It's wholly speculative, though built on current data about how fast the models are improving. It's being widely read inside the AI companies. It captures the belief — or fear — that LLMs could one day think for themselves and start to act on their own.

Our purpose isn't to alarm or sound doomy. Rather, you should know what the people building these models talk about incessantly. You can dismiss it as hype or hysteria. But researchers at all these companies worry that LLMs, because we don't fully understand them, could outsmart their human creators and go rogue. In the "AI 2027" report, the authors warn that competition with China will push LLMs potentially beyond human control, because no one will want to slow progress even if they see signs of acute danger.

The safe-landing theory: Google's Sundar Pichai — and really all of the big AI company CEOs — argue that humans will learn to better understand how these machines work and find clever, if as-yet-unknown, ways to control them and "improve lives." The companies all have big research and safety teams, and a huge incentive to tame the technologies if they ever want to realize their full value.
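For readers curious what a "choice of best next word" means mechanically, here is a minimal, purely illustrative sketch: a toy bigram model that counts which word follows which and then greedily emits the most frequent successor. This is an assumption-laden simplification, not how any production LLM is built — real models use neural networks with billions of learned parameters over subword tokens, which is exactly why their choices are so hard to interpret.

```python
from collections import Counter, defaultdict

# Toy "language model": count word -> successor frequencies,
# then always emit the most frequent successor (the "best next word").
# Real LLMs instead score every token in a large vocabulary with a
# neural network at each step, which is far harder to explain.

def train(corpus: str):
    """Build a word -> Counter-of-next-words table from a tiny corpus."""
    words = corpus.split()
    model = defaultdict(Counter)
    for cur, nxt in zip(words, words[1:]):
        model[cur][nxt] += 1
    return model

def generate(model, start: str, length: int):
    """Greedily pick the most common next word at each step."""
    out = [start]
    for _ in range(length):
        successors = model.get(out[-1])
        if not successors:
            break  # no known continuation; stop generating
        out.append(successors.most_common(1)[0][0])
    return out

corpus = "the cat sat on the mat and the cat ran"
model = train(corpus)
print(generate(model, "the", 3))
```

In this toy, every choice is fully auditable: you can print the counts behind each step. The interpretability problem the article describes arises because an LLM's equivalent of those counts is spread across billions of parameters with no human-readable meaning.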

Remedying Pain Points In After-Hours Patient Care With AI

Forbes · 40 minutes ago

Sidd Shah, Vice President, Strategy & Business Growth, healow

There is no defined timeframe for when patient care is needed most. At times, patients may need to reach their provider's office after hours with pressing concerns that require prompt responses. And while chatbots and email offer convenient ways to receive information, many patients prefer calling over app-based services for timely answers on matters such as appointment schedules and the status of medication refills.

Many doctors' offices have had difficulty managing rising call volumes and voicemail messages, and patients may have had to deal with delayed responses. This creates a fragmented healthcare experience, which can leave patients feeling frustrated and put a strain on patient satisfaction. Hiring additional staff or after-hours teams to manage call volumes can be expensive and often isn't feasible for smaller, independent healthcare practices.

New AI solutions, such as AI-powered contact centers, stand to transform after-hours care. By adopting these tools, practices can answer patient calls 24/7 and provide patients with the information they need in a timely manner without hiring additional staff. A 2025 MGMA poll found that 32% of surveyed medical group leaders listed AI tools as their top tech priority, demonstrating that the healthcare industry recognizes AI's potential to transform the care delivery model.

A December 2023 American Medical Association poll of physicians found that only 24.4% of those surveyed worked in a practice that had invested in after-hours care. This can be frustrating for patients, as healthcare is an around-the-clock need, and critical issues can arise at any time of day. While emergency rooms offer a 24/7 option for urgent needs, patients don't want them to be their only option; sometimes they need real-time support simply to determine whether the ER is the right choice.

For example, a patient may have uncomfortable symptoms such as a fever and be on the fence about whether to seek emergency care. Patients can be left unsure of the right healthcare decision simply because of the time of day. AI assistants can offer patients the option to speak with an on-call provider who has access to their medical history and can help them promptly determine the best course of action. Solutions that address patients' after-hours care needs improve the healthcare ecosystem by helping patients make informed decisions about seeking timely care.

Healthcare staff are struggling to keep up with inbound patient calls not only after hours but during business hours as well. In a survey conducted at the eClinicalWorks 2024 National Conference, 62% of respondents reported that their practice spends more than four hours a day answering calls and managing appointments. This issue is poised to intensify as anticipated staffing shortages loom over provider offices: a 2024 Mercer study (via the American Hospital Association) projected a shortage of 100,000 critical healthcare workers by 2028. AI can address these pain points during and after business hours by immediately answering 100% of patient calls, never putting patients on hold and offering actionable insights, saving provider offices time and resources.

When adopting AI, healthcare organizations should prioritize a few considerations and take these internal steps:

• Understanding Objectives: Before implementation, collaborate with the team to align on objectives (e.g., improve after-hours patient support, improve patient satisfaction, reduce waiting times).

• Selecting A Partner: Consider all options and select a technology or solution that will offer long-term support. It's important to fully understand the adopted technology and ensure it works in harmony with electronic health record (EHR) systems and patient scheduling platforms.

• Customizing And Integrating: Spend time customizing the solution to meet the practice's needs and match its tone and response style. The technology partner can help guide you through this process. Once the solution meets those requirements, move forward with integration.

• Analyzing Performance: Once integrated, consistently monitor and evaluate the technology's performance to ensure it is meeting the practice's goals, including value-based care goals. Keep a close eye on response times, patient satisfaction rates and call resolution rates to understand how it's performing for patients.

When properly implemented, AI can boost the efficiency of a practice while improving patient engagement and satisfaction. It's critical to select a solution that prioritizes continuous evolution, improvement and the introduction of new capabilities so the tool benefits the practice over the long term.

Currently, patients' healthcare journeys are often fragmented, defined by gaps in communication with their providers regardless of the time of day. That no longer needs to be the case as AI capable of transforming healthcare delivery and improving the patient experience is introduced and adopted. Adopting an AI contact center requires strategic planning, but done correctly, it can redefine the way patients, providers and practices communicate.

Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives. Do I qualify?

JBA AI Earns FDA SaMD Recognition

Associated Press · an hour ago

A New Era of Accessible Health Monitoring

GARDEN GROVE, CALIFORNIA / ACCESS Newswire / June 9, 2025 / JBA AI, the flagship health-tech division under AQ Pharmaceuticals, Inc. and JBA Brands, proudly announces several major global milestones in its mission to make preventive healthcare more accessible and intelligent.

JBA AI has been officially recognized by the United States Food and Drug Administration (FDA) as Software as a Medical Device (SaMD). In addition, the platform has received Class B Medical Device approval from Vietnam's Ministry of Health. These achievements affirm the platform's safety, clinical effectiveness and international credibility.

The JBA AI mobile application is already available globally for Android users and will soon be released on iOS. It enables contactless, non-invasive vital-signs monitoring directly through smartphones or tablets. In just 60 seconds, the app delivers insights on more than nine key health metrics, including heart rate, heart rate variability, blood oxygen saturation, respiratory rate, stress index, and estimated glucose and blood pressure trends. This is achieved using AI-powered facial signal analysis through camera-based remote photoplethysmography technology.

Expanding its impact further, JBA AI has partnered with one of Vietnam's largest financial institutions, which serves more than 30 million members. Together, they are preparing to launch JBA AI nationwide, making preventive health monitoring more accessible through a trusted and widespread digital network.

"JBA AI is more than a health app - it's a certified medical device, recognized by both the FDA and Vietnam's Ministry of Health," said Tracy Trang Nguyen, founder and CEO of JBA AI. "We're proud to deliver trusted, clinically guided tools for daily health monitoring worldwide."

Nguyen is a Vietnamese American engineer and wellness industry leader who developed JBA AI after surviving cancer. Her personal health journey inspired her to combine advanced artificial intelligence with affordable wellness solutions. Under her leadership, JBA AI complements other products within the JBA Brands portfolio, such as GlucoTrojan, Joint Revive Fucoidan and Vegan Collagen, to form a fully integrated health ecosystem.

JBA AI is gaining traction and beginning to expand into pharmacies, corporate wellness programs, public health campaigns and family clinics across the United States, Vietnam and other countries. The platform's franchise-ready model supports rapid local deployment while keeping costs low and impact high. It is designed to help people monitor their health in real time, without needles, wearable devices or clinic visits.

JBA AI is currently welcoming franchisees, affiliates and institutional partners to help bring this technology to communities around the world. Whether at home or in partnership with healthcare providers, JBA AI is redefining preventive care through data-driven technology and a compassionate approach to well-being.

Contact Information:
Catherine Marino, Marketing Manager
[email protected]
(+1) 714 903 1009

SOURCE: AQP One, Inc. press release
