
Avoiding greenwashing, 401(k) shake-up, magical island destinations: Catch up on the day's stories
Here's what else you might have missed during your busy day.
Products like boxed water, eco-friendly sneakers or soap bottles decorated with plants are touted as 'green,' claiming they minimize environmental damage. But companies could actually be engaging in a marketing strategy known as greenwashing. Here's how to spot it and avoid it.
OpenAI has launched GPT-5, which will make ChatGPT better at writing, coding and answering health-related questions. The company's CEO says the new model is another step toward artificial intelligence matching human-level thinking.
The Mexican state of Oaxaca is accusing a footwear company of cultural appropriation after it debuted a new shoe that resembles a traditional handmade sandal. 'This is not just a design, it is about the culture, history and identity of an original (people) and we will not allow it to be treated as merchandise,' the state's governor said.
President Donald Trump's latest executive order could make it easier for workplace retirement plans to offer alternative investment options to employees. But given all the due diligence fiduciaries will have to do, don't expect to see any changes soon — or at all.
Idyllic island vacations have always captured travelers' imaginations, but some of the most famous spots have become expensive and overcrowded. From Wisconsin to Alabama, here are some of America's most underrated islands perfect for your next getaway.
GET '5 THINGS' IN YOUR INBOX
If your day doesn't start until you're up to speed on the latest headlines, then let us introduce you to your new favorite morning fix. Sign up here for the '5 Things' newsletter.
🧯 Underground emergency: A fire beneath a PATH commuter train in New Jersey filled the car with smoke, forcing passengers to evacuate. Thirteen people were treated for smoke inhalation, and nine were hospitalized, according to authorities.
Israel intends to take control of all of Gaza, Netanyahu says as security cabinet meets
FBI firing senior officials at odds with Trump administration
Kelly Clarkson's ex-husband Brandon Blackstock dead at 48 after cancer battle
✈️ Chaotic skies: As Earth continues to heat up, turbulence on flights seems to be increasing. An analysis of over 10,000 routes revealed that bumpy air does follow a pattern, and many of the rockiest flight paths are in South America and Asia.
🦁 What has a zoo in northern Denmark asked people to donate to help feed its predators?
A. Fresh fruit
B. Hamburger patties
C. Canned vegetables
D. Unwanted pets
⬇️ Scroll down for the answer.
👋 We'll see you tomorrow.
🧠 Quiz answer: D. The Aalborg Zoo wants to imitate the animals' natural food chains by feeding them people's unwanted guinea pigs, rabbits, chickens and even small horses.
📧 Check out all of CNN's newsletters.
Today's edition of 5 Things PM was edited and produced by CNN's Kimberly Richardson and Chris Good.

Related Articles


Tom's Guide
22 minutes ago
ChatGPT-5 just got 4 new personalities — here's how to use them (and why you should)
With the launch of OpenAI's newest model, the company has introduced four distinct personality modes for ChatGPT-5. As the company's most advanced large language model to date, it delivers major upgrades in reasoning, memory and multimodal capabilities. It's better at sustaining complex conversations, understanding context across chats and producing more accurate and creative responses. Now, during text-based conversations, users can customize how the chatbot sounds and responds. It's similar to custom GPTs, but these modes were created by OpenAI to give users a new way to tailor the assistant's tone and demeanor with a single click: a dry-witted Cynic, an efficient Robot, a warm Listener or an enthusiastic Nerd.

These personality styles are available to users running GPT-5 in text mode and can be changed at any time during a chat. Each one is distinct, yet helpful. Here's how each mode behaves:

Cynic: Dry, sarcastic and unfiltered. Offers practical advice with a side of side-eye. Great for humor or cutting through fluff.
Robot: Emotionless, direct and to the point. No extra words — just clean, accurate responses.
Listener: Calm, warm and empathetic. Reflects your thoughts gently and adds emotional nuance to its replies.
Nerd: Curious, excitable and deeply knowledgeable. Explains things thoroughly and loves sharing fun facts.

Each personality mode brings a different tone to your conversations and can even shift how you brainstorm, write or learn with AI. Activating these modes is easy, and the personality will apply to your current chat; you can switch or turn off the personality mode at any time.

Here's how each personality mode answers the same prompt: 'Why is the sky blue?'

Cynic: Oh, you mean besides the fact that Mother Nature likes to mess with us? Fine. Sunlight hits the atmosphere, the shorter blue wavelengths scatter more than the other colors, so the sky looks blue to us simple humans. You'll get your nice orange or red at sunset when the light's taking the scenic route. Happy now?

Robot: The sky appears blue due to Rayleigh scattering. When sunlight enters Earth's atmosphere, shorter wavelengths (blue) scatter more efficiently than longer wavelengths (red). This selective scattering causes the human eye to perceive the sky as blue during daylight hours.

Listener: That's such a great question! It's actually because sunlight is made up of many colors, and when it passes through our atmosphere, the tiny air particles scatter the shorter blue wavelengths more than the others. So when we look up, our eyes catch mostly that soft, blue light. It's kind of peaceful when you think about it, right?

Nerd: Oh, you've unlocked one of my favorite topics! The blue sky is thanks to Rayleigh scattering, which is inversely proportional to the fourth power of wavelength. That means shorter wavelengths like blue and violet scatter much more than red or yellow. Our eyes are more sensitive to blue than violet, and some violet is absorbed by the upper atmosphere, so we see a rich blue. Fun fact: the same physics makes Mars' sunsets blue and ours red!

Switching between ChatGPT's personalities is fun, but it can also seriously boost your productivity and creativity. Each mode offers a different lens for thinking and communicating. Use Cynic when you want to cut through the noise. It's good for brainstorming hot takes or injecting humor into dry topics.
When you want to keep things efficient, such as when you're doing technical writing or troubleshooting, try using Robot. It's also a useful companion when coding. Listener adds empathy, which can be helpful when you're doing some personal writing or mental check-ins. It could also be useful for writing to customers if you run a business. Nerd is a useful personality when you want to make learning fun. The Nerd makes explanations of complex topics much more fun; this one is great for kids.

Whether you're writing an email, stuck on a project or just want to hear something explained with personality, these modes can shift the vibe and help you unlock new creative angles, all without switching tools. These new personality styles give ChatGPT-5 a more human-like edge and give you more control. As in the example above, you'll see that they all respond differently. This is an opportunity to choose how your AI sounds, thinks and helps, instead of the one-size-fits-all assistant that we got with GPT-4. Try them all. You might be surprised which one becomes your favorite.
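The personality modes above are a built-in, one-click ChatGPT feature, but if you call the model programmatically there is no toggle; a system prompt is one rough way to approximate the same effect. Here is a minimal sketch using the official openai Python package. The persona descriptions are paraphrased from the article, and the "gpt-5" model name is a placeholder assumption, not a confirmed API identifier:

```python
# Approximate ChatGPT's personality modes via a system prompt.
# Assumptions: the `openai` package (v1+) is installed, OPENAI_API_KEY
# is set, and "gpt-5" is a placeholder model name.
from openai import OpenAI

client = OpenAI()

PERSONAS = {
    "cynic": "You are dry, sarcastic and unfiltered. Give practical advice with a side of side-eye.",
    "robot": "You are emotionless, direct and to the point. No extra words; just clean, accurate answers.",
    "listener": "You are calm, warm and empathetic. Reflect the user's thoughts gently.",
    "nerd": "You are curious, excitable and deeply knowledgeable. Explain thoroughly and share fun facts.",
}

def ask(persona: str, prompt: str) -> str:
    """Send `prompt` with the chosen persona attached as a system message."""
    response = client.chat.completions.create(
        model="gpt-5",  # placeholder; use whatever model your account exposes
        messages=[
            {"role": "system", "content": PERSONAS[persona]},
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content

print(ask("nerd", "Why is the sky blue?"))
```

Unlike the in-app modes, a system prompt has to be resent with each conversation, but it gives you a similar kind of control over tone.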


Forbes
23 minutes ago
Microsoft Sued For Killing Windows 10—All Users Must Act Now
Microsoft knows 'many millions of users will not buy new devices or pay for extended support' when Windows 10 goes end of life in October, a new lawsuit alleges. 'These users,' it claims, 'will be at a heightened risk of a cyberattack or other data security incident, a reality of which Microsoft is well aware.'

The lawsuit, filed in California by Lawrence Klein, the owner of two Windows 10 laptops set to become obsolete in eight weeks, 'seeks injunctive relief requiring Microsoft to continue providing support for Windows 10 without additional fees or conditions until the number of devices running the operating system falls below a reasonable threshold.'

Around 45% of all Windows users are still on the soon-to-be-obsolete version of the OS and must now act to ensure their PCs are safe from attack. That number had been dropping, although it has reversed since Microsoft's decision to offer varying support extensions. That means some 700 million users will be affected come October 14.

Klein says Microsoft decided to kill the older OS when 'Windows 10 users represented more than half of the Windows operating system (OS) market share.' He also references the 240 million PCs that cannot upgrade, 'forcing' users to 'buy new devices capable of running Windows 11 or pay unanticipated sums for extended support.'

Upgrade costs aside, the security risks are clear. Microsoft's 'long-term business strategy,' Klein says, 'will have the effect of jeopardizing data security not only of Microsoft's customers but also of persons who may not use Microsoft's products at all.'

Windows 10 users can now extend support by paying between $30 and $60, or for free subject to certain parameters. That support extension is available to all Windows 10 users, whether or not their PCs meet the hardware requirements for Windows 11. Arguably, a better solution would be to extend Windows 10 support for free for PCs that can't upgrade, while mandating the upgrade for those that can.

This lawsuit is the latest twist in the winding road Windows 10 users have followed for the last year. Klein claims Microsoft's primary intent in killing Windows 10 is 'to force its customers to purchase new devices optimized to run Microsoft's suite of generative AI software such as Copilot, which comes bundled with Windows 11 by default.' This approach, Klein's lawsuit says, has the 'inevitable effect of decreasing trade in generative AI products of Microsoft's competitors, increasing the barriers to entry in the generative AI market, and dampening innovation and consumer choice.'

Klein wants Windows 10 to be supported until less than 10% of the Windows user base is using that version of the OS. That means more than 600 million more PCs upgrading to Windows 11, which will take considerable time.

I have approached Microsoft for a response to the lawsuit.
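Those scale claims are easy to sanity-check. Here is a rough back-of-the-envelope in Python, taking the article's figures (roughly 45% market share and roughly 700 million affected devices) at face value; both inputs are approximations rather than official Microsoft numbers, so treat the output as indicative only:

```python
# Rough sanity check of the user-base figures cited above.
# Inputs are the article's approximations, not official counts.
win10_users = 700e6   # ~700 million PCs still on Windows 10
win10_share = 0.45    # ~45% of all Windows devices

total_windows = win10_users / win10_share   # implied total Windows install base
threshold = 0.10 * total_windows            # Klein's proposed 10% cutoff
must_upgrade = win10_users - threshold      # PCs that would have to leave Windows 10

print(f"Implied Windows install base: {total_windows / 1e9:.2f} billion")
print(f"10% threshold: {threshold / 1e6:.0f} million devices")
print(f"Upgrades needed to reach it: {must_upgrade / 1e6:.0f} million")
```

Depending on which install-base estimate you plug in, the required migration lands in the 500-600 million device range either way, which is why it will take considerable time.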


Forbes
23 minutes ago
ChatGPT As Your Bedside Companion: Can It Deliver Compassion, Commitment, And Care?
During the GPT-5 launch this week, Sam Altman, CEO of OpenAI, invited a cancer patient and her husband to the stage. She shared how, after receiving her biopsy report, she turned to ChatGPT for help. The AI instantly decoded the dense medical terminology, interpreted the findings and outlined possible next steps. That moment of clarity gave her a renewed sense of control over her care. Altman noted that health is one of the top reasons consumers use ChatGPT, saying it 'empowers you to be more in control of your healthcare journey.'

Around the world, patients are turning to AI chatbots like ChatGPT and Claude to better understand their diagnoses and take a more active role in managing their health. In hospitals, both patients and clinicians sometimes use these AI tools informally to verify information. At medical conferences, some healthcare professionals admit to carrying a 'second phone' dedicated solely to AI queries. Without accessing any private patient data, they use it to validate their assessments, much like patients seeking a digital 'second opinion' alongside their physician's advice. Even during leisure activities like hiking or camping, parents often rely on AI chatbots like ChatGPT or Claude for quick guidance on everyday concerns such as treating insect bites or skin reactions in their children.

This raises an important question:

Can AI Companions Like ChatGPT, Claude, and Others Offer the Same Promise, Comfort, Commitment, and Care as Some Humans?

As AI tools become more integrated into patient management, their potential to provide emotional support alongside clinical care is rapidly evolving. These chatbots can be especially helpful in alleviating anxiety caused by uncertainty, whether it's about a diagnosis, a prognosis or simply reassurance regarding potential next steps in medical or personal decisions. Given the ongoing burden of disease management on patients, advanced AI companions like ChatGPT and Claude can play an important role by providing timely, 24/7 reassurance, clear guidance and emotional support. Notably, some studies suggest that AI responses can be perceived as even more compassionate and reassuring than those from humans.

Loneliness is another pervasive issue in healthcare. Emerging research suggests that social chatbots can reduce loneliness and social anxiety, underscoring their potential as complementary tools in mental health care. These advanced AI models help bridge gaps in information access, emotional reassurance and patient engagement, offering clear answers, confidence, comfort and a digital second opinion, particularly valuable when human resources are limited.

Mustafa Suleyman, CEO of Microsoft AI, has articulated a vision for AI companions that evolve over time and transform our lives by providing calm and comfort. He describes an AI 'companion that sees what you see online and hears what you hear, personalized to you. Imagine the overload you carry quietly, subtly diminishing. Imagine clarity. Imagine calm.'

While there are many reasons AI is increasingly used in healthcare, a key question remains:

Why Are Healthcare Stakeholders Increasingly Turning to AI?

Healthcare providers are increasingly adopting AI companions because they fill critical gaps in care delivery. Their constant availability and scalability enhance patient experience and outcomes by offering emotional support, cognitive clarity and trusted advice whenever patients need it most.
While AI companions are not new, today's technology delivers measurable benefits in patient care. For example, Woebot, an AI mental health chatbot, demonstrated reductions in anxiety and depression symptoms within just two weeks. OpenAI's ongoing investment in HealthBench to promote health and well-being further demonstrates its promise, commitment and potential to help even more patients. These advances illustrate how AI tools can effectively complement traditional healthcare by improving patient well-being through consistent reassurance and engagement.

So, what's holding back wider reliance on chatbots?

The Hindrance: Why We Can't Fully Rely on AI Chatbot Companions

Despite rapid advancements, AI companions are far from flawless, especially in healthcare, where the margin for error is razor thin. Large language models (LLMs) like ChatGPT and Claude are trained on vast datasets that may harbor hidden biases, potentially misleading vulnerable patient populations. Even with impressive capabilities, ChatGPT can still hallucinate or provide factually incorrect information, posing real risks if patients substitute AI guidance for professional medical advice. While future versions may improve reliability, current models are not suited for unsupervised clinical use.

Sometimes, AI-generated recommendations may conflict with physicians' advice, which can undermine trust and disrupt the patient-clinician relationship. There is also a risk of patients forming deep emotional bonds with AI, leading to over-dependence and blurred boundaries between digital and human interaction. As LinkedIn cofounder Reid Hoffman put it in Business Insider, 'I don't think any AI tool today is capable of being a friend. And I think if it's pretending to be a friend, you're actually harming the person in so doing.' For now, AI companions should be regarded as valuable complements to human expertise, empathy and accountability, not replacements.

A Balanced, Safe Framework: Maximizing Benefit, Minimizing Risk

To harness AI companions' full potential while minimizing risks, a robust framework is essential. This begins with data transparency and governance: models must be trained on inclusive, high-quality datasets designed to reduce demographic bias and errors. Clinical alignment is critical; AI systems should be trained on evidence-based protocols and guidelines, with a clear distinction between educational information and personalized medical advice. Reliability and ethical safeguards are vital, including break prompts during extended interactions, guidance directing users to seek human support when needed, and transparent communication about AI's limitations. Above all, AI should complement human clinicians, acting as a navigator or translator to encourage and facilitate open dialogue between patients and their healthcare providers.

Executive Call to Action

In today's digital age, patients inevitably turn to the internet, and increasingly to AI chatbots like ChatGPT and Claude, for answers and reassurance. Attempts to restrict this behavior are neither practical nor beneficial. Executive physician advisors and healthcare leaders are therefore responsible for embracing this reality by providing structured, transparent and integrated pathways that guide patients in using these powerful tools wisely. It is critical that healthcare systems are equipped with frameworks ensuring AI complements clinical care rather than confuses or replaces it.
Where AI capabilities fall short, these gaps must be bridged with human expertise and ethical oversight. Innovation should never come at the expense of patient safety, trust, or quality of care. By proactively shaping AI deployment in healthcare, stakeholders can empower patients with reliable information, foster meaningful clinician-patient dialogue, and ultimately improve outcomes in this new era of AI-driven medicine.
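For readers wondering what the safeguards in the framework above (break prompts, escalation to human support, transparency about limitations) might look like at the interface layer, here is a toy sketch. The `generate_reply` function, the keyword list and the session limit are all hypothetical illustrations, not clinical guidance or any vendor's actual API:

```python
# Toy sketch of the conversational safeguards described above.
# `generate_reply` is a hypothetical chat backend; the keywords and
# thresholds are illustrative assumptions, not clinical guidance.
import time

ESCALATION_KEYWORDS = {"chest pain", "suicidal", "overdose", "can't breathe"}
SESSION_LIMIT_SECONDS = 30 * 60  # suggest a break after 30 minutes
DISCLAIMER = ("I'm an AI assistant, not a clinician. "
              "Please confirm anything important with your care team.")

def generate_reply(message: str) -> str:
    """Hypothetical stand-in for a real chat model call."""
    return f"(model reply to: {message!r})"

def guarded_reply(message: str, session_start: float) -> str:
    # 1. Escalate to humans when high-risk language appears.
    if any(kw in message.lower() for kw in ESCALATION_KEYWORDS):
        return ("This sounds urgent. Please contact your doctor or local "
                "emergency services right away rather than relying on me.")
    # 2. Nudge a break during extended interactions.
    if time.time() - session_start > SESSION_LIMIT_SECONDS:
        return "We've been chatting a while. Consider taking a break and coming back."
    # 3. Otherwise answer, always with a transparency disclaimer attached.
    return generate_reply(message) + "\n\n" + DISCLAIMER

session_start = time.time()
print(guarded_reply("What does my biopsy report mean?", session_start))
```

None of this replaces clinical oversight; it simply shows that the complement-not-replacement posture described above can be enforced mechanically at the point of interaction.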