Microsoft, Antigen Security Partner to Cut Cyber Insurance Costs

Yahoo · 28-06-2025
Microsoft Corporation (NASDAQ:MSFT) is one of the best software infrastructure stocks to invest in. On June 26, Antigen Security, LLC, announced a new partner program with Microsoft, establishing Antigen as one of Microsoft's Top 150 Managed Partners. The collaboration allows customers utilizing Microsoft 365 E3/E5 Security and Microsoft Azure Security to achieve average savings of 20% to 60% on their cyber insurance premiums.
These savings are realized when Microsoft's security suite is integrated into a risk-informed and standards-aligned cyber risk management strategy. The partnership uses Antigen's expertise in cyber liability insurance underwriting and claims research alongside Microsoft's advanced security products. Antigen's risk management approach adheres to industry-leading frameworks and ensures a coverage strategy that promotes 'Resilience by Design'.
A development team working together to create the next version of Windows.
As a Top 150 Managed Partner, Antigen will provide specialized training, marketing support, and incentives to various Microsoft channel partners. Antigen is also introducing four dedicated landing pages tailored for different audiences: New Microsoft Partners, Existing Microsoft Partners, New Microsoft Customers, and Existing Microsoft Customers.
Microsoft Corporation (NASDAQ:MSFT) develops and supports software, services, devices, and solutions.
While we acknowledge the potential of MSFT as an investment, we believe certain AI stocks offer greater upside potential and carry less downside risk. If you're looking for an extremely undervalued AI stock that also stands to benefit significantly from Trump-era tariffs and the onshoring trend, see our free report on the .
Disclosure: None. This article is originally published at Insider Monkey.



Related Articles

TrustedTech Unveils Rebrand and Evolved Focus

Los Angeles Times · 2 minutes ago

Rebrand aims to reflect TrustedTech's evolution from licensing provider to full-service technology partner, and new status as a Microsoft Managed Partner

Irvine-based TrustedTech, a Microsoft Managed Partner and provider of Microsoft cloud solutions and IT modernization services, has unveiled a new brand that marks its evolution into an innovation-driven technology partner focused on AI, IT infrastructure and Modern Work digital transformation. As companies race to modernize their IT environments, integrate AI responsibly and maximize Microsoft cloud investments, TrustedTech has redefined its role: from a licensing provider to a full-scale technology partner. The rebrand is described as a commitment to the company's primary focus on helping customers cut through complexity, adopt intelligent tools like Microsoft Copilot and build future-ready infrastructure with confidence.

'This evolution isn't about a new name and logo, it's about rising to meet the moment,' said Julian Hamood, founder of TrustedTech. 'Every part of our rebrand was to allow our current and future customers the ability to access sophisticated technologies while minimizing time-to-value in today's fast-moving technology landscape. It's not just a better version of who we were but the best form of who we will be for businesses who need a true partner to help them navigate AI adoption, modernize legacy systems and make smarter, faster technology decisions.'

TrustedTech's new phase brings expanded offerings that include Microsoft Copilot implementation, Azure infrastructure tenant migrations, Microsoft 365 optimization, security hardening, tailored licensing advisory and a more comprehensive list of professional services and technical break/fix support, especially for those going through mergers & acquisitions.

'Our customers are navigating rapid technology shifts that demand smarter strategies and reliable support,' added Hamood. 'We're committed to empowering IT leaders to confidently adopt AI, optimize cloud environments and modernize their infrastructure, delivering practical solutions that drive real business outcomes. TrustedTech is here to turn complexity into clarity, so our clients can focus on growth and innovation.'

Information sourced from TrustedTech. To learn more, contact

The Real Risks of Turning to AI for Therapy

WebMD · 2 minutes ago

Aug. 20, 2025 — Whenever Luke W Russell needs to work through something, they turn to ChatGPT. (Luke uses they/them pronouns.) 'I've wept as I've navigated things,' said the Indianapolis filmmaker, who uses the chatbot to pick apart intrusive thoughts or navigate traumatic memories. 'I've had numerous times when what ChatGPT is saying to me is so real, so powerful, and I feel so deeply seen.'

Russell's experience reflects a broader, growing reality: Many people are turning to chatbots for mental health support — for everything from managing anxiety and processing grief to coping with work conflicts and defusing marital spats. More than half of adults ages 18-54 — and a quarter of adults 55 and up — say they would be comfortable talking with an AI chatbot about their mental health, according to a 2025 survey by the Harris Poll and the American Psychological Association (APA).

The catch: OpenAI's ChatGPT and other chatbots — like Anthropic's Claude and Google's Gemini — are not designed for this. Even AI products promoted as emotional health tools — like Replika, Wysa, Youper, and MindDoc — were not built on validated psychological methods, said psychologist C. Vaile Wright, PhD, senior director of the APA's Office of Health Care Innovation. 'I would argue that there isn't really any commercially approved, AI-assisted therapy at the moment,' said Wright. 'You've got a whole lot of chatbots where there is no research, there's no psychological science, and there are no subject matter experts.'

Critics warn that AI's potential for bias, lack of true empathy, and limited human oversight could actually endanger users' mental health, especially among vulnerable groups like children, teens, people with mental health conditions, and those experiencing suicidal thoughts. The growing concern has led to the emergence of the terms 'ChatGPT psychosis' or 'AI psychosis,' referring to the potential harmful mental health effects of interacting with AI. It's even drawing attention from lawmakers: This month, Illinois enacted restrictions on AI in mental health care, banning its use for therapy and prohibiting mental health professionals from using AI to communicate with clients or make therapeutic decisions. (Similar restrictions have already been passed in Nevada and Utah.)

But none of this is stopping people from turning to chatbots for support, especially amid clinician shortages, rising therapy costs, and inadequate mental health insurance coverage. 'People have absolutely reported that experiences with chatbots can be helpful,' said Wright.

The Draw of Chatbots for Mental Health

Data shows we're facing a massive shortage of mental health workers, especially in remote and rural areas, said psychologist Elizabeth Stade, PhD, a researcher in the Computational Psychology and Well-Being Lab at Stanford University in Stanford, CA. 'Of adults in the United States with significant mental health needs, only about half are able to access any form of treatment. With youth, that number is closer to 75%,' said Jessica Schleider, PhD, a child and adolescent psychologist at Northwestern University in Chicago. 'The provider shortage is clearly contributing to why so many folks are turning to their devices and, now increasingly, to generative AI to fill that gap.'

Unlike a therapist, a chatbot is available 24/7. 'When [people] need help the most, it is typically after hours,' said Wright, who suggested the right AI tool could potentially supplement human therapy. 'When it's 2 a.m. and you're in crisis, could this help provide some support?' Probably, she said.

Results of the first clinical trial of an AI-generative therapy chatbot showed 'significant, clinically meaningful reductions in depression, anxiety, and eating disorder symptoms' within four to eight weeks, said lead study author Michael V. Heinz, MD, a professor at Dartmouth College's Geisel School of Medicine and faculty affiliate at the Center for Technology and Behavioral Health in Lebanon, New Hampshire. The chatbot — Therabot, developed at Dartmouth — combines extensive training in evidence-based psychotherapy interventions with advanced generative AI. 'We saw high levels of user engagement — six-plus hours on average across the study,' Heinz said. Participants said using Therabot was like talking to a human therapist. But results are early, and more studies are needed, Heinz said.

Access and affordability drew Russell to ChatGPT, they said. 'I didn't set out to use ChatGPT as a therapist. I quit therapy in January due to income dropping. I was already using ChatGPT on the regular for work, and then I started using it for personal idea exploration. ... I've never had a therapist who could move as fast as ChatGPT and ignore miscellaneous things,' they said.

Perhaps one of the most appealing aspects is that chatbots don't judge. 'People are reluctant to be judged, and so they are often reluctant to disclose symptoms,' said Jonathan Gratch, PhD, professor of computer science and psychology at the University of Southern California, who has researched the topic. One of his studies found that military veterans were more likely to share PTSD symptoms with a virtual chatbot than in a survey.

When Chatbots Are Harmful

Most people don't know how AI works — they might believe it's always objective and factual, said Henry A. Willis, PhD, a psychologist and professor at the University of Maryland in College Park. But often, the data chatbots are trained on is not representative of minority groups, leading to bias and technology-mediated racism, Willis said. 'We know that Black and brown communities are not adequately reflected in the majority of large-scale mental health research studies,' Willis said. So a chatbot's clinical symptom information or treatment recommendations may not be relevant or helpful to those from minority backgrounds.

There's also an impersonal aspect. Chatbots commit what's called the ecological fallacy, said H. Andrew Schwartz, PhD, associate professor of computer science at Stony Brook University in Stony Brook, NY. They treat scattered comments like random data points, making assumptions based on group-level data that may not reflect the reality of individuals.

And who's accountable if something goes wrong? Chatbots have been linked to cases involving suggestions of violence and self-harm, including the death of a teen by suicide. Some chatbots marketed for companionship and emotional support were designed with another incentive: to make money. Wright is concerned that they may unconditionally validate patients, telling them what they want to hear so they stay on the platform — 'even if what they're telling you is actually harmful or they're validating harmful responses from the user.'

None of these conversations are bound by HIPAA regulations, either, Wright pointed out. 'So even though they may be asking for personal information or sharing your personal information, they have no legal obligation to protect it.'

The Psychological Implications of Forming Emotional Bonds With AI

In an opinion article published in April in the journal Trends in Cognitive Sciences, psychologists expressed concern about the long-term implications of forming emotional bonds with AI. Chatbots can replace users' real relationships, crowding out romantic partners, co-workers, and friends. This may mean that individuals begin to 'trust' the opinion and feedback of chatbots over real people, said Willis. 'The ongoing positive reinforcement that can happen instantly from interacting with a chatbot may begin to overshadow any reinforcement from interacting with real people,' who may not be able to communicate as quickly, he said. 'These emotional bonds may also impair people's ability to have a healthy level of skepticism and critical evaluation skills when it comes to the responses of AI chatbots.'

Gratch compared it to hunger and food. 'We're biologically wired to seek out food when we get hungry. It is the same with social relationships. If we haven't had a relationship in a while, we may feel lonely, and then that motivates us to go out and reach out to people.' But studies suggest that social interaction with a computer program, like a chatbot, can sate a person's social needs and demotivate them to go out with friends, he said. 'That may have long-term consequences for increased loneliness. For example, research has shown people who compulsively use Facebook tend to be much more lonely.'

Counseling with a therapist involves 'a natural curiosity about the individual and their experiences that AI cannot replicate,' Willis said. 'AI chatbots respond to prompts, whereas therapists can observe and ask clinical questions based on one's body language, a synthesis of their history, and other things that may not be conscious to the client — or things the client may not even be aware are important to their mental health well-being.'

The Future of AI Therapy

'I think there is going to be a future where you have really well-developed [chatbots] for addressing mental health that are scientifically driven and where they are ensuring that there are guardrails in place when somebody is in crisis. We're just not quite there yet,' said the APA's Wright. 'We may get to a place where they're even reimbursed by insurance,' she said. 'I do think increasingly we are going to see providers begin to adopt these technology tools as a way to meet their patients' needs.'

But for now, her message is clear: The chatbots are not there yet. 'Ideally, chatbot design should encourage sustained, meaningful interaction with the primary purpose of delivering evidence-based therapy,' said Dartmouth's Heinz. Until then, don't rely on them too heavily, the experts cautioned — and remember, they are not a substitute for professional help.

Forget the Pixel 10 Pro Fold. Foldables Should Look Like the Microsoft Surface Duo

CNET · 2 minutes ago

Google almost got it right with the first Pixel Fold. As Google unveils its latest foldable, the Pixel 10 Pro Fold, at this year's Made by Google event, I keep thinking about the original Pixel Fold. Released in 2023, it was far from a perfect phone. It was underpowered, had a thick inner bezel and couldn't open completely flat. Even then, it was a beautiful device with a shiny stainless steel chassis that felt substantial in the hand. Not only that, it was wide. The passport-like form factor made it squat in comparison to slab-style iPhones and Galaxys, but when opened up, it had an almost TV-like 17:9 aspect ratio.

Then, with the Pixel 9 Pro Fold the following year, Google threw out the Moleskine for Galaxy Z Fold-style safety. The Pixel 9 Pro Fold axed its predecessor's passport-like form factor for a slab that happened to fold open. Literally, the Pixel 9 Pro Fold had the same outer screen aspect ratio as the Pixel 9 Pro. Here, Google was folding (no pun intended) to industry trends, following similar form factors of the Galaxy Z Fold 6, OnePlus Open, Xiaomi's MIX Fold 4 and others. Apart from the first Pixel Fold, the only other phone to go squat and wide was the first Oppo Find N from 2021. YouTuber Marques Brownlee went so far as to call it the best folding phone, in terms of form factor. Since then, Oppo dropped the passport for a folding slab, as can be seen with the Oppo Find N5.

Why all foldables look the same

A major reason all foldable phone makers have adopted a similar form factor is app support. The outer display on the Samsung Galaxy Z Fold 7 has a more typical slab phone aspect ratio, meaning apps better conform to its standard smartphone-like screen. And when opened, loading up two apps side-by-side fits well on what's essentially a long rectangle. "That's what led us to say, 'Hey, we need to make this a phone first in terms of design,'" Claude Zellweger, Google's director of industrial design, said in a 2024 blog post about the Pixel 9 Pro Fold.

The other issue is Android's limited app support for tablets. The iPad, which has strong support from Apple, sells well and gets plenty of app updates to support its wider form factor. The same can't be said for Android tablets. Many Android apps aren't optimized for wider tablet aspect ratios. It's a recurring complaint from fans and tech columnists. It doesn't help that Google hasn't put in a strong effort in the tablet space either, with the company axing its Pixel Tablet line after just one attempt. So, when you tried to run Reddit or some other popular third-party apps on the Pixel Fold's wide screen, the app would have black bars on both sides, essentially running in the same aspect ratio as a typical phone. This problem persists on taller foldables as well; it just isn't as prominent.

Who'll make a foldable that gets it right?

Despite the anemic support for tablet apps, I'd love to see a foldable phone with the measurements of the Microsoft Surface Duo. Released in 2020, it was a half-step toward what foldable phones are today. Instead of having a foldable inner display, it was two screens bisected by a hinge. It really felt like holding a metal and glass Moleskine notebook. The two inner displays on the Surface Duo had a 4:3 aspect ratio, which is also square-like. This is largely due to its chunky inner bezels. Foldable screen tech has come a long way since. Assuming Microsoft were to make a true foldable in 2025 with minimal bezels, the inner display could have a wide 24:9 aspect ratio. It would be a beastly device for watching YouTube, Netflix or playing games. And rotating it would feed my eyes an endless scroll of Instagram Reels brainrot.

The likelihood of any foldable phone-maker bisecting a 16:9 or 21:9 screen is slim. Honestly, if any phone-maker could do something so daring, it'd be Apple. Rumors have been floating for some time that Apple is working on a clamshell-like foldable for 2027. If Apple does decide to jump into the world of book-style foldables, thanks to the iPad, a wide foldable iPhone would be ready to go without the need for developers to massively overhaul their apps. Assuming Apple does do a squat and wide foldable, that move would likely prompt the rest of the smartphone market to (again, I apologize for the pun), fold.
