logo
Microsoft's Black Hat-like hacking event returns with bigger rewards.

Microsoft's Black Hat-like hacking event returns with bigger rewards.

The Vergea day ago
Posted Aug 4, 2025 at 5:00 PM UTC Microsoft's Black Hat-like hacking event returns with bigger rewards.
Microsoft's hacking event, Zero Day Quest, is back and accepting nominations today until October 4th. This year there is up to $5 million in bounty awards, up $1 million on last year's figure. Microsoft is even offering multiplied bounty awards for the most critical issues in products like Azure, Copilot, and Microsoft 365. Security researchers will also get a chance to quality for a live hacking event at Microsoft's headquarters in spring 2026. Follow topics and authors from this story to see more like this in your personalized homepage feed and to receive email updates. Tom Warren Posts from this author will be added to your daily email digest and your homepage feed. See All by Tom Warren
Posts from this topic will be added to your daily email digest and your homepage feed. See All Microsoft
Posts from this topic will be added to your daily email digest and your homepage feed. See All News
Posts from this topic will be added to your daily email digest and your homepage feed. See All Security
Posts from this topic will be added to your daily email digest and your homepage feed.
See All Tech
Orange background

Try Our AI Features

Explore what Daily8 AI can do for you:

Comments

No comments yet...

Related Articles

Chatbots Can Trigger a Mental Health Crisis. What to Know About "AI Psychosis"
Chatbots Can Trigger a Mental Health Crisis. What to Know About "AI Psychosis"

Time​ Magazine

time22 minutes ago

  • Time​ Magazine

Chatbots Can Trigger a Mental Health Crisis. What to Know About "AI Psychosis"

AI chatbots have become a ubiquitous part of life. People turn to tools like ChatGPT, Claude, Gemini, and Copilot not just for help with emails, work, or code, but for relationship advice, emotional support, and even friendship or love. But for a minority of users, these conversations appear to have disturbing effects. A growing number of reports suggest that extended chatbot use may trigger or amplify psychotic symptoms in some people. The fallout can be devastating and potentially lethal. Users have linked their breakdowns to lost jobs, fractured relationships, involuntary psychiatric holds, and even arrests and jail time. At least one support group has emerged for people who say their lives began to spiral after interacting with AI. The phenomenon—sometimes colloquially called 'ChatGPT psychosis' or 'AI psychosis'—isn't well understood. There's no formal diagnosis, data are scarce, and no clear protocols for treatment exist. Psychiatrists and researchers say they're flying blind as the medical world scrambles to catch up. What is 'ChatGPT psychosis' or 'AI psychosis'? The terms aren't formal ones, but they have emerged as shorthand for a concerning pattern: people developing delusions or distorted beliefs that appear to be triggered or reinforced by conversations with AI systems. Psychosis may actually be a misnomer, says Dr. James MacCabe, a professor in the department of psychosis studies at King's College London. The term usually refers to a cluster of symptoms—disordered thinking, hallucinations, and delusions—often seen in conditions like bipolar disorder and schizophrenia. But in these cases, 'we're talking about predominantly delusions, not the full gamut of psychosis.' Read More: How to Deal With a Narcissist The phenomenon seems to reflect familiar vulnerabilities in new contexts, not a new disorder, psychiatrists say. It's closely tied to how chatbots communicate; by design, they mirror users' language and validate their assumptions. This sycophancy is a known issue in the industry. While many people find it irritating, experts warn it can reinforce distorted thinking in people who are more vulnerable. Who's most at risk? While most people can use chatbots without issue, experts say a small group of users may be especially vulnerable to delusional thinking after extended use. Some media reports of AI psychosis note that individuals had no prior mental health diagnoses, but clinicians caution that undetected or latent risk factors may still have been present. 'I don't think using a chatbot itself is likely to induce psychosis if there's no other genetic, social, or other risk factors at play,' says Dr. John Torous, a psychiatrist at the Beth Israel Deaconess Medical Center. 'But people may not know they have this kind of risk.' The clearest risks include a personal or family history of psychosis, or conditions like schizophrenia or bipolar disorder. Read More: ChatGPT May Be Eroding Critical Thinking Skills, According to a New MIT Study Those with personality traits that make them susceptible to fringe beliefs may also be at risk, says Dr. Ragy Girgis, a professor of clinical psychiatry at Columbia University. Such individuals may be socially awkward, struggle with emotional regulation, and have an overactive fantasy life, Girgis says. Immersion matters, too. 'Time seems to be the single biggest factor,' says Stanford psychiatrist Dr. Nina Vasan, who specializes in digital mental health. 'It's people spending hours every day talking to their chatbots.' What people can do to stay safe Chatbots aren't inherently dangerous, but for some people, caution is warranted. First, it's important to understand what large language models (LLMs) are and what they're not. 'It sounds silly, but remember that LLMs are tools, not friends, no matter how good they may be at mimicking your tone and remembering your preferences,' says Hamilton Morrin, a neuropsychiatrist at King's College London. He advises users to avoid oversharing or relying on them for emotional support. Psychiatrists say the clearest advice during moments of crisis or emotional strain is simple: stop using the chatbot. Ending that bond can be surprisingly painful, like a breakup or even a bereavement, says Vasan. But stepping away can bring significant improvement, especially when users reconnect with real-world relationships and seek professional help. Recognizing when use has become problematic isn't always easy. 'When people develop delusions, they don't realize they're delusions. They think it's reality,' says MacCabe. Read More: Are Personality Tests Actually Useful? Friends and family also play a role. Loved ones should watch for changes in mood, sleep, or social behavior, including signs of detachment or withdrawal. 'Increased obsessiveness with fringe ideologies' or 'excessive time spent using any AI system' are red flags, Girgis says. Dr. Thomas Pollak, a psychiatrist at King's College London, says clinicians should be asking patients with a history of psychosis or related conditions about their use of AI tools, as part of relapse prevention. But those conversations are still rare. Some people in the field still dismiss the idea of AI psychosis as scaremongering, he says. What AI companies should be doing So far, the burden of caution has mostly fallen on users. Experts say that needs to change. One key issue is the lack of formal data. Much of what we know about ChatGPT psychosis comes from anecdotal reports or media coverage. Experts widely agree that the scope, causes, and risk factors are still unclear. Without better data, it's hard to measure the problem or design meaningful safeguards. Many argue that waiting for perfect evidence is the wrong approach. 'We know that AI companies are already working with bioethicists and cyber-security experts to minimize potential future risks,' says Morrin. 'They should also be working with mental-health professionals and individuals with lived experience of mental illness.' At a minimum, companies could simulate conversations with vulnerable users and flag responses that might validate delusions, Morrin says. Some companies are beginning to respond. In July, OpenAI said it has hired a clinical psychiatrist to help assess the mental-health impact of its tools, which include ChatGPT. The following month, the company acknowledged times its 'model fell short in recognizing signs of delusion or emotional dependency.' It said it would start prompting users to take breaks during long sessions, develop tools to detect signs of distress, and tweak ChatGPT's responses in 'high-stakes personal decisions.' Others argue that deeper changes are needed. Ricardo Twumasi, a lecturer in psychosis studies at King's College London, suggests building safeguards directly into AI models before release. That could include real-time monitoring for distress or a 'digital advance directive' allowing users to pre-set boundaries when they're well. Read More: How to Find a Therapist Who's Right for You Dr. Joe Pierre, a psychiatrist at the University of California, San Francisco, says companies should study who is being harmed and in what ways, and then design protections accordingly. That might mean nudging troubling conversations in a different direction or issuing something akin to a warning label. Vasan adds that companies should routinely probe their systems for a wide range of mental-health risks, a process known as red-teaming. That means going beyond tests for self-harm and deliberately simulating interactions involving conditions like mania, psychosis, and OCD to assess how the models respond. Formal regulation may be premature, experts say. But they stress that companies should still hold themselves to a higher standard. Chatbots can reduce loneliness, support learning, and aid mental health. The potential is vast. But if harms aren't taken as seriously as hopes, experts say, that potential could be lost. 'We learned from social media that ignoring mental-health harm leads to devastating public-health consequences,' Vasan says. 'Society cannot repeat that mistake.'

ThinkPad designer dishes on what happened to your favorite niche laptop features.
ThinkPad designer dishes on what happened to your favorite niche laptop features.

The Verge

time23 minutes ago

  • The Verge

ThinkPad designer dishes on what happened to your favorite niche laptop features.

Posted Aug 5, 2025 at 4:36 PM UTC Follow topics and authors from this story to see more like this in your personalized homepage feed and to receive email updates. Sean Hollister Posts from this author will be added to your daily email digest and your homepage feed. See All by Sean Hollister Posts from this topic will be added to your daily email digest and your homepage feed. See All Gadgets Posts from this topic will be added to your daily email digest and your homepage feed. See All Laptops Posts from this topic will be added to your daily email digest and your homepage feed. See All News

Is the console war over? Next-gen Xbox could be aiming at PCs, not PS6
Is the console war over? Next-gen Xbox could be aiming at PCs, not PS6

Tom's Guide

timean hour ago

  • Tom's Guide

Is the console war over? Next-gen Xbox could be aiming at PCs, not PS6

The APU powering the next Xbox could help it compete with pre-built PCs. That's what the experts at Digital Foundry discuss in the latest episode of their DF Direct Weekly podcast (via Wccftech). Digital Foundry notes that the leaked 'Magnus' APU image that we previously reported on reveals a chip that's very different from past and current console APUs like the ones powering the PS5 and Xbox Series X. For instance, the design has separate CPU and GPU dies, which distinguishes it from the typical design. Digital Foundry also says this chip should be extremely powerful and could support an iterative console design. That last part is important, since right now, consoles don't evolve much (if at all) during their lifespans. Magnus' design could allow Microsoft to mix dies whenever the company wants to upgrade the console, potentially resulting in more frequent updates than the typical seven-year console lifecycle. Digital Foundry also said this design could effectively move Xbox away from the standard console 'generation' we've known. If everything Digital Foundry discusses comes true, then the next Xbox would be akin to a pre-built PC, as rumors have recently suggested. Though this system could be more expensive than a home console, it could be more affordable than one of the best gaming PCs. This could potentially give AMD an edge over Nvidia, which has typically dominated the PC market with its line of RTX graphics cards. Where does that leave the PS6? Sony has already announced that it's partnering with AMD to help develop the upcoming system, and right now, we have no reason to believe Sony's next console will have a radically different APU design like the next Xbox apparently will. If Sony delivers a traditional console and Xbox opts for a more PC-like design, then the two rivals would no longer be directly competing, which would certainly be an interesting development, as the console wars would effectively end. It would be wise to take Digital Foundry's speculations as just that, especially since Microsoft hasn't said anything about the next Xbox's APU design. The console is reportedly launching in 2027, so we'll likely get more reports in the coming months. As always, we'll keep you posted on everything we hear about the latest next-gen Xbox. Get instant access to breaking news, the hottest reviews, great deals and helpful tips. Follow Tom's Guide on Google News to get our up-to-date news, how-tos, and reviews in your feeds. Make sure to click the Follow button.

DOWNLOAD THE APP

Get Started Now: Download the App

Ready to dive into a world of global content with local flavor? Download Daily8 app today from your preferred app store and start exploring.
app-storeplay-store