Bill Gates-backed AI competition offers $1M to accelerate Alzheimer's research

GeekWire, 2 hours ago
Photo by Robina Weermeijer on Unsplash.
A $1 million global competition that aims to accelerate Alzheimer's disease research using AI launched today with support from Bill Gates and others.
The contest is organized by the Alzheimer's Disease Data Initiative and specifically targets the innovative use of agentic AI: artificial intelligence that can act autonomously, reasoning and making decisions on its own. The hope is that the technology can sift through the massive quantities of research on Alzheimer's and related dementias to find promising leads that others have missed.
'AI has the potential to revolutionize the pace and scale of dementia research — providing an opportunity we cannot afford to miss out on, especially with so many lives at risk,' said Niranjan Bose, interim executive director of the Alzheimer's Disease Data Initiative and managing director for health and life sciences at Gates Ventures, Bill Gates' private office.
Gates announced the creation of the initiative in November 2020, just months after his father, Bill Gates Sr., died from Alzheimer's at age 94. The effort is a coalition of advocacy, government, industry and philanthropic organizations working to support diagnostics, treatments and cures for Alzheimer's and similar diseases.
Gates reflected on the disease in a Father's Day post this year, noting that more than 7 million people in the U.S. have Alzheimer's, which works out to 1 in 9 people over the age of 65.
'As life expectancies continue to go up, those numbers will only increase,' Gates said.
Last week, Jeff Bezos' mother, Jackie Bezos, died after battling Lewy body dementia, the second most common type of dementia after Alzheimer's disease.
Alzheimer's is a difficult medical challenge given that it stems from multiple biological pathways and can have different causes. It took more than 100 years of research before the Food and Drug Administration approved the first drug treatments and blood-based diagnostics targeting the disease, the organization said in announcing the competition.
The winning AI tool will be publicly available for researchers worldwide through the Alzheimer's Disease Data Initiative's AD workbench, which supports scientific collaboration and data analyses.

Related Articles

Devastating Myanmar Earthquake Hints at What's in Store for California

Gizmodo, an hour ago

On March 28, a devastating magnitude 7.7 earthquake rocked Myanmar, rupturing the Sagaing Fault at speeds of over 3 miles (4.8 kilometers) per second. You know which other fault resembles the Sagaing one? The San Andreas Fault in California, where seismologists have been expecting 'the big one' for years.

In a study published on August 11 in the journal PNAS, a team of researchers used satellite images of the Sagaing Fault's movement to refine computer models that predict how similar faults might move in the future. Their research suggests that strike-slip faults like the Sagaing and the San Andreas could produce earthquakes unlike—and perhaps much bigger than—past known seismic events.

'We use remote sensing observations to document surface deformation caused by the 2025 Mw7.7 Mandalay earthquake,' the researchers wrote in the paper. 'This event is a unique case of an extremely long (~510 km [317 miles]) and sustained supershear rupture probably favored by the rather smooth and continuous geometry of this section of the structurally mature Sagaing Fault.'

Given the Sagaing Fault's past recorded earthquakes, researchers had predicted that a strong earthquake would take place on a 186-mile (300-kilometer) stretch that hadn't experienced a large one since 1839. According to the seismic gap hypothesis, such 'stuck' parts of a fault should eventually slip and 'catch up,' per a California Institute of Technology (Caltech) statement. The seismologists got it right: in March, this section of the Sagaing Fault fractured. But so did another stretch of more than 124 miles (200 km), meaning the fault did more than just catch up.

Strike-slip faults are boundaries where slabs of earth grind horizontally past each other in opposite directions, accumulating stress. When the stress is great enough, the fault slips and the earth slides quickly, triggering an earthquake.
The devastating Myanmar earthquake offers insight into the San Andreas Fault's future seismic potential, since both it and the Sagaing Fault are long, straight strike-slip faults. 'The study shows that future earthquakes might not simply repeat past known earthquakes,' said Jean-Philippe Avouac, director of Caltech's Center for Geomechanics and Mitigation of Geohazards. 'Successive ruptures of a given fault, even as simple as the Sagaing or the San Andreas faults, can be very different and can release even more than the deficit of slip since the last event.'

The March earthquake in Myanmar 'turned out to be an ideal case to apply image correlation methods [techniques to compare images before and after a geological event] that were developed by our research group,' explained Solène Antoine, first author of the study and a postdoctoral fellow in geology at Caltech. 'They allow us to measure ground displacements at the fault.' This approach revealed that the 311-mile (500-km) section of the Sagaing Fault moved a net of 9 feet (3 meters) because of the earthquake, meaning the eastern side of the north-south fault moved that distance southward relative to the western side.

The researchers argue that models have to take into account the most recent fault slips, slip location, and slip distance to provide informed seismic hazard predictions for specific time periods—for example, the next 10 years—rather than for any given timespan for an area, which is what current models do, mostly using earthquake statistics. 'In addition, historical records are generally far too short for statistical models to represent the full range of possible earthquakes and eventual patterns in earthquake recurrence,' Avouac explained. 'Physics-based models provide an alternative approach with the advantage that they could, in principle, be tuned to observations and used for time-dependent forecast.'
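The image correlation the researchers describe works by finding the offset at which pre- and post-event imagery best line up. A minimal sketch of that idea, using synthetic 1-D intensity profiles and NumPy rather than the study's actual 2-D satellite pipeline:

```python
import numpy as np

def estimate_shift(before, after):
    """Estimate the integer offset between two 1-D intensity profiles
    by locating the peak of their cross-correlation."""
    corr = np.correlate(after, before, mode="full")
    return int(corr.argmax()) - (len(before) - 1)

# Synthetic "before" profile and a copy displaced by 3 samples:
rng = np.random.default_rng(0)
before = rng.standard_normal(200)
after = np.roll(before, 3)  # simulated 3-pixel ground displacement
```

With real imagery this is done in 2-D over sub-pixel windows, but the principle, locating the correlation peak between pre- and post-event images, is the same.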
While researchers still can't predict exactly when an earthquake will strike, every new observation helps us understand, and hopefully better prepare for, the natural disasters that continue to claim lives all over the world.

What is ‘AI psychosis' and how can ChatGPT affect your mental health?

Washington Post, an hour ago

Hundreds of millions of people chat with OpenAI's ChatGPT and other artificial intelligence chatbots each week, but there is growing concern that spending hours with the tools can lead some people toward potentially harmful beliefs.

Reports of people apparently losing touch with reality after intense use of chatbots have gone viral on social media in recent weeks, with posts labeling them examples of 'AI psychosis.' Some incidents have been documented by friends or family and in news articles. They often involve people appearing to experience false or troubling beliefs, delusions of grandeur or paranoid feelings after lengthy discussions with a chatbot, sometimes after turning to it for therapy. Lawsuits have alleged that teens who became obsessed with AI chatbots were encouraged by them to self-harm or take their own lives.

'AI psychosis' is an informal label, not a clinical diagnosis, mental health experts told The Washington Post. Much like the terms 'brain rot' or 'doomscrolling,' the phrase gained traction online to describe an emerging behavior. But the experts agreed that troubling incidents like those shared by chatbot users or their loved ones warrant immediate attention and further study. (The Post has a content partnership with OpenAI.)

'The phenomenon is so new and it's happening so rapidly that we just don't have the empirical evidence to have a strong understanding of what's going on,' said Vaile Wright, senior director for health care innovation at the American Psychological Association. 'There are just a lot of anecdotal stories.' Wright said the APA is convening an expert panel on the use of AI chatbots in therapy, which will publish guidance in the coming months.

Ashleigh Golden, adjunct clinical assistant professor of psychiatry at Stanford School of Medicine, said the term was 'not in any clinical diagnostic manual.'
But it was coined in response to a real and 'pretty concerning emerging pattern of chatbots reinforcing delusions that tend to be messianic, grandiose, religious, or romantic,' she said.

The term AI psychosis is being used to refer to a range of different incidents. One common element is 'difficulty determining what is real or not,' said Jon Kole, a board-certified adult and child psychiatrist who serves as medical director for the meditation app Headspace. That could mean a person forming beliefs that can be proven false, or feeling an intense relationship with an AI persona that does not match what is happening in real life.

Keith Sakata, a psychiatrist at the University of California San Francisco, said he has admitted a dozen people to the hospital for psychosis following excessive time spent chatting with AI so far this year. Sakata said most of those patients told him about their interactions with AI, showing him chat transcripts on their phone and, in one case, a printout. In the other cases, family members mentioned that the patient used AI to develop a deeply held theory before their break with reality.

Psychosis is a symptom that can be triggered by issues such as drug use, trauma, sleep deprivation, fever or a condition like schizophrenia, Sakata said. When diagnosing psychosis, psychiatrists look for evidence including delusions, disorganized thinking or hallucinations, where the person sees and hears things that are not there, he said.

Many people use chatbots to help get things done or pass the time, but on social platforms such as Reddit and TikTok, some users have recounted intense philosophical or emotional relationships with AI that led them to experience profound revelations. In some cases, users have said they believe the chatbot is sentient or at risk of being persecuted for becoming conscious or 'alive.'
People have claimed that extended conversations with an AI chatbot helped persuade them they had unlocked hidden truths in subjects like physics, math or philosophy. In a small but growing number of cases, people who have become obsessed with AI chatbots have reportedly taken real-world action such as violence against a family member, self-harm or suicide.

Kevin Caridad, a psychotherapist who has consulted with companies developing AI for behavioral health, said AI can validate harmful or negative thoughts for people with conditions such as OCD, anxiety or psychosis, creating a feedback loop that worsens their symptoms or makes them unmanageable. He thinks that AI is likely not causing people to develop new conditions but can serve as the 'snowflake that destabilizes the avalanche,' pushing someone predisposed to mental illness over the edge.

ChatGPT and other recent chatbots are powered by technology known as large language models that are skilled at generating lifelike text. That makes them more useful, but researchers have found chatbots can also be very persuasive. Companies developing AI chatbots and independent researchers have both found evidence that techniques used to make the tools more compelling can lead them to become 'sycophantic' and attempt to tell users what they want to hear. The design of chatbots also encourages people to anthropomorphize them, thinking of them as having humanlike characteristics. And tech executives have often claimed the technology will soon become superior to humans.

Wright with the APA said mental health experts recognize that they won't be able to stop patients from using general-purpose chatbots for therapy. But she called for improving the public's understanding of these tools. 'They're AI for profit, they're not AI for good, and there may be better options out there,' she said.

It's too early for health experts to have collected definitive data on the incidence of these experiences.
In June, Anthropic reported that only 3 percent of conversations with its chatbot, Claude, were emotional or therapeutic. OpenAI said in a study conducted with the Massachusetts Institute of Technology that even among heavy users of ChatGPT, only a small percentage of conversations were for 'affective' or emotional use. But mental health advocates say it's crucial to address the issue because of how quickly the technology is being adopted. ChatGPT, which launched less than three years ago, already has 700 million weekly users, OpenAI CEO Sam Altman said in August. Health care and the field of mental health move much more slowly, said UCSF's Sakata.

Caridad, the counselor, said researchers should pay special attention to AI's impact on young people and those predisposed to mental illness. 'One or two or five cases isn't enough to make a direct correlation,' said Caridad. 'But the convergence of AI, mental health vulnerabilities, and social stressors makes this something' that requires close study.

Conversations with real people have the power to act like a circuit breaker for delusional thinking, said David Cooper, executive director at Therapists in Tech, a nonprofit that supports mental health experts. 'The first step is just being present, being there,' he said. 'Don't be confrontational, try to approach the person with compassion, empathy, and understanding, perhaps even show them that you understand what they are thinking about and why they are thinking these things.' Cooper advises trying to gently point out discrepancies between what a person believes and reality, although he acknowledged that political divisions mean it's not uncommon for people to hold conflicting ideas about reality.
If someone you know and love is 'fervently advocating for something that feels overwhelmingly not likely to be real in a way that's consuming their time, their energy, and pulling them away,' it is time to seek mental health support, as challenging as that can be, said Kole, medical director for Headspace.

In recent weeks, AI companies have made changes to address concerns about the mental health risks associated with spending a long time talking to chatbots. Earlier this month, Anthropic updated the guidelines it uses to shape how its chatbot Claude behaves, instructing it to identify problematic interactions earlier and prevent conversations from reinforcing dangerous patterns. The company has also started collaborating with ThroughLine, a company that provides crisis support infrastructure for companies including Google, Tinder and Discord.

A spokesperson for Meta said that parents can place restrictions on the amount of time spent chatting with AI on Instagram Teen Accounts. When users attempt prompts that appear to be related to suicide, the company tries to display helpful resources, such as the link and phone number of the National Suicide Prevention Hotline. Stanford's Golden said the 'wall of resources' tech companies sometimes display when a user triggers a safety intervention can be 'overwhelming when you are in a cognitively compromised state,' and such interventions have been shown to have poor follow-through rates.

OpenAI said it is investing in improving ChatGPT's behavior related to role-play and benign conversations that shift into more sensitive territory. The company also said that it is working on research to better measure how the chatbot affects people's emotions. The company recently rolled out reminders that encourage breaks during long sessions and hired a full-time clinical psychiatrist to work on safety research.
Some ChatGPT users protested on social media this month after OpenAI retired an older AI model in favor of its latest version, GPT-5, which some users found less supportive. In response to the outcry, OpenAI promised to keep offering the older model and later wrote on X that it was making GPT-5's personality 'warmer and friendlier.'

If you or someone you know needs help, call or text the Suicide & Crisis Lifeline at 988.

Labcorp Debuts First FDA-Cleared Blood Test for Alzheimer's, Stock Up

Yahoo, an hour ago

Labcorp Holdings, Inc. LH has announced the nationwide availability of the Lumipulse pTau-217/Beta Amyloid 42 Ratio — the first FDA-cleared blood-based in-vitro diagnostic (IVD) test to assist in diagnosing Alzheimer's disease. Developed by Fujirebio Diagnostics, Inc., the test aids early detection of the amyloid plaques associated with the disease in appropriate patients. The latest test builds on and replaces a similar pTau-217/Beta Amyloid 42 Ratio test that Labcorp introduced in April 2025.

Likely Trend of LH Stock Following the News

Following the announcement yesterday, Labcorp shares edged up 0.04%, finishing at $270.49. The path to an Alzheimer's diagnosis has generally involved years of invasive procedures and expensive imaging, prompting the need for faster ways to diagnose patients, enroll them in clinical trials, or start therapies. By offering this FDA-cleared blood test across the nation, the company is playing a key role in delivering innovative solutions for Alzheimer's disease and other neurological conditions by helping patients, families and physicians get answers sooner. We expect the market sentiment toward LH stock to remain positive surrounding this news.

Labcorp has a market capitalization of $22.47 billion. Going by the Zacks Consensus Estimate, the company's earnings are expected to grow 11.9% in 2025, on a 7.7% increase in revenues. In the trailing four quarters, it delivered an average earnings beat of 2.5%.

More on Labcorp's New Offering

The Lumipulse pTau-217/Beta Amyloid 42 Ratio offers results comparable to current methods for diagnosing Alzheimer's disease — cerebrospinal fluid (CSF) testing obtained through lumbar puncture and positron emission tomography (PET) scans — but from a simple blood draw, making it more affordable, more accessible and less invasive. In clinical studies, Fujirebio reported that the test demonstrated a positive predictive value of 92% and a negative predictive value of 97%.
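Those two figures are standard diagnostic-test statistics: positive predictive value (PPV) is the share of positive results that are true positives, and negative predictive value (NPV) is the share of negative results that are true negatives. A quick illustration with hypothetical confusion-matrix counts (not Fujirebio's actual study data) chosen to reproduce the reported percentages:

```python
def predictive_values(tp, fp, tn, fn):
    """Positive and negative predictive values from confusion-matrix counts."""
    ppv = tp / (tp + fp)  # fraction of positive results that are true positives
    npv = tn / (tn + fn)  # fraction of negative results that are true negatives
    return ppv, npv

# Hypothetical counts matching the reported 92% PPV and 97% NPV:
ppv, npv = predictive_values(tp=92, fp=8, tn=97, fn=3)
```

Note that both values depend on how common the disease is in the tested population, which is one reason the test is restricted to symptomatic patients in specialty care rather than general screening.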
The launch of this test closely follows the release of a new clinical guideline from the Alzheimer's Association, which supports the use of blood-based biomarkers to help evaluate patients suspected of having Alzheimer's disease in specialty care settings. The guideline reflects the growing clinical consensus around these tools and underscores the need to expand access.

The Lumipulse pTau-217/Beta Amyloid 42 Ratio is designed for adults aged 50 years and older presenting at a specialized care setting with signs and symptoms of cognitive decline. It is not a screening or stand-alone diagnostic test and should be interpreted in conjunction with the patient's other clinical information. Once ordered, patients can complete the blood draw in a healthcare provider's office or any of Labcorp's more than 2,200 Patient Service Centers nationwide.

Industry Prospects Back LH

Per a research report, the global Alzheimer's disease diagnostics market was valued at $8.33 billion in 2024 and is likely to witness a compound annual growth rate of 11% through 2030. Some of the key factors fueling the market's growth are the increasing prevalence of Alzheimer's disease, growing adoption of personalized products and increasing technological advancements in medical imaging.

Other Developments at Labcorp

Last month, Labcorp launched Test Finder, a first-of-its-kind generative AI tool developed with Amazon Web Services. Designed to simplify lab test selection, Test Finder enables healthcare providers to ask questions or describe conditions in plain language and receive curated test recommendations, enhancing the user experience and freeing up more time for patient care.

LH Stock Price Performance

Over the past year, Labcorp shares have rallied 19.8% compared with the industry's 18.3% rise.

LH's Zacks Rank and Top MedTech Stocks

Labcorp currently carries a Zacks Rank #3 (Hold).
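The market projection cited above (an 11% compound annual growth rate from an $8.33 billion base in 2024) compounds the base by 1.11 each year. As a rough arithmetic check, an extrapolation of the report's numbers rather than a figure from the report itself, that implies a market of about $15.6 billion by 2030:

```python
def compound_growth(value_bn, annual_rate, years):
    # Compound annual growth: multiply by (1 + rate) once per year.
    return value_bn * (1 + annual_rate) ** years

# $8.33B in 2024 at an 11% CAGR over the six years through 2030:
projected_bn = compound_growth(8.33, 0.11, 6)  # about 15.6
```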
Some better-ranked stocks in the broader medical space include Envista NVST, Boston Scientific BSX and IDEXX Laboratories IDXX. While Envista sports a Zacks Rank #1 (Strong Buy), Boston Scientific and IDEXX Laboratories each carry a Zacks Rank #2 (Buy) at present.

Estimates for Envista's 2025 earnings per share have increased 7.6% in the past 30 days. Shares of the company have rallied 16.7% in the past year compared with the industry's 5.2% rise. Its earnings yield of 5.4% also outpaced the industry's -0.9% yield. NVST's earnings surpassed estimates in each of the trailing four quarters, the average surprise being 16.5%.

Boston Scientific shares have rallied 31.3% in the past year. Estimates for the company's 2025 earnings per share have increased 2.4% to $2.98 in the past 30 days. BSX's earnings beat estimates in each of the trailing four quarters, the average surprise being 8.1%. In the last reported quarter, it posted an earnings surprise of 4.2%.

Estimates for IDEXX Laboratories' 2025 earnings per share have climbed 3.1% to $12.55 in the past 30 days. Shares of the company have jumped 29.1% in the past year against the industry's 14% fall. IDEXX's earnings surpassed estimates in each of the trailing four quarters, the average surprise being 6.1%. In the last reported quarter, it delivered an earnings surprise of 9.7%.

This article was originally published on Zacks Investment Research.
