New study finds 'simple selfie' can help predict patients' cancer survival
A selfie can be used as a tool to help doctors determine a patient's 'biological age' and judge how well they may respond to cancer treatment, a new study suggests.
Because humans age at 'different rates', their physical appearance may help give insights into their so-called 'biological age' – how old a person is physiologically, academics said.
The new FaceAge AI tool can estimate a person's biological age, as opposed to their chronological age, by scanning an image of their face, the study found.
A person's biological age, which is a predictor of their overall health and can be a predictor of life expectancy, is based on many factors including lifestyle and genetics, researchers from Mass General Brigham in the US said.
But they wanted to test whether biological age could be estimated from appearance alone – similar to what doctors call an 'eyeball test', in which judgments such as whether a patient is fit enough for intensive cancer treatment are made partly on how frail the person appears.
Researchers said they wanted to see whether they could 'go beyond' the 'subjective and manual' eyeball test by creating a 'deep learning' artificial intelligence (AI) tool which could assess 'simple selfies'.
The new algorithm was trained using 59,000 photos.
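The paper's training code is not reproduced here, but the general recipe behind this kind of deep learning tool is well established: fine-tune a pretrained convolutional network to output a single number, the estimated age, from a face photo. Below is a minimal sketch in PyTorch; the backbone, loss function and hyperparameters are illustrative assumptions, not the architecture the researchers actually used.

```python
import torch
import torch.nn as nn
from torchvision import models

# Illustrative backbone: an ImageNet-pretrained ResNet-50 whose classifier
# head is replaced by a single regression output (predicted age in years).
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = nn.Linear(backbone.fc.in_features, 1)

optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()  # mean absolute error, in years

def train_step(images: torch.Tensor, ages: torch.Tensor) -> float:
    """One optimisation step on a batch of face crops with known ages."""
    optimizer.zero_grad()
    predictions = backbone(images).squeeze(1)
    loss = loss_fn(predictions, ages)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In broad strokes, the 'biological age' reading from a tool like this is the age the model predicts from the face; the gap between that prediction and the person's calendar age is the signal of interest.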
'Our study now has shown for the first time that we can really use AI to turn a selfie into a real biomarker source of ageing,' said Dr Hugo Aerts, corresponding author of the paper.
He said the tool is low cost, can be used repeatedly over time and could be used to track an individual's biological age over 'months, years and decades'.
'The impact can be very large, because we now have a way to actually very easily monitor a patient's health status continuously and this could help us to better predict the risk of death or complications after, say, for example, a major surgery or other treatments,' he added.
Explaining the tool, the academics showed how it assessed the biological age of actors Paul Rudd and Wilford Brimley, based on photographs taken when each man was 50 years old.
Rudd's biological age was calculated to be 42.6, while Brimley, who died in 2020, was assessed to have a biological age of 69.
The new study, published in the journal Lancet Digital Health, saw the tool used on thousands of cancer patients.
FaceAge was applied to 6,200 patients with cancer, using images taken at the start of their treatment.
The academics found that the biological age of patients with cancer was, on average, five years older than their chronological age.
They also found that higher FaceAge readings were associated with worse survival outcomes among patients with cancer, especially in those with a FaceAge estimate above 85.
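The article does not detail the statistics, but survival associations like this are typically quantified with a Cox proportional-hazards model, testing whether the facial estimate predicts risk over and above chronological age. Here is a minimal sketch using the lifelines library; the data and column names are entirely made up for illustration.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical toy data: follow-up time, event indicator (1 = death
# observed), chronological age, and the model's FaceAge estimate.
df = pd.DataFrame({
    "survival_months":   [12, 30, 45, 7, 60, 24],
    "event":             [1, 0, 1, 1, 0, 1],
    "chronological_age": [64, 58, 71, 80, 55, 67],
    "face_age":          [70, 55, 79, 88, 52, 75],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="survival_months", event_col="event")
# A hazard ratio above 1 for face_age, with chronological_age also in the
# model, would suggest the facial estimate adds prognostic information.
cph.print_summary()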
The authors concluded: 'Our results suggest that a deep learning model can estimate biological age from face photographs and thereby enhance survival prediction in patients with cancer.'
Dr Ray Mak, co-senior author on the paper, added: 'We have demonstrated that AI can turn a simple face photo into an objective measure of biological age that clinicians can use to personalise care for patients, like having another vital sign data point.'
He said that it is 'another piece of the puzzle like vital signs, lab results or medical imaging'.
But he added: 'We want to be clear that we view AI tools like FaceAge as assistants that provide decision support, and not replacements for clinician judgment.'
More studies assessing FaceAge are under way, including whether it could be used for other conditions or diseases and what impact things like cosmetic surgery or Botox have on the tool.
Related Articles


San Francisco Chronicle
AI chatbots need more books to learn from. These libraries are opening their stacks
CAMBRIDGE, Mass. (AP) — Everything ever said on the internet was just the start of teaching artificial intelligence about humanity. Tech companies are now tapping into an older repository of knowledge: the library stacks.

Nearly one million books published as early as the 15th century — and in 254 languages — are part of a Harvard University collection being released to AI researchers Thursday. Also coming soon are troves of old newspapers and government documents held by Boston's public library.

Cracking open the vaults to centuries-old tomes could be a data bonanza for tech companies battling lawsuits from living novelists, visual artists and others whose creative works have been scooped up without their consent to train AI chatbots.

'It is a prudent decision to start with public domain data because that's less controversial right now than content that's still under copyright,' said Burton Davis, a deputy general counsel at Microsoft. Davis said libraries also hold 'significant amounts of interesting cultural, historical and language data' that's missing from the past few decades of online commentary that AI chatbots have mostly learned from.

Supported by 'unrestricted gifts' from Microsoft and ChatGPT maker OpenAI, the Harvard-based Institutional Data Initiative is working with libraries around the world on how to make their historic collections AI-ready in a way that also benefits libraries and the communities they serve.

'We're trying to move some of the power from this current AI moment back to these institutions,' said Aristana Scourtas, who manages research at Harvard Law School's Library Innovation Lab. 'Librarians have always been the stewards of data and the stewards of information.'

Harvard's newly released dataset, Institutional Books 1.0, contains more than 394 million scanned pages of paper. One of the earlier works is from the 1400s — a Korean painter's handwritten thoughts about cultivating flowers and trees. The largest concentration of works is from the 19th century, on subjects such as literature, philosophy, law and agriculture, all of it meticulously preserved and organized by generations of librarians. It promises to be a boon for AI developers trying to improve the accuracy and reliability of their systems.

'A lot of the data that's been used in AI training has not come from original sources,' said the data initiative's executive director, Greg Leppert, who is also chief technologist at Harvard's Berkman Klein Center for Internet & Society. This book collection goes 'all the way back to the physical copy that was scanned by the institutions that actually collected those items,' he said.

Before ChatGPT sparked a commercial AI frenzy, most AI researchers didn't think much about the provenance of the passages of text they pulled from Wikipedia, from social media forums like Reddit and sometimes from deep repositories of pirated books. They just needed lots of what computer scientists call tokens — units of data, each of which can represent a piece of a word.

Harvard's new AI training collection has an estimated 242 billion tokens, an amount that's hard for humans to fathom but still just a fraction of what's being fed into the most advanced AI systems. Facebook parent company Meta, for instance, has said the latest version of its AI large language model was trained on more than 30 trillion tokens pulled from text, images and videos.
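For a concrete sense of what a 'token' is, here is a quick, purely illustrative count using OpenAI's tiktoken library; the initiative's own processing pipeline is not described in the article.

```python
import tiktoken

# Encode a sentence with a common byte-pair-encoding vocabulary and count
# the resulting tokens; words often split into more than one token.
encoding = tiktoken.get_encoding("cl100k_base")
text = "Nearly one million books published as early as the 15th century."
tokens = encoding.encode(text)
print(len(tokens))   # number of tokens in the sentence
print(tokens[:8])    # the first few integer token IDs
```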
Meta is also battling a lawsuit from comedian Sarah Silverman and other published authors who accuse the company of stealing their books from 'shadow libraries' of pirated works.

Now, with some reservations, the real libraries are standing up. OpenAI, which is also fighting a string of copyright lawsuits, donated $50 million this year to a group of research institutions including Oxford University's 400-year-old Bodleian Library, which is digitizing rare texts and using AI to help transcribe them. When the company first reached out to the Boston Public Library, one of the biggest in the U.S., the library made clear that any information it digitized would be for everyone, said Jessica Chapel, its chief of digital and online services.

'OpenAI had this interest in massive amounts of training data. We have an interest in massive amounts of digital objects. So this is kind of just a case that things are aligning,' Chapel said.

Digitization is expensive. It's been painstaking work, for instance, for Boston's library to scan and curate dozens of New England's French-language newspapers that were widely read in the late 19th and early 20th centuries by Canadian immigrant communities from Quebec. Now that such text is of use as training data, it helps bankroll projects that librarians want to do anyway.

'We've been very clear that, "Hey, we're a public library,"' Chapel said. 'Our collections are held for public use, and anything we digitized as part of this project will be made public.'

Harvard's collection was already digitized starting in 2006 for another tech giant, Google, in its controversial project to create a searchable online library of more than 20 million books. Google spent years beating back legal challenges from authors to its online book library, which included many newer and copyrighted works. The dispute was finally settled in 2016 when the U.S. Supreme Court let stand lower court rulings that rejected copyright infringement claims.

Now, for the first time, Google has worked with Harvard to retrieve public domain volumes from Google Books and clear the way for their release to AI developers. Copyright protections in the U.S. typically last for 95 years, and longer for sound recordings.

How useful all of this will be for the next generation of AI tools remains to be seen as the data gets shared Thursday on the Hugging Face platform, which hosts datasets and open-source AI models that anyone can download.

The book collection is more linguistically diverse than typical AI data sources. Fewer than half the volumes are in English, though European languages still dominate, particularly German, French, Italian, Spanish and Latin.

A book collection steeped in 19th century thought could also be 'immensely critical' for the tech industry's efforts to build AI agents that can plan and reason as well as humans, Leppert said. 'At a university, you have a lot of pedagogy around what it means to reason,' Leppert said. 'You have a lot of scientific information about how to run processes and how to run analyses.'

At the same time, there's also plenty of outdated data, from debunked scientific and medical theories to racist narratives. 'When you're dealing with such a large data set, there are some tricky issues around harmful content and language,' said Kristi Mukk, a coordinator at Harvard's Library Innovation Lab, who said the initiative is trying to provide guidance about mitigating the risks of using the data, to 'help them make their own informed decisions and use AI responsibly.'


Forbes
AI Will Provide Much-Needed Shortcut In Finding Earthlike Exoplanets
In the search for earthlike planets, AI is playing more and more of a role. But first one must define what is meant by earthlike. That's not an easy definition, and it is the cause of much confusion in the mainstream media. When planetary scientists say that a planet is earthlike, they really mean it's an Earth-mass planet that lies in the so-called habitable zone of a given extrasolar planetary system. That's loosely defined as the zone in which a planet can harbor liquid water at its surface. But there's no guarantee that it has oceans, beaches, fauna, flora, or anything approaching life.

Yet Jeanne Davoult, a French astrophysicist at the German Aerospace Center (DLR) in Berlin, is at the vanguard of using artificial intelligence to speed up the process of finding earthlike planets, with AI modeling and algorithms that would boggle the minds of mere mortals.

In a recent paper in the journal Astronomy & Astrophysics, Davoult, the lead author, writes that the aim is to use AI to predict which stars are most likely to host an earthlike planet. The goal is to use AI to avoid blind searches, minimize detection times, and thus maximize the number of detections, she and colleagues at the University of Bern write. 'Using a previous study on correlations between the presence of an earthlike planet and the properties of its system, we trained an AI Random Forest, a machine learning algorithm, to recognize and classify systems as "hosting an earthlike planet" or "not hosting an earthlike planet,"' the authors write.

For planetary detection, we try to identify patterns in data sets, patterns which correspond to planets, Davoult tells me via telephone.

Understanding and anticipating where earthlike planets form first, and thus targeting observations to avoid blind searches, minimizes the average observation time for detecting an earthlike planet and maximizes the number of detections, the authors write. But among the estimated 6,000 exoplanets detected in the last 30 years, only some 20 systems with at least one earthlike planet have been found, says Davoult.

In fact, stars smaller than the Sun, such as K-spectral-type dwarfs and the ubiquitous M-spectral-type red dwarfs that make up most of the stars in the cosmos, have longer lifetimes than our own G-spectral-type star. Thus, because of their long stellar lifetimes, it's probably more likely for intelligent life to develop around these K and M types of stars, says Davoult. 'We are also focusing a lot on M dwarfs because it's easier to detect an earthlike planet around these stars than around Sun-like stars, because the habitable zone is closer to the star, so the orbital period is shorter,' she says.

The three populations of synthetic systems used in this study differ only in the mass of the central star, the authors write. This single difference directly influences the mass of the protoplanetary disk and thus the amount of material available for planet formation, they note. As a result, the three populations exhibit different occurrences and properties for the same type of planet, highlighting the importance of studying various types of stars, they write.

'We have developed a model using a Random Forest Classifier to predict which known planetary systems are most likely to host an earthlike planet,' the authors write. 'It's hard to really compare synthetic planetary populations and real planetary populations, because we know that our model is not perfect,' says Davoult.
'But if you just take the big pattern at the system level, then I'm convinced it's a very powerful tool,' she says.

'If we observe a planet within a given solar system, it doesn't mean that we've detected all the planets in this planetary system,' says Davoult. That's because an earthlike planet might be a bit too far away from the star, or too small to detect, she says. 'In contrast, my model takes what we already know about a planetary system and tells us if there is a possibility for an undetected earthlike planet to exist in the same planetary system,' says Davoult.

Davoult is specifically looking for terrestrial planets in the habitable zone of their parent stars. 'The very first step is just to detect them and create a database of earthlike planets, even if we have no clue about the composition of their atmospheres,' says Davoult.
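To make the method concrete, here is a minimal sketch of a Random Forest classifier of the kind the paper describes, built with scikit-learn on synthetic data. The feature names and labels below are invented for illustration; they are not the system properties the authors actually trained on.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Hypothetical system-level features: stellar mass, the innermost detected
# planet's orbital period, and the number of known planets in the system.
X = np.column_stack([
    rng.uniform(0.1, 1.5, n),   # stellar mass (solar masses)
    rng.uniform(1, 400, n),     # orbital period of innermost planet (days)
    rng.integers(1, 8, n),      # detected planet count
])
y = rng.integers(0, 2, n)       # 1 = hosts an earthlike planet (toy labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print(clf.score(X_te, y_te))        # held-out accuracy
print(clf.predict_proba(X_te[:3]))  # per-system probabilities
```

As the article describes it, the real classifier is trained on synthetic systems from planet-formation models and then applied to real, partially observed systems to flag where an undetected earthlike planet is most plausible.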


Gizmodo
Bats Have Cancer-Fighting 'Superpowers'—Here's What That Means for Humans
When you think of longevity in animals, chances are that the Greenland shark will immediately come up. After all, researchers estimate that the enigmatic animal can live for at least 250 years. It turns out, however, that bats also hold their own when it comes to lifespan, with some species living up to 25 years—equivalent to 180 human years—and they tend to do it cancer-free.

Researchers from the University of Rochester (UR) have investigated anti-cancer 'superpowers,' as described in a UR statement, in four bat species: the little brown bat, the big brown bat, the cave nectar bat, and the Jamaican fruit bat. The results of their investigation could have important implications for treating cancer in humans.

'Longer lifespans with more cell divisions, and longer exposure to exo- [external] and endogenous [internal] stressors increase cancer incidence,' the researchers wrote in a study published last month in the journal Nature Communications. 'However, despite their exceptional lifespans, few to no tumors have been reported in long-lived wild and captive populations of bats.'

Led by biologists Vera Gorbunova and Andrei Seluanov from the UR Department of Biology and Wilmot Cancer Institute, the team identified a number of biological defenses that help bats avoid the disease. For example, bats have a tumor-suppressor gene called p53. Specifically, little brown bats carry two copies of the gene and have high p53 activity, which can eliminate cancer cells through apoptosis, a biological process that removes unwanted cells. 'We hypothesize that some bat species have evolved enhanced p53 activity as an additional anti-cancer strategy, similar to elephants,' the researchers explained. Too much p53, though, runs the risk of killing too many cells. Clearly, bats are able to find the right apoptosis balance. Humans also have p53, but mutations in the gene—which disrupt its anti-cancer properties—are found in around 50% of human cancers.

The researchers also analyzed the enzyme telomerase. In bats, telomerase expression allows cells to multiply indefinitely. That means they don't undergo replicative senescence: a feature that restricts cell proliferation to a certain number of divisions. Since, according to the study, senescence 'promotes age-related inflammation contributing to the aging process,' bats' lack thereof would seem to promote longevity. And while indefinite cell proliferation might sound like the perfect cancer hotbed, bats' high p53 activity can kill off any cancer cells.

Furthermore, 'bats have unique immune systems which allows them to survive a wide range of deadly viruses, and many unique immune adaptations have been described in bats,' the researchers wrote. 'Most knowledge of the bat immune systems comes from studies of bat tolerance to viral infections deadly to humans. However, these or similar immune adaptations may also recognize and eliminate tumors,' as well as 'temper inflammation, which may have an anticancer effect.'

Cells have to go through several steps, or 'oncogenic hits,' to become harmful cancerous cells. Surprisingly, the researchers also found that it takes only two hits for normal bat cells to become malignant, meaning bats aren't naturally resistant to cancer—they just have 'robust tumor-suppressor mechanisms,' as described in the statement. The team's findings carry important implications for treating cancer.
Specifically, the study confirms that increased p53 activity—which is already targeted by some anti-cancer drugs—can eliminate or slow cancer growth. More broadly, their research is yet another example of scientists turning to nature for solutions to human challenges on all scales. Though the study focuses on bats, the ultimate aim is, as always, finding a cure for cancer in humans.
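The 'oncogenic hits' idea can be made concrete with a toy simulation; the numbers below are my own illustrative assumptions, not figures from the study. A cell lineage turns malignant only after accumulating a required number of hits, while p53-style surveillance clears some damaged cells along the way.

```python
import random

def divisions_until_malignant(hits_needed, p_hit=1e-3, p_cleared=0.5):
    """Count cell divisions until `hits_needed` oncogenic hits accumulate.
    Each division risks one hit; p53-like surveillance clears a freshly
    hit cell with probability p_cleared (all values are toy assumptions)."""
    hits, divisions = 0, 0
    while hits < hits_needed:
        divisions += 1
        if random.random() < p_hit and random.random() >= p_cleared:
            hits += 1
    return divisions

random.seed(1)
for k in (2, 5):
    mean = sum(divisions_until_malignant(k) for _ in range(200)) / 200
    print(f"{k} hits needed: ~{mean:.0f} divisions on average")
```

Fewer required hits means transformation arrives sooner, which is why the finding that bat cells need only two hits implies their longevity rests on strong suppression mechanisms rather than innate resistance.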