Could your voice reveal cancer before you know it? Scientists say AI might soon make it possible
In a study published in Frontiers in Digital Health, scientists found that artificial intelligence (AI) could potentially detect abnormal growths on the vocal cords, from benign nodules to early-stage laryngeal cancer, by analysing short voice recordings.
The findings could support efforts to find an easier, faster way to diagnose cancerous lesions on the vocal cords, also known as folds.
"With this dataset we could use vocal biomarkers to distinguish voices from patients with vocal fold lesions from those without such lesions," said Phillip Jenkins, the study's lead author and a postdoctoral researcher in clinical informatics at Oregon Health & Science University in the United States.
Why early detection of throat cancer matters
Cancer of the voice box, or larynx, affects more than a million people worldwide and kills roughly 100,000 every year. It is the 20th most common cancer in the world.
Smoking, alcohol use, and certain strains of HPV (human papillomavirus) are key risk factors, and survival rates vary from around 35 per cent to 90 per cent depending on how early the disease is diagnosed, according to Cancer Research UK.
One of the most common warning signs for laryngeal cancer is hoarseness or changes in the voice that last more than three weeks. Other symptoms include a persistent sore throat or cough, difficulty or pain when swallowing, a lump in the neck or throat, and ear pain.
Early detection of laryngeal cancer is crucial because it significantly improves survival rates and treatment outcomes.
Yet current diagnostic methods, including nasal endoscopies and biopsies, are invasive, uncomfortable, and often slow, requiring specialist equipment and expertise that many patients struggle to access quickly.
Developing a simple tool to flag early signs of vocal fold abnormalities through a quick voice recording could transform how throat cancer is detected – making it faster, more affordable and accessible to a wider population.
The next steps for AI-driven diagnosis
The research team examined about 12,500 voice recordings from 306 people across North America. They looked for subtle acoustic patterns, such as changes in pitch, loudness, and harmonic clarity.
The team identified clear differences in harmonic-to-noise ratio and pitch among men with healthy voices, men with benign lesions, and men with cancer. No significant patterns were found in women, though the researchers say this may be due to the smaller amount of data from female participants.
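The article does not detail the team's exact feature pipeline, so the snippet below is only a minimal sketch of what measuring such acoustic patterns can look like in practice. It assumes Python with the librosa library and a hypothetical recording named voice_sample.wav (neither is taken from the study), and it computes a mean pitch estimate, an RMS loudness proxy, and a rough harmonic-to-noise proxy via harmonic-percussive separation rather than a clinical-grade measure.

```python
# Minimal sketch (not the study's pipeline): estimate a few of the acoustic
# features mentioned above from a single recording using librosa.
import numpy as np
import librosa

# Hypothetical input file; the study's recordings are not assumed here.
y, sr = librosa.load("voice_sample.wav", sr=None)

# Pitch: fundamental frequency track from the pYIN estimator.
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
)
mean_pitch_hz = np.nanmean(f0)  # average over voiced frames only

# Loudness proxy: mean root-mean-square energy per frame.
mean_rms = float(np.mean(librosa.feature.rms(y=y)))

# Crude harmonic-to-noise proxy: energy ratio after harmonic/percussive
# separation (a stand-in, not the clinical HNR measure used in voice labs).
y_harmonic, y_percussive = librosa.effects.hpss(y)
hnr_proxy_db = 10 * np.log10(
    np.sum(y_harmonic**2) / (np.sum(y_percussive**2) + 1e-12)
)

print(f"mean pitch:           {mean_pitch_hz:.1f} Hz")
print(f"mean RMS loudness:    {mean_rms:.4f}")
print(f"harmonic/noise proxy: {hnr_proxy_db:.1f} dB")
```

In a screening tool like the one the researchers envisage, features of this kind, pooled across thousands of recordings, would be the inputs a classifier learns from; the code above is only meant to make the vocabulary concrete.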
Jenkins said that the results indicate large datasets "could soon help make our voice a practical biomarker for cancer risk in clinical care".
The next step is to train AI models on larger, professionally labelled datasets and test them in clinical settings. The team would also need to test the system to make sure it works well for both men and women, he said.
"Voice-based health tools are already being piloted," Jenkins said.
"Building on our findings, I estimate that with larger datasets and clinical validation, similar tools to detect vocal fold lesions might enter pilot testing in the next couple of years."
Related Articles
Yahoo
ChatGPT's mobile app has generated $2B to date, earns $2.91 per install
ChatGPT's mobile app is raking in the revenue. Since launching in May 2023, ChatGPT's app for iOS and Android devices has reached $2 billion in global consumer spending, according to a new analysis by app intelligence provider Appfigures. That figure is approximately 30 times the combined lifetime spending of ChatGPT's rivals on mobile, including Claude, Copilot, and Grok, the analysis indicates.

So far this year, ChatGPT's mobile app has made $1.35 billion, up 673 percent year-over-year from the $174 million it made during the same period (January-July) in 2024, per the data. On average, the app is generating close to $193 million per month, up from $25 million last year. That's about 53 times more than ChatGPT's next nearest competitor, Grok, which made approximately $25.6 million this year to date. Grok's average monthly consumer spending is estimated at $3.6 million, or 1.9% of ChatGPT's.

This data suggests that other consumer chatbots still have a way to go to catch up with ChatGPT's dominance on mobile devices, even if the numbers don't provide a complete picture of the AI companies' overall revenue. Consumers, teams, and businesses can also subscribe to AI plans on the web, and the companies generate revenue in other ways, too, like via their APIs. Rather, this new data offers a window into the apps' traction with consumers, who discover and pay for these AI assistants via the mobile app stores.

It's also worth noting that when xAI's Grok launched in November 2023 (after ChatGPT), it didn't initially have standalone iOS or Android apps. Instead, users interacted with the AI chatbot through the X platform (formerly Twitter). Grok only became available on mobile devices through its own iOS app as of early January 2025, and has been on Google Play since March 4.

Still, ChatGPT's lifetime global spending per download is $2.91, compared to Claude's $2.55, Grok's $0.75, and Copilot's $0.28, Appfigures found. In the U.S., ChatGPT's spending per download to date is even higher, at $10; the country leads the market, accounting for 38% of the app's revenue to date. Germany is the second-largest market, accounting for 5.3% of ChatGPT's lifetime total spending.

ChatGPT's lead can also be seen in terms of downloads. To date, the app has been installed an estimated 690 million times globally, compared with Grok's 39.5 million. (That puts X owner Elon Musk's recent complaints about the App Store's alleged favoritism of ChatGPT in its Top Charts into better context.)

Average monthly downloads of ChatGPT globally are now at approximately 45 million, up 180% from about 16 million in January through July of 2024. In 2025 so far, ChatGPT's app has been downloaded 318 million times, or 2.8 times more than the 113 million it saw during the same period last year. By the number of installs, however, India is the top market, accounting for 13.7% of lifetime downloads, compared with second place, the U.S., which accounted for 10.3% of all downloads.
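As a quick check on the per-install arithmetic, the rounded totals quoted above are consistent with one another. The short Python sketch below uses only the figures reported by Appfigures in this article (rounded, so small discrepancies against the published percentages are expected).

```python
# Sanity-check the per-install and growth figures using the rounded
# totals quoted in the article (Appfigures estimates).

lifetime_spend_usd = 2_000_000_000   # lifetime global consumer spending
lifetime_installs = 690_000_000      # estimated lifetime global installs

per_install = lifetime_spend_usd / lifetime_installs
print(f"revenue per install: ${per_install:.2f}")   # ~$2.90, in line with the $2.91 reported

ytd_2025 = 1_350_000_000   # Jan-Jul 2025 revenue
ytd_2024 = 174_000_000     # Jan-Jul 2024 revenue
growth = (ytd_2025 - ytd_2024) / ytd_2024
print(f"year-over-year growth: {growth:.0%}")       # ~676%, close to the 673% cited
```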


Gizmodo
Does Steven Spielberg's 'AI: Artificial Intelligence' Play Differently Today?
Rewatching Steven Spielberg's 2001 film AI: Artificial Intelligence, it feels as plausible as ever, but also more misguided. In 2001, AI was barely a thought in everyday life. It was the thing that destroyed the world in Terminator, and still a lofty goal in tech circles. Today, as the technology continues to grow and dominate daily conversation in almost every way, you may expect to watch the film and have a slightly new perspective. Some change in insight. Instead, the film falters as Spielberg's views on his titular technology take a backseat to a story unsure of what it wants to be. The movie's flaws shine brighter than ever before, even as its world becomes increasingly familiar and likely. But, maybe, there is more to it than meets the (A) eye.

Based on a short story by Brian Aldiss and developed in large part from work previously done by the late Stanley Kubrick, AI is set in an undefined future after the icecaps have melted and destroyed all coastal cities. As a result, society has changed drastically, with certain resources becoming increasingly important and scarce. That's why robots, which don't need to eat or drink, have become so crucial. Tech companies are always looking ahead, though, and inventor Allen Hobby (William Hurt) thinks he's figured out the next step. He hopes to create an artificially intelligent robot child who can love a parent just as a normal child would. Hobby sees true emotion as the logical next step in robotic integration into human life, and about two years later, he believes he has achieved it.

The first act of AI then follows David (Haley Joel Osment), a prototype child robot with the ability to love, as he attempts to help two parents, Monica (Frances O'Connor) and Henry (Sam Robards). Monica and Henry have a son, Martin, but he's been in a coma for about five years. Assuming Martin will pass away, Henry is chosen to bring David home. Initially, Monica and Henry treat David very coldly, and rightfully so. He's weird. He's creepy. He does not act human in any way. So, when Monica decides to keep him and 'imprint' on him, it feels like a bit of a shock.

And this is the first of many places AI today just doesn't quite get things right. We learn that David can love whomever he's programmed to imprint on, but that it's irreversible. So, if for some reason the family doesn't want him anymore, he has to be destroyed, not reprogrammed. Which feels like a pretty big design flaw, does it not? David's deep-seated desire to be loved by Monica is crucial to the story, but watching it now, it feels almost silly that a company wouldn't have the ability to wipe the circuits clean and start it again.

Also, the notion that any parent would want to have a child who stays a child forever simply feels off. Isn't the joy of parenting watching your kids grow up and discover the world? Well, David would never do that. He'd just be there, forever, making you coffee and pretending he loves you with the same, never-ending intensity. Which is a little creepy, right?

The beginning of AI has very distinct horror vibes that feel even more prominent now than they did in 2001. But, clearly, this was the intention. Spielberg wants to keep the characters and audience on their toes. After two decades of killer robot movies, though, it's even more unmistakable and obvious. That unsettled tone makes it difficult to feel any connection to these characters, at least at the start. Eventually, Monica and Henry's son miraculously recovers, comes home, and develops a rivalry with David.
The two clash, and, instead of returning David to the company to be destroyed, Monica leaves him in the woods. Which feels so much worse! Truly, it's irredeemable. When an animal is sick beyond aid, the merciful thing is to let them go, not throw them in the woods where they will scream in pain forever. But that's what Monica does to David. You hate her, you feel for him, and it's weird.

From there, AI gets even weirder. David meets Gigolo Joe (Jude Law), an artificially intelligent sex robot who has way more emotion and humanity than the ultra-advanced David (the same goes for David's low-tech teddy bear sidekick, Teddy, the best part of the movie). The two traverse a world that has either become disgusted with machines taking over their lives or fully embraced it. It's an interesting dichotomy, one brought to life by wild production design such as the 'Flesh Fair,' where humans watch robots be destroyed for fun, and 'Rogue City,' which is basically AI Las Vegas. And yet, these scenes only touch on larger concepts of what AI means and what it has done to society. Joe delivers a monologue about humans' distrust of technology that feels poignant and thoughtful, but then it's largely forgotten. The ideas are there, but not crucial to what's happening around them.

What stands out about all of this, especially from a modern viewpoint, is how Spielberg's vision of AI is still so distant. Things in the movie are well beyond what we have today. Even with modern chatbots, self-driving cars, generative AI, and the like, everything in the movie is clearly science fiction. Artificial intelligence in Spielberg's world isn't special. It's been around for so long; it's already been monetized, exploited, embraced, and rejected.

One scene, however, does ring truer now than it did in 2001. As Joe and David look for the Blue Fairy that can turn him into a real boy (more on that in a second), they go to 'Dr. Know,' a store where an AI Albert Einstein, voiced by Robin Williams, can search through the entirety of human knowledge to answer any question for you. It's basically ChatGPT at its highest form, and in this world, it's just a cheap attraction in a strip mall.

Dr. Know is a crucial plot device in the film because it puts Joe and David back on the track of the Blue Fairy, a character from Pinocchio who turned that character into a real boy, and whom David believes is real and can do the same for him. This is another disconnect that's hard to get your head around. We're continuously told how advanced David is supposed to be technologically, and yet he exhibits none of that mentally. He only shows the emotions and mind of a small child. There's never any hint that he'll learn or develop past that. That he'll evolve in any way. He's the most advanced robot in the world, but can't grasp that Pinocchio isn't real. So, we're left confused about what he believes, what he doesn't, his potential, and his overall purpose.

Nevertheless, when Joe and David ask Dr. Know about how the Blue Fairy can turn him into a real boy, the program somehow understands this request and sends them on a journey to Manhattan, which has been lost under rising seas. There, David finds Hobby, his creator, and we learn Hobby and his team have been monitoring and even subtly seeding David's adventure to get him to this place. Which feels incredibly forced on multiple levels, but also essential to the big reveal.

To this point, AI has been pretty all over the place. Cautionary, brutal, near-horror movie. Wild, cross-country adventure. Whimsical fairy tale. But finally, Hobby explains the film's central drive. Having completed this adventure, David is the first robot to actually chase his dreams. To act on his own self-motivation, not that of a human, and that's a huge jump ahead for artificial intelligence in this world. It's a fascinating revelation ripe for exploration. And yet, it immediately gets forgotten as Joe helps David escape and complete his journey to find the Blue Fairy, which turns out to be a submerged carnival attraction at Coney Island.

Now, I hadn't seen AI in probably 20 years, and, for some reason, this is the ending I remember. David, stuck underwater, looking at the Blue Fairy forever. His dream, kind of, achieved. But that's not the ending. I forgot that the movie had about 20 more minutes left. We jump ahead 2,000 years. The world has ended, and advanced aliens are here studying our past. They find David buried in the ice, the last being on the planet with any connection to living humans, and, to make him happy, they bring his mom back for one day. The happiest day of his life. Roll credits.

It's a touching ending, but it also speaks to how all over the map the movie plays in 2025. Basically, the movie is a horror, fairy tale, social commentary, and sci-fi adventure with heart… but only sort of. There's no real reason why David's mom can't be around for more than one day. It's just an arbitrary rule the aliens tell us. However, it does hammer home the film's ultimate message about the importance of love and how emotions are what make humans so special. A message that works completely independently of anything regarding artificial intelligence.

In fact, calling the movie AI in 2025 is almost a conundrum beyond the movie itself. Upon release, most of us assumed the title just referred to David and the robots. But now, maybe I see that's not the case. AI in the movie is so not the point, maybe calling it that is a commentary on human intelligence itself, or the lack thereof. We certainly take for granted the things we inherently have as people.

In the end, I did not care for AI: Artificial Intelligence as much as I did when it came out. At the time, I found it kind of profound and brilliant. Now I find it sort of messy and underwhelming, with a few hints of genius. But, there are a lot of good ideas here, and as the world of the movie becomes increasingly recognizable, I'd imagine another 25 years is likely to re-contextualize it all over again.

AI: Artificial Intelligence is not currently streaming anywhere, but is available for purchase or rent.


Gizmodo
11 minutes ago
- Gizmodo
Open-Sourced AI Models May Be More Costly in the Long Run, Study Finds
As more businesses adopt AI, picking which model to go with is a major decision. While open-source models may seem cheaper initially, a new study warns that those savings can evaporate fast due to the extra computing power they require. In fact, open-source AI models burn through significantly more computing resources than their closed-source rivals when performing the same tasks, according to a study published Thursday by Nous Research.

The researchers tested dozens of AI models, including closed systems from Google and OpenAI, as well as open-source models from DeepSeek and Magistral. They measured how much computing effort each required to complete identical tasks across three categories: simple knowledge questions, math problems, and logic puzzles. To do this, they used the number of tokens each model consumed to work through and answer the questions as a proxy for the computing resources used.

'Open-weight models use 1.5–4× more tokens than closed ones—and up to 10× for simple knowledge questions—making them sometimes more expensive per query despite lower per-token costs,' the study authors wrote.

In AI, a token is a piece of text or data (a word, part of a word, or even punctuation) that models use to understand language. Models process and generate text one token at a time, so the more tokens they use, the more computing power and time a task requires. Since most closed-source models don't reveal their raw reasoning process or chain of thought (CoT), the researchers measured computing efficiency by counting the tokens each model used instead. Because models are billed on the total output tokens spent on reasoning and on the final answer, completion tokens serve as a proxy for the effort needed to produce a response.

This is an important consideration for companies using AI for many reasons. 'First, while hosting open weight models may be cheaper, this cost advantage could be easily offset if they require more tokens to reason about a given problem,' the researchers wrote. 'Second, an increased number of tokens will lead to longer generation times and increased latency.'

The study found that open models consistently use more tokens than closed models for the same tasks, sometimes three times as many for simple knowledge questions. The gap narrowed to less than twice for math and logic problems. 'Closed models (OpenAI, Grok-4) optimize for fewer tokens to cut costs, while open models (DeepSeek, Qwen) use more tokens, possibly for better reasoning,' the study authors wrote.

Among open models, llama-3.3-nemotron-super-49b-v1 was the most efficient, while Magistral models were the most inefficient. OpenAI's models were standouts as well. Both its o4-mini and the new open-weight gpt-oss models showed impressive token efficiency, especially on math problems. The researchers noted that OpenAI's gpt-oss models, with their concise chains of thought, could serve as a benchmark for improving token efficiency in other open models.
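To make the cost trade-off concrete, here is a small Python sketch. The prices and token counts are hypothetical placeholders rather than figures from the Nous Research study; the point is simply that a lower per-token price can be wiped out when a model spends several times more completion tokens on each answer.

```python
# Illustrative cost-per-query comparison (hypothetical prices and token counts,
# not figures from the study): cheap tokens can still mean expensive answers
# if the model reasons verbosely.

def cost_per_query(completion_tokens: int, usd_per_million_tokens: float) -> float:
    """Billed cost of one response, given output-token usage and price."""
    return completion_tokens * usd_per_million_tokens / 1_000_000

# Hypothetical self-hosted open-weight model: low price, verbose reasoning.
open_cost = cost_per_query(completion_tokens=5_000, usd_per_million_tokens=1.00)

# Hypothetical closed model: pricier tokens, terse chain of thought.
closed_cost = cost_per_query(completion_tokens=500, usd_per_million_tokens=5.00)

print(f"open-weight model: ${open_cost:.4f} per query")   # $0.0050
print(f"closed model:      ${closed_cost:.4f} per query")  # $0.0025
# Despite a 5x lower per-token price, the open model ends up about twice as
# expensive per query once its 10x token usage is factored in.
```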