New code in Spotify's app references the long-awaited 'lossless' tier
It's been over four years since Spotify first announced its plans to launch a lossless audio tier that has yet to arrive.
After numerous leaks and teases from company execs, we know better than to start hoping that the long-delayed service could finally be nearing arrival.
But newly added references in the music app's code suggest the lossless service is at least still under development and could even be getting closer to launch.
According to technologist and reverse engineer Chris Messina, a number of references to 'lossless' have popped up in Wednesday's build of the Spotify desktop app.
These code snippets mainly refer to help cards that would appear in the app to inform users about what the lossless tier offers and how to use or troubleshoot the service.
For instance, one card says, 'Lossless music, now in premium,' while another describes it as the 'best sound quality on Spotify for music in up to 24-bit/44.1 kHz.'
Some of the cards warn users when particular songs aren't available in lossless, or when their device has poor connectivity that could affect playback. Another notes that lossless music is 'best enjoyed on devices compatible with Spotify Connect and/or wired devices.'
In addition, Reddit user u/bendotlc noted that other changes were seen in the latest version of the mobile app, where the code now mentions lossless: 'Say hello to the best sound quality on Spotify. Listen to high-fidelity music in up to 24-bit/44.1 kHz.' The Redditor claims the feature is actually present in the app, but is currently disabled.
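For readers curious what such references typically look like, the following TypeScript sketch shows one plausible way a desktop client could pair localized help-card strings with a feature flag that ships disabled by default. It is purely illustrative: the names (LOSSLESS_HELP_CARDS, isLosslessEnabled, ENABLE_LOSSLESS) are hypothetical and not taken from Spotify's actual code.

```typescript
// Illustrative sketch only, not Spotify's code. It shows how help-card strings
// can ship inside a build while a feature flag keeps the feature switched off.

type HelpCard = { id: string; body: string };

// Strings like these are what reverse engineers spot in a new build.
const LOSSLESS_HELP_CARDS: HelpCard[] = [
  { id: "lossless-intro", body: "Lossless music, now in premium." },
  { id: "lossless-quality", body: "Best sound quality on Spotify for music in up to 24-bit/44.1 kHz." },
  { id: "lossless-connectivity", body: "Poor connectivity may affect lossless playback." },
];

// In a real client this would read a remote config service; here a local
// environment variable (hypothetical name) stands in so the sketch runs on its own.
function isLosslessEnabled(): boolean {
  return process.env.ENABLE_LOSSLESS === "1";
}

// The strings exist in the bundle either way, but the UI only renders them
// once the flag flips on, matching the "present but disabled" observation.
function losslessCardsToRender(): HelpCard[] {
  return isLosslessEnabled() ? LOSSLESS_HELP_CARDS : [];
}

console.log(losslessCardsToRender());
```

Shipping the strings but gating them behind a server-controlled flag would let the feature be switched on without a new app release, which is consistent with the references appearing in builds before any launch announcement.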
Of course, code references don't mean the lossless feature is coming anytime soon.
As we've seen in previous years, leaks and code references have appeared before, and yet the high-quality audio tier never arrived.
When asked directly about lossless and other plans for premium tiers on recent earnings calls with investors, Spotify execs, including CEO Daniel Ek, would only hint that the company was investing in premium products for 'super fans.' They wouldn't give a time frame on when these or any other higher-quality streaming options would finally launch.
Most recently, Spotify Chief Business Officer Alex Norstrom told investors on the Q1 2025 call that Spotify was still investing in more premium tiers.
'Now, with regards to higher tiers, we see great potential in them as we've mentioned before,' he said at the time. 'So creating higher tiers around new offerings is something we are working towards as it really opens up new opportunities to delight users.'
When Spotify first announced its plans for Spotify HiFi, as it was then called, in 2021, it said that the service would offer users music in 'CD-quality, lossless audio format.' It had run tests of a lossless option in the years before this, however.
Eventually, Spotify pointed to licensing issues as the reason for its delay in launching.
In what could be promising news on that front, the streamer signed new deals with labels including Warner Music and Universal Music Group in early 2025, which could pave the way for the lossless tier — or so music fans hope.
Bloomberg also reported earlier this year that a premium Music Pro tier would launch later in 2025, which would include high-quality streaming, remixing, and other features.
Spotify declined to comment on this report.
