
CMF Buds 2 Series Launched In India: Price And Features
The CMF Buds 2 series has launched alongside the Phone 2 Pro in the Indian market this week. Here are all the details.
Nothing has introduced the CMF Phone 2 Pro along with a slew of new budget TWS earbuds in the market this week. The company is offering three distinct models to cater to different budgets and needs. The newly introduced range includes the CMF Buds 2, Buds 2 Plus, and Buds 2a — each positioned at a separate price point with unique specifications.
CMF Buds 2 Series Price In India
CMF Buds 2 series prices in India are:
CMF Buds 2a – Rs 2,199
CMF Buds 2 – Rs 2,699
CMF Buds 2 Plus – Rs 3,299
The CMF Buds 2 Plus TWS earbuds will be available from next month, while the other two models will go on sale at a later date in the country.
CMF Buds 2
The Buds 2 come equipped with 11mm audio drivers and offer Active Noise Cancellation (ANC) of up to 48dB. The brand says the earbuds can provide up to 13.5 hours of playback without ANC on a single charge.
CMF Buds 2 Plus
Next up is the Buds 2 Plus, which sits above the standard Buds 2 with notable upgrades in ANC performance and support for Smart Adaptive Noise Cancellation, a feature absent on the base Buds 2.
The Buds 2 Plus pack larger 12mm drivers, gain LDAC codec support, and carry Hi-Res Audio certification for high-fidelity sound. You can get up to 14 hours of playback on a single charge (with ANC turned off) and a total of 61.5 hours with the case.
CMF Buds 2a
The CMF Buds 2a offer a more affordable entry point, though with a few compromises. They feature 12.4mm drivers and Dirac tuning, but ANC performance drops to 42dB.
The earbuds provide up to 8 hours of usage (ANC off) and a maximum of 35.5 hours with the charging case. The IP rating is also slightly lower at IP54, compared to IP55 on the other two models.
All three CMF Buds 2 series models are compatible with the Nothing X app, where you can tweak the EQ, adjust bass levels, toggle ANC settings, activate low-latency mode, connect to multiple devices simultaneously, and even summon ChatGPT for assistance.
Related Articles


The Hindu
an hour ago
Silicon Valley VCs navigate uncertain AI future
For Silicon Valley venture capitalists, the world has split into two camps: those with deep enough pockets to invest in artificial intelligence behemoths, and everyone else waiting to see where the AI revolution leads.

The generative AI frenzy unleashed by ChatGPT in 2022 has propelled a handful of venture-backed companies to eye-watering valuations. Leading the pack is OpenAI, which raised $40 billion in its latest funding round at a $300 billion valuation, unprecedented largesse in Silicon Valley's history.

Other AI giants are following suit. Anthropic now commands a $61.5 billion valuation, while Elon Musk's xAI is reportedly in talks to raise $20 billion at a $120 billion price tag.

The stakes have grown so high that even major venture capital firms, the same ones that helped birth the internet revolution, can no longer compete. Mostly, only the deepest pockets remain in the game: big tech companies, Japan's SoftBank, and Middle Eastern investment funds betting big on a post-fossil fuel future.

"There's a really clear split between the haves and the have-nots," Emily Zheng, senior analyst at PitchBook, told AFP at the Web Summit in Vancouver. "Even though the top-line figures are very high, it's not necessarily representative of venture overall, because there's just a few elite startups and a lot of them happen to be AI."

Given Silicon Valley's confidence that AI represents an era-defining shift, venture capitalists face a crucial challenge: finding viable opportunities in an excruciatingly expensive market that is rife with disruption.

Simon Wu of Cathay Innovation sees clear customer demand for AI improvements, even if most spending flows to the biggest players. "AI across the board, if you're selling a product that makes you more efficient, that's flying off the shelves," Wu explained. "People will find money to spend on OpenAI" and the big players.
The real challenge, according to Andy McLoughlin, managing partner at San Francisco-based Uncork Capital, is determining "where the opportunities are against the mega platforms." "If you're OpenAI or Anthropic, the amount that you can do is huge. So where are the places that those companies cannot play?"

Finding that answer isn't easy. In an industry where the large language models behind ChatGPT, Claude and Google's Gemini seem to have limitless potential, everything moves at breakneck speed. AI giants including Google, Microsoft, and Amazon are releasing tools and products at a furious pace.

ChatGPT and its rivals now handle search, translation, and coding all within one chatbot, raising doubts among investors about what new ideas could possibly survive the competition. Generative AI has also democratised software development, allowing non-professionals to code new applications from simple prompts. This completely disrupts traditional startup organisation models.

"Every day I think, what am I going to wake up to today in terms of something that has changed or (was) announced geopolitically or within our world as tech investors," reflected Christine Tsai, founding partner and CEO at 500 Global.

In Silicon Valley parlance, companies are struggling to find a 'moat': that unique feature or breakthrough, like Microsoft Windows in the 1990s or Google Search in the 2000s, so successful it takes competitors years to catch up, if ever.

When it comes to business software, AI is "shaking up the topology of what makes sense and what's investable," noted Brett Gibson, managing partner at Initialized Capital.

The risks seem particularly acute given that generative AI's economics remain unproven. Even the biggest players see a very uncertain path to profitability given the massive sums involved.
The huge valuations for OpenAI and others are causing "a lot of squinting of the eyes, with people wondering 'is this really going to replace labor costs'" at the levels needed to justify the investments, Wu observed. Despite AI's importance, "I think everyone's starting to see how this might fall short of the magical," even if it's early days, he added.

Still, only the rare contrarians believe generative AI isn't here to stay. In five years, "we won't be talking about AI the same way we're talking about it now, the same way we don't talk about mobile or cloud," predicted McLoughlin. "It'll become a fabric of how everything gets built."

But who will be building remains an open question.

The Hindu
2 hours ago
Hey chatbot, is this true? AI 'factchecks' sow misinformation
As misinformation exploded during India's four-day conflict with Pakistan, social media users turned to an AI chatbot for verification, only to encounter more falsehoods, underscoring its unreliability as a fact-checking tool.

With tech platforms reducing human fact-checkers, users are increasingly relying on AI-powered chatbots, including xAI's Grok, OpenAI's ChatGPT, and Google's Gemini, in search of reliable information.

"Hey @Grok, is this true?" has become a common query on Elon Musk's platform X, where the AI assistant is built in, reflecting the growing trend of seeking instant debunks on social media. But the responses are often themselves riddled with misinformation.

Grok, now under renewed scrutiny for inserting "white genocide," a far-right conspiracy theory, into unrelated queries, wrongly identified old video footage from Sudan's Khartoum airport as a missile strike on Pakistan's Nur Khan airbase during the country's recent conflict with India. Unrelated footage of a building on fire in Nepal was misidentified as "likely" showing Pakistan's military response to Indian strikes.

"The growing reliance on Grok as a fact-checker comes as X and other major tech companies have scaled back investments in human fact-checkers," McKenzie Sadeghi, a researcher with the disinformation watchdog NewsGuard, told AFP. "Our research has repeatedly found that AI chatbots are not reliable sources for news and information, particularly when it comes to breaking news," she warned.

NewsGuard's research found that 10 leading chatbots were prone to repeating falsehoods, including Russian disinformation narratives and false or misleading claims related to the recent Australian election. In a recent study of eight AI search tools, the Tow Center for Digital Journalism at Columbia University found that chatbots were "generally bad at declining to answer questions they couldn't answer accurately, offering incorrect or speculative answers instead."
When AFP fact-checkers in Uruguay asked Gemini about an AI-generated image of a woman, it not only confirmed its authenticity but fabricated details about her identity and where the image was likely taken.

Grok recently labeled a purported video of a giant anaconda swimming in the Amazon River as "genuine," even citing credible-sounding scientific expeditions to support its false claim. In reality, the video was AI-generated, AFP fact-checkers in Latin America reported, noting that many users cited Grok's assessment as evidence the clip was real.

Such findings have raised concerns as surveys show that online users are increasingly shifting from traditional search engines to AI chatbots for information gathering and verification.

The shift also comes as Meta announced earlier this year it was ending its third-party fact-checking program in the United States, turning over the task of debunking falsehoods to ordinary users under a model known as "Community Notes," popularized by X. Researchers have repeatedly questioned the effectiveness of "Community Notes" in combating falsehoods.

Human fact-checking has long been a flashpoint in a hyperpolarized political climate, particularly in the United States, where conservative advocates maintain it suppresses free speech and censors right-wing content -- something professional fact-checkers vehemently reject. AFP currently works in 26 languages with Facebook's fact-checking program, including in Asia, Latin America, and the European Union.

The quality and accuracy of AI chatbots can vary, depending on how they are trained and programmed, prompting concerns that their output may be subject to political influence or control. Musk's xAI recently blamed an "unauthorized modification" for causing Grok to generate unsolicited posts referencing "white genocide" in South Africa. When AI expert David Caswell asked Grok who might have modified its system prompt, the chatbot named Musk as the "most likely" culprit.
Musk, the South African-born billionaire backer of President Donald Trump, has previously peddled the unfounded claim that South Africa's leaders were "openly pushing for genocide" of white people. "We have seen the way AI assistants can either fabricate results or give biased answers after human coders specifically change their instructions," Angie Holan, director of the International Fact-Checking Network, told AFP. "I am especially concerned about the way Grok has mishandled requests concerning very sensitive matters after receiving instructions to provide pre-authorized answers."


Time of India
2 hours ago
Hey chatbot, is this true? AI 'factchecks' sow misinformation
During India and Pakistan's recent conflict, social media users turned to AI chatbots like Grok, ChatGPT, and Gemini for fact-checking, only to receive misinformation. Experts warn these tools often produce false or biased content, raising concerns as tech firms cut human fact-checkers and users increasingly rely on AI for news.