OnePlus Bullets Wireless Z3 neckband launched: Check price, unboxing, more


The OnePlus Bullets Wireless Z3 neckband will be available from June 24 on the OnePlus website and e-commerce platforms Amazon, Flipkart, and Myntra
New Delhi
China's OnePlus has launched its latest wireless neckband, the Bullets Wireless Z3, in India. The company claims the 2025 model offers faster charging, immersive sound, and smart AI-powered features, all packed in a lightweight and ergonomic design. The OnePlus Bullets Wireless Z3 will be available in two colour options—Samba Sunset and Mambo Midnight.
OnePlus Bullets Wireless Z3: Price and availability
Price: Rs 1,699
Sale begins: June 24, 12 pm
The neckband will be available through OnePlus India's official website and leading e-commerce platforms including Amazon, Flipkart, and Myntra. Offline availability includes OnePlus Experience Stores and major retail chains such as Croma, Reliance Digital, Vijay Sales, Bajaj Electronics, and others.
OnePlus Bullets Wireless Z3: Details
The Bullets Wireless Z3 features 12.4mm dynamic bass drivers for rich sound across low and high frequencies. OnePlus says the neckband is tuned for immersive audio, further enhanced by the company's BassWave algorithm, which intelligently boosts low-end frequencies to deliver deeper, punchier bass.
Users can fine-tune the audio profile via the Sound Master EQ, which includes four preset modes: Balanced, Serenade, Bass, and Bold.
For a more cinematic experience, the neckband introduces 3D Spatial Audio, which OnePlus claims turns regular stereo output into an immersive 360-degree soundstage.
Additional highlights include a built-in voice assistant shortcut and AI-powered call noise cancellation. Using environmental noise cancellation (ENC), the Z3 is capable of isolating the user's voice from ambient noise in real time, ensuring better call clarity for the receiver.
In terms of connectivity, the neckband supports Bluetooth 5.4 and Google Fast Pair for seamless pairing with Android devices. The neckband is also IP55-rated, offering protection against dust and water splashes.

Related Articles

AI sovereignty will be test for India's tech ‘aatmanirbharta'

Indian Express · 14 minutes ago

Prime Minister Narendra Modi's Independence Day address this year felt different. His 103-minute oration, the longest of his tenure, was anchored in the language of national sovereignty. At a time when the United States has imposed tariffs on Indian exports, unsettling trade talks, the PM sought to turn the national conversation from negotiation to assertion, and to the deeper emotion of sovereignty. In this vision, citizens are not just beneficiaries but also guardians of that autonomy.

Here, sovereignty takes shape in the capacity to build our own fertilisers, batteries, jet engines and defence systems. It finds expression in the symbolic unveiling of the Sudarshan Chakra defence kit, in the pledge of sweeping GST reform, and in the assurance that farmers and households alike will not be left exposed as the push for self-reliance accelerates. For perspective, one need only count the number of senior officials in Delhi who still use foreign-owned Gmail accounts despite possessing official addresses.

There is a temptation to reduce all this to political choreography. Modi's earlier Red Fort addresses highlighted schemes of inclusion, like Jan Dhan, Swachh Bharat and Ujjwala. But this one is cast in the harder register of industrial resurrection and techno-sovereignty. It is meant to rally households as much as boardrooms. But history is unkind to speeches not backed by delivery.

The hardest test of India's ambition is to stake a meaningful claim in the global race for emerging technologies. The country missed the first wave (the Web1 era), not through want of talent but through want of an ecosystem. In past decades, its best brains, including mathematicians and engineers, left India, while domestic research budgets stayed meagre, patents were scarce, and private investment was low. India became the world's back office, designing chips for others, writing code for global companies, and running IT services, but it rarely created its own products or owned valuable patents.
In a fast-moving field like technology, where early movers gain lasting advantages, India fell behind on big breakthroughs in semiconductors, artificial intelligence and the hardware-software frontier. Our universities do not yet produce research at scale, and most industries spend too little on R&D. Our brightest youngsters, especially in areas like AI, quantum and Web3, have also been moving abroad for better opportunities and access to global markets, and this slow drain of talent weakens the very idea of technological sovereignty that India now seeks to champion.

Catching up will take more than money. It needs a cultural reset, including in the policy world. India has never invested enough in long-term research or built lasting partnerships between universities and industry. Recent government missions remain far too small for the scale of the challenge. The IndiaAI Mission has a budget of Rs 10,371.92 crore and the National Quantum Mission Rs 6,003.65 crore over eight years. Compared to what is needed to lead in frontier science, these amounts are tiny. Unless the state commits deeper investment and private capital amplifies it, these missions will remain scaffolding rather than become engines of change.

Frontier technologies do not emerge from frugal innovation or jugaad. The United States created its lead by binding venture capital with defence research and universities. China, for its part, mobilised the state with massive investments across sectors and then drew in private capital to accelerate research and commercialisation. India has yet to frame AI as core infrastructure rather than a promising sector. To re-enter the race, it will need patient public capital to underwrite risk, regulation that rewards open experimentation, and a cultural shift that prizes invention over execution alone.

The next few months will matter less for headline numbers like GDP or inflation, and more for how quickly promises turn into action.
If we see ground being broken for chip plants and wider AI ecosystem investments, people will believe the momentum is real. If farmers, small businesses and everyday consumers begin to feel the benefit of GST reform, the idea of shared self-reliance will carry weight. Emotional sovereignty can be a deep moat, but no political or policy fortress stands without delivery. In an era where technology shapes both economic power and national security, India's independence will be measured by the labs and factories built, research funded and commercialised, and AI platforms and ecosystems that are not only homegrown but also globally relevant.

The writer is a corporate advisor and author of Family and Dhanda.

Nikhil Kamath invests Rs 137.5 crore in Goldi Solar

New Indian Express · 21 minutes ago

NEW DELHI: Entrepreneur and investor Nikhil Kamath has invested Rs 137.5 crore in Goldi Solar, India's largest solar photovoltaic (PV) module manufacturer, as part of efforts to strengthen the country's renewable energy manufacturing base. The fresh fund infusion will help Goldi Solar expand its production and accelerate India's positioning as a global renewable energy hub. Over the last year, Goldi Solar has nearly tripled its module manufacturing capacity—from 3 GW to 14.7 GW—and is now ramping up its solar cell production facilities in Surat, Gujarat. The company plans to roll out new high-efficiency modules and cells using emerging technologies to meet India's fast-rising demand for clean power.

Anthropic's Claude AI will now cut off abusive chats with users for its own ‘welfare'

Indian Express · 44 minutes ago

Anthropic has said its most capable AI models, Claude Opus 4 and 4.1, will now exit a conversation with a user if they are being abusive or persistently harmful in their interactions. The move is aimed at improving the 'welfare' of AI systems in potentially distressing situations, the Amazon-backed company said in a blog post on Friday, August 15. 'We're treating this feature as an ongoing experiment and will continue refining our approach,' it said. Claude will also not be able to end chats on its own in cases where users might be at imminent risk of harming themselves or others, Anthropic further said.

The new feature, developed as part of Anthropic's exploratory work on AI welfare, comes amid the emerging trend of users turning to AI chatbots like Claude or ChatGPT for low-cost therapy and professional advice. However, a recent study found that AI chatbots showed signs of stress and anxiety when users shared 'traumatic narratives' about crime, war, or car accidents. This could potentially make the chatbots less useful in therapeutic settings with people.

Beyond AI welfare, Anthropic said Claude's ability to end chats also has broader relevance to model alignment and safeguards. Prior to rolling out Claude Opus 4, Anthropic said it studied the model's self-reported and behavioural preferences. The AI model showed a 'consistent aversion' to harmful prompts from users, such as requests to generate sexual content involving minors and information related to terror acts. Claude Opus 4 showed 'a pattern of apparent distress when engaging with real-world users seeking harmful content' and a tendency to end such conversations with the user, as per the company. 'These behaviors primarily arose in cases where users persisted with harmful requests and/or abuse despite Claude repeatedly refusing to comply and attempting to productively redirect the interactions,' the company said.
However, Anthropic has added a disclaimer as well, noting, 'We remain highly uncertain about the potential moral status of Claude and other LLMs, now or in the future.' This is because framing AI models in terms of their welfare or wellbeing risks anthropomorphising them. Several researchers argue that today's large language models (LLMs) do not possess genuine understanding or reasoning, describing them instead as stochastic systems optimised for predicting the next token. Anthropic has said it will keep exploring ways to mitigate risks to AI welfare, 'in case such welfare is possible.'
