Google Gemini is coming to Samsung Galaxy Watches and Buds: What to expect

Samsung says the rollout will not only make Gemini natively available on Galaxy Watches but will also improve the Gemini experience for users wearing Galaxy Buds
New Delhi
Samsung has announced that Google's artificial intelligence (AI) assistant, Gemini, is coming to Galaxy wearables in the coming months. In a press release, Samsung noted that it is integrating Gemini into Galaxy Watches. Additionally, according to the South Korean consumer electronics manufacturer, activating Gemini will soon become easier when a phone is paired with the Galaxy Buds 3 series.
The integration is said to extend AI functionality across the Galaxy ecosystem, which Samsung says will create a 'cohesive and intelligent user experience.'
Gemini on Galaxy Watch
Samsung said that integrating Gemini into Galaxy Watches will give users hands-free assistance, letting them issue natural voice commands to get things done.
Offering an example, Samsung wrote: 'Simply ask Gemini to "Remember I'm using locker 43 today" so you can focus on your workout without a second thought. Or when you receive an important email while your hands are tied up carrying groceries, just ask Gemini to "Summarize my last email" for a quick overview.'
Gemini will fulfill these requests across apps, letting users quickly glance at the response and get right back to what they were doing.
Galaxy Buds to make Gemini integration better
According to Samsung, the Gemini experience will become even more seamless when paired with Galaxy Buds. Users will be able to activate Gemini with their voice or with pinch-and-hold controls on the earbuds.
With Galaxy Buds, users will be able to interact with their Galaxy smartphone without picking it up. As an example, Samsung wrote that someone lacing up their running shoes before a jog can simply ask Gemini, 'What's the weather for my run today?' and get an answer without reaching for their phone.

Related Articles

Is AI sparking a cognitive revolution that will lead to mediocrity and conformity?
Time of India | 11 hours ago

Highlights
• The rise of artificial intelligence is reshaping cognitive processes in various fields, prompting concerns about the potential loss of originality and depth in creative work as reliance on AI tools increases.
• Generative AI, while capable of producing competent-sounding content, often lacks true creativity and originality, as it predominantly reflects and rearranges existing human-created material.
• The challenge posed by the cognitive revolution driven by artificial intelligence is not only technological but also cultural, as it raises questions about preserving the irreplaceable value of human creativity amid a surge of algorithmically generated content.

Artificial Intelligence began as a quest to simulate the human brain. Is it now in the process of transforming the human brain's role in daily life? The Industrial Revolution diminished the need for manual labour. As someone who researches the application of AI in international business, I can't help but wonder whether it is spurring a cognitive revolution, obviating the need for certain cognitive processes as it reshapes how students, workers and artists write, design and decide.

Graphic designers use AI to quickly create a slate of potential logos for their clients. Marketers test how AI-generated customer profiles will respond to ad campaigns. Software engineers deploy AI coding assistants. Students wield AI to draft essays in record time - and teachers use similar tools to provide feedback.

The economic and cultural implications are profound. What happens to the writer who no longer struggles with the perfect phrase, or the designer who no longer sketches dozens of variations before finding the right one? Will they become increasingly dependent on these cognitive prosthetics, similar to how using GPS diminishes navigation skills? And how can human creativity and critical thinking be preserved in an age of algorithmic abundance?

Echoes of the industrial revolution

We've been here before. The Industrial Revolution replaced artisanal craftsmanship with mechanised production, enabling goods to be replicated and manufactured on a mass scale. Shoes, cars and crops could be produced efficiently and uniformly. But products also became more bland, predictable and stripped of individuality. Craftsmanship retreated to the margins, as a luxury or a form of resistance.

Today, there's a similar risk with the automation of thought. Generative AI tempts users to conflate speed with quality, productivity with originality. The danger is not that AI will fail us, but that people will accept the mediocrity of its outputs as the norm. When everything is fast, frictionless and "good enough," there's the risk of losing the depth, nuance and intellectual richness that define exceptional human work.

The rise of algorithmic mediocrity

Despite the name, AI doesn't actually think. Tools such as ChatGPT, Claude and Gemini process massive volumes of human-created content, often scraped from the internet without context or permission. Their outputs are statistical predictions of what word or pixel is likely to follow based on patterns in data they have processed. They are, in essence, mirrors that reflect collective human creative output back to users - rearranged and recombined, but fundamentally derivative.

And this, in many ways, is precisely why they work so well. Consider the countless emails people write, the slide decks strategy consultants prepare and the advertisements that suffuse social media feeds. Much of this content follows predictable patterns and established formulas. It has been there before, in one form or the other. Generative AI excels at producing competent-sounding content - lists, summaries, press releases, advertisements - that bears the signs of human creation without that spark of ingenuity. It thrives in contexts where the demand for originality is low and when "good enough" is, well, good enough.

When AI sparks - and stifles - creativity

Yet, even in a world of formulaic content, AI can be surprisingly helpful. In one set of experiments, researchers tasked people with completing various creative challenges. They found that those who used generative AI produced ideas that were, on average, more creative, outperforming participants who used web searches or no aids at all. In other words, AI can, in fact, elevate baseline creative performance. However, further analysis revealed a critical trade-off: reliance on AI systems for brainstorming significantly reduced the diversity of ideas produced, which is a crucial element for creative breakthroughs. The systems tend to converge toward a predictable middle rather than exploring unconventional possibilities at the edges.

I wasn't surprised by these findings. My students and I have found that the outputs of generative AI systems are most closely aligned with the values and world views of wealthy, English-speaking nations. This inherent bias quite naturally constrains the diversity of ideas these systems can generate.

More troubling still, brief interactions with AI systems can subtly reshape how people approach problems and imagine solutions. One set of experiments tasked participants with making medical diagnoses with the help of AI. However, the researchers designed the experiment so that AI would give some participants flawed suggestions. Even after those participants stopped using the AI tool, they tended to unconsciously adopt those biases and make errors in their own decisions.

What begins as a convenient shortcut risks becoming a self-reinforcing loop of diminishing originality - not because these tools produce objectively poor content, but because they quietly narrow the bandwidth of human creativity itself.

Navigating the cognitive revolution

True creativity, innovation and research are not just probabilistic recombinations of past data. They require conceptual leaps, cross-disciplinary thinking and real-world experience. These are qualities AI cannot replicate. It cannot invent the future. It can only remix the past.

What AI generates may satisfy a short-term need: a quick summary, a plausible design, a passable script. But it rarely transforms, and genuine originality risks being drowned in a sea of algorithmic sameness. The challenge, then, isn't just technological. It's cultural. How can the irreplaceable value of human creativity be preserved amid this flood of synthetic content?

The historical parallel with industrialisation offers both caution and hope. Mechanisation displaced many workers but also gave rise to new forms of labour, education and prosperity. Similarly, while AI systems may automate some cognitive tasks, they may also open up new intellectual frontiers by simulating intellectual abilities. In doing so, they may take on creative responsibilities, such as inventing novel processes or developing criteria to evaluate their own outputs. This transformation is only at its early stages. Each new generation of AI models will produce outputs that once seemed like the purview of science fiction.
The responsibility lies with professionals, educators and policymakers to shape this cognitive revolution with intention. Will it lead to intellectual flourishing or dependency? To a renaissance of human creativity or its gradual obsolescence? The answer, for now, is up in the air.

Salman Rushdie says he has never used AI, warns it could replace writers but first it needs to learn comedy
India Today | 11 hours ago

When OpenAI launched ChatGPT in late 2022, the reaction was mixed. On one hand, everyone was excited to see cutting-edge technology and what artificial intelligence could do for humans. On the other, fears began to linger about AI replacing humans in various jobs. One of the roles people were most concerned about was that of writers, as AI platforms like ChatGPT and Gemini could write everything from essays to novels. Since then, these tools have only become more capable.

Famous writer Salman Rushdie, however, believes that writers shouldn't fear AI, at least not for now, as it lacks one very important skill: a sense of humour. Speaking at the Hay Festival in Wales, Rushdie quipped that writers are safe from being overtaken by machines, at least until a bot can write a book that genuinely makes people laugh. 'I've never tried AI,' he said. 'I pretend it doesn't exist. It has no sense of humour, you don't want to hear a joke told by ChatGPT. If there's ever a moment when a funny book is written by ChatGPT, I think we're screwed.'

Rushdie's comments on AI came during his first major in-person appearance in the UK since he was seriously injured after being stabbed while on stage in the United States. The attack left him blind in his right eye, but he has continued to make public appearances and comment on developments in literature.

Writers have been among the first white-collar professionals to raise the alarm about the rise of AI and its potential to replace humans in writing work. In recent years, several prominent authors have voiced concerns that AI companies are training language models on their copyrighted works without permission or compensation. In 2023, a group of more than 8,500 authors came together and signed an open letter asking tech companies to stop using their books for training AI tools. 'These technologies mimic and regurgitate our language, stories, style, and ideas. Millions of copyrighted books, articles, essays, and poetry provide the 'food' for AI systems, endless meals for which there has been no bill,' they wrote.

The growing tension between technology and creativity is even prompting many in the literary community to call for regulation and transparency. Writers argue that while AI can mimic sentence structure and tone, it lacks the emotional nuance, lived experience, and cultural insight that define truly impactful literature.

Your screen protectors need AI too, apparently. Do AI screen protectors really work?
Mint | 11 hours ago

First, they came for your devices, then they came for your jobs, and now they're coming for your screens. By 'they,' I'm referring to artificial intelligence (AI) - a term you hear almost daily across all spheres of life. Want a quick text summary of a current event? Simply ChatGPT it, or ask Gemini to create an image that depicts the headline appropriately. Yep, it's that easy. But now, AI is being misused to represent technologies that are farcical or simply don't exist yet. One such piece of tech that has climbed the viral trends lately is screen protectors, or screen guards, as they're commonly known in India.

Why would your screen protector need AI, you wonder? Well, we're wondering too. It's not uncommon for companies to now use 'AI' as a way to grab people's attention, even if those features were available prior to the AI-isation of our world.

A Reddit post recently showed a screen protector with an 'optimised for AI' badge. Since then, the post has hit a nerve. Purists think of AI as a marketing gimmick, while new-age adopters think AI has uses everywhere. So, do AI screen protectors work? According to Android Authority's investigation, the screen guard in question was, well, a regular screen-protecting film that was marketed as AI-optimised to garner attention. And it clearly worked.

Was AI used to design the screen guard for better accuracy or for bubble elimination? Perhaps, yes. But we'll never know, because companies now slap the AI tag onto every product without explaining where AI was used and whether AI is part of the final product. In this case, it's possible that AI was used to create the screen guard, but there is no suggestion of the same, at least based on the Reddit post.

If nothing else, the AI screen protector episode is drawing attention to the lack of transparency prevailing in the tech world when it comes to AI functionality and products - especially in the face of evolving regulations. Slapping an AI logo on a product should not be enough to sway potential customers, but that is the case right now. When AI technologies mature enough to create more tangible impact, people will naturally learn to differentiate a gimmick from a revolution. Till then, remember - not everything needs AI optimisation, especially not your next screen guard - at least for now!
