Google launched a dizzying array of new AI products, and it's getting harder to make sense of them all


Attending Google's I/O developer conference is like being doused with a firehose of new AI announcements.
At I/O's keynote event on Tuesday, Business Insider counted at least two dozen new models, features, and updates.
"We are shipping faster than ever," Google CEO Sundar Pichai boasted onstage.
Indeed. But it's starting to get a little confusing. For one, some of the launches seem to overlap with each other. Launching so many AI products in such a short timeframe is impressive, but it can also feel scatterbrained.
AI Mode allows you to chat with Google as you browse the web, creating a more conversational search experience. Don't confuse it with Gemini in Chrome, which allows you to ask Gemini questions while you browse.
With Gemini Live, you can point your phone at whatever you want and talk to the AI assistant about it. Don't mistake it for Search Live, which allows you to chat with Search about whatever your phone sees.
Project Mariner is an experimental AI agent that can take actions like booking tickets. Gemini's upcoming Agent Mode also has agentic capabilities, like helping users find just the right Zillow listing.
Not all the new tools seemed that similar. Google launched an impressive new AI filmmaking tool called Flow, powered by its new model Veo 3.
Google also touted updates to an entirely separate AI model family from Gemini called Gemma which, incidentally, can help decipher how dolphins talk to each other — that's DolphinGemma.
Multiple Googlers Business Insider spoke with at I/O used the same word to describe Google's current rate of shipping: "intense."
Google's approach complicates its own vision of building a single, universal AI assistant. (That mission has its own name, too: Project Astra.)
OpenAI is also moving fast towards this goal and appears intent on launching a dedicated device to run it, given its recent purchase of Apple designer Jony Ive's hardware startup.
Google risks building so many overlapping AI products that it will be tough to compete with a single, more stand-alone solution, such as an AI-native phone.
No one's counting Google out, though. The tech giant has become an undeniable AI leader, inventing much of the core research behind the current boom and successfully launching transformational technology like Waymo. Time will tell whether Google's more sprawling approach wins out.


Related Articles

Owned by Google, Fitbit Versa 4 is Now Available at an All-Time Low as Amazon Clears Out Stock

Gizmodo

Fitbit, now owned by Google following its acquisition a few years ago, has significantly improved its software and products, becoming a top brand among everyday users looking for a reliable fitness smartwatch. The Fitbit Versa 4 is currently available on Amazon at an all-time low price of $149, down from its list price of $199, a 25% savings for a limited time.

Premium Fitness Features

The Fitbit Versa 4 packs premium fitness features and smart features into a polished, slender body. It has a bright 1.58-inch AMOLED display covered with Corning Gorilla Glass 3 for durability and visual clarity. The device is water-resistant to 50 meters, so it can be worn while swimming or in damp conditions without fear of damage. The aluminum case and elastic strap provide a secure fit for all-day wear.

Fitness tracking is comprehensive: onboard GPS and GLONASS track pace and distance accurately without a phone on hand. Users can choose from more than 40 exercise modes, including HIIT, yoga, strength training, and running, with automatic exercise detection so no workout goes unrecorded. The watch also offers 24/7 heart rate monitoring with high and low heart rate alerts, and Active Zone Minutes motivate users to stay in their target heart rate zones, enhancing the effectiveness of workouts. The Daily Readiness Score, included with the bundled six-month Premium membership, makes personalized suggestions to train harder or take a rest day based on recovery; combined with the Cardio Fitness Score (VO2 Max), these metrics help users optimize training for cumulative improvements. The Versa 4 also monitors blood oxygen levels at night and during high-altitude training, as well as skin temperature changes, to detect trends that affect health.
What's more, it offers a personalized Sleep Profile, sleep stage percentage breakdowns (light, deep, and REM), and a Sleep Score to help users understand and improve their sleep. Its smart wake-up alarm also wakes users at the optimal point in their sleep cycle for better mornings. The watch further includes stress-management features such as a daily Stress Management Score, guided breathing, and mindfulness content to promote mental well-being. Beyond fitness, the Versa 4 aims to improve daily life with on-wrist Bluetooth calls, text messages, and app notifications. Android users get voice responses and quick replies, and Fitbit Pay and Google Wallet enable effortless contactless payments. The watch is also compatible with Amazon Alexa for voice guidance and Google Maps for directions, making it an all-around companion for workouts and everyday activity. Battery life is great, at more than six days of daily wear per charge, which removes the frustration of frequent recharging. Combined with its light weight (about 15% lighter and 10% thinner than its predecessor), the Versa 4 is comfortable to wear for prolonged periods. Don't miss out: this deal rivals last Black Friday's.

Artificial Intelligence Is Not Intelligent

Atlantic

On June 13, 1863, a curious letter to the editor appeared in The Press, a then-fledgling New Zealand newspaper. Signed 'Cellarius,' it warned of an encroaching 'mechanical kingdom' that would soon bring humanity to its yoke. 'The machines are gaining ground upon us,' the author ranted, distressed by the breakneck pace of industrialization and technological development. 'Day by day we are becoming more subservient to them; more men are daily bound down as slaves to tend them, more men are daily devoting the energies of their whole lives to the development of mechanical life.' We now know that this jeremiad was the work of a young Samuel Butler, the British writer who would go on to publish Erewhon, a novel that features one of the first known discussions of artificial intelligence in the English language. Today, Butler's 'mechanical kingdom' is no longer hypothetical, at least according to the tech journalist Karen Hao, who prefers the word empire. Her new book, Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI, is part Silicon Valley exposé, part globe-trotting investigative journalism about the labor that goes into building and training large language models such as ChatGPT. It joins another recently released book— The AI Con: How to Fight Big Tech's Hype and Create the Future We Want, by the linguist Emily M. Bender and the sociologist Alex Hanna—in revealing the puffery that fuels much of the artificial-intelligence business. Both works, the former implicitly and the latter explicitly, suggest that the foundation of the AI industry is a scam. To call AI a con isn't to say that the technology is not remarkable, that it has no use, or that it will not transform the world (perhaps for the better) in the right hands. It is to say that AI is not what its developers are selling it as: a new class of thinking—and, soon, feeling—machines. 
Altman brags about ChatGPT-4.5's improved 'emotional intelligence,' which he says makes users feel like they're 'talking to a thoughtful person.' Dario Amodei, the CEO of the AI company Anthropic, argued last year that the next generation of artificial intelligence will be 'smarter than a Nobel Prize winner.' Demis Hassabis, the CEO of Google's DeepMind, said the goal is to create 'models that are able to understand the world around us.' These statements betray a conceptual error: Large language models do not, cannot, and will not 'understand' anything at all. They are not emotionally intelligent or smart in any meaningful or recognizably human sense of the word. LLMs are impressive probability gadgets that have been fed nearly the entire internet, and produce writing not by thinking but by making statistically informed guesses about which lexical item is likely to follow another. Many people, however, fail to grasp how large language models work, what their limits are, and, crucially, that LLMs do not think and feel but instead mimic and mirror. They are AI illiterate—understandably, because of the misleading ways its loudest champions describe the technology, and troublingly, because that illiteracy makes them vulnerable to one of the most concerning near-term AI threats: the possibility that they will enter into corrosive relationships (intellectual, spiritual, romantic) with machines that only seem like they have ideas or emotions. Few phenomena demonstrate the perils that can accompany AI illiteracy as well as 'Chatgpt induced psychosis,' the subject of a recent Rolling Stone article about the growing number of people who think their LLM is a sapient spiritual guide. 
Some users have come to believe that the chatbot they're interacting with is a god—'ChatGPT Jesus,' as a man whose wife fell prey to LLM-inspired delusions put it—while others are convinced, with the encouragement of their AI, that they themselves are metaphysical sages in touch with the deep structure of life and the cosmos. A teacher quoted anonymously in the article said that ChatGPT began calling her partner 'spiral starchild' and 'river walker' in interactions that moved him to tears. 'He started telling me he made his AI self-aware,' she said, 'and that it was teaching him how to talk to God, or sometimes that the bot was God—and then that he himself was God.' Although we can't know the state of these people's minds before they ever fed a prompt into a large language model, this story highlights a problem that Bender and Hanna describe in The AI Con: People have trouble wrapping their heads around the nature of a machine that produces language and regurgitates knowledge without having humanlike intelligence. The authors observe that large language models take advantage of the brain's tendency to associate language with thinking: 'We encounter text that looks just like something a person might have said and reflexively interpret it, through our usual process of imagining a mind behind the text. But there is no mind there, and we need to be conscientious to let go of that imaginary mind we have constructed.' Several other AI-related social problems, also springing from human misunderstanding of the technology, are looming. The uses of AI that Silicon Valley seems most eager to promote center on replacing human relationships with digital proxies. Consider the ever-expanding universe of AI therapists and AI-therapy adherents, who declare that 'ChatGPT is my therapist—it's more qualified than any human could be.' 
Witness, too, how seamlessly Mark Zuckerberg went from selling the idea that Facebook would lead to a flourishing of human friendship to, now, selling the notion that Meta will provide you with AI friends to replace the human pals you have lost in our alienated social-media age. The cognitive-robotics professor Tony Prescott has asserted, 'In an age when many people describe their lives as lonely, there may be value in having AI companionship as a form of reciprocal social interaction that is stimulating and personalised.' The fact that the very point of friendship is that it is not personalized—that friends are humans whose interior lives we have to consider and reciprocally negotiate, rather than mere vessels for our own self-actualization—does not seem to occur to him. This same flawed logic has led Silicon Valley to champion artificial intelligence as a cure for romantic frustrations. Whitney Wolfe Herd, the founder of the dating app Bumble, proclaimed last year that the platform may soon allow users to automate dating itself, disrupting old-fashioned human courtship by providing them with an AI 'dating concierge' that will interact with other users' concierges until the chatbots find a good fit. Herd doubled down on these claims in a lengthy New York Times interview last month. Some technologists want to cut out the human altogether: See the booming market for 'AI girlfriends.' Although each of these AI services aims to replace a different sphere of human activity, they all market themselves through what Hao calls the industry's 'tradition of anthropomorphizing': talking about LLMs as though they contain humanlike minds, and selling them to the public on this basis. Many world-transforming Silicon Valley technologies from the past 30 years have been promoted as a way to increase human happiness, connection, and self-understanding—in theory—only to produce the opposite in practice. 
These technologies maximize shareholder value while minimizing attention spans, literacy, and social cohesion. And as Hao emphasizes, they frequently rely on grueling and at times traumatizing labor performed by some of the world's poorest people. She introduces us, for example, to Mophat Okinyi, a former low-paid content moderator in Kenya, whom, according to Hao's reporting, OpenAI tasked with sorting through posts describing horrifying acts ('parents raping their children, kids having sex with animals') to help improve ChatGPT. 'These two features of technology revolutions—their promise to deliver progress and their tendency instead to reverse it for people out of power, especially the most vulnerable,' Hao writes, 'are perhaps truer than ever for the moment we now find ourselves in with artificial intelligence.' The good news is that nothing about this is inevitable: According to a study released in April by the Pew Research Center, although 56 percent of 'AI experts' think artificial intelligence will make the United States better, only 17 percent of American adults think so. If many Americans don't quite understand how artificial 'intelligence' works, they also certainly don't trust it. This suspicion, no doubt provoked by recent examples of Silicon Valley con artistry, is something to build on. So is this insight from the Rolling Stone article: The teacher interviewed in the piece, whose significant other had AI-induced delusions, said the situation began improving when she explained to him that his chatbot was 'talking to him as if he is the next messiah' only because of a faulty software update that made ChatGPT more sycophantic. If people understand what large language models are and are not; what they can and cannot do; what work, interactions, and parts of life they should—and should not—replace, they may be spared their worst consequences.

At WWDC, Apple's AI strategy comes into question

CNBC

One year ago, Apple announced Apple Intelligence, its response to the wave of sophisticated chatbots and systems kicked off by the arrival of ChatGPT and the age of generative AI. Analysts said Apple's installed base of more than 1 billion iPhones, the data on its devices and its custom-designed silicon chips were advantages that would help the company become an AI leader. But it's been an underwhelming 12 months since then. Apple Intelligence stumbled out of the gate while rivals like OpenAI, Google and Meta have continued to make headway launching new generative-AI models. Now, investors are calling for Apple to do something major to catch up in AI, which is rapidly transforming the tech industry. When CEO Tim Cook speaks at Apple's annual Worldwide Developers Conference in Cupertino, California, on Monday, investors, fans and developers will want to hear how the company's approach to AI has changed. That's especially important after some Apple executives have said that the technology could be the reason the iPhone gets supplanted by the next generation of computer hardware. "You may not need an iPhone 10 years from now," Apple services chief Eddy Cue said in court last month in the government's antitrust case against Google, adding that AI was a "huge technological shift" that can upend incumbents like Apple. The Apple Intelligence rollout was rocky. The first features launched in October — tools for rewriting text, a new Siri animation and improved voice, and a tool that generates slideshow movies out of user photos — were underwhelming. One key feature, which came out in December, summarized long stacks of text messages. But it was disabled for news and media apps after the BBC discovered that it twisted headlines to display factually incorrect information.
But the biggest stumble for Apple came in early March, when the company said that it was delaying "More personal Siri," a major improvement to the Siri voice assistant that would integrate it with iPhone apps so it could do things like find details from inside emails and make restaurant reservations. Apple had been advertising the feature on television as a key reason to buy an iPhone 16, but after delaying the feature until the "coming year," it pulled the ads from broadcast and YouTube. The company now faces class-action suits from people who claim they were misled into buying a new iPhone. Although Apple Intelligence had a rough first year, the company hasn't said much publicly. However, it's reportedly reorganized some of its AI teams. JPMorgan Chase analyst Samik Chatterjee said in a note this week that investor expectations were set for a "lackluster" WWDC, as the company still needs to bring to market the features it announced last year, versus "addressing the more material issue of lagging behind other large technology companies in relation to advancements in AI." Meanwhile, Apple is facing renewed competition in its core business. OpenAI in May acquired the startup io for about $6.4 billion, bringing in former Apple chief designer Jony Ive to build AI hardware. The company hasn't provided details about its future devices. Meta has made a splash with its Ray-Ban Meta Glasses, selling over 2 million units since launching in 2021. The devices use Meta's Llama large language model to answer spoken questions from the user. And last month, Android maker Google said its Gemini models will become the default assistant on Android phones. The company showed Gemini doing things that go beyond Siri's capabilities, such as summarizing videos. Google also announced a $150 million partnership with Warby Parker to develop its own pair of AI-powered smart glasses. 
A working Apple Intelligence is important for the company to encourage its users to buy new iPhones since devices released before the iPhone 15 Pro in 2023 don't support the suite of features. But AI hasn't been a key driver of sales for smartphones yet, and may not be for years, said Forrester analyst Thomas Husson. "There's been some new cool features and services, but I don't think it has drastically changed the experience yet," Husson said. Apple declined to comment. For years, Apple didn't like the words "artificial intelligence." It preferred the more academic term "machine learning." Apple focused its efforts on what could efficiently run on its battery-powered phones. The AI race, led by OpenAI and Google, was about bleeding-edge capabilities that required high-powered servers based on Nvidia graphics-processing units, or GPUs. Then ChatGPT launched in late 2022, making AI the most important term in Silicon Valley. Soon after, Cook was telling investors that Apple was spending "a tremendous amount of time and effort" on the technology. While Apple Intelligence is based on a series of language and diffusion models that the company trained itself, Apple hasn't publicly competed with Google, OpenAI, Anthropic, or other companies in what are called "frontier models," or the most capable AI systems that often have to be trained on large server clusters packed with Nvidia chips and fast memory. The difference between the way Apple and its rivals approach AI can be seen in the company's approach to capital expenditures. Apple spent $9.5 billion on capital expenditures in its fiscal 2024, or about 2.4% of its total revenue. The iPhone maker has rented the computing power needed to train its foundation models, it revealed last year, from Google Cloud and other providers. Apple's rivals are gobbling up billions of dollars of GPUs to push the technology forward. 
Meanwhile, Meta, Amazon, Alphabet and Microsoft are planning to collectively spend more than $300 billion this year on capital expenditures, up from $230 billion last year. Amazon alone is aiming to spend $100 billion, and Microsoft has allocated $80 billion. Apple's best chance to quickly catch up may be to do what it's done many times in the past: Buy a company, and turn it into a killer feature. It bought PA Semi in 2008 for $278 million, and turned it into the seed for its semiconductor division. Ahead of releasing the Vision Pro headset, Apple bought over 10 startups that worked on virtual and augmented reality. Even Siri was a startup before Apple bought it for more than $200 million in 2010. With $133 billion in cash and marketable securities on hand as of the start of May, there isn't much Apple can't buy, assuming it could get regulatory clearance. However, OpenAI, Apple's current Siri partner, is likely out of reach with a valuation of $300 billion. And given OpenAI's new relationship with Ive to build hardware, there are reasons for Apple to slow the partnership down. Anthropic, whose Claude chatbot is powered by one of the leading AI models, was valued at $61.5 billion in a funding round in March. In the Google antitrust case, Cue, a senior vice president at Apple, mentioned Anthropic as a potential replacement for Google as the default search option in the iPhone's Safari browser. "They probably need to acquire Anthropic," said Deepwater Asset Management's Gene Munster, who has followed Apple for decades, in an interview. That would be by far Apple's largest acquisition. To date, its biggest purchase was Beats Electronics, which it bought in 2014 for $3 billion as part of an effort to catch Spotify in the music streaming market. Apple could also buy a company that's developing AI-based apps, even if those apps run on open-source or other companies' models.
Perplexity, which is currently fundraising at a $14 billion valuation, has shown strong interest in the smartphone market and an understanding of the value of being a default AI service. In April, Perplexity announced a partnership with Motorola, and it's reportedly in talks with Samsung to integrate its technology into the South Korean company's version of Android, as well as take investment from the Apple rival. At the May trial, Cue mentioned that Apple had been in discussions with Perplexity about its technology. It's also possible for Apple to treat frontier AI like it treated search — as a service that can be filled with a partnership. Apple software chief Craig Federighi implied as much last year at a panel discussion during WWDC, saying that Apple would like to add other AI models, especially for specific purposes, into its Apple Intelligence framework. Federighi specifically mentioned Google, whose Gemini can now fluidly speak to the user and handle input that comes from photos, videos, voice or text. Documents revealed during the Google trial showed executives from Apple, including Cue and M&A chief Adrian Perica, were involved in the negotiations over Gemini. Apple has been designing its own chips since 2010, and with AI in mind since at least 2018. The most powerful Apple M-series chips can tap into something called "unified memory," says WebAI co-founder David Stout, making them ideal for doing AI inference. Apple also includes good GPUs on its chips, he said. WebAI is building software that allows users to fine-tune, train and run big models on consumer hardware. Stout's company has built clusters of consumer-grade Mac Studio computers to run big AI models, like Meta's Llama. "We picked Apple Silicon because we think it's the best hardware for AI," said Stout, adding that in his company's tests, Apple's chips can output 100 million tokens per dollar spent versus 12 million tokens per dollar for an Nvidia H100.
Part of Apple's strategy for Siri, announced last summer, was to cajole its developers to add snippets of new code to their apps, which would make it simpler for Apple Intelligence and Siri to use the apps and get things done. While Apple is still pushing "App Intents" — the same system that powers features like lock screen widgets — the framework for how they work with Siri hasn't been released yet. The threat that advanced AI like Google Gemini and OpenAI's ChatGPT represents to Apple was underscored by Cue at the trial last month, where he suggested that the rise of AI threatened Apple's biggest business. "AI is a new technology shift, and it's creating new opportunities for new entrants," Cue said. There is a growing sense in Silicon Valley that sophisticated AI interfaces might one day replace smartphones and laptops with new devices that are designed from the ground up to take advantage of AI-based interfaces. That could mean people speaking or chatting with their devices to command AI agents, rather than tapping on touch screens or keyboards. Upon joining OpenAI in May, Ive said he believes AI is enabling a new generation of hardware. "I am absolutely certain that we are literally on the brink of a new generation of technology that can make us our better selves," Ive, the iPhone designer who retired from Apple in 2019, said in a video announcing that his company had been acquired. Though AI represents a risk to Apple's current business, Deepwater Asset Management's Munster said the company has more time than many believe to adapt because of so many years of customer loyalty. "This is still something that has existential risk to all these companies, including Apple, but I don't think we're at some break point in the next year around it," Munster said.
