Google's 'AI Mode' could be bad for Reddit

Google announced last week that it is rolling out a new search tool powered by AI that it's calling, simply, AI Mode.
That could be bad for Reddit.
Reddit has grown significantly over the past year, thanks in large part to Google prioritizing Reddit links in its search results and to searchers looking for human input on an internet increasingly dominated by AI and automated bots.
At Google I/O last week, Google CEO Sundar Pichai described AI Mode as a "total overhaul" of the company's search tool. AI Mode will offer users a more conversational experience, rather than the traditional list of links. That means Google can give a user information found on Reddit, without that user ever having to visit the site.
This will mostly affect Reddit's logged-out users: casual visitors who don't have their own accounts, as opposed to the logged-in users who are Reddit's everyday fans. Much of Reddit's recent growth has come from logged-out users arriving through Google Search.
On Monday, Reddit's stock dropped about 5% after Wells Fargo said it expected Reddit traffic to decrease as "Google more aggressively implements AI features in search."
It wasn't the first time that changes to Google's search had caused Reddit's stock to fall. Shares of the company fell by over 15% in February after Reddit CEO Steve Huffman said in an earnings call that the site saw traffic "volatility" in the fourth quarter after Google tweaked its search algorithm.
While algorithms shift all the time (just ask a digital news organization), Wells Fargo said recent user disruptions on Reddit are "likely more permanent" as search behavior changes in response to Google's AI advancements.
Reddit, however, says that its logged-in users, who come to Reddit directly, are the primary driver of its business.
"A lot of our product work allows us to develop and consistently grow those logged-in users, which are the bedrock of our impressions and inventory because of their very long engagement on Reddit," Reddit Chief Operating Officer Jen Wong said at the Morgan Stanley Technology, Media, & Telecom Conference in March. "It's not existential for us … I think the business is extremely healthy because, again, it's powered by the logged-in users."
Reddit also has its own internal AI search tool, Reddit Answers, which replies to searches with a list of relevant information found in posts on the site.
Ultimately, Huffman says he is confident that Reddit will hold its place on the internet because of the human interactions it provides.
"There's no doubt LLMs will evolve search on the internet. We can all see that. It's awesome," he said in an earnings call with investors earlier this month. "Sometimes people will want the summarized, annotated, sterile answers from AI, and we're even building this ourselves in Reddit Answers. But other times, they want the subjective, authentic, messy, multiple viewpoints that Reddit provides."
"So in the same way that Reddit for the last decade has been an alternative to social media — social media being performative and manicured, and Reddit being the opposite — Reddit communities and conversations will be an alternative to AI search answers," he added.


Related Articles

Google Home is getting even dumber: Now thermostat voice commands are broken

Android Authority

Ryan Haines / Android Authority

TL;DR

  • Google Home users with thermostats can no longer use the simple command 'turn off the thermostat' to control their device.
  • Instead, users have to say 'set thermostat to off' to turn it off, an unnatural phrasing for this control.

Nest Thermostat users are no strangers to Google downgrading their smart home experience, but a recent change has all thermostat users on Google Home scratching their heads. As it turns out, Google's smart home ecosystem no longer accepts a simple command to turn off the thermostat, forcing you to say a more convoluted command out loud just to stay in control of your smart home.

Reddit user NSuave pointed out that the Google Nest Hub in their kitchen no longer responds to the commands 'turn off the thermostat' or 'turn off the nest.' Instead, the device advises the user to say 'turn off the thermostat,' which is the very command that failed in the first place. We tried turning off a Nest Thermostat (3rd gen) by voice, and the command no longer works; we have to say 'set the thermostat to off,' a rather unnatural command, to get it to respond correctly. Thankfully, controlling the temperature by voice continues to work properly.

The Reddit thread is filled with frustrated Google Home and Google Assistant users complaining about how often things break within this smart home ecosystem. Breakage like this also disrupts any automations that users have meticulously set up, and broken commands force the entire family to adopt new lingo just to get the basic functionality of their house working again.

We've reached out to Google for a statement on this issue and will keep you updated when we learn more.

Owned by Google, Fitbit Versa 4 is Now Available at an All-Time Low as Amazon Clears Out Stock

Gizmodo

Fitbit is now owned by Google following its acquisition a few years ago, and it has significantly improved its software and products, becoming a top brand among everyday users looking for a reliable fitness smartwatch. The Fitbit Versa 4 is currently available on Amazon at an all-time low price of $149, down from its list price of $199, a 25% savings for a limited time.

Premium Fitness Features

The Fitbit Versa 4 packs premium fitness features and smart features into a polished, slender body. It has a bright 1.58-inch AMOLED display covered with Corning Gorilla Glass 3 for durability and clarity. The device is water-resistant up to 50 meters, so people can wear it while swimming or in damp conditions without fear of damage. The aluminum case and elastic strap provide a secure fit for all-day wear.

It offers complete fitness-tracking functionality: Onboard GPS and GLONASS track pace and distance accurately without requiring a phone on one's person. Users can choose from more than 40 exercise modes, including HIIT, yoga, strength training, and running, along with automatic exercise detection so that no workout goes unrecorded. The watch also includes 24/7 heart rate monitoring with high and low heart rate alerts, and Active Zone Minutes motivate users to stay in their target heart rate zones, enhancing the effectiveness of workouts.

The Fitbit Daily Readiness Score, included with the bundled six-month Premium membership, makes personalized suggestions to train harder or take a rest day based on recovery. Combined with the Cardio Fitness Score (VO2 Max), users can draw on these to optimize training for cumulative improvements. The Versa 4 also monitors blood oxygen levels at night and during high-altitude training, as well as skin temperature changes, to detect trends that affect health. What's more, it offers a personalized Sleep Profile, sleep stage percentage breakdowns (light, deep, and REM), and a Sleep Score to help users understand and improve their sleep. Its smart wake-up alarm wakes users at the optimal point in their sleep cycle for better mornings. The watch also includes stress management features such as a daily Stress Management Score, guided breathing, and mindfulness content to promote mental well-being.

Beyond fitness, the Versa 4 seeks to improve daily life with on-wrist Bluetooth calls, text messages, and app notifications. Android users get voice responses and quick replies, and Fitbit Pay and Google Wallet enable effortless contactless payments. The watch is also compatible with Amazon Alexa for voice assistance and Google Maps for directions, making it an all-around companion for workouts as well as everyday activity.

Battery life is great, at more than six days of daily wear per charge, which removes the frustration of frequent recharging. Combined with its light weight (about 15% lighter and 10% thinner than its predecessor), the Versa 4 is comfortable and easy to wear for prolonged use. Don't miss out: this deal is similar to last Black Friday's.

Artificial Intelligence Is Not Intelligent

Atlantic

On June 13, 1863, a curious letter to the editor appeared in The Press, a then-fledgling New Zealand newspaper. Signed 'Cellarius,' it warned of an encroaching 'mechanical kingdom' that would soon bring humanity to its yoke. 'The machines are gaining ground upon us,' the author ranted, distressed by the breakneck pace of industrialization and technological development. 'Day by day we are becoming more subservient to them; more men are daily bound down as slaves to tend them, more men are daily devoting the energies of their whole lives to the development of mechanical life.' We now know that this jeremiad was the work of a young Samuel Butler, the British writer who would go on to publish Erewhon, a novel that features one of the first known discussions of artificial intelligence in the English language.

Today, Butler's 'mechanical kingdom' is no longer hypothetical, at least according to the tech journalist Karen Hao, who prefers the word empire. Her new book, Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI, is part Silicon Valley exposé, part globe-trotting investigative journalism about the labor that goes into building and training large language models such as ChatGPT. It joins another recently released book—The AI Con: How to Fight Big Tech's Hype and Create the Future We Want, by the linguist Emily M. Bender and the sociologist Alex Hanna—in revealing the puffery that fuels much of the artificial-intelligence business. Both works, the former implicitly and the latter explicitly, suggest that the foundation of the AI industry is a scam.

To call AI a con isn't to say that the technology is not remarkable, that it has no use, or that it will not transform the world (perhaps for the better) in the right hands. It is to say that AI is not what its developers are selling it as: a new class of thinking—and, soon, feeling—machines. Altman brags about ChatGPT-4.5's improved 'emotional intelligence,' which he says makes users feel like they're 'talking to a thoughtful person.' Dario Amodei, the CEO of the AI company Anthropic, argued last year that the next generation of artificial intelligence will be 'smarter than a Nobel Prize winner.' Demis Hassabis, the CEO of Google's DeepMind, said the goal is to create 'models that are able to understand the world around us.'

These statements betray a conceptual error: Large language models do not, cannot, and will not 'understand' anything at all. They are not emotionally intelligent or smart in any meaningful or recognizably human sense of the word. LLMs are impressive probability gadgets that have been fed nearly the entire internet, and produce writing not by thinking but by making statistically informed guesses about which lexical item is likely to follow another.

Many people, however, fail to grasp how large language models work, what their limits are, and, crucially, that LLMs do not think and feel but instead mimic and mirror. They are AI illiterate—understandably, because of the misleading ways its loudest champions describe the technology, and troublingly, because that illiteracy makes them vulnerable to one of the most concerning near-term AI threats: the possibility that they will enter into corrosive relationships (intellectual, spiritual, romantic) with machines that only seem like they have ideas or emotions.
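To make those 'statistically informed guesses' concrete, here is a minimal, hypothetical sketch of next-token sampling. The toy context and probabilities are invented purely for illustration; a real LLM derives its distribution over an entire vocabulary from billions of learned parameters, not from a lookup table.

```python
import random

# Hypothetical conditional probabilities P(next word | context).
# In a real model these come from a neural network, not a dictionary.
next_word_probs = {
    "the cat sat on the": {"mat": 0.6, "floor": 0.25, "moon": 0.15},
}

def sample_next_word(context: str) -> str:
    """Pick the next word by sampling from the model's probability distribution."""
    probs = next_word_probs[context]
    words = list(probs)
    weights = [probs[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

# Most runs print "mat": the statistically likeliest continuation,
# produced with no understanding of cats, mats, or sitting.
print(sample_next_word("the cat sat on the"))
```

Repeating this one step at a time, always predicting the next word from the words so far, is the entire generation mechanism the essay describes.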
Few phenomena demonstrate the perils that can accompany AI illiteracy as well as 'ChatGPT-induced psychosis,' the subject of a recent Rolling Stone article about the growing number of people who think their LLM is a sapient spiritual guide. Some users have come to believe that the chatbot they're interacting with is a god—'ChatGPT Jesus,' as a man whose wife fell prey to LLM-inspired delusions put it—while others are convinced, with the encouragement of their AI, that they themselves are metaphysical sages in touch with the deep structure of life and the cosmos. A teacher quoted anonymously in the article said that ChatGPT began calling her partner 'spiral starchild' and 'river walker' in interactions that moved him to tears. 'He started telling me he made his AI self-aware,' she said, 'and that it was teaching him how to talk to God, or sometimes that the bot was God—and then that he himself was God.'

Although we can't know the state of these people's minds before they ever fed a prompt into a large language model, this story highlights a problem that Bender and Hanna describe in The AI Con: People have trouble wrapping their heads around the nature of a machine that produces language and regurgitates knowledge without having humanlike intelligence. The authors observe that large language models take advantage of the brain's tendency to associate language with thinking: 'We encounter text that looks just like something a person might have said and reflexively interpret it, through our usual process of imagining a mind behind the text. But there is no mind there, and we need to be conscientious to let go of that imaginary mind we have constructed.'

Several other AI-related social problems, also springing from human misunderstanding of the technology, are looming. The uses of AI that Silicon Valley seems most eager to promote center on replacing human relationships with digital proxies. Consider the ever-expanding universe of AI therapists and AI-therapy adherents, who declare that 'ChatGPT is my therapist—it's more qualified than any human could be.' Witness, too, how seamlessly Mark Zuckerberg went from selling the idea that Facebook would lead to a flourishing of human friendship to, now, selling the notion that Meta will provide you with AI friends to replace the human pals you have lost in our alienated social-media age. The cognitive-robotics professor Tony Prescott has asserted, 'In an age when many people describe their lives as lonely, there may be value in having AI companionship as a form of reciprocal social interaction that is stimulating and personalised.' The fact that the very point of friendship is that it is not personalized—that friends are humans whose interior lives we have to consider and reciprocally negotiate, rather than mere vessels for our own self-actualization—does not seem to occur to him.

This same flawed logic has led Silicon Valley to champion artificial intelligence as a cure for romantic frustrations. Whitney Wolfe Herd, the founder of the dating app Bumble, proclaimed last year that the platform may soon allow users to automate dating itself, disrupting old-fashioned human courtship by providing them with an AI 'dating concierge' that will interact with other users' concierges until the chatbots find a good fit. Herd doubled down on these claims in a lengthy New York Times interview last month. Some technologists want to cut out the human altogether: See the booming market for 'AI girlfriends.'
Although each of these AI services aims to replace a different sphere of human activity, they all market themselves through what Hao calls the industry's 'tradition of anthropomorphizing': talking about LLMs as though they contain humanlike minds, and selling them to the public on this basis. Many world-transforming Silicon Valley technologies from the past 30 years have been promoted as a way to increase human happiness, connection, and self-understanding—in theory—only to produce the opposite in practice. These technologies maximize shareholder value while minimizing attention spans, literacy, and social cohesion. And as Hao emphasizes, they frequently rely on grueling and at times traumatizing labor performed by some of the world's poorest people. She introduces us, for example, to Mophat Okinyi, a former low-paid content moderator in Kenya, whom, according to Hao's reporting, OpenAI tasked with sorting through posts describing horrifying acts ('parents raping their children, kids having sex with animals') to help improve ChatGPT. 'These two features of technology revolutions—their promise to deliver progress and their tendency instead to reverse it for people out of power, especially the most vulnerable,' Hao writes, 'are perhaps truer than ever for the moment we now find ourselves in with artificial intelligence.'

The good news is that nothing about this is inevitable: According to a study released in April by the Pew Research Center, although 56 percent of 'AI experts' think artificial intelligence will make the United States better, only 17 percent of American adults think so. If many Americans don't quite understand how artificial 'intelligence' works, they also certainly don't trust it. This suspicion, no doubt provoked by recent examples of Silicon Valley con artistry, is something to build on.

So is this insight from the Rolling Stone article: The teacher interviewed in the piece, whose significant other had AI-induced delusions, said the situation began improving when she explained to him that his chatbot was 'talking to him as if he is the next messiah' only because of a faulty software update that made ChatGPT more sycophantic. If people understand what large language models are and are not; what they can and cannot do; what work, interactions, and parts of life they should—and should not—replace, they may be spared their worst consequences.
