Google is readying its AI Mode search tool for primetime, whether you like it or not

Yahoo · 20-05-2025

It sure looks like Google is prepping its controversial AI mode for primetime.
This week, some Google users noticed an AI Mode button showing up instead of Google's iconic "I'm feeling lucky" button on the homepage. And today, a Mashable reporter spotted "AI Mode" appearing as an option on search results pages, alongside stalwart Google tools like News, Shopping, Images, and Videos. Notably, this reporter did not proactively sign up to participate in AI Mode through Google Labs. That suggests Google is testing the feature for select users.
AI Mode appears for select users on search results pages. Credit: Tim Marcin / Mashable
This is what AI Mode looks like in Google Search. Credit: Tim Marcin / Mashable
All signs point to a wide release of Google's AI-powered search tool coming soon. Maybe at Google I/O next Tuesday?
Google has been testing AI search features ever since OpenAI and ChatGPT started siphoning away searchers, particularly younger searchers. And that's just one of many new developments from Gemini-land.
Like pretty much every other week, a lot happened in AI news this week.
So, we've rounded up the biggest stories and most important AI developments in products, business, politics, and... Catholicism. Here's our recap of AI news this week.
xAI's Grok chatbot went off the rails this week, responding to X users with completely unprompted musings about "white genocide" in South Africa. The company said it was due to an "unauthorized modification" and promised to do better next time. Coincidentally, xAI leader and Grok power user Elon Musk has been repeatedly tweeting about the subject. Even OpenAI CEO Sam Altman joined the ongoing pile-on on X.
In OpenAI's world, the company brought GPT-4.1 to ChatGPT "by popular request." Initially, the model was only available through the API. Now it's available to ChatGPT Plus, Pro, and Team users, with Enterprise and Edu access rolling out soon; free users get GPT-4.1 mini.
On Friday, OpenAI also launched a preview version of Codex, a coding agent for engineers, which is rolling out to ChatGPT Pro, Enterprise, and Team subscribers. Codex is "a version of OpenAI o3 optimized for software engineering."
Google held a pre-I/O event for Android news. The main takeaway: Google is bringing Gemini to the Android operating systems that run smartwatches, cars, and TVs.
There's probably no better fit for image-to-video generation than TikTok, and it has released a new feature that does exactly that. It's called AI Alive, and Mashable's CJ Silva says it's pretty realistic.
Last but not least, prepare to hear a lot more AI-generated narration with your Audible books. Audible's parent company, Amazon, announced this week that it has partnered with publishers to "expand [its] catalog with AI narration."
This was also a big week for artificial intelligence in politics and foreign affairs.
OpenAI is reportedly already making moves on its global AI infrastructure plans. Bloomberg reports that it is "considering building new data center capacity in the United Arab Emirates." Meanwhile, OpenAI CEO Sam Altman and other tech billionaires joined President Donald Trump in Saudi Arabia for a visit with Crown Prince Mohammed bin Salman, who launched a new AI company called Humain.
While business schmoozing went down in the Middle East, Bloomberg also reported that OpenAI's Stargate Project to build AI infrastructure in the U.S. has run into roadblocks. Plans have reportedly been held up by Japanese investor SoftBank over tariff-related concerns.
On top of that, Microsoft and OpenAI are reportedly renegotiating the terms of their partnership as OpenAI tries to restructure its for-profit business into a Public Benefit Corporation (PBC) that would still be governed by its nonprofit board, according to the Financial Times. OpenAI needs to keep Microsoft, which has invested $13 billion, happy, but the companies' increasingly competing interests have reportedly created tension.
In the public sector, House Republicans proposed a ten-year moratorium on states introducing their own AI regulations, 404 Media reports. This language was nestled in the Budget Reconciliation bill. Don't Republicans like states' rights? We're confused too.
Speaking of AI regulation under the Trump Administration, the U.S. Copyright Office published a "pre-publication version" of part three of its highly anticipated AI copyright report last week, which generally favored copyright holders over AI companies claiming fair use. The very next day, Trump fired Copyright Office head Shira Perlmutter. That hasn't stopped plaintiffs in the Kadrey v. Meta case from using the report as a weapon against Meta, as Mashable first reported.
And what does the new pope have to say about artificial intelligence? A lot, actually. Pope Leo XIV said AI posed "new challenges for humanity" in his first address to the College of Cardinals. His name choice pays tribute to Pope Leo XIII, who presided over the Catholic Church during the Industrial Revolution and advocated for workers' rights and social reform.
Sensing myself called to continue in this same path, I chose to take the name Leo XIV. There are different reasons for this, but mainly because Pope Leo XIII in his historic Encyclical Rerum Novarum addressed the social question in the context of the first great industrial revolution. In our own day, the Church offers to everyone the treasury of her social teaching in response to another industrial revolution and to developments in the field of artificial intelligence that pose new challenges for the defence of human dignity, justice and labour.
Disclosure: Ziff Davis, Mashable's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.

Related Articles

This ChatGPT ‘memory hack' changes everything — use these prompts to make it remember you

Tom's Guide

If you've ever found yourself reintroducing yourself, your tone preferences or even something as basic as your name to ChatGPT, you're not alone. While OpenAI's chatbot is famous for being smart, helpful and shockingly conversational, it's not always great at remembering what matters to you, which is why you might want to teach it. The good news is that ChatGPT actually has a memory feature that's smarter than most people realize, and there are a few tricks you can use to make it remember exactly what you want. Here's how to unlock ChatGPT's memory, plus a few sneaky hacks to get it acting like a real personal assistant.

ChatGPT's memory is designed to remember helpful facts about you, including your name, your job, your writing style and even your goals. Knowing these things helps the AI tailor responses over time. Think of it like your AI assistant building a mental file cabinet with your preferences inside. OpenAI first rolled out memory to GPT-4o users in early 2024, and now it's automatically turned on for most ChatGPT Plus users. But unless you're actively using it, or customizing it, you might not get the full benefit. To check if memory is on, go to Settings → Personalization → Memory. From there, you can view, edit or wipe everything ChatGPT has remembered about you.

One of the simplest ways to store a fact in ChatGPT's memory is to literally prompt it to remember something, by starting a message with 'Remember that...' followed by the detail you want it to keep. If memory is enabled, ChatGPT will usually respond with: 'Got it. I'll remember that for future chats.' If not, it may ask for permission to store that information.

I've noticed that when I've asked ChatGPT to remember things, it doesn't always remember the first time. Sometimes, not even the second time. If you run into this problem, stay persistent and keep reminding ChatGPT until it actually does remember. For example, I once used ChatGPT Vision to help my mom match fabric for a project. From then on, ChatGPT thought I was a quilter. I had to tell the chatbot to forget that (much to my mom's dismay, I'm sure).

Pro tip: You can also say 'Forget that' or 'Update my memory' if something changes — like your job or preferred tone. If you want to be sure it does not remember something, you can also use the temporary chat feature.

Even if memory is off or you aren't a Plus subscriber, you can still simulate long-term memory using what I call the context chaining trick. Start your prompt with: 'For the rest of this conversation, assume I'm a second grade teacher working on an end-of-year project for my students.' This doesn't persist across sessions, but it works surprisingly well for one-off tasks or multi-step projects.

OpenAI makes it easy to see what ChatGPT has remembered — and yes, you should check it occasionally. Just type: 'What do you remember about me?' It'll respond with a summary of the info it has on file, like: 'You're a mom of three who juggles work, parenting and writing with a good sense of humor. You're no stranger to trampoline parks, fourth grade homework chaos or PTA drama. You're based in New Jersey, drive a Jeep and sometimes test AI tools for personal life (like IEP meetings or canceling gym memberships).'

ChatGPT won't magically know your preferences unless you teach it, but with memory, it can get surprisingly close. Whether you're writing a novel, planning a trip or just tired of repeating yourself, these memory hacks can turn ChatGPT into a genuinely helpful sidekick.
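The context chaining trick above is just standing context sent at the start of a conversation. For readers who use the OpenAI API rather than the ChatGPT app, here is a minimal sketch of the same idea; it assumes the official openai Python SDK, an API key in your environment, and a placeholder model name, and none of it comes from the article itself.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Standing facts you would otherwise retype in every new chat.
MEMORY = (
    "Assume I'm a second grade teacher working on an "
    "end-of-year project for my students. Keep answers brief."
)

def ask(question: str) -> str:
    """Send one question with the standing context prepended."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; swap in whatever model you use
        messages=[
            {"role": "system", "content": MEMORY},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("Suggest three ideas for the end-of-year project."))

Unlike ChatGPT's built-in memory, this "memory" lives entirely in your script: it is prepended to every request you send and disappears the moment you change the MEMORY string.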

For the love of God, stop calling your AI a co-worker

TechCrunch

Generative AI comes in many forms. Increasingly, though, it's marketed the same way: with human names and personas that make it feel less like code and more like a co-worker. A growing number of startups are anthropomorphizing AI to build trust fast — and soften its threat to human jobs. It's dehumanizing, and it's accelerating.

I get why this framing took off. In today's upside-down economy, where every hire feels like a risk, enterprise startups — many emerging from the famed accelerator Y Combinator — are pitching AI not as software but as staff. They're selling replacements. AI assistants. AI coders. AI employees. The language is deliberately designed to appeal to overwhelmed hiring managers.

Some don't even bother with subtlety. Atlog, for instance, recently introduced an 'AI employee for furniture stores' that handles everything from payments to marketing. One good manager, it gloats, can now run 20 stores at once. The implication: you don't need to hire more people — just let the system scale for you. (What happens to the 19 managers it replaces is left unsaid.)

Consumer-facing startups are leaning into similar tactics. Anthropic named its platform 'Claude' because it's a warm, trustworthy-sounding companion for a faceless, disembodied neural net. It's a tactic straight out of the fintech playbook, where apps like Dave, Albert, and Charlie masked their transactional motives with approachable names. When handling money, it feels better to trust a 'friend.' The same logic has crept into AI. Would you rather share sensitive data with a machine learning model or your bestie Claude, who remembers you, greets you warmly, and almost never threatens you? (To OpenAI's credit, it still tells you you're chatting with a 'generative pre-trained transformer.')

But we're reaching a tipping point. I'm genuinely excited about generative AI. Still, every new 'AI employee' has begun to feel more dehumanizing. Every new 'Devin' makes me wonder when the actual Devins of the world will push back on being abstracted into job-displacing bots.

Generative AI is no longer just a curiosity. Its reach is expanding, even if the impacts remain unclear. In mid-May, 1.9 million unemployed Americans were receiving continued jobless benefits — the highest since 2021. Many of those were laid-off tech workers. The signals are piling up.

Some of us still remember 2001: A Space Odyssey. HAL, the onboard computer, begins as a calm, helpful assistant before turning completely homicidal and cutting off the crew's life support. It's science fiction, but it hit a nerve for a reason.

Last week, Anthropic CEO Dario Amodei predicted that AI could eliminate half of entry-level white-collar jobs in the next one to five years, pushing unemployment as high as 20%. 'Most [of these workers are] unaware that this is about to happen,' he told Axios. 'It sounds crazy, and people just don't believe it.'
You could argue that's not comparable to cutting off someone's oxygen, but the metaphor isn't that far off. Automating more people out of paychecks will have consequences, and when the layoffs increase, the branding of AI as a 'colleague' is going to look less clever and more callous.

The shift toward generative AI is happening regardless of how it's packaged. But companies have a choice in how they describe these tools. IBM never called its mainframes 'digital co-workers.' PCs weren't 'software assistants'; they were workstations and productivity tools.

Language still matters. Tools should empower. But more and more companies are marketing something else entirely, and that feels like a mistake. We don't need more AI 'employees.' We need software that extends the potential of actual humans, making them more productive, creative, and competitive.

So please stop talking about fake workers. Just show us the tools that help great managers run complex businesses. That's all anyone is really asking for.

Dior Names Jonathan Anderson Creative Director of Women's, Men's Collections

Wall Street Journal

French fashion brand Dior named Jonathan Anderson as creative head of its women's, men's, and haute couture collections, unifying its creative direction under one designer for the first time since founder Christian Dior held the reins. The haute couture brand, part of luxury conglomerate LVMH, in April appointed Anderson as artistic director of its men's collections. The business said Monday that Anderson will also assume the creative direction of women's collections.
