Google Gemini Chats Just Got More Personal

Yahoo · 15 hours ago
GPT-5 might be the most important thing happening in AI tech this month, but Google is also constantly upgrading the abilities of its Gemini chatbot. The company announced new Gemini features on Wednesday that users will surely appreciate. Chief among them is a new "Personal Context" feature that lets the chatbot draw on your past conversations to give more personal answers. It will roll out gradually, starting with Gemini 2.5 Pro users.
Gemini is also getting more private. Google is rolling out support for Temporary Chats, a feature similar to private browsing or private Google Maps navigation. On top of that, Google is changing how you manage Gemini privacy: the "Gemini Apps Activity" section will be renamed "Keep Activity" in the near future. Like "Gemini Apps Activity," it will include settings for managing Gemini Live data from your iPhone or Android handset, including audio recordings.
Read more: How To Remove Yourself From Google Results And Any Other Websites
Gemini Personalization Is Turned On By Default
AI firms like OpenAI and Google are improving the abilities of their chatbots with every new update. The goal isn't just to offer a more reliable assistant that can quickly answer all sorts of prompts; these companies also want to create personal assistants that know the user well enough to respond with more helpful context. Personalization features, like remembering previous chats and specific memories, are stepping stones toward that future.
Google on Wednesday said that Gemini will be able to reference past chats to learn what you like and offer personal responses. Here's an example of a personal Gemini experience from Google's announcement: "You've previously discussed the evolution of characters' powers in your favorite comic book. Now, if you ask Gemini to brainstorm a birthday party theme that's unique to you, it might suggest a celebration based on your favorite character, complete with themed food and a custom photo booth with props."
The feature is called "Personal Context," and it's turned on by default in the settings menu of the Gemini app. The screenshots above show your customization options on mobile and desktop. You can disable "Your past chats with Gemini" if you don't want the AI to retain memories.
Personalized conversations will initially be available with the Gemini 2.5 Pro model in select markets, and will roll out to Gemini 2.5 Flash in the coming weeks.
Temporary Chats And New Privacy Controls
Temporary Chats let you exclude Gemini conversations from your history. They won't appear in the list of recent chats, and they won't impact Gemini's "Personal Context" memory feature. Temporary Chats also won't send data to Google to improve Gemini, even if you have that setting enabled. To start one, tap the chat bubble icon that appears to the right of the "New chat" menu. Temporary Chats are retained for 72 hours, the standard retention window for all Gemini chats, for safety and feedback purposes.
If you have the "Gemini Apps Activity" setting enabled in your Gemini apps, Google will use information from your chats to train the model. That's how other AI models work, including ChatGPT. But users can always opt out to exclude personal chats from the training data Google uses to improve its AI models.
The "Gemini Apps Activity" feature will be called "Keep Activity" in the coming weeks. Despite the name change, the purpose of the setting will not change. Turn it off, and Google won't train the AI with your data. If "Gemini Apps Activity" is off, "Keep Activity" will remain off.
Google also explained that it introduced a privacy feature earlier this month that lets Gemini Live users decide whether audio, video, and screenshots can be used to train Gemini. That feature is off by default, so you don't have to do anything to keep that data from reaching Google's servers. As you can see above, the upcoming "Keep Activity" setting will also include a tick box for audio and Gemini Live recordings. Leave that box unticked if you don't want Google using the data to improve its AI models.
Read the original article on BGR.

Related Articles

Google Messages now ensures you don't get flashed without your consent

Android Authority · 17 minutes ago

TL;DR:

• Google Messages will now automatically blur NSFW photos that you receive or send.
• It will show warnings before opening any explicit media shared with you, to ensure you approve of it.
• It will also warn you of the risks before you send such photos.
• All processing takes place locally on your device, so none of the private media is sent to Google.

The flood of multimedia junk that RCS brought to Android's default Messages app has inspired Google to bolster it with extensive spam protection. Now, Google is expanding those protections to any NSFW (Not Safe For Work) media that might land in your inbox, solicited or not.

Google Messages is gaining 'Sensitive content warnings' that will notify you when you receive a picture containing nudity. The feature, first announced in late 2024 and then rolled out in beta earlier this year, is now available to all users, as noted by 9to5Google. When the feature is enabled, such images will be automatically blurred to save you from any public embarrassment, even if they were meant for you. When you first tap the image, it presents another set of 'Yes' and 'No' options to ensure you open it mindfully. If you are uncomfortable viewing the picture, you can also delete it without revealing its contents, or block and report the sender.

Google says nudity in pictures is identified by an Android system feature called SafetyCore. The analysis and processing happen locally, so you won't have to worry about any private media being sent to Google. There is currently no protection for other media, such as GIFs or videos, possibly because of their larger sizes, although Google is already testing support for them.

In addition to protecting you against unwanted explicit images, Google will also warn you of the implications when you send a picture with nudity to another person. Google is adding a link to a resource that apprises you of the risks of voluntarily sharing nude images, which can later be used to harass you or cause anguish. The resource also notes the repercussions of sharing anyone else's images without their consent.

While the feature is going live for a broad set of users, you may have to ensure that it is turned on and that Android System SafetyCore is installed. To do that, open the Google Messages app and tap your profile picture at the top right. Next, go to Message Settings > Protection & Safety and tap 'Manage sensitive content warnings.' On the page that opens, you might be asked to download SafetyCore before enabling the feature. Once it's installed, you can toggle the warnings on, or tap the 'Visit resources' link at the bottom to view the risks associated with sharing nude images through messages.

On a dedicated support page, Google notes that sensitive content warnings are turned on by default once you set up SafetyCore. Adults (18+) can turn the feature off, but for 'Supervised' teens whose accounts are managed through the Family Link app, only parents can manage it. Unsupervised teens (aged 13–17) will have the option to turn it off themselves.
While it's good to see Google Messages pick up this crucial feature, it doesn't extend as far as Sensitive Content Warnings on iOS, which block all NSFW media (including videos) shared across multiple apps, such as Messages, Contact Cards, FaceTime, and even AirDrop.

Delete Any App On Your Smartphone If You See This On Screen

Forbes · 18 minutes ago

Even as Google and Apple make headlines with new security features for Android and iPhone, the mobile threat landscape has never been worse. Your phone is under attack from malicious texts and emails, malware-laced apps, even over-the-air threats. Some of this is hard to detect. But one message on screen is a glaring red flag.

It's fairly straightforward to ensure your phone, the digital key to your life, is better protected. Do not click links or download unexpected attachments; do not install apps from outside official stores; and always run an updated version of the phone's OS. It should be that simple, but it's not. Upwards of a billion smartphones are running outdated operating systems that no longer receive security updates. Sideloading apps from emails, messages, and third-party stores remains popular. And hardly a week goes by without news of new text or email attacks claiming victims.

But there is one safeguard that really is simple. It stops attackers from hijacking devices and taking control of your cameras and microphone. It prevents current threats such as LumaSpy and PlayPraetor from carrying out their worst. And it makes it immeasurably more difficult for bad actors to run riot on your device. We're talking about accessibility services, permissions that grant wholesale access to a phone.

'Your app must use platform-level accessibility services only for the purpose of helping users with disabilities interact with your app,' Google says. But alas, this is the golden ticket for malware developers. All they need to do is trick you into saying yes.

The 'power' of these accessibility services is such that 'very few official apps will mess with it for fear of attracting the wrath of Google,' Bitdefender says. But unfortunately, 'malicious apps don't have the same qualms,' and 'many types of malware will try to gain access to this permission as a way to take over control and monitor devices.'

Google has locked down accessibility services. But 'the security enhancements aimed at limiting abuse of Android's accessibility services have been systematically circumvented by sophisticated malware loaders. This has enabled a new generation of banking trojans, keyloggers, and remote access tools to persistently target users.'

The screenshots above (courtesy of Zimperium) illustrate what you need to look for. Any app you have installed that asks for 'full control' is a serious risk. Unless you have downloaded an app that requires control of a device for your own personal needs, it's dangerous to grant these permissions. You should delete the app.

The latest ChatGPT is supposed to be ‘PhD level' smart. It can't even label a map

Yahoo · 21 minutes ago

A version of this story appeared in CNN Business' Nightcap newsletter. To get it in your inbox, sign up for free here.

Sam Altman, the artificial intelligence hype master, is in damage-control mode. OpenAI's latest version of its vaunted ChatGPT bot was supposed to be 'PhD-level' smart. It was supposed to be the next great leap forward for a company that investors have poured billions of dollars into. Instead, ChatGPT got a flatter, more terse personality that can't reliably answer basic questions. The resulting public mockery has forced the company to make sweaty apologies while standing by its highfalutin claims about the bot's capabilities. In short: It's a dud.

The misstep on the model, called GPT-5, is notable for a couple of reasons.

1. It highlighted the many existing shortcomings of generative AI that critics were quick to seize on (more on that in a moment, because they were quite funny).

2. It raised serious doubts about OpenAI's ability to build and market consumer products that human beings are willing to pay for. That should be particularly concerning for investors, given that OpenAI, which has never turned a profit, is reportedly worth $500 billion.

Let's rewind a bit to last Thursday, when OpenAI finally released GPT-5 to the world, about a year behind schedule, according to the Wall Street Journal. Now, one thing this industry is really good at is hype, and on that metric, CEO Sam Altman delivered. During a livestream ahead of the launch, Altman said talking to GPT-5 would be like talking to 'a legitimate PhD-level expert in anything, any area you need.' In his typically lofty style, he said GPT-5 reminds him of 'when the iPhone went from those giant-pixel old ones to the retina display.' The new model, Altman said in a press briefing, is 'significantly better in obvious ways and subtle ways, and it feels like something I don't want to ever have to go back from.'

Then people started actually using it. Users had a field day testing GPT-5 and mocking its wildly incorrect answers. The journalist Tim Burke said on Bluesky that he prompted GPT-5 to 'show me a diagram of the first 12 presidents of the United States with an image of their face and their name under the image.' The bot returned an image of nine people instead, with rather creative spellings of America's early leaders, like 'Gearge Washingion' and 'William Henry Harrtson.'

A similar prompt for the last 12 presidents returned an image that included two separate versions of George W. Bush. No, not George H.W. Bush and then Dubya. It had 'George H. Bush.' And then his son, twice. Except the second time, George Jr. looked like just some random guy.

Labeling basic maps of the United States also proved tricky for GPT-5 (but again, pretty funny, as tech writer Ed Zitron's post on Bluesky showed). GPT-5 did slightly better when I asked it on Wednesday for a map of the US. Some people can, in fact, label the great state of Vermont correctly without a PhD, but not GPT-5. And this is the first I'm hearing of states named 'Yirginia.'

The slop coming out of GPT-5 was funny when it was just us nerds trying to find its blind spots. But some regular fans of ChatGPT weren't laughing, especially because users have been particularly alarmed by the new version's personality, or rather, lack thereof.
In rolling out the new model, OpenAI essentially retired its earlier models, including the wildly popular GPT-4o that had been on the market for over a year, so that even people who loved the previous iteration of the chatbot suddenly couldn't use it. More than 4,000 people signed a petition to compel OpenAI to resurrect it.

'I'm so done with ChatGPT 5,' one user wrote on Reddit, explaining how they tried to use the new model to run 'a simple system' of tasks that an earlier ChatGPT model used to handle. The user said GPT-5 'went rogue,' deleting tasks and moving deadlines. And while OpenAI's defenders could chalk that up to an isolated or even made-up incident, within 24 hours of the GPT-5 launch Altman was doing damage control, seemingly caught off guard by the bad reception. On X, he announced a laundry list of updates, including the return of GPT-4o for paid subscribers. 'We expected some bumpiness as we roll out so many things at once,' Altman said in a post. 'But it was a little more bumpy than we hoped for!'

The CEO's failure to anticipate the outrage suggests he doesn't have a firm grasp on how an estimated 700 million weekly active users are engaging with his product. Perhaps Altman missed all the coverage, from CNN, the New York Times, and the Wall Street Journal, of people forming deep emotional attachments to ChatGPT or rival chatbots, having endless conversations with them as if they were real people. A simple search of Reddit could have offered insights into how others are integrating the tool into their workflows and lives. Basic market research should have shown OpenAI that a mass update sunsetting the tools people rely on would be more than just a bit bumpy.

When asked about the backlash to GPT-5, an OpenAI representative pointed CNN to Altman's public statements on social media announcing the return of older models, as well as a blog post about how the company is optimizing GPT-5.

The messy rollout speaks to how the AI industry as a whole is struggling to prove itself as a producer of consumer goods rather than 'labs,' as these companies love to call themselves because it sounds more scientific and distracts from the fact that they are backed by people trying to make unfathomable amounts of cash for themselves. AI companies often base their fanfare on how a model performs in various behind-the-scenes benchmark tests that show how well a bot can do complex math. For all we know, GPT-5 sailed through those evaluations. But OpenAI hyped the thing so far into the stratosphere that disappointment was (or should have been) inevitable.

'I honestly didn't think OpenAI would burn the brand name on something so mid,' wrote prominent researcher and AI critic Gary Marcus. 'In a rational world, their valuation would take a hit,' he added, noting OpenAI still hasn't turned a profit, is slashing prices to keep its user numbers up, and is hemorrhaging talent as competition heats up.

For critics like Marcus, the GPT-5 flop was a kind of vindication. As he noted in a blog post, other models like Elon Musk's Grok aren't faring much better, and the backlash from even AI proponents feels like a turning point.
When people talk about AI, they're talking about one of two things: the AI we have now (chatbots with limited, defined utility) and the AI that companies like Altman's claim they can build (machines that can outsmart humans and tell us how to cure cancer, fix global warming, drive our cars and grow our crops, all while entertaining and delighting us along the way). But the gap between the promise and the reality of AI only seems to widen with every new model.

CNN's Lisa Eadicicco contributed reporting.
