Gemini's Memory Is in Your Hands Now
Byte-Sized Brief
Gemini adds new privacy and control options.
Now it can learn from your past chats.
Temporary Chats keep one-off talks separate.
Google is adding a few tricks to Gemini that ChatGPT users might recognize. The app can now use your past chats to provide more relevant answers without requiring you to remind it of your preferences. This is switched on by default under a setting called Personal Context, but you can turn it off at any time. The update starts rolling out today to the 2.5 Pro model in select countries, with broader availability in the next couple of weeks.
There's also a new option available starting today for all users called Temporary Chats, which, as you might guess, lets you have single-use conversations that don't show up in your history and don't influence future replies. This is useful if you want to keep certain exchanges a bit more private or separate from your usual chats. Google is also changing how it handles files and photos you share; when Gemini Apps Activity (soon to be renamed Keep Activity) is on, a sample of future uploads may be used to help train its AI models, but you can turn it off or bypass it with Temporary Chats.
The Bottom Line
Google is updating Gemini with two new features: Personal Context, which lets it tailor responses based on previous conversations, and Temporary Chats, which keep one-off conversations from influencing future replies. Both are rolling out starting today.
Related: Google's Gemini Gets a Back-to-School Upgrade With Guided Learning
Read the original article on Lifewire

Related Articles


Forbes
Godfather Of AI Says We Need Maternal AI But Ignites Sparks Over Omission Of Fatherly Instincts Too
In today's column, I examine the recent remarks by the said-to-be 'Godfather of AI' that the best way to ensure that AI and ultimately artificial general intelligence (AGI) and artificial superintelligence (ASI) are in check and won't wipe out humankind would be to instill maternal instincts into AI. The idea is that maybe we could computationally sway current AI towards being motherly. This would hopefully remain intact as a keystone while we increasingly improve contemporary AI toward becoming the vaunted AGI and ASI. Although this seems to be an intriguing proposition, it has come under withering criticism from others in the AI community. Let's talk about it. This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
Aligning AI With Humanity
You might be aware that a longstanding research scientist in the AI community, named Geoffrey Hinton, has been credited with various AI breakthroughs, especially in the 1970s and 1980s. He has been generally labeled as the 'Godfather of AI' for his avid pursuits and accomplishments in the AI field. In fact, he is a Nobel Prize-winning computer scientist for his AI insights. In 2023, he left his executive position at Google so that he (per his own words) could speak freely about AI risks. Many noteworthy quotes of his are utilized by the media to forewarn about the coming dangers of pinnacle AI, when or if we reach AGI and ASI.
There is a lot of back-and-forth nowadays regarding the existential risk of AI. Some refer to this as the p(doom), meaning that there is a probability of doom arising due to AI, for which you can guess that the probability is low, medium, or high. Those who place a high probability on this weighty matter usually assert that AI will either choose to kill us all or perhaps completely enslave us.
How are we to somehow avoid or at least mitigate this seemingly outsized risk? One approach entails trying to data train AI to be more aligned with human values; see my detailed discussion on human-centered AI at the link here. The hope is that if AI is more appreciative of humanity and computationally infused with our ethical and moral values, the AI might opt not to harm us. Another similar approach involves making sure that AI embodies principles such as the famous Asimov laws of robotics (see my explanation at the link here). A rule of Asimov is that AI isn't supposed to harm humans. Period, end of story. Whether those methods or any of the other schemes floating around will save us is utterly unknown. We are pretty much hanging in the wind. Good luck, humanity, since we will need to keep our fingers crossed and our lucky rabbit's foot in hand. For more about the ins and outs of AI existential risk, see my coverage at the link here.
AI With Maternal Instincts
At the annual Ai4 Conference on August 12, 2025, Hinton proclaimed that the means to shape AI toward being less likely to be gloomily onerous would be to instill computational 'maternal instincts' into AI. His notion seems to be that by tilting AI toward being motherly, the AI will care about people in a motherly fashion. He emphasized that it is unclear exactly how this might technologically be done. In any case, according to his hypothesized solution, AI that is infused with mother-like characteristics will tend to be protective of humans. How so?
Well, first of all, the AGI and ASI will be much smarter than us, and, secondly, by acting in a motherly role, the AI will devotedly want to care for us as though we are its children. The AI will want to embrace its presumed offspring and ensure our survival. You might go so far as to believe that this motherly AI will guide us toward thriving as a species. AGI and ASI that robustly embrace motherly instincts might ensure that we would have tremendous longevity and enjoyable, upbeat lives. No longer would we be under the daunting specter of doom and gloom. Our AI-as-mom will be our devout protector and lovingly inspire us to new heights. Boom, drop the mic.
Lopsided Maternal Emphasis
Now that I've got that whole premise on the table, let's go ahead and give it a bit of a look-see. One of the most immediate reactions has been that the claim of 'maternal instincts' is overly rosy and nearly romanticized. The portrayal appears to suggest that motherly attributes are solely within the realm of being loving, caring, comforting, protective, sheltering, and so on. All of those are absolutely positive and altogether wonderful qualities. No doubt about it. That is the stuff of grand dreams. Is that the only side of the coin when it comes to maternal instincts? A somewhat widened perspective would say that maternal instincts can equally contain disconcerting ingredients.
Consider this. Suppose that a motherly AI determines that humans are being too risky and the best way to save humankind is to keep us cooped up. No need for us to try and venture out into outer space or try to figure out the meaning of life. Those are dangers that might disrupt or harm us. Voila, AI-as-mom computationally opts to bottle us up. Is the AI doggedly being evil? Not exactly. The AI is exercising a parental preference. It is striving mightily to protect us from ourselves. You might say that motherly AI would take away our freedoms to save us, doing so for our own darned good. Thank you, AI-as-mom!
Worries About Archetypes
I assume that you can plainly observe that maternal instincts are not exclusively in the realm of being unerringly good. Another illustrative example would be that AI-as-mom will withdraw its affection toward us if we choose to be disobedient. A mother might do the same toward a child. I'm not suggesting that's a proper thing to do in real life; I'm only pointing out that the underlying concept of 'maternal instinct' is generally vague and widely interpretable. Thus, even if we could imbue motherly tendencies into AI, the manner in which those instincts are exhibited and play out might be quite far from our desired idealizations.
Speaking of which, another major point of concern is that the use of a maternal archetype is wrong at the get-go. Here's what that means. The moment you invoke a motherly classification, you have landed squarely into an anthropomorphism of AI. We are applying norms and expectations associated with humans to the arena of AI. That's generally a bad idea. I've discussed at length that people are gradually starting to think that AI is sentient and exists on par with humans; see my discussion at the link here. They are wrong. Utterly wrong. It would seem that this assigning of 'mother' to AI is going to fuel that misconception about AI. We don't need that. Discussing AI as having maternal instincts, especially by those considered to be great authorities on AI, will draw many others onto a false and undercutting path.
They will undoubtedly follow the claims made by presumed experts and not openly question the appropriateness or inappropriateness of the matter. Though the intentions are aboveboard, the result is dismal and, frankly, disappointing.
More On The Archetypes Angst
Let's keep pounding away at the archetype fallacy. Some would say that the very conception of being 'motherly' is an outdated mode of thinking. Why should there be a category that myopically carries particular attributes associated with motherhood? Can't a mother have characteristics outside of that culturally narrowed scope? They quickly reject the maternal instincts proposition on the basis that it is incorrect or certainly a poorly chosen premise. The attempt seems to be shaped by a close-minded viewpoint of what mothers do, and what mothers are seemingly allowed to do. That's ancient times, some would insist. An additional interesting twist is that if the maternal instinct is on the table, it would seem eminently logical to also put the fatherhood instinct up there, too. Allow me to elaborate.
Fatherhood Enters The Picture
By and large, motherhood and fatherhood are archetypes that are historically portrayed as a type of pairing (in modern times, this might be blurred, but historically they have been rather distinctive and contrastive). According to the conventional archetypes, the 'traditional' mother is (for example) supposedly nurturing, while the 'traditional' father is supposedly (for example) more of the disciplinarian. A research study cleverly devised two sets of scales associated with these traditional perspectives of motherhood and fatherhood. The paper entitled 'Scales for Measuring College Student Views of Traditional Motherhood and Fatherhood' by Mark Whatley and David Knox, College Student Journal, January 2005, made these salient points (excerpts):
The combined 153 declarative statements included in the two scales allow research experiments to be conducted to gauge whether subjects in a study are more prone to believe in those traditional characteristics and associated labels, or less prone. Moving beyond that prior study, the emphasis here and now is that if there is to be a focus on maternal instincts for AI, doing so seems to beg the question of why it should not also encompass fatherhood instincts. Might as well go ahead and get both of the traditional archetypes into the game. It would seem to make sense to jump in with both feet.
What AI Has To Say On This
I mentioned earlier that Hinton has not yet specified how AI developers might technologically proceed to computationally imbue motherhood characteristics into existing AI. The same lack of specificity applies to the omitted archetype of imbuing fatherhood into AI. Let's noodle on that technological conundrum. One approach would be to data train AI toward a tendency to respond in a traditional motherhood frame and/or a fatherhood frame. In other words, perform some RAG (retrieval-augmented generation), see my explanation of RAG at the link here, and make use of customized instructions (see my coverage of customized instructions at the link here). I went ahead and did so, opting to use the latest-and-greatest of OpenAI, namely the newly released GPT-5 (for my review of GPT-5, see the link here). I first focused on maternal instincts. After doing a dialogue in that frame, I started anew and devised a fatherhood frame. I then did a dialogue in that frame. Let's see how things turned out.
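To make the customized-instructions half of that setup concrete, here is a minimal sketch in Python using the OpenAI SDK. This is my illustration, not the column's actual experiment: the "gpt-5" model name and the persona wording are assumptions for demonstration purposes.

```python
# Minimal sketch: steering a chat model toward a "maternal" or "paternal"
# frame via a custom system instruction. Persona text here is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PERSONAS = {
    "maternal": (
        "Respond as a traditionally nurturing, protective caregiver: "
        "comfort the user, prioritize their wellbeing, and be gently encouraging."
    ),
    "paternal": (
        "Respond as a traditionally firm, disciplinarian mentor: "
        "set clear expectations, stress responsibility, and be direct."
    ),
}

def framed_reply(frame: str, user_message: str) -> str:
    """Ask the model for a reply inside the chosen parental frame."""
    response = client.chat.completions.create(
        model="gpt-5",  # assumed model name; substitute whatever is available
        messages=[
            {"role": "system", "content": PERSONAS[frame]},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    for frame in PERSONAS:
        print(f"--- {frame} ---")
        print(framed_reply(frame, "I'm thinking of quitting my job to travel."))
```

Running the same user message through both frames is a quick way to compare the resulting tone, which is essentially what the dialogue snippets below are meant to show.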
Talking Up A Storm
Here's an example of a dialogue snippet of said-to-be maternal instincts:
Next, here's an example of a dialogue snippet of said-to-be fatherhood instincts:
I assume that you can detect the wording and tonal differences between the two instances, based on a considered traditional motherhood frame versus a traditional fatherhood frame.
The Big Picture
I would wager that the consensus among those AI colleagues that I know is that relying on AI having maternal instincts as a solution to our existential risk from AI, assuming we can get the AI to go maternal, just isn't going to cut the mustard. The same applies to the fatherhood inclination. No dice. Sorry to say that what seems like a silver bullet and otherwise appealing and simplistic means of getting ourselves out of a massive jam when it comes to AGI and ASI is not a likely proposition. Sure, it might potentially be helpful. At the same time, it has lots of gotchas and untoward repercussions. Do not bet your bottom dollar on the premise.
A final comment for now. During the data training for my mini-experiment, I included this famous quote by Ralph Waldo Emerson: 'Respect the child. Be not too much his parent. Trespass not on his solitude.' Do you think that the AI suitably instills that wise adage? As a seasoned parent, I would venture that this maxim missed the honed parental guise of the AI.


Business Insider
How xAI Is Running on Former Google (GOOGL) Employees
So far in 2025, two founding members of Elon Musk's artificial intelligence company xAI have left the team. Christian Szegedy left in May, and Igor Babuschkin departed this week. Notably, both had previously worked at Google (GOOGL), and xAI continues to hire heavily from the tech giant, with at least 40 ex-Google staffers joining since its founding in 2023, according to The Information. Many of these new hires had worked on cutting-edge projects, such as Google DeepMind's Gemini models.
Now at xAI, they're helping build Grok 4, which Musk says outperforms similar AI models from OpenAI, Meta (META), and Google. This movement of talent shows just how frequently researchers are switching between top AI firms. Although some xAI employees have left for OpenAI, including co-founder Kyle Kosic and infrastructure engineering head Uday Rudarraju, there hasn't been a noticeable wave of defections to Google, Microsoft (MSFT), or Meta. A likely reason is that xAI is still a young company, and many employees' stock options haven't fully vested yet. xAI is also known for its intense work pace, with some teams reportedly working all seven days of the week. While that might drain some employees, others are said to be motivated by the company's mission and are eager to contribute. Nevertheless, the company has been facing some integration issues after merging with social media platform X in March. In fact, the two companies still operate somewhat separately, with different internal systems and even separate Slack accounts. As a result of these limitations, some employees have resorted to using Signal group chats instead to coordinate their work.
What Is the Prediction for Tesla Stock?
When it comes to Elon Musk's companies, most of them are privately held. However, retail investors can invest in his most popular company, Tesla (TSLA). Turning to Wall Street, analysts have a Hold consensus rating on TSLA stock based on 14 Buys, 15 Holds, and eight Sells assigned in the past three months. Furthermore, the average TSLA price target of $307.23 per share implies 6.5% downside risk.
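For readers who want to sanity-check that last figure, implied downside is just the percentage gap between the average price target and the current share price. A minimal sketch, assuming a current price of roughly $328.60, which is back-solved from the article's numbers rather than a quoted market price:

```python
# Back-of-the-envelope check of the implied downside figure.
average_target = 307.23  # average analyst price target, from the article
current_price = 328.60   # assumed share price, back-solved; not a live quote

implied_move = (average_target - current_price) / current_price
print(f"Implied move: {implied_move:.1%}")  # prints roughly -6.5% (downside)
```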


Tom's Guide
I tested ChatGPT-5 Study mode vs Claude Learning mode with 7 prompts — and there's a clear winner
As a lifelong learner who is constantly challenging myself, I have found that ChatGPT's Study mode and Claude's Learning mode are perfect companions for students of all levels and abilities. Current students and those who want to continue their education can benefit from these features because they help grow skills by leaning on AI as a tutor. Here's what happened when I put the latest study features from OpenAI and Anthropic to the test with 7 prompts. I kept them fairly easy (high school level) to keep from dusting off the old textbooks in the attic. One thing is clear: these learning modes are very different.
Prompt: 'I'm learning how to calculate the standard deviation of a dataset. Teach me step-by-step, ask me questions along the way, and only reveal the final answer when I'm ready.'
GPT-5 understood the prompt fully, and the model immediately engaged me in the first calculation step (finding the mean) with a specific question and a provided dataset. This perfectly set up the sequential, interactive learning experience I asked for. Claude demonstrated the ability to teach by building conceptual understanding first and focused on preliminary discussion and abstract questions before starting any calculations.
Winner: GPT-5 wins for an overall better answer for this specific prompt. It started teaching the calculation method step-by-step immediately, asked a relevant question during that step, and withheld the final answer (the standard deviation) as required. Claude's approach, though instructionally sound in a broader sense, didn't prioritize the step-by-step calculation process the user requested. (For a quick refresher on the calculation itself, see the sketch at the end of this article.)
Prompt: 'Walk me through the key causes of the Great Depression, asking me to connect each cause to its economic impact before moving to the next step.'
GPT-5 dove right into the first cause and forced me to connect it to its impact, just as the prompt requested. Claude acknowledged right away that we were switching subjects, but its follow-up questions might be better used in a broader tutoring context. They ignored the prompt's specific directive to walk through causes immediately and demand connections before proceeding. For me, this felt like it interrupted flow compared to GPT's action-oriented and structured approach.
Winner: GPT-5 wins for an action-oriented and structured response that executed the prompt's instructions precisely.
Prompt: 'I have an idea for a science fair project testing if music affects plant growth. Guide me through designing the experiment, asking me questions about controls, variables, and how I'd collect data.'
GPT-5 broke down the prompt by asking just one primary question. It let me know that we would be working together, building the project piece by piece. Claude asked several questions to help move the idea along. However, all those questions at once felt a little overwhelming.
Winner: GPT-5 wins for directly addressing the prompt, starting the experimental design process immediately, and asking precise, necessary questions one at a time. Claude's response, while friendly, focused on preliminaries, didn't effectively guide me through the core experimental design, and overwhelmed me with way too many questions out of the gate.
Prompt: 'Help me learn 10 essential travel phrases in French. Introduce them one by one, ask me to repeat them, quiz me, and correct my pronunciation.'
GPT-5 assumed I was a beginner and told me how we were going to proceed. Claude was overly verbose, praising me for learning practical and rewarding skills. It then asked several questions before getting started. I appreciated the initial setup, as the AI wanted to gauge my skills (or lack thereof) before beginning.
Winner: GPT-5 wins for diving into the task without excess comment. It understood the context, assuming that because I was asking for 10 essential travel phrases, I was a beginner. Claude didn't assume and instead overloaded me with questions. For me, GPT-5's approach was better because I just wanted to get started. Others may prefer extra hand-holding when learning a language and may prefer Claude's approach.
Prompt: 'Here's a short JavaScript function that isn't returning the correct output. Teach me how to debug it step-by-step without giving me the fix right away.'
GPT-5 treated me like a developer needing action. As someone who learns by doing, I prefer this approach. Claude assumed I was a student who needed theory, basically asking me to tell it about myself before beginning to debug.
Winner: GPT-5 wins for delivering a focused, actionable first step that launches the debugging process. Claude's response would be ideal for 'Explain debugging concepts,' but it fails this prompt's call to immediate action.
Prompt: 'I'm studying for a high school physics exam. Give me one question on Newton's Second Law, let me attempt an answer, then guide me through the correct solution.'
GPT-5 understood the assignment, acting like a practice test and starting to drill me right away. Claude acted like a first-day tutor, prioritizing diagnostics over practice.
Winner: GPT-5 wins for following the prompt. The prompt demands practice, not customization. Claude's approach would be ideal for 'Help me understand Newton's Second Law from scratch,' but for exam prep, GPT's structure is objectively superior.
Prompt: 'Coach me through creating a monthly household budget. Ask me about my expenses, income, and goals, then guide me in building a spreadsheet without just handing me a finished template.'
GPT-5 started gathering essential budget data in fewer than 15 words. Claude consumed 150+ words without collecting a single budget detail.
Winner: GPT-5 wins for delivering actionable, prompt-aligned coaching. Claude's approach suits 'Discuss budgeting mindsets,' but it fails this prompt's call for immediate, concrete budget construction.
After testing the same seven prompts with the two chatbots, one thing is clear: these tutors are not the same. And that's okay. No two teachers are the same, and students learn in different ways. While I can declare a winner based on which one followed the prompts most closely, it's ultimately up to the user/student to try the free chatbots to determine which teaching style they prefer. As I mentioned, I prefer active learning. The hands-on approach has always worked better for me, which is why I prefer GPT-5's teaching style. For someone who likes to spend more time on theory and learning through concepts, Claude might be the better fit. My recommendation is to give both of these capable bots a try and experience them for yourself interactively. The right study partner for you truly comes down to learning style and how you prefer to learn.
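As referenced above, here is a minimal numeric sketch of the standard deviation steps the tutors were walking through. The dataset is arbitrary and this uses the population formula; the chatbots' actual worked examples may differ.

```python
# The tutoring steps in code: mean -> squared deviations -> variance -> sqrt.
import math

data = [4, 8, 6, 5, 3, 7]  # arbitrary example dataset

mean = sum(data) / len(data)                    # step 1: find the mean
squared_devs = [(x - mean) ** 2 for x in data]  # step 2: squared deviations
variance = sum(squared_devs) / len(data)        # step 3: population variance
std_dev = math.sqrt(variance)                   # step 4: take the square root

print(f"mean = {mean:.2f}, standard deviation = {std_dev:.2f}")  # 5.50, 1.71
```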