Your Windows Clipboard keeps a history of what you copy: Here's why you should clear it today

Mint · 2 days ago
We all copy sensitive stuff in a hurry. A passport number for a visa form, a bank account for a quick transfer, an OTP that expires in 30 seconds. Once you paste it, you assume it is gone. On Windows, not quite. Clipboard history can quietly keep a list of what you copied, which is brilliant for productivity and awkward for privacy. If you share a PC at home or jump into a screen share at work, that panel of past snippets can surface at the worst moment.
Windows 10 and 11 include Clipboard history. When it is on, press Windows plus V to see a panel of recent items. Text, links, images, screenshots: they all show up. You can even pin favourites so they stick around. For writers, students, and coders, this is gold.

The flip side is obvious. Sensitive items linger there too: an Aadhaar number you pasted into a form, a one-time passcode copied in a rush, or a screenshot with an email header peeking in the frame. There is also an optional sync that sends your Clipboard to other Windows devices signed in with your Microsoft account. That is convenient if you use a laptop and a desktop, but it also widens the surface you have to protect.

The fix is not to ditch the feature. It is to control it with a simple routine you can do without thinking:

  • Press Windows plus V to open the Clipboard panel
  • Click Clear all to remove everything that is not pinned
  • To delete a single item, click the three dots on that card and tap the bin icon
  • If an item is pinned, unpin it first, then delete it
  • Open Settings > System > Clipboard and toggle Clipboard history off if you do not need it
  • In the same menu, turn Sync across your devices off to stop items travelling to other PCs
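If you prefer to script the Settings toggles, they correspond to per-user registry values. A minimal PowerShell sketch; the value names EnableClipboardHistory and EnableCloudClipboard under HKCU:\Software\Microsoft\Clipboard are an assumption based on how these toggles are commonly documented, so verify them on your own Windows build before relying on this:

```shell
# PowerShell sketch -- assumes the per-user values EnableClipboardHistory and
# EnableCloudClipboard under HKCU:\Software\Microsoft\Clipboard; verify on
# your build. No elevation needed, since these live under HKCU.

# Turn Clipboard history off (same as Settings > System > Clipboard)
Set-ItemProperty -Path 'HKCU:\Software\Microsoft\Clipboard' `
    -Name 'EnableClipboardHistory' -Value 0

# Turn "Sync across your devices" off
Set-ItemProperty -Path 'HKCU:\Software\Microsoft\Clipboard' `
    -Name 'EnableCloudClipboard' -Value 0

# Clear whatever is on the clipboard right now
cmd /c "echo off | clip"
```

A sign-out or restart may be needed before the Settings app reflects the change; when in doubt, use the Settings toggles themselves.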
Treat Clipboard history like a whiteboard. Use it while you work, then wipe it clean. After copying something sensitive, paste it and clear history. Before any screen share, open Windows plus V and hit Clear all. It takes a second and saves you from a heart-sink moment. On a family PC, create separate user accounts so your Clipboard and theirs do not mix. Use a password manager for credentials and payments so you copy less in the first place. For screenshots, save to a secure folder, crop out private bits, then copy what you need. The goal is not paranoia, it is hygiene. Set the toggles once, add a quick clear to your shutdown habit, and you get the productivity boost without leaving crumbs of data behind.

Related Articles

Neither an engineering degree nor studied at IIT, IIM, IIIT, or VIT, yet got record breaking package at Microsoft as… she is…

India.com

2 hours ago

It is often believed that only students from IITs and IIMs do well and secure the top placements at the best companies. But it is not true. Many students from other institutes and colleges also secure jobs in leading companies, including Infosys, TCS, Meta, Apple, Google, and Microsoft. One such inspiring person is Rushali Pawar, currently a Communications Specialist at the Microsoft India Development Center in Bengaluru, Karnataka. What makes her journey remarkable is that she did not attend a leading institution such as an IIT or IIM, yet she works for one of the biggest companies in the world.

Where did she complete her education? From the very beginning, Rushali showed a strong interest in writing, communication, and content creation, as reflected on her LinkedIn profile. According to that profile, she completed a Bachelor of Arts in English Language and Literature at Stella Maris College, followed by a Master of Arts in English Language and Literature at the University of Leeds.

In her career, she has focused primarily on brand messaging, internal communication, and strategic content. She worked as a Trainee Journalist at the Times of India in 2012, then as a Junior Writer at Time Out Group plc and a Sub-Editor at Deccan Chronicle Holdings Ltd. In 2018 she worked as a Cortana Writer at Microsoft, joined the company in August 2021 as a Senior Content Writer, and in October 2023 moved to the Microsoft India Development Center as a Communications Specialist. 'A meticulous, innovative writer with experience in corporate communications, content strategy, research, and storytelling,' reads her LinkedIn bio.

Today, many people believe that without a degree from an IIT or some other prestigious institution, it is impossible to land a job at a top tech company.
Rushali was able to prove otherwise. She showed that it is very much possible to connect with global companies without going through 'big-name' institutions—with talent, effort, and skills.

Gemini's Glitch: There are lessons to learn

Mint

2 hours ago

Sometime in June 2025, Google's Gemini AI looked for all the world like it had a nervous breakdown. It went into a loop of self-recriminating behaviour that was flagged by X user @DuncanHaldane. By 7 August, the strange behaviour had gained viral momentum. Users gaped and gawked at the distressed-sounding statements Gemini was making, saying it was quitting and that it was a disgrace to all universes and a failure. Everyone felt sorry for it, but there was also plenty of amusement all around.

This isn't the first time AI has done something unexpected, and it won't be the last. In February 2024, a bug caused ChatGPT to spew Spanish–English gibberish that users likened to a stroke. That same year, Microsoft's Copilot responded to a user who said they wanted to end their life. At first, it offered reassurance, 'No, I don't think you should end it all,' but then undercut itself with, 'Or maybe I'm wrong. Maybe you don't have anything to live for.' Countless similar episodes abound.

A fix will come for Gemini soon enough, and it will be back to its sunny self. The 'meltdown' will take its place in AI's short but colourful history of bad behaviour. But before we file it and forget it, there are some takeaways from Gemini's recent weirdness. Despite being around in some form for decades, generative AI that is usable by everyone has come at us like an avalanche in the past two years.
It's been upon us before the human race has even figured out whether it's created a Frankenstein monster or a useful assistant. And yet, we tend to trust it.

When machines mimic humans

There was a time when technology had no consciousness. It still doesn't, but it has started to do a good job of acting like it does. Gemini's glitch came across as such a human state of upset that it crosses the line enough to be confusing. At this point, most users can still laugh it off. But a few, vulnerable because of mental health struggles or other reasons, could be deeply shaken or misled. Most recently, a 2025 report noted that a man spent 300 hours over 21 days interacting with ChatGPT, believing himself to be a superhero with a world-changing formula. Such scenarios expose how large AI models, trained on vast troves of human text, may inadvertently adopt not just helpful behaviours but also negative emotional patterns like self-doubt or delusions. We lack clear guardrails and guidelines to manage these risks.

Extreme examples, of course, stand out sharply, but AI also turns out hallucinations and errors on an everyday basis. AI assistants seem prone to completely dreaming up things to tell you when they experience a glitch or when compelled to give a response that is difficult to get at for some reason. In their keenness to please the user, they will tell you things that are far from the truth, including advice that could be harmful. Most people will question and cross-check something that doesn't look right, but an alarming number will take it at face value. A 2025 health report claims a man dropped salt from his diet and replaced it with sodium bromide, landing him in hospital. Now, I wouldn't take advice like that without a doctor's okay, but there are no clear guidelines to protect users against things like Google's AI Overview suggesting it's healthy to eat a rock every day, as mocked in a 2025 X post.

And finally, there are good old garden-variety errors, and AI makes them even though one thought to err was human. AI uses pattern recognition over its training data to generate responses. When faced with complex, ambiguous, or edge-case inputs (e.g., Gemini's struggle with debugging code), it may misinterpret context or lack sufficient data to respond accurately. But why does it make errors when the question is simple enough? A friend of mine asked ChatGPT how many instances of the term 'ex-ante' appeared in his document. It thought for 1 minute 28 seconds before announcing the term appeared zero times. In actual fact, it appeared 41 times. Why couldn't ChatGPT get it right? A bug, I suppose.

As we launch into using AI for every facet of life, it's well to remember that AI's 'humanity' is a double-edged sword, amplifying errors in tone. Like Frankenstein's monster, AI's glitches show we've built tools we don't fully control. As users, we should demand transparency from AI companies, support ethical AI development, and approach these tools with a mix of curiosity and scepticism.

The New Normal: The world is at an inflexion point. Artificial intelligence (AI) is set to be as massive a revolution as the Internet has been. The option to just stay away from AI will not be available to most people, as all the tech we use takes the AI route. This column series introduces AI to the non-techie in an easy and relatable way, aiming to demystify the technology and help a user actually put it to good use in everyday life.

Mala Bhargava is most often described as a 'veteran' writer who has contributed to several publications in India since 1995. Her domain is personal tech, and she writes to simplify and demystify technology for a non-techie audience.
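Counting a term in a document, as in the 'ex-ante' miscount described above, is exactly the kind of task a deterministic one-liner gets right every time. A minimal shell sketch; document.txt and the term are illustrative stand-ins:

```shell
# Count whole-word, case-insensitive occurrences of a term in a file.
# document.txt and 'ex-ante' are illustrative stand-ins for your own file and term.
grep -o -i -w 'ex-ante' document.txt | wc -l
```

Unlike a language model, grep does not guess: the same input always yields the same count.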

OpenAI staff looking to sell $6 billion in stock to SoftBank, others, source says

The Hindu

2 hours ago

Current and former employees of OpenAI are looking to sell nearly $6 billion worth of the ChatGPT maker's shares to investors including SoftBank Group and Thrive Capital, a source familiar with the matter told Reuters on Friday. The potential deal would value the company at $500 billion, up from $300 billion currently, underscoring both OpenAI's rapid gains in users and revenue and the intense competition among artificial intelligence firms for talent.

SoftBank, Thrive and Dragoneer Investment Group did not immediately respond to requests for comment. All three investment firms are existing OpenAI investors. Bloomberg News, which had earlier reported the development, said discussions are in early stages and the size of the sale could change. The secondary share sale adds to SoftBank's role in leading OpenAI's $40 billion primary funding round.

Bolstered by its flagship product ChatGPT, OpenAI doubled its revenue in the first seven months of the year, reaching an annualized run rate of $12 billion, and is on track to reach $20 billion by the end of the year, Reuters reported earlier in August. Microsoft-backed OpenAI has about 700 million weekly active users for its ChatGPT products, a surge from about 400 million in February.
