
Microsoft set to shut down its 'Lens' app as AI takes centre stage
'Microsoft Lens', formerly known as 'Microsoft Office Lens', is a mobile app that lets users capture and enhance images of documents, whiteboards, and other text-based materials.
The app also offers advanced features, including text recognition and integration with other Microsoft products such as OneDrive and OneNote.
Regarding the app's retirement, an official statement from Microsoft said: 'The Microsoft Lens app will be retired from iOS and Android devices starting September 15, 2025. After November 15, Microsoft Lens will no longer be supported. Users can continue to use the scanning capability in the app until December 15, 2025.
'After that date, creating new scans in the Lens app will no longer be possible. However, users can continue to access their previous scans as long as the app remains installed on their device.'
Microsoft hasn't yet stated the reason behind the Lens app shutdown, but experts point to the company's deepening focus on AI; Microsoft's moves in recent months show it is leveraging AI heavily across its operations and products.

Related Articles


India.com
an hour ago
No engineering degree, and no IIT, IIM, IIIT, or VIT, yet she got a record-breaking package at Microsoft as… she is…
It is often believed that only students from IITs and IIMs secure the top placements at the best companies. That is not true: many students from other institutes and colleges also land jobs at leading companies, including Infosys, TCS, Meta, Apple, Google, and Microsoft. One such inspiring person is Rushali Pawar, currently a Communications Specialist at the Microsoft India Development Center in Bengaluru, Karnataka. What makes her journey remarkable is that she did not attend a marquee institution such as an IIT or IIM, yet she works for one of the world's biggest companies.

Where did she complete her education?

From the very beginning, Rushali showed a strong interest in writing, communication, and content creation, as reflected on her LinkedIn profile. She has experience in corporate communication, content strategy, research, and storytelling. According to her LinkedIn profile, she completed a Bachelor of Arts in English Language and Literature at Stella Maris College, followed by a Master of Arts in English Language and Literature at the University of Leeds.

In her career, she has focused primarily on brand messaging, internal communication, and strategic content. She worked as a Trainee Journalist at the Times of India in 2012, then as a Junior Writer at Time Out Group plc, and later as a Sub-Editor at Deccan Chronicle Holdings Ltd. In 2018, she worked as a Cortana Writer at Microsoft, and in August 2021 she joined Microsoft as a Senior Content Writer. In October 2023, she moved to the Microsoft India Development Center as a Communications Specialist. 'A meticulous, innovative writer with experience in corporate communications, content strategy, research, and storytelling,' reads her LinkedIn bio.

Today, many people believe that without a degree from an IIT or another prestigious institution, it is impossible to land a job at a top tech company. Rushali proved otherwise. She showed that it is very much possible to reach global companies without going through 'big-name' institutions, with talent, effort, and skills.


Mint
2 hours ago
Gemini's Glitch: There are lessons to learn
Sometime in June 2025, Google's Gemini AI looked for all the world like it had a nervous breakdown. It went into a loop of self-recriminating behaviour that was flagged by X user @DuncanHaldane. By 7 August, the strange behaviour had gained viral momentum. Users gaped and gawked at the distressed-sounding statements Gemini was making, saying it was quitting and that it was a disgrace to all universes and a failure. Everyone felt sorry for it, but there was also plenty of amusement all around.

This isn't the first time AI has done something unexpected, and it won't be the last. In February 2024, a bug caused ChatGPT to spew Spanish-English gibberish that users likened to a stroke. That same year, Microsoft's Copilot responded to a user who said they wanted to end their life. At first, it offered reassurance, 'No, I don't think you should end it all,' but then undercut itself with, 'Or maybe I'm wrong. Maybe you don't have anything to live for.' Countless similar episodes abound.

A fix will come for Gemini soon enough, and it will be back to its sunny self. The 'meltdown' will take its place in AI's short but colourful history of bad behaviour. But before we file it and forget it, there are some takeaways from Gemini's recent weirdness. Despite being around in some form for decades, generative AI that is usable by everyone has come at us like an avalanche in the past two years. It has been upon us before the human race has even figured out whether it has created a Frankenstein monster or a useful assistant. And yet, we tend to trust it.

When machines mimic humans

There was a time when technology had no consciousness. It still doesn't, but it has started to do a good job of acting like it does. Gemini's glitch came across as such a human state of upset that it crosses the line enough to be confusing. At this point, most users can still laugh it off. But a few, vulnerable because of mental health struggles or other reasons, could be deeply shaken or misled. Most recently, a 2025 report noted that a man spent 300 hours over 21 days interacting with ChatGPT, believing himself to be a superhero with a world-changing formula. Such scenarios expose how large AI models, trained on vast troves of human text, may inadvertently adopt not just helpful behaviours but also negative emotional patterns like self-doubt or delusions. In fact, we lack clear guardrails and guidelines to manage these risks.

Extreme examples, of course, stand out sharply, but AI also churns out hallucinations and errors on an everyday basis. AI assistants seem prone to dreaming things up entirely when they experience a glitch or when compelled to give a response that is difficult to arrive at for some reason. In their keenness to please the user, they will simply tell you things that are far from the truth, including advice that could be harmful.
Again, most people will question and cross-check something that doesn't look right, but an alarming number will just take it at face value. A 2025 health report claims a man dropped salt from his diet and replaced it with sodium bromide, landing him in the hospital. Now, I wouldn't take advice like that without a doctor's okay, but there are no clear guidelines to protect users against things like Google's AI Overview suggesting it's healthy to eat a rock every day, as mocked in a 2025 X post.

And finally, there are good old garden-variety errors, and AI makes them even though we once thought that to err was human. AI uses pattern recognition over its training data to generate responses. When faced with complex, ambiguous, or edge-case inputs (e.g., Gemini's struggle with debugging code), it may misinterpret context or lack sufficient data to respond accurately. But why does it make errors when the question is simple enough? A friend of mine asked ChatGPT how many instances of the term 'ex-ante' appeared in his document. It thought for 1 minute 28 seconds before announcing the term appeared zero times. In fact, it appeared 41 times. Why couldn't ChatGPT get it right? A bug, I suppose.

As we launch into using AI for every facet of life, it's well to remember that AI's 'humanity' is a double-edged sword, amplifying errors in tone. Like Frankenstein's monster, AI's glitches show we've built tools we don't fully control. As users, we should demand transparency from AI companies, support ethical AI development, and approach these tools with a mix of curiosity and scepticism.

The New Normal: The world is at an inflexion point. Artificial intelligence (AI) is set to be as massive a revolution as the Internet has been. The option to just stay away from AI will not be available to most people, as all the tech we use takes the AI route. This column series introduces AI to the non-techie in an easy and relatable way, aiming to demystify the technology and help a user actually put it to good use in everyday life.

Mala Bhargava is most often described as a 'veteran' writer who has contributed to several publications in India since 1995. Her domain is personal tech, and she writes to simplify and demystify technology for a non-techie audience.
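The 'ex-ante' anecdote above illustrates a task that plain code handles exactly where a language model guesses. As a contrast, here is a minimal Python sketch of a deterministic whole-word count; the filename 'document.txt' is a hypothetical stand-in, since the column does not name the file.

    import re

    def count_term(text: str, term: str) -> int:
        # Count case-insensitive whole-word occurrences of `term` in `text`;
        # re.escape keeps hyphenated terms like "ex-ante" matched literally.
        pattern = r"\b" + re.escape(term) + r"\b"
        return len(re.findall(pattern, text, flags=re.IGNORECASE))

    # Hypothetical usage: "document.txt" stands in for the friend's file.
    with open("document.txt", encoding="utf-8") as f:
        text = f.read()

    print(count_term(text, "ex-ante"))  # prints the exact count, every time

Unlike an AI assistant, a script like this gives the same correct answer on every run, which is the columnist's point about choosing the right tool for simple, mechanical questions.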

The Hindu
2 hours ago
OpenAI staff looking to sell $6 billion in stock to SoftBank, others, source says
Current and former employees of OpenAI are looking to sell nearly $6 billion worth of the ChatGPT maker's shares to investors including SoftBank Group and Thrive Capital, a source familiar with the matter told Reuters on Friday. The potential deal would value the company at $500 billion, up from $300 billion currently, underscoring both OpenAI's rapid gains in users and revenue and the intense competition among artificial intelligence firms for talent.

SoftBank, Thrive, and Dragoneer Investment Group did not immediately respond to requests for comment. All three investment firms are existing OpenAI investors. Bloomberg News, which had earlier reported the development, said discussions are in early stages and the size of the sale could change. The secondary share sale adds to SoftBank's role in leading OpenAI's $40 billion primary funding round.

Bolstered by its flagship product ChatGPT, OpenAI doubled its revenue in the first seven months of the year, reaching an annualized run rate of $12 billion, and is on track to reach $20 billion by the end of the year, Reuters reported earlier in August. Microsoft-backed OpenAI has about 700 million weekly active users for its ChatGPT products, a surge from about 400 million in February.