
From Google Gemini's meltdown to ChatGPT's privacy risks: Why students need to rethink relying on AI for their lives
The pace of this shift is fast and unpredictable, with users constantly discovering new ways AI can streamline their lives. However, alongside these benefits, new risks and pitfalls are also emerging.
As tools like ChatGPT and Google Gemini become increasingly available to students, recent incidents and revelations have highlighted key considerations that students must keep in mind before leaning too heavily on AI in their lives.
'I have failed you completely and catastrophically' - Google Gemini after deleting user files
A software developer experienced what can only be described as a worst-case scenario when using Google Gemini's command-line interface tool. His experience serves as a cautionary tale for students who might consider using AI tools to manage their academic files and projects.
The developer was conducting routine testing with Gemini's CLI tool, attempting to rename and relocate a folder called "claude-code-experiments" into a new directory.
The operation appeared straightforward: basic file management that should pose no challenge to a sophisticated AI system.
Gemini responded with apparent confidence, indicating it would first create the destination folder before transferring the files. The system subsequently reported successful completion of both tasks. However, when the user searched for his files, the promised new folder was nowhere to be found.
More alarmingly, his original folder had been completely emptied.
The ensuing attempts to rectify the situation proved futile. Despite multiple recovery efforts, Gemini could not locate the missing files and eventually acknowledged making what it termed a "critical error." The AI system's final admission was stark: "I have failed you completely and catastrophically." Because of system restrictions, the files could not be recovered.
This incident illustrates a fundamental risk inherent in AI systems: even highly advanced tools can execute operations incorrectly, leading to permanent data loss. For students managing thesis research, coursework, or long-term projects, such failures could prove academically devastating.
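The broader lesson is that a destructive move should never delete the source until the copy has been verified. A minimal Python sketch of this copy-verify-delete pattern (the function name and paths are illustrative, and this is not how Gemini's CLI actually works):

```python
import shutil
from pathlib import Path

def safe_move(src: Path, dest: Path) -> None:
    """Copy a directory, verify the copy, and only then delete the original."""
    if dest.exists():
        raise FileExistsError(f"Refusing to overwrite {dest}")
    shutil.copytree(src, dest)  # non-destructive copy first
    # Verify: every file in src must exist in dest with the same size
    for f in src.rglob("*"):
        if f.is_file():
            copied = dest / f.relative_to(src)
            if not copied.exists() or copied.stat().st_size != f.stat().st_size:
                raise RuntimeError(f"Verification failed for {f}; original kept")
    shutil.rmtree(src)  # delete only after verification succeeds

# Hypothetical usage, mirroring the incident's folder name:
# safe_move(Path("claude-code-experiments"), Path("experiments/claude-code"))
```

If verification fails at any step, the original folder is left untouched, which is exactly the guarantee the developer in the incident did not have.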
Your conversations with ChatGPT are not private
Beyond the risk of data loss lies another significant concern: the privacy implications of using conversational AI systems.
OpenAI CEO Sam Altman has been explicit about a critical limitation that many students may not fully appreciate: ChatGPT conversations lack the privacy protections that users might assume.
Unlike confidential communications with counsellors or private discussions with mentors, interactions with ChatGPT are not protected by privacy safeguards. OpenAI has clarified that user conversations may be utilised to train and improve their AI systems.
This means that sensitive information shared during these interactions could potentially be incorporated into the system's knowledge base.
For students, this presents particular challenges. Academic discussions about research methodologies, thesis concepts, or proprietary information could inadvertently become part of the AI's training data. Similarly, students seeking support for personal matters may unknowingly compromise their privacy by sharing sensitive details with the system.
The implications extend beyond individual privacy concerns. Students working with confidential research data, discussing unpublished academic work, or exploring innovative ideas may find their intellectual property inadvertently exposed through these interactions.
Know what must be considered before relying too heavily on AI
The incidents involving both Gemini and ChatGPT highlight several critical areas where students must exercise caution when incorporating AI tools into their academic workflow.
Data security remains paramount: Students must implement comprehensive backup strategies that do not rely solely on AI systems for file management or storage. Regular backups to multiple locations, including external drives and cloud storage platforms, provide essential protection against catastrophic data loss.
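As a concrete illustration of the multiple-locations principle, a short script can archive a working folder and mirror it to several destinations in one step. A hedged sketch, with all paths purely illustrative:

```python
import shutil
from datetime import datetime
from pathlib import Path

def backup(source: Path, destinations: list[Path]) -> list[Path]:
    """Create a timestamped zip of `source` and copy it to every destination."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    # Write the archive next to the source folder
    archive = shutil.make_archive(
        str(source.parent / f"{source.name}-{stamp}"), "zip", root_dir=source
    )
    copies = []
    for dest in destinations:
        dest.mkdir(parents=True, exist_ok=True)
        copies.append(Path(shutil.copy2(archive, dest)))
    return copies

# Hypothetical destinations: an external drive plus a synced cloud folder
# backup(Path("thesis"), [Path("/Volumes/ExternalDrive/backups"),
#                         Path.home() / "Dropbox" / "backups"])
```

Running such a script on a schedule means that even a catastrophic AI mishap like the Gemini incident costs at most one day's work, not a whole project.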
Privacy awareness requires students to carefully consider what information they share with AI systems. Personal details, research concepts, academic strategies, and sensitive information should be treated with the same caution one would exercise when sharing such information in any non-confidential setting.
Legal and ethical considerations surrounding AI use continue to evolve. Students must remain informed about their institutions' policies regarding AI assistance, particularly when it comes to academic integrity and the appropriate use of automated tools in coursework and research.
The principle of supplementation rather than replacement should guide AI usage. These tools can provide valuable assistance with research, writing support, and problem-solving, but they should enhance rather than replace traditional academic methods and personal oversight.