
No work, no salary, still Rs 5 lakh due? Techie's shocking job exit story goes viral
An Indian software professional recently shared his troubling experience after stepping away from a job offer on the day he was supposed to join. Having gone through the entire hiring process—including interviews, documentation, and onboarding preparations—he chose not to proceed with the role due to unforeseen personal and professional factors. The decision was promptly communicated to the company via email on the very day of onboarding.
Since he had notified the organization promptly, accepted no compensation, and used no internal company assets, the employee believed the matter was closed. To his dismay, however, he later received a formal communication from the company demanding a recovery sum of Rs 5,00,000. The employer claimed this amount as compensation for violating the terms of a pre-signed agreement, alleging breach of contract, or failure to honour a commitment often referred to as a 'bond.'
The unexpected legal demand left the individual confused and anxious. Seeking clarity and assistance, he turned to Reddit, posting his story to get insights from others who might have faced similar issues. The overwhelming response from users painted a familiar picture of how companies sometimes leverage fear and misinformation to enforce questionable contract terms.
One user recounted facing employer intimidation during their notice period and decided to delve into the legal framework governing such employment agreements. They discovered that many organizations exploit employees' ignorance of their rights under Indian labour laws. According to prevailing legal interpretations, a bond or contractual agreement demanding financial penalties is not valid unless it has been formally executed on stamp paper and includes clearly defined terms, including the monetary investment the company has made in training.
Another respondent emphasized that without significant training costs incurred by the company, no organization can legally demand such a recovery. Indian employment law recognizes only 'reasonable recovery' for quantifiable expenses like technical upskilling or certifications, not arbitrary penalties imposed for withdrawal or resignation. Additionally, since the employee had not officially taken charge of any responsibilities, used company infrastructure, or drawn a salary, enforcing such a bond appeared both morally and legally questionable.
The consensus among Reddit users suggested that unless the company could prove a tangible financial loss directly tied to the onboarding of this individual, the demand carried no substantial legal weight. Several users even advised the employee to respond assertively to the notice, making it clear that he was aware of his rights and would not be bullied into paying an unjustified penalty.
Ultimately, the community reinforced an important message: contracts that appear binding must also be fair and legally sound. The incident sparked broader conversations about unethical practices in the tech industry and the urgent need for greater awareness about employee rights.

Related Articles


Time of India
21 minutes ago
EPFO promise turns into nightmare: Retired Mumbai man duped of Rs 1.4 crore over 1.5 years by fake CBI and PF officials; probe on
MUMBAI: A former state govt employee was cheated of Rs 1.4 crore by cyber scammers who first led him to believe he was interacting with a provident fund (PF) official and then scared him into thinking the official had been arrested by the CBI. The 73-year-old complainant made payments for 1.5 years before realising that he had been taken for a ride and that the documents sent to him were bogus. The west cyber police have registered an FIR and are investigating further.

The complainant lives in Andheri with his family. In May 2023, he got a phone call purportedly from the Employees Provident Fund Organisation (EPFO), New Delhi. The caller introduced himself as Alok Mehta. He told the complainant his PF dues were pending with the organisation and that he could claim them once he paid Rs 7,230 as security charges. After the complainant paid the money, Mehta sent him a letter over WhatsApp purportedly from the ministry of finance. The letter carried a govt of India seal, and the complainant was convinced of its authenticity. The document said he would receive Rs 63.08 lakh if he made a payment of Rs 3.8 lakh. Over the next 10 months, Mehta and two other "EPFO representatives", Sunita Tiwari and one Chakraborty, stayed in touch with the complainant and got him to pay Rs 60 lakh under various pretexts.

In March 2024, the complainant got a phone call from one Mahi Sharma, who claimed to be an officer with the CBI's Mumbai office. She said Mehta had been arrested by the CBI for misappropriation of funds and that the complainant's PF file had now come to the CBI for scrutiny. The complainant was left shaken. Sharma assured him his PF dues would be paid once she completed a thorough probe into his case. She subsequently sent him a cheque for his PF dues but told him he first had to pay 30% as taxes. She also demanded charges for issuing him a no-objection certificate, since his PF dues exceeded a crore.

The complainant kept making payments as instructed by Sharma till Nov 2024. When he presented the cheque for encashment, he was told it was counterfeit. He then approached the police, who filed an FIR on May 28 and will probe who operated the bank accounts into which the funds were transferred.


Time of India
23 minutes ago
Welcome to campus, here's your ChatGPT
OpenAI, the maker of ChatGPT, has a plan to overhaul college education -- by embedding its artificial intelligence tools in every facet of campus life. If the company's strategy succeeds, universities would give students AI assistants to help guide and tutor them from orientation day through graduation. Professors would provide customized AI study bots for each class. Career services would offer recruiter chatbots for students to practice job interviews. And undergrads could turn on a chatbot's voice mode to be quizzed aloud before a test. OpenAI dubs its sales pitch "AI-native universities."

"Our vision is that, over time, AI would become part of the core infrastructure of higher education," Leah Belsky, OpenAI's vice president of education, said in an interview. In the same way that colleges give students school email accounts, she said, soon "every student who comes to campus would have access to their personalized AI account."

To spread chatbots on campuses, OpenAI is selling premium AI services to universities for faculty and student use. It is also running marketing campaigns aimed at getting students who have never used chatbots to try ChatGPT.

Some universities, including the University of Maryland and California State University, are already working to make AI tools part of students' everyday experiences. In early June, Duke University began offering unlimited ChatGPT access to students, faculty and staff. The school also introduced a university platform, called DukeGPT, with AI tools developed by Duke.

OpenAI's campaign is part of an escalating AI arms race among tech giants to win over universities and students with their chatbots. The company is following in the footsteps of rivals like Google and Microsoft that have for years pushed to get their computers and software into schools, and court students as future customers.
The competition is so heated that Sam Altman, OpenAI's CEO, and Elon Musk, who founded the rival xAI, posted dueling announcements on social media this spring offering free premium AI services for college students during exam period. Then Google upped the ante, announcing free student access to its premium chatbot service "through finals 2026."

OpenAI ignited the recent AI education trend. In late 2022, the company's rollout of ChatGPT, which can produce human-sounding essays and term papers, helped set off a wave of chatbot-fueled cheating. Generative AI tools like ChatGPT, which are trained on large databases of texts, also make stuff up, which can mislead students.

Less than three years later, millions of college students regularly use AI chatbots as research, writing, computer programming and idea-generating aides. Now OpenAI is capitalizing on ChatGPT's popularity to promote the company's AI services to universities as the new infrastructure for college education.

OpenAI's service for universities, ChatGPT Edu, offers more features, including certain privacy protections, than the company's free chatbot. ChatGPT Edu also enables faculty and staff to create custom chatbots for university use. (OpenAI offers consumers premium versions of its chatbot for a monthly fee.)

OpenAI's push to AI-ify college education amounts to a national experiment on millions of students. The use of these chatbots in schools is so new that their potential long-term educational benefits, and possible side effects, are not yet established. A few early studies have found that outsourcing tasks like research and writing to chatbots can diminish skills like critical thinking. And some critics argue that colleges going all-in on chatbots are glossing over issues like societal risks, AI labor exploitation and environmental costs.

OpenAI's campus marketing effort comes as unemployment has increased among recent college graduates -- particularly in fields like software engineering, where AI is now automating some tasks previously done by humans. In hopes of boosting students' career prospects, some universities are racing to provide AI tools and training.

California State University announced this year that it was making ChatGPT available to more than 460,000 students across its 23 campuses to help prepare them for "California's future AI-driven economy." Cal State said the effort would help make the school "the nation's first and largest AI-empowered university system."

Some universities say they are embracing the new AI tools in part because they want their schools to help guide, and develop guardrails for, the technologies. "You're worried about the ecological concerns. You're worried about misinformation and bias," Edmund Clark, the chief information officer of California State University, said at a recent education conference in San Diego. "Well, join in. Help us shape the future."

Last spring, OpenAI introduced ChatGPT Edu, its first product for universities, which offers access to the company's latest AI. Paying clients like universities also get more privacy: OpenAI says it does not use the information that students, faculty and administrators enter into ChatGPT Edu to train its AI. (The New York Times has sued OpenAI and its partner, Microsoft, over copyright infringement. Both companies have denied wrongdoing.)

Last fall, OpenAI hired Belsky to oversee its education efforts. An ed tech startup veteran, she previously worked at Coursera, which offers college and professional training courses.
She is pursuing a two-pronged strategy: marketing OpenAI's premium services to universities for a fee while advertising free ChatGPT directly to students. OpenAI also convened a panel of college students recently to help get their peers to start using the tech.

Among those students are power users like Delphine Tai-Beauchamp, a computer science major at the University of California, Irvine. She has used the chatbot to explain complicated course concepts, as well as to help explain coding errors and make charts diagramming the connections between ideas.

"I wouldn't recommend students use AI to avoid the hard parts of learning," Tai-Beauchamp said. She did recommend students try AI as a study aid. "Ask it to explain something five different ways."

Belsky said these kinds of suggestions helped the company create its first billboard campaign aimed at college students. "Can you quiz me on the muscles of the leg?" asked one ChatGPT billboard, posted this spring in Chicago. "Give me a guide for mastering this Calc 101 syllabus," another said.

Belsky said OpenAI had also begun funding research into the educational effects of its chatbots. "The challenge is, how do you actually identify what are the use cases for AI in the university that are most impactful?" Belsky said during a December AI event at Cornell Tech in New York City. "And then how do you replicate those best practices across the ecosystem?"

Some faculty members have already built custom chatbots for their students by uploading course materials like their lecture notes, slides, videos and quizzes into ChatGPT. Jared DeForest, the chair of environmental and plant biology at Ohio University, created his own tutoring bot, called SoilSage, which can answer students' questions based on his published research papers and science knowledge. Limiting the chatbot to trusted information sources has improved its accuracy, he said.

"The curated chatbot allows me to control the information in there to get the product that I want at the college level," DeForest said.

But even when trained on specific course materials, AI can make mistakes. In a new study -- "Can AI Hold Office Hours?" -- law school professors uploaded a patent law casebook into AI models from OpenAI, Google and Anthropic. Then they asked dozens of patent law questions based on the casebook and found that all three AI chatbots made "significant" legal errors that could be "harmful for learning."

"This is a good way to lead students astray," said Jonathan S. Masur, a professor at the University of Chicago Law School and a co-author of the study. "So I think that everyone needs to take a little bit of a deep breath and slow down."

OpenAI said the 250,000-word casebook used for the study was more than twice the length of text that its GPT-4o model can process at once. Anthropic said the study had limited usefulness because it did not compare the AI with human performance. Google said its model accuracy had improved since the study was conducted.

Belsky said a new "memory" feature, which retains and can refer to previous interactions with a user, would help ChatGPT tailor its responses to students over time and make the AI "more valuable as you grow and learn." Privacy experts warn that this kind of tracking feature raises concerns about long-term tech company surveillance.
In the same way that many students today convert their school-issued Gmail accounts into personal accounts when they graduate, Belsky envisions graduating students bringing their AI chatbots into their workplaces and using them for life. "It would be their gateway to learning -- and career life thereafter," Belsky said.


News18
35 minutes ago
Rs 1.2 Lakh Monthly Salary, Still Can't Buy a Home In India? Viral Post Sparks Debate
A recent post on social media platform X has reignited concerns over India's growing real estate affordability crisis. A techie named Akhilesh shared a striking anecdote about his friend in Gurugram who earns a hefty Rs 20 lakh per year, yet still finds himself priced out of the housing market.

According to the post, Akhilesh's friend takes home around Rs 1.2 lakh per month after taxes and deductions. He lives modestly, with no car, no kids and no extravagant lifestyle. Despite this, every residential project he visits in Gurugram starts at a staggering Rs 2.5 crore. These homes boast features like infinity pools, zen gardens, biometric lifts, and imported marble floors, making it clear that developers are targeting luxury buyers, not average professionals.

The viral post struck a chord with many, especially young urban professionals. The core argument is simple: even those in the top 5% of India's income bracket can't comfortably buy a home in metro cities without compromising their financial security. Owning a house would mean living paycheck to paycheck, with no room for emergencies or even basic leisure.

Akhilesh concluded the post with a powerful remark: 'The market is not broken. It's working exactly as designed, for someone else.'

The post captures a larger trend: how rapid urbanisation, speculative investments, and a push for ultra-luxury housing are making homeownership increasingly elusive, even for India's high earners.

Anarock Report Reveals Ultra-Luxury Homes In Demand

Anarock's Annual Residential Report 2024 reveals that 59% of new housing projects in Delhi NCR, 18% in Hyderabad, and 12% in MMR were priced above Rs 2.5 crore, showing a rise in demand for premium homes among wealthy buyers and NRIs. NRIs, in particular, are playing a key role in this expansion, actively acquiring premium properties in major Indian metros as part of long-term wealth preservation strategies, noted a recent report by GRI Club.

While the majority of new supply is focused on ultra-luxury homes, there is a noticeable shortage of homes in the upper mid-income and premium segments. Since the RERA law came into effect in 2017, there has been a significant increase in trust for developers who follow rules and deliver on time. This has led to a growing preference among NRIs for projects by such developers.