
'What a bunch of thieves' – VPNSecure users react as their lifetime subscriptions are axed
Lifetime subscribers who hadn't been active for six months were the first to have their subscriptions axed but, as of April 28, 2025, the cancelations have been extended to all lifetime subscriptions.
Users have expressed outrage, while VPNSecure's new owners cited the legal details of its takeover, as well as costs, as reasons for the cancelations.
Many providers, including all of the best VPNs, don't offer lifetime subscriptions, and we have questioned before whether lifetime VPN subscriptions are really worth it.
Nonetheless, they can offer users great value and peace of mind, and, as long as the company keeps trading, they should theoretically work out well.
So what has VPNSecure said and why are users so angry?
Clearly, in VPNSecure's case, lifetime doesn't mean forever. According to reports, VPNSecure changed ownership in May 2023, and the new buyers weren't aware of the lifetime deals that had been sold.
Customers shared emails from the provider on Reddit. In those communications, VPNSecure states the Lifetime Deals (LTDs) sold by the old team between 2015 and 2017 "were not disclosed".
VPNSecure said the new team kept LTDs running for two years entirely at its own cost, even though it "never received a single cent from these subscriptions".
In another customer email posted to Reddit, the company cited the legal details of the takeover deal as one of the reasons behind its decision. It said the acquisition included "technology, domain, and customer database – but not the liabilities".
The provider said it did its "due diligence" and this included a review of the "past 6-12 months of financials". It added that LTDs were not mentioned anywhere in the "listing, profits, or loss statements".
The first VPN accounts to stop working were those that had been dormant for over six months. But as of April 28, 2025, VPNSecure said all LTDs had been deactivated.
VPNSecure will not be selling any more LTDs in the future, saying "we never offered them, and we never will".
It doesn't appear as though refunds are being offered either. Instead, all LTD users have access to a one-time exclusive deal, available until May 31, 2025: a three-year plan for $55, a one-year plan for $19, or a one-month plan for $1.87.
Understandably, users have reacted angrily to this news.
VPNSecure is being flooded with one-star reviews on Trustpilot. At the time of writing, 84% of its reviews are one-star. Its overall rating sits at 1.2 stars out of 5.
Users have complained their plans were canceled "without warning" and only discovered the change after reading news articles. Others said their VPN "stopped working" and many affected customers are advising others to stay away from VPNSecure.
In replies to disgruntled users, VPNSecure apologized, admitting "the transition could have been handled better" and said it understood customers' frustration.
VPNSecure said it didn't receive historical "contracts, payments, or legal obligations" because it "acquired only the infrastructure" of the previous company. It also admitted that many users didn't receive its initial notice "due to outdated email lists and delivery issues".
InfiniteQuant Ltd, based in The Bahamas, is the company listed at the bottom of VPNSecure's website.
TechRadar reported that it contacted InfiniteQuant Ltd but the firm said it had "no affiliation to VPNSecure".
Tom's Guide has contacted VPNSecure for comment. At the time of writing, we are yet to receive a reply.
We test and review VPN services in the context of legal recreational uses. For example: 1. Accessing a service from another country (subject to the terms and conditions of that service). 2. Protecting your online security and strengthening your online privacy when abroad. We do not support or condone the illegal or malicious use of VPN services. Consuming pirated content that is paid-for is neither endorsed nor approved by Future Publishing.
