Aussie blocked from her own account exposes bank's 'insane' request to unblock it


Yahoo · 5 hours ago

An ING customer is urging banks to rethink their fraud processes after she said she was blocked from her bank account and unable to access her money for days due to two innocuous transactions. Katie McMaster has been a long-time customer of the digital bank, which is one of the biggest in the country.
The Melbourne resident received an email from the bank's fraud department on Tuesday saying her account had been suspended due to 'suspicious activity'. She immediately contacted ING and was told she needed to wait for the fraud team to contact her via email and she couldn't speak to them on the phone.
The 35-year-old told Yahoo Finance she had to verify her identity over email and had to reply with her full name, date of birth and mobile number.
RELATED
Commonwealth Bank customer rages over threat to cut access to his money
Centrelink age pension changes coming into effect from July 1
Centrelink rule change gives more Aussies access to $5,000 cash boost
'I just thought that was insane in itself to be sending that data across email to verify. For a bank of their size, I just thought that shouldn't be the process, especially when you're trying to get in touch with the fraud department,' she said.
She was later sent a verification text from the bank and told to upload her identification. After that, she said she heard nothing from the bank and was still unable to get someone on the phone. She was told the fraud team could take two business days to respond.
Meanwhile, the personal assistant was unable to access her funds, use her card or withdraw cash. She said she had to rely on her credit card, which ING did not block.

'It was stressful in the fact that I could obviously see the balance, but the available balance was zero,' she said.
'But what was sort of frustrating was that they didn't block my credit card with them. They were happy for me to keep using my credit card, but then I would have to pay interest on that, but they wouldn't let me access my funds, and I couldn't withdraw any cash.'
It wasn't until 5pm on Thursday that she received an email from ING asking her to verify two transactions flagged as suspicious. They were her monthly salary, which goes into her account at the same time each month, and a $300 transfer from a friend for Usher tickets.
After she confirmed the purpose of the funds, she finally received a call from the ING fraud department, and they were able to unblock her accounts "on the spot".
McMaster said the worker she spoke to was confused about why the transactions had been flagged as suspicious and apologised for taking so long to resolve the issue. The worker claimed she was the only one manning the fraud email inbox.
'The lack of communication was frustrating me the most. I understand banks have their processes, but I'd rather sit on hold for an hour knowing I'd get through than wait a couple of days,' she said.
An ING spokesperson told Yahoo Finance safeguarding customers' funds and identities was "central" to how it operated.
"When our systems detect suspicious transactions, we proactively act on behalf of customers to protect their funds. This often involves temporarily placing a hold on a customer's account until we can confirm the transactions with the customer," the spokesperson said.
"To reactivate account activity, we verify the customer's identity through our secure fraud portal. During this process, the customer's biometrics are matched against the identity documents we securely hold on record.
"We recognise that temporarily pausing activity on an account can impact customers, so we always check they have access to essential funds, ensuring they are not placed in financial hardship."
McMaster shared her experience online and was met with hundreds of responses from other Australians, with several others saying they also had their ING accounts blocked.
"This happened to me in 2019, I was overseas and there was no help. For the rest of my trip I had to use a friend's money to be able to pay for anything. They cancelled the card and that was it," one wrote.
"Same happened with me! You feel so vulnerable not being able to get access to money! Luckily, I was able to borrow cash from someone because you literally can't get access to any money, it's crazy," another said.
"It's actually terrifying that they can do this," a third added.
Others agreed they would feel uneasy about providing their personal information over email.
"I'd refuse to send personal details over email, that's ridiculous!!" one said. "It's a very odd process for fraud verification, and goes against our basic security instincts ... It adds even more stress and confusion to the situation," another said.
McMaster said she was now in the process of changing banks and was considering opening accounts with multiple banks to avoid getting stuck in the same position.
She has called on ING and other banks to reconsider their fraud verification processes, including the requirement for customers to provide sensitive information via email.
'The main concern of all of this is the verification process. It was just having to click on a link to upload my ID from a text message. It's just everything we're told not to do,' she said.
She has urged other Aussies to remain vigilant when it comes to scams and to double-check that any messages they receive are legitimate.


Related Articles

AI is posing immediate threats to your business. Here's how to protect yourself

Fast Company · 24 minutes ago

Last month, an AI startup went viral for sending emails to customers explaining away a malfunction of its AI-powered customer service bot, claiming it was the result of a new policy rather than a mistake. The only problem was that the emails—which appeared to be from a human sales rep—were actually sent by the AI bot itself. And the 'new policy' was what we call a hallucination: a fabricated detail the AI invented to defend its position. Less than a month later, another company came under fire after using an unexpectedly obvious (and glitchy) AI tool to interview a job candidate.

AI headaches

It's not shocking that companies are facing AI-induced headaches. McKinsey recently found that while nearly all companies report investing in AI, fewer than 1% consider themselves mature in deployment. This gap between early adoption and sound deployment can lead to a PR nightmare for executives, along with product delays, damage to your company's brand identity, and a drop in consumer trust. And with 50% of employers expected to utilize some form of agentic AI—far more advanced systems capable of autonomous decision-making—the business risks of clumsy AI deployment are not just real. They are rising.

As AI technology continues to rapidly evolve, executives need a trusted, independent way of comparing system reliability. As someone who develops AI assessments, my advice is simple: don't wait for regulation to tell you which AI tools work best. Industry-led AI reliability standards offer a practical solution for limiting risk—and smart leaders will start using them now.

Industry Standards

Technology industry standards are agreed-upon measurements of important product qualities that developers can volunteer to follow. Complex technologies—from aviation to the internet to financial systems—rely on these industry-developed guidelines to measure performance, manage risk, and support responsible growth.
Technology industry standards are developed by the industry itself or in collaboration with researchers, experts, and civil society—not policymakers. As a result, they don't rely on regulation or bill text, but reflect the need of industry developers to measure and align on key metrics. For instance, ISO 26262, developed by the International Organization for Standardization, sets requirements to ensure the electrical systems of vehicles are manufactured to function safely. Standards like these are one reason we can trust that the complex technology we use every day, like the cars we buy or the planes we fly on, is not defective.

AI is no exception. Like in other industries, those at the forefront of AI development are already using open measures of quality, performance, and safety to guide their products, and CEOs can leverage them in their own decision-making. Of course, there is a learning curve. For developers and technical teams, words like reliability and safety have very different meanings than they do in boardrooms. But becoming fluent in the language of AI standards will give you a major advantage.

I've seen this firsthand. Since 2018, my organization has worked with developers and academics to build independent AI benchmarks, and I know that industry buy-in is crucial to success. As those closest to creating new products and monitoring trends, developers and researchers have an intimate knowledge of what's at stake and what's possible for the tools they work on. And all of that knowledge and experience is baked into the standards they develop—not just at MLCommons but across the industry.

Own it now

If you're a CEO looking to leverage that kind of collaborative insight, you can begin by incorporating trusted industry benchmarks into the procurement process from the outset.
That could look like bringing an independent assessment of AI risk into your boardroom conversations, or asking vendors to demonstrate compliance with performance and reliability standards that you trust. You can also make AI reliability a part of your formal governance reporting, to ensure regular risk assessments are baked into your company's process for procuring and deploying new systems.

In short: engage with existing industry standards, use them to pressure-test vendor claims about safety and effectiveness, and set clear, data-informed thresholds for what acceptable performance looks like at your company. Whatever you do, don't wait for regulation to force a conversation about what acceptable performance standards should look like—own it now as part of your leadership mandate.

Real damage

Not only do industry standards provide a clear, empirical way of measuring risk, they can help navigate the high-stakes drama of the current AI debate. These days, discussions of AI in the workforce tend to focus on abstract risks, like the potential for mass job displacement or the elimination of entire industries. And conversations about the risks of AI can quickly turn political—particularly as the current administration makes it clear it sees 'AI safety' as another word for censorship.

As a result, many CEOs have understandably steered clear of the firestorm, treating AI risk and safety like a political hot potato instead of a common-sense business priority deeply tied to financial and reputational success. But avoiding the topic entirely is a risk in itself. Reliability issues—from biased outputs to poor or misaligned performance—can create very real financial, legal, and reputational damage. Those are real, operational risks, not philosophical ones.

Now is the time to understand and use AI reliability standards—and shield your company from becoming the next case study in premature deployment.

The Verge · 39 minutes ago

Posted Jun 20, 2025 at 6:31 AM EDT

Australia says it's 'technically feasible' to age-gate social media

The preliminary findings of Australia's Age Assurance Technology Trial 'didn't find a single ubiquitous solution' for the country's incoming social media ban for under-16s, but trial director Tony Allen said there's little stopping services like Facebook, Instagram, and X from implementing existing age-verification methods like facial scans, behavior assessments, and parental controls. With so many other countries also trying to enforce online age restrictions, you'd think Australia could have just asked?
