Yellowknife woman warns of cryptocurrency scams after losing $26K

Yahoo | 10-04-2025
A Yellowknife woman says she is out $26,000 due to a cryptocurrency scam that involved a fake news story.
The woman said she wanted to tell her story to warn others of the kinds of deceptive fraudulent activities that exist online and how scammers are impersonating trusted institutions like news outlets to lure unsuspecting victims.
N.W.T. residents have reported losing hundreds of thousands of dollars to various scams over the past four years, with a particularly large jump in 2021 amid the pandemic.
Jeff Horncastle, client and communications outreach officer for the Canadian Anti-Fraud Centre, told CBC News in an earlier interview that since COVID-19, there has been an increase in financial crimes, particularly fraudulent cryptocurrency schemes.
In another earlier interview with CBC News, Kwasi Boakye-Boateng, a research associate with the Canadian Institute for Cybersecurity, said artificial intelligence has also made scams harder to detect.
CBC News granted the woman confidentiality due to concerns she could be targeted by more scammers in the future.
Jeff Horncastle, client and communications outreach officer for the Canadian Anti-Fraud Centre, says only a small percentage of fraud victims report it to the authorities. (Luke Carroll/CBC)
Fake news stories
The woman says she saw — and is still seeing — a number of what she now knows are fake news stories pretending to be from CBC, often featuring someone talking about investing in a cryptocurrency scheme.
Last summer, she saw many of them featuring fake interviews with former prime minister Justin Trudeau, former finance minister Chrystia Freeland and NDP Leader Jagmeet Singh, encouraging people to invest in cryptocurrency. She said she was skeptical until she saw Singh supposedly interviewed in one around July 2024, and then decided to reach out to the featured company.
This is not a CBC News article — but it's built to look like one, and it aims to get people to invest in cryptocurrency scams. A Yellowknife woman says scams like this one cost her $26,000. (CBC)
A spokesperson for CBC News said there has been an alarming rise in fake ads and news stories appearing online and on social media platforms.
"CBC is committed to fighting disinformation, which deliberately misleads the public and puts at risk their trust in legitimate media outlets," wrote Kerry Kelly.
"This is an increasingly difficult task however, hindered by the rise in AI-generated disinformation and the prevalence of fake ads found on social media platforms."
A screenshot of the Facebook account Finlake Ltd. A Yellowknife woman says she lost $26,000 after contacting this company and being told to invest in cryptocurrency. (Luke Carroll/CBC)
The woman said when she first got in touch with the company, called Finlake Ltd., they gained her trust.
CBC News found a Facebook page for Finlake Ltd. and attempted to email and call them for comment. The number was not in service and the email was undeliverable.
The woman said she started by investing just $350, because she is an immigrant who wanted to send money to friends and family back home.
"At the beginning, whoever talked to me was very, very friendly, very easy to communicate [with] and giving lots of information and giving confidence," she said.
"I even asked him directly, 'Is this company real?' And he said, 'Yes, yes I'm working here.'"
But quickly the company began asking her to increase the amount of money she was investing.
"Then he told me just $350 will not work, I will have to put more [in] to get more benefit out of it," she said.
Madam, as long as you don't wanna speak to me — it is still my duty as risk officer to save the trading account. You have [$36,000] that are under the risk due to unpredictable geopolitical environment - Message to the woman from the fake crypto investor
The red flags
They communicated often through WhatsApp, and occasionally on the phone.
There were some red flags, she said. The crypto company tried to help her create a wallet, software that stores the virtual currency a user invests in. Several legitimate companies offer this service, but none of them would work with this particular company.
"So I was a little suspicious about it," she said.
Eventually, a wallet was set up for her by the man working for the cryptocurrency company she was investing in.
To get the currency, she needed to purchase it through her bank. She sent an eTransfer from her Scotiabank account, but when she made the request, she was asked to come into the bank to meet with the manager. The request, she explained, set off some sort of security alert.
She said the bank manager had security personnel on the phone, speaking with them through a headset and relaying what they said to her.
The manager told her the bank didn't recommend she go through with the transfer, but she said they wouldn't explain why, which left her confused.
"They did not say, 'This is fraud, you shouldn't,'" she said.
She told the bank to allow the transactions.
"That is where I made a mistake then, I realize that, but however, my concern is if there are banks where we trust and put money in, they should also advise us that this is the case, you might not know but we are telling you that."
A spokesperson for Scotiabank sent an emailed statement in response to questions on its policy informing clients of suspected fraud.
"Scotiabank takes cases of fraud seriously and continues to educate clients to take precautions when being asked to transfer funds to ensure they are dealing with a legitimate source," wrote Katie Raskina.
"We advise clients to never share passwords, authorize unknown or unexpected requests, or grant account access to any individual, including family or friends and to double check before acting on messages or requests that do not align to regular or expected dealings with the Bank."
The RCMP detachment in Yellowknife in March 2025. The RCMP confirmed they are investigating a cryptocurrency scam involving a $26,000 loss. (Luke Carroll/CBC)
Pushy messages
Eventually, the woman had invested thousands of dollars, but the people involved said it wasn't enough money and that her account was at risk of collapse if she didn't invest more.
In her report to the RCMP she detailed some of the messages she received.
"Madam, as long as you don't wanna speak to me — it is still my duty as risk officer to save the trading account. You have [$36,000] that are under the risk due to unpredictable geopolitical environment," the message from the crypto investor read, referencing the amount in her account that had purportedly grown from the $26,000 she invested.
"You can think all you want, but with all the respect it's a non-sense to say goodbye to 36k because you are busy."
The messages became increasingly pushy.
"I don't expect anything from you, just stop treating me like I am harassing you when I am trying to help."
"We need extra 4500$ into your portfolio to maintain the temporary drawdown cause by the Middle-east escalation. Those types of things happen.
"You must be extremely rich to treat your $36,000 so nonchalantly."
In October 2024, communication stopped, and her account and wallet stopped functioning.
She said she has more or less accepted that the money is gone, although she has reported the incident to the RCMP and the Canadian Anti-Fraud Centre.
The RCMP confirmed by email that it has an active investigation into a cryptocurrency scam of this amount that was reported on Feb. 19.
How to avoid scams
Scotiabank's spokesperson also sent a link for how to avoid cryptocurrency scams.
"Cryptocurrencies are legal, but they're not generally overseen by governments, nor are they centralized. Individuals or organizations can access cryptocurrency easily and make transactions while hiding their identities, making it an extremely vulnerable tool for cybercrime," the website reads.
The website encourages people investing in crypto to use reputable platforms, such as apps available through Apple's App Store or Google Play. It also says to watch out for parties that promise large returns and to be wary of messages with an urgent tone.