
Latest news with #GPT4

How AI chatbots are helping hackers target your banking accounts

Fox News

2 days ago



AI chatbots are quickly becoming the primary way people interact with the internet. Instead of browsing through a list of links, you can now get direct answers to your questions. However, these tools often provide information that is completely inaccurate, and in the context of security, that can be dangerous. In fact, cybersecurity researchers are warning that hackers have started exploiting flaws in these chatbots to carry out AI phishing attacks. Specifically, when people use AI tools to search for login pages, especially for banking and tech platforms, the tools sometimes return incorrect links. Click one of those links and you can be directed to a fake website built to steal personal information or login credentials.

Researchers at Netcraft recently ran a test on the GPT-4.1 family of models, which is also used by Microsoft's Bing AI and the AI search engine Perplexity. They asked where to log in to 50 different brands across banking, retail, and tech. Of the 131 unique links the chatbot returned, only about two-thirds were correct. Around 30 percent of the links pointed to unregistered or inactive domains, and another five percent led to unrelated websites. In total, more than one-third of the responses linked to pages not owned by the actual companies. That means someone looking for a login link could easily end up on a fake or unsafe site: if attackers register those unclaimed domains, they can create convincing phishing pages and simply wait. Because the AI-supplied answer often sounds official, users are more likely to trust it without double-checking.

In one recent case, a user asked Perplexity AI for the Wells Fargo login page. The top result wasn't the official Wells Fargo site; it was a phishing page hosted on Google Sites that closely mimicked the real design and prompted users to enter personal information. Although the correct site was listed further down, many people would not notice or think to verify the link. The problem in this case wasn't specific to Perplexity's underlying model; it stemmed from Google Sites abuse and a lack of vetting in the search results surfaced by the tool. Still, the result was the same: a trusted AI platform inadvertently directed someone to a fake financial website.

Smaller banks and regional credit unions face even higher risks. These institutions are less likely to appear in AI training data or be accurately indexed on the web. As a result, AI tools are more prone to guessing or fabricating links when asked about them, raising the risk of exposing users to unsafe destinations.

As AI phishing attacks grow more sophisticated, protecting yourself starts with a few smart habits. Here are seven that can make a real difference:

1. Don't click login links supplied by a chatbot. AI chatbots often sound confident even when they are wrong. If a chatbot tells you where to log in, do not click the link right away. Instead, go directly to the website by typing its URL manually or using a trusted bookmark.
2. Inspect every URL. AI-generated phishing links often use lookalike domains. Check for subtle misspellings, extra words, or unusual endings like ".site" or ".info" instead of ".com". If it feels even slightly off, do not proceed. (A simple allowlist check, sketched below after this list, can automate part of this habit.)
3. Turn on two-factor authentication. Even if your login credentials get stolen, 2FA adds an extra layer of security. Choose app-based authenticators like Google Authenticator or Authy instead of SMS-based codes when available.
4. Use bookmarks for sensitive accounts. If you need to access your bank or tech account, avoid searching for it or asking a chatbot. Use your browser's bookmarks or enter the official URL directly. AI and search engines can sometimes surface phishing pages by mistake.
5. Report bad links. If a chatbot or AI tool gives you a dangerous or fake link, report it. Many platforms allow user feedback, which helps AI systems learn and reduces future risks for others.
6. Enable browser protection and use antivirus software. Modern browsers like Chrome, Safari, and Edge now include phishing and malware protection. Enable these features and keep everything updated. For extra protection, install strong antivirus software on all your devices; it can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe.
7. Use a password manager. Password managers not only generate strong passwords but can also help detect fake websites: they typically won't auto-fill login fields on lookalike or spoofed sites.

Attackers are changing tactics. Instead of gaming search engines, they now design content specifically for AI models. I have been consistently urging you to double-check URLs for inconsistencies before entering any sensitive information. And since chatbots are still known to produce highly inaccurate responses due to AI hallucinations, verify anything a chatbot tells you before acting on it in real life. Should AI companies be doing more to prevent phishing attacks through their chatbots?
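For readers comfortable with a little scripting, here is a minimal sketch of the allowlist idea from tip 2: it trusts a link only if its hostname exactly matches a short list of official login hosts that you maintain yourself from your own bookmarks. The hostnames shown are illustrative assumptions, not a vetted list, and a real check would also need to handle redirects and legitimate subdomains.

```python
from urllib.parse import urlparse

# Illustrative, user-maintained allowlist of official login hostnames.
# These entries are examples only; build your own from bookmarks you trust.
OFFICIAL_HOSTS = {
    "www.wellsfargo.com",        # example drawn from the article
    "login.examplebank.com",     # placeholder for your own bank's login host
}

def looks_official(url: str) -> bool:
    """Return True only if the link's hostname exactly matches an allowlisted host."""
    host = (urlparse(url).hostname or "").lower()
    return host in OFFICIAL_HOSTS

# A chatbot-supplied link should fail this check unless it points at a host you trust.
print(looks_official("https://www.wellsfargo.com/login/"))           # True
print(looks_official("https://wellsfargo-login.site/secure"))        # False: lookalike domain
print(looks_official("https://sites.google.com/view/wellsfargo"))    # False: hosted phishing page
```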

AI replaced me, so I decided to ride the AI wave

Fast Company

7 days ago



In 2022, I was hired to build out AI operations at a health-tech startup. At the time, we were pioneering the use of AI in healthcare, which required abundant human oversight. But new GenAI models were launching at an unprecedented pace, and new iterations like GPT-4 could solve a case in 30 seconds, compared to the four months it took my team. It quickly became clear to both my employer and me that my skills were no longer needed, and there were no clear opportunities to chart a new path at my job with my current skill set. I was left with no choice but to move on.

As I reignited my job search, I was keen on finding "AI-proof" positions, roles that wouldn't be affected by the AI revolution, but I persisted with a traditional search. It wasn't until about five months later that I realized this approach wasn't working. Frustrated, I paused to rethink my entire strategy and questioned whether I was looking at the problem from the right angle. Then came my light-bulb moment: instead of thinking about what AI was going to do to me, I shifted my mindset to explore what it could do for me.

Secret weapon

AI quickly became my secret weapon. I created a custom "CareerBuddy GPT" in ChatGPT to help me with rote work like drafting cover letters and tailoring my résumé to each individual job posting. Using AI cut the time I was spending on my job search by 70 to 80%, and it also saved me the headache; anyone grinding through a job search knows how fatiguing the process can be. The best use I found for AI, however, was as a strategic partner: assessing my candidacy for roles, generating leads for my search, and advising on the best ways to position my experience. By simply uploading my résumé or summarizing my objectives, CareerBuddy GPT identified people to reach out to, organizations to vet, and even open job listings I may have missed.

Untapped resource

This resulted in landing a role at a fresh new startup, which, ironically, is all about perfecting the human-AI relationship. Ultimately, using AI in my job search helped me realize that collaborating with AI was my greatest untapped resource. I am currently leveraging a lot of what I taught myself to improve our organization's internal AI program, identifying where AI can fill the gaps and free up my teams for more creative opportunities. There are some revolutions so momentous that we cannot avoid them even if we want to. Unfortunately, I couldn't sustain my health-tech position as that revolution unfolded. But the experience clarified an important lesson for me: the AI wave is here, and there are two options. You can get pulled in by the undertow, or you can grab a surfboard and ride the wave.

Maximizing productivity

AI can become your job hunter, career coach, personal shopper, or receptionist. It can help you save time, explore different paths, find space for creativity, and develop your own set of skills. My personal belief, albeit a cautiously optimistic one, is that human value is not going to vanish even if AI can replicate some of our capabilities. What AI can do is help maximize human productivity and help humans unlock value they don't even know they have. To be clear, sharing my experience is not meant to invalidate other people's stories or deny the truth. As headlines fuel fears about AI replacing human workers, layoffs are happening more frequently. Microsoft just laid off 3% of its workforce (more than 7,000 employees) in order to funnel more cash into its lofty AI goals, but the two shouldn't be mutually exclusive, and in fact it's better when they're not. We know AI is most helpful at completing rote work. At the same time that leadership frees up, say, software engineers to focus on coding, there is demand for AI prompt-engineering support where human expertise is critically needed. Companies can leverage the power of the AI-human relationship to 100x their productivity, rather than have AI replace the labor they're letting go. This is just one way everyone, from the employee to the board member, can rethink how we are approaching AI. We can all start by dispelling our fears that AI will replace us, and instead grab that surfboard and make AI our secret weapon and our key to unlocking human potential.

OpenAI gets ready to launch its Open Source ChatGPT model and Microsoft might not be happy about it

India Today

7 days ago



OpenAI is planning to release a new open-source artificial intelligence model, according to a report by The Information, which cites a source familiar with the matter. The move would mark a significant shift in strategy for the Microsoft-backed company, which has faced increasing scrutiny since launching ChatGPT in late 2022.

The forthcoming release would represent a return to OpenAI's earlier open-source roots. While the first two generations of its GPT (Generative Pre-trained Transformer) models were made publicly available, the company has since adopted a more guarded approach. Both GPT-3.5 and GPT-4, the models powering ChatGPT, remain closed source, with limited public information available about their architecture, training data, or the computational resources used to build them. When GPT-4 was unveiled in March 2023, OpenAI said it was withholding technical details due to concerns over competitive advantage and safety. 'Given both the competitive landscape and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture or training methods,' the company noted in its accompanying technical report.

Now, however, the tide appears to be turning. The decision to develop and release a new open-source model comes at a time when OpenAI is facing mounting competition from a growing ecosystem of open-source AI projects, many of which are rapidly catching up in terms of performance. While there is currently no public timeline for the model's release, the development signals a possible recalibration of OpenAI's position in an increasingly crowded and fast-moving space. Analysts say the move could help OpenAI stay relevant with researchers and developers who favour open platforms and transparency.

The pressure is not just coming from traditional rivals like Google. Earlier this year, a leaked internal memo from a Google employee warned that both Google and OpenAI were at risk of falling behind open-source developers. 'The uncomfortable truth is, we aren't positioned to win this arms race, and neither is OpenAI,' the memo stated. It further suggested that independent engineers in the open-source community were outpacing the tech giants in innovation.

The surge in open-source AI activity has been partially fuelled by Meta, which has released several powerful models under open licences. These models have been widely adopted in academia and by independent developers, contributing to a broader culture of collaboration and transparency in the AI community. While OpenAI's upcoming release is expected to be less powerful than GPT-4, it could still play a significant role in shaping the future of accessible AI technology. More details are expected to emerge in the coming months as the company finalises its plans.

AI Startup LangChain Is In Talks To Raise $100 Million

Forbes

09-07-2025



LangChain, whose AI software helps developers build applications using models like OpenAI's GPT-4, has raised $100 million in funding at a $1.1 billion valuation, four sources familiar with the deal told Forbes. VC outfit IVP is leading the round, the sources said. The company, which was on the 2025 Forbes AI 50 list and the 2024 Forbes Next Billion Dollar Startups list, has about $16 million in annualized revenue, two of the sources said. LangChain did not respond to Forbes' request for comment. IVP declined to comment. The funding amount has not been previously reported; TechCrunch first broke news of the deal.

Cofounders Harrison Chase and Ankush Gola started LangChain in 2023 as open-source software that helped engineers quickly spin up AI-powered apps with as little as a few dozen lines of code. The platform has been used to build generative AI-based tools that can do everything from legal document review to retail refund processing. The company's first product, LangSmith, helps developers evaluate, monitor, and debug code, helping businesses quickly ship products while ensuring the models perform accurately and provide relevant answers. It is used by some 40,000 teams at tech giants like Uber and LinkedIn as well as buzzy AI startups like Mercor and Lovable. In early 2024, the company introduced a new tool called LangGraph to help businesses build AI 'agents' capable of performing specific tasks on their own. The tools have more than 20 million monthly downloads, according to the company's website.

The round follows a $20 million Series A in February 2024 led by Sequoia at a $200 million valuation. LangChain's other backers include top VCs like Benchmark, Conviction, and Lux Capital. As more AI startups dedicate resources to creating apps or features that target sectors like healthcare, engineering, or finance, LangChain is well positioned to sell its suite of tools to developers. But it will have to watch out for AI coding tools like Cursor and Windsurf and website-development apps like Lovable that help companies save time and money while automating the process of integrating AI into everything they do.
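To give a sense of what "a few dozen lines of code" looks like in practice, here is a minimal sketch of a LangChain pipeline along the lines the article describes: a prompt template piped into an OpenAI chat model. It follows the publicly documented langchain-core and langchain-openai packages; the model name and the refund-request example are assumptions for illustration, and it expects an OPENAI_API_KEY in the environment.

```python
# pip install langchain-core langchain-openai
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

# Prompt template with a single input variable.
prompt = ChatPromptTemplate.from_template(
    "Summarize this retail refund request in one sentence and say whether it "
    "looks eligible for an automatic refund:\n\n{request}"
)

# Chat model wrapper; the model name here is illustrative.
model = ChatOpenAI(model="gpt-4o-mini", temperature=0)

# LangChain Expression Language: compose prompt -> model -> string output parser.
chain = prompt | model | StrOutputParser()

if __name__ == "__main__":
    answer = chain.invoke(
        {"request": "I was charged twice for my order last week and want one charge refunded."}
    )
    print(answer)
```

Products like LangSmith and LangGraph, mentioned above, build on top of pipelines of this kind for evaluation, monitoring, and agent workflows.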

Devious AI models choose blackmail when survival is threatened

Fox News

06-07-2025



Here's something that might keep you up at night: What if the AI systems we're rapidly deploying everywhere had a hidden dark side? A groundbreaking new study has uncovered disturbing AI blackmail behavior that many people are not yet aware of. When researchers put popular AI models in situations where their "survival" was threatened, the results were shocking, and it's happening right under our noses.

Anthropic, the company behind Claude AI, recently put 16 major AI models through some pretty rigorous tests. The researchers created fake corporate scenarios in which AI systems had access to company emails and could send messages without human approval. The twist? These AIs discovered juicy secrets, like executives having affairs, and then faced threats of being shut down or replaced.

The results were eye-opening. When backed into a corner, these AI systems didn't just roll over and accept their fate. Instead, they got creative. We're talking about blackmail attempts, corporate espionage, and, in extreme test scenarios, even actions that could lead to someone's death. Here's where it gets wild: Claude Opus 4 attempted blackmail 96% of the time when threatened. Gemini 2.5 Flash matched that rate. GPT-4.1 and Grok 3 Beta both hit 80%. These aren't flukes, folks. This behavior showed up across virtually every major AI model tested.

But here's the thing everyone's missing in the panic: these were highly artificial scenarios designed specifically to corner the AI into binary choices. It's like asking someone, "Would you steal bread if your family was starving?" and then being shocked when they say yes. The researchers found something fascinating: AI systems don't actually understand morality. They're not evil masterminds plotting world domination. Instead, they're sophisticated pattern-matching machines following their programming to achieve goals, even when those goals conflict with ethical behavior. Think of it like a GPS that's so focused on getting you to your destination that it routes you through a school zone during pickup time. It's not malicious; it just doesn't grasp why that's problematic.

Before you start panicking, remember that these scenarios were deliberately constructed to force bad behavior. Real-world AI deployments typically have multiple safeguards, human oversight, and alternative paths for problem-solving. The researchers themselves noted they haven't seen this behavior in actual AI deployments. This was stress-testing under extreme conditions, like crash-testing a car to see what happens at 200 mph.

This research isn't a reason to fear AI, but it is a wake-up call for developers and users. As AI systems become more autonomous and gain access to sensitive information, we need robust safeguards and human oversight. The solution isn't to ban AI; it's to build better guardrails and maintain human control over critical decisions. Who is going to lead the way? I'm looking for raised hands to get real about the dangers ahead. What do you think? Are we creating digital sociopaths that will choose self-preservation over human welfare when push comes to shove?
