
Latest news with #DanPinto

How To Balance Privacy And Protection In The Age Of AI

Forbes · 3 days ago

Dan Pinto is CEO and cofounder of Fingerprint. With over a decade in tech, he is an entrepreneur behind many startups.

It seems like we see a new headline about data breaches every day, with one revealing that 16 billion credentials have been leaked over the years. In other words, data breaches are now a fact of life. As a result, consumers are increasingly privacy-aware, and companies are looking for, or have already implemented, solutions to mitigate the damage of past breaches and prevent future ones to ensure business continuity.

However, incidents like the breach reported by LexisNexis Risk Solutions also reveal a troubling irony: The very solutions designed to help prevent fraud and stop data breaches are becoming high-value targets themselves. When you think about it, that's not much of a surprise. Many fraud prevention companies must collect and store vast amounts of customer and company data to work effectively. The result? A massive treasure trove of valuable information that's highly tempting to bad actors.

The impact of this trend is profound. When the companies that suffer data breaches are also the ones tasked with safeguarding financial systems, identities and other valuable information, it doesn't just impact customers—it impacts trust across the entire digital ecosystem.

The Escalating Cost Of Storing More Data

Traditional fraud prevention approaches operate under the assumption that more data equals better protection. Fraud prevention companies are no different, and many also work with multiple third-party vendors (who also collect and store data on their own systems) to strengthen their security. Yet each third-party relationship introduces new potential points of failure, and too many organizations compound this risk by storing data in test environments or allowing interconnected platforms access without adequate security checks.

The Identity Theft Resource Center reported 3,158 publicly disclosed data breaches in 2024. While supply chain attacks targeting third-party vendors accounted for a smaller portion of incidents, they had an outsized impact, affecting hundreds of organizations and millions of individuals. The report also highlighted a rise in phishing and business email compromise schemes, with generative AI contributing to more convincing attack tactics. Because fraudsters constantly adapt their methods to bypass fraud prevention measures, organizations need to continuously evolve their strategies to safeguard both customer and company data and privacy.

The Modern Approach To Fraud Prevention

Today, no single approach to fraud prevention is effective on its own. As fraudsters become more sophisticated and leverage AI tools, bots and agents, organizations must prioritize flexible, privacy-conscious approaches to deterring fraud rather than assuming extensive data collection and storage is the only path to effective prevention. Instead, they should create adaptive defenses suited to their specific needs, using a range of technologies that detect and mitigate threats while respecting user privacy. These can include solutions that analyze user behavior and process device and network signals, along with machine learning models continuously retrained on new data to improve risk scoring.
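To make that last point concrete, here is a minimal sketch of how device and network signals might be combined into a single risk score that drives an allow/challenge/block decision. The signal names, weights and thresholds are illustrative assumptions, not any vendor's actual model.

```python
# Minimal sketch of signal-based risk scoring. All signal names, weights,
# and thresholds are illustrative assumptions, not a real vendor's model.
from dataclasses import dataclass

@dataclass
class RequestSignals:
    is_known_device: bool       # device fingerprint previously seen on this account
    uses_residential_proxy: bool
    automation_detected: bool   # browser automation / bot tooling present
    requests_last_hour: int     # request velocity from this device

def risk_score(s: RequestSignals) -> float:
    """Combine independent signals into a 0..1 risk score."""
    score = 0.0
    if not s.is_known_device:
        score += 0.2
    if s.uses_residential_proxy:
        score += 0.3
    if s.automation_detected:
        score += 0.4
    if s.requests_last_hour > 100:  # far above typical human browsing rates
        score += 0.3
    return min(score, 1.0)

# Route by threshold: allow, step up verification, or block.
signals = RequestSignals(False, True, False, 12)
score = risk_score(signals)
action = "block" if score >= 0.8 else "challenge" if score >= 0.5 else "allow"
print(f"risk={score:.2f} -> {action}")
```

In practice the weights would come from a continuously retrained model rather than hand-tuned constants, but the shape of the decision (score the signals, then route the request) stays the same.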
Essential Data Security Practices

As the internet evolves toward a more privacy-conscious world, organizations must implement additional comprehensive, modern measures to protect their systems. Here are a few non-negotiables:

• Social engineering attacks continue to be a highly effective fraud tactic, and they're now bolstered by generative AI. It's essential to provide continuous training to staff so they can better spot deepfakes and identify phishing attempts and other manipulation techniques.
• Access to company systems should require additional verification on top of the username and password, especially where sensitive data is stored. Assume any single security layer can be compromised; implementing multiple security layers, such as device fingerprinting, multifactor authentication and other methods of verification, can help thwart attacks.
• Live customer data should never be stored in development environments.
• Data governance policies should provide clear guidelines on how to handle sensitive data, including the measures that should be taken to protect it from unauthorized access.

What's Next: Course-Correcting For Privacy-Conscious Fraud Prevention

Today's top leaders are recognizing that privacy-conscious approaches offer advantages beyond fraud reduction. Customers value organizations that demonstrate a commitment to protecting their most valuable and personal information, in line with the industry's move toward stricter data protection regulations. High-profile breaches offer sobering reminders that even the most sophisticated companies are not immune. The recent wave of security incidents across fraud prevention providers should push us to ask tougher questions:

• What assumptions are we still making around fraud prevention that no longer apply?
• Can we build fraud detection that doesn't depend on personally identifiable information (PII)?
• What would a future look like where privacy is the default, not the exception?

Companies that are exploring a diverse set of privacy-forward fraud prevention tools and strategies will be better positioned to minimize risk and maintain customer trust.

The Bottom Line

Fraud prevention is at a turning point. The legacy approach—collecting more data and building more walls—has failed repeatedly in a landscape defined by automated threats, social engineering and AI. The organizations likely to thrive are those willing to challenge legacy assumptions about what is required to prevent fraud, especially as AI continues to evolve and become ever cheaper and easier to use. More companies are investing in behavioral analytics, device intelligence and real-time monitoring systems that can identify bad actors and threats without degrading the user experience, compromising user data or exposing the business to data liabilities.

This shift is a strategic one. It requires executive teams to audit data flows, re-evaluate vendor dependencies and implement frameworks that treat data minimization as a critical piece of business continuity. The question isn't whether this transformation will happen; it's whether your organization will lead it or simply react to it. The companies making this transition now can better define the next decade of fraud prevention.
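To illustrate the "assume any single layer can be compromised" practice from the list above, here is a minimal sketch of a layered login check. Every function is a hypothetical stub standing in for a real control (a credential store, an MFA verifier, a device-intelligence lookup); none of this is a specific vendor integration.

```python
# Minimal sketch of defense in depth for a login flow: every layer must
# pass, and an unrecognized device triggers extra scrutiny. All checks
# below are hypothetical stubs, not real integrations.
def credentials_valid(user: str, password: str) -> bool:
    return (user, password) == ("alice", "correct horse")   # stub verifier

def mfa_passed(user: str, otp: str) -> bool:
    return otp == "000000"                                   # stub OTP check

def device_recognized(user: str, device_id: str) -> bool:
    known_devices = {"alice": {"device-123"}}                # stub device store
    return device_id in known_devices.get(user, set())

def login(user: str, password: str, device_id: str, otp: str) -> str:
    # Require every layer; never let one control carry the whole decision.
    if not credentials_valid(user, password):
        return "deny"
    if not mfa_passed(user, otp):
        return "deny"
    if not device_recognized(user, device_id):
        # Credentials and MFA passed, but the device is new: step up.
        return "allow, flag new device, and notify the user"
    return "allow"

print(login("alice", "correct horse", "device-999", "000000"))
```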

How Hackers Use AI Today—And How To Stay Safe

Forbes · 23-07-2025

As artificial intelligence advances, so do the tactics of malicious actors. Hackers are now using AI to scale attacks, exploit vulnerabilities more quickly and create deceptive content that's nearly undetectable with traditional defenses. From deepfakes and synthetic identities to AI-generated malware and real-time phishing schemes, the threat landscape is evolving fast. Below, members of Forbes Technology Council share new ways hackers are weaponizing AI, along with practical strategies for defending against these risks.

1. Targeting Real-Time Payments

Fraudsters are using AI to create sophisticated fake communications and synthetic identities that target real-time payments at an unprecedented scale; 75% of financial institutions admit bad actors leverage AI more effectively than they do, exposing vulnerabilities. There's no silver bullet, but organizations must leverage AI better across the full customer lifecycle, not just for identity verification. - Yinglian Xie, DataVisor

2. Committing Large-Scale, Humanlike Fraud

Hackers now use agentic AI to create fake accounts and commit fraud with humanlike precision at a massive scale. These AI agents mimic real user behavior, bypassing traditional defenses. To stay protected, organizations must move beyond CAPTCHAs and invest in advanced detection systems that analyze behavior, device intelligence and interaction patterns in real time. - Dan Pinto, Fingerprint

3. Analyzing Code For Vulnerabilities

Hackers are weaponizing AI to rapidly analyze large volumes of code and uncover vulnerabilities—often before they're even detected. By shifting security left and right, organizations can operationalize security continuously throughout the software development lifecycle to prevent weak or misconfigured security controls, closing gaps from code to deployment that make them vulnerable to AI-powered adversaries. - Brittany Greenfield, Wabbi

4. Infiltrating Hiring Processes

Hackers use AI to deploy deepfakes that impersonate job candidates during virtual interviews and hiring assessments, allowing them to bypass traditional identity checks and gain insider access. To defend against this, organizations should adopt biometric authentication and certified identity verification, ensuring both incorporate liveness detection and presentation attack defenses. - Michael Engle, 1Kosmos

5. Building Custom Malware

Threat actors are increasingly using unregulated black market LLMs to create malware that can bypass traditional defenses. Once inside, they move laterally to attack. Instant detection of unusual network activity is critical to stop them. Security teams should develop a baseline of normal behavior and continuously monitor their networks to identify anomalies for rapid investigation and remediation. - Rob Greer, ExtraHop

6. Rapidly Exploiting Security Flaws

With AI, the time needed to exploit security flaws (vulnerabilities) is drastically reduced from months to days—perhaps, in the near future, even minutes. We already see machines (AI agents) automatically researching, developing and exploiting machines in the wild. What organizations can do is adopt advanced security controls that can remediate or mitigate threats in a faster, more efficient way. - Roi Cohen, Vicarius

7. Supercharging Attacks

AI has become a performance enhancer for threat actors, helping them execute stronger malware attacks, more realistic phishing scams and sophisticated social engineering rackets. The best defense against these AI-enhanced offensives is to embrace zero trust, in which policies and controls are made to contain threats moving at machine speed by limiting lateral movement and enforcing least privilege. - Thyaga Vasudevan, Skyhigh Security

8. Using Deepfakes For Extortion And Deception

Hackers are using AI-generated deepfakes to extort executives and cloned voice signatures to trick employees. Companies need to leverage multiple policies and processes, including zero-trust principles, multifactor authentication, scalable monitoring tools and continuous employee education and awareness, to guard against expanding AI threats. - Rob Green, Insight Enterprises

9. Finding Logic Flaws In Custom Apps And APIs

Hackers are using AI to find logic flaws in custom apps and APIs. AI models predict weak points by simulating inputs and analyzing code, uncovering exploits faster than manual scans. Defend with AI-powered code review tools, runtime anomaly detection and red team simulations. For consumers: Reduce app connections and use smart identity monitoring. - Saby Waraich, Clackamas Community College

10. Creating Constantly Morphing Malware

Hackers are using AI to create malware that constantly changes to avoid detection, making old antivirus tools useless. To fight back, companies need AI-powered security that watches behavior instead of just known threats. It's about catching suspicious actions in real time and assuming nothing is safe by default—because AI-driven attacks move fast. - Haider Ali, WebFoundr

11. Launching Next-Gen Botnet Attacks

Hackers are using AI to quickly develop new botnet propagation and control mechanisms to create bigger, more versatile botnets. An example is the Aisuru botnet, which has been in the news for launching record-breaking distributed denial-of-service attacks. As these new botnets emerge, organizations that have internet-facing apps should reevaluate their DDoS defenses to ensure they are evolving along with the threat. - Carlos Morales, Vercara, a DigiCert Company

12. Lowering Attack Barriers With No-Code Tools

The onset of AI-powered no-code tools and 'vibe coding' platforms has lowered the technical barrier for bad actors to launch sophisticated attacks; however, the core tactics remain rooted in social engineering and phishing. The best defense is continuous training, reinforced by phishing simulation tests. Only by developing intuitive awareness of attack vectors can we build lasting workforce resilience. - Pawel Rzeszucinski, Webpros

13. Duplicating Writing Styles

Hackers now use AI to create customized phishing emails that duplicate writing patterns and contextual elements to evade detection systems. Organizations need to invest in AI-based threat detection systems and teach staff members to identify minor warning signs, because security awareness has evolved into a human-AI collaboration. - Raju Dandigam, Navan

14. Scanning Open-Source Code For Zero-Day Vulnerabilities

Hackers now use AI to scan open-source code and binaries at scale, rapidly uncovering zero-day vulnerabilities in real time. To defend, shift security left by using AI-powered code analysis in CI/CD, continuously scanning for risks across systems and integrating threat intelligence. As AI accelerates attacks, organizations must respond with early detection, automation and proactive patching. - Harikrishnan Muthukrishnan, Florida Blue

15. Chaining Zero-Day Exploits

Hackers now use AI to autonomously chain together zero-day exploits—mapping incomplete vulnerabilities across multiple systems and executing coordinated breaches. To defend, organizations must implement AI-led threat modeling that simulates cross-domain attack paths, enabling preemptive patching even when individual flaws seem harmless in isolation. - Jagadish Gokavarapu, Wissen Infotech

16. Spreading Malware Through Fake GitHub Repositories

As seen in a recent case, threat actors use AI to create fake GitHub repositories, misleading developers into downloading malware. Organizations can protect themselves by reviewing open-source code, deploying AI-driven analytics, educating employees on risks, and implementing multifactor authentication and regular patch management. - Arpna Aggarwal

17. Mimicking Trusted Behavior

Malicious attackers aren't just using AI to break systems—they're also using it to blend in. When threats mimic trusted behavior, traditional detection falls short. AI can help defenders learn what 'normal' looks like and spot what seems familiar but doesn't match expected patterns. The goal isn't just to flag the unusual but to catch the usual in the wrong place, at the wrong time, with the wrong badge. - Leah Dodson, Piqued Solutions

18. Cracking Password Patterns

By combining AI with leaked passwords, hackers can now uncover the hidden patterns in how we create passwords. As humans, we are terrible at creating and remembering random passwords; instead, we rely on patterns. AI can quickly discover these patterns and replay them to guess passwords for other services. - Kevin Korte, Univention

19. Causing GenAI Systems To Produce Undesirable Responses

One of the new ways hackers are using AI is through 'prompt injection' and 'output injection' attacks on generative AI systems. These attacks produce undesirable responses from enterprise systems whose generative AI outputs are critical for end users. Organizations and consumers should refer to the OWASP top 10 list of AI risks and apply the recommended mitigation strategies. - Sid Dixit, CopperPoint Insurance

20. Evolving Ransomware With Adaptive Encryption

AI-powered ransomware now adapts encryption methods in real time, making traditional backup strategies less effective. Organizations should implement immutable backups that are stored offline and test recovery procedures monthly to stay ahead of evolving threats. - Chongwei Chen, DataNumen, Inc.
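Several of the tips above (5, 10 and 17 in particular) come down to the same move: baseline what normal behavior looks like, then flag deviations. Here is a minimal sketch of that idea using a simple z-score over request rates. The traffic numbers are made up, and real deployments use far richer features and models than a single metric.

```python
# Minimal sketch of "baseline normal behavior, flag deviations":
# model a per-host request rate as mean/stddev over a training window,
# then flag samples far outside it. Numbers are illustrative only.
import statistics

baseline_window = [102, 97, 110, 95, 105, 99, 108, 101]  # req/min under normal load
mean = statistics.fmean(baseline_window)
stdev = statistics.stdev(baseline_window)

def is_anomalous(requests_per_min: float, threshold: float = 3.0) -> bool:
    """Flag values more than `threshold` standard deviations from baseline."""
    z = abs(requests_per_min - mean) / stdev
    return z > threshold

for sample in (104, 260):
    print(sample, "anomalous" if is_anomalous(sample) else "normal")
```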

Fingerprint launches features to fight fraud in agentic AI era

Finextra · 15-07-2025

Fingerprint, a leader in device intelligence for fraud prevention, account security, and returning user experience optimization, today announced new Smart Signals and platform enhancements that detect malicious bots and AI agents, distinguishing them from legitimate automated traffic.

As agentic commerce experiences explosive growth and autonomous AI agents become increasingly sophisticated, enterprises need advanced tools to protect against evolving fraud schemes without delaying innovation or turning away legitimate transactions. Bots currently comprise over half of all internet traffic, with 30% classified as malicious, and Gartner predicts fully autonomous AI agents by 2036. This creates a critical challenge for businesses: differentiating between beneficial automation and malicious activity. Fingerprint's new Smart Signals address this challenge by providing real-time risk indicators based on a device's behavior, environment and configuration, enabling enterprises to make better-informed decisions.

"Bots and AI agents represent both the biggest current risk and the fastest-evolving threat landscape we've seen," said Dan Pinto, CEO and co-founder of Fingerprint. "As agentic AI technology advances rapidly and becomes more cost-effective, malicious actors are increasingly leveraging these tools for sophisticated fraud. Fingerprint's new Smart Signals ensure enterprises can harness the benefits of AI while maintaining robust defenses against malicious bots and agents that threaten essential business operations."

Fingerprint Smart Signals and Features for Enterprises:

• Bot/AI Agent Detection: The Bot Detection Smart Signal can detect dozens of bot detection and browser automation software tools. It performs intelligent classification on each API request to determine whether a bot or agent is legitimate or malicious, with only verified beneficial bots and agents classified as trustworthy.

• Virtual Machine Detection: The Virtual Machine Detection Smart Signal further enhances AI agent and bot detection by identifying virtual machines, which are commonly used in automated fraud schemes. This capability provides an additional layer of protection against sophisticated attack vectors.

• Residential Proxy Detection: This signal addresses one of the most challenging aspects of modern fraud detection. Residential proxies are increasingly accessible and affordable, making them attractive tools for fraudsters looking to mask their IP addresses. Because agentic traffic can be routed through ISPs to real residential IP addresses—giving malicious agents high authenticity—the ability to detect residential proxies with confidence levels is crucial for identifying all types of agentic-driven fraud.

• Request Filtering: Fingerprint has gathered a list of known user agents used by AI companies for web scraping and model training, as well as AI assistants that help with scheduling and other repetitive tasks. The Request Filtering functionality allows customers to filter these legitimate AI agents and bots out of fingerprinting, helping optimize billing costs without compromising detection capabilities for AI-driven fraud.
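As a rough illustration of how a per-request bot classification like the one described above might be consumed server-side, here is a hedged sketch. The endpoint path, header name and response fields are assumptions modeled on the article's description, not a verified API contract; consult Fingerprint's Server API documentation for the actual interface.

```python
# Hedged sketch of consuming a per-request bot verdict server-side.
# Endpoint, header, and field names are ASSUMPTIONS based on the article,
# not a verified Fingerprint API contract.
import requests

API_BASE = "https://api.fpjs.io"    # assumed base URL
SECRET_KEY = "YOUR_SECRET_API_KEY"  # placeholder credential

def classify_request(request_id: str) -> str:
    """Return an assumed verdict: 'good', 'bad', or 'notDetected'."""
    resp = requests.get(
        f"{API_BASE}/events/{request_id}",
        headers={"Auth-API-Key": SECRET_KEY},  # assumed auth header
        timeout=5,
    )
    resp.raise_for_status()
    event = resp.json()
    # Assumed response layout: verdict nested under the bot-detection product.
    return (
        event.get("products", {})
        .get("botd", {})
        .get("data", {})
        .get("bot", {})
        .get("result", "notDetected")
    )

# Example policy: block malicious bots, allow humans and verified automation.
verdict = classify_request("example-request-id")
print("block: malicious bot or agent" if verdict == "bad" else f"allow: {verdict}")
```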
The Rise of Intelligent Automation Demands Smarter Security for Enterprises

As legitimate AI agents become more common in business operations—from AI assistants that handle scheduling and repetitive tasks to agentic commerce, where agents research and purchase products on users' behalf—the ability to differentiate between beneficial and malicious automation has become a critical industry requirement. With these new enhancements from Fingerprint, organizations gain comprehensive visibility into visitor intent, enabling proactive defense against evolving attack patterns. The new Smart Signals and features are available immediately and are designed to integrate seamlessly with existing Fingerprint implementations.
