
Latest news with #Sift

Your employees may be leaking trade secrets into ChatGPT

Fast Company

24-07-2025

  • Business
  • Fast Company

Your employees may be leaking trade secrets into ChatGPT

Every CEO I know wants their team to use AI more, and for good reason: it can supercharge almost every area of their business and make employees vastly more efficient. Employee use of AI is a business imperative, but as it becomes more common, how can companies avoid major security headaches? Sift's latest data found that 31% of consumers admit to entering personal or sensitive information into GenAI tools like ChatGPT, and 14% of those individuals explicitly reported entering company trade secrets. Other types of information that people admit to sharing with AI chatbots include financial details, nonpublic facts, email addresses, phone numbers, and information about employers. At its core, this reveals that people are increasingly willing to trust AI with sensitive information.

This overconfidence with AI isn't limited to data sharing. The same comfort level that leads people to input sensitive work information also makes them vulnerable to deepfakes and AI-generated scams in their personal lives. Sift data found that concern that AI would be used to scam someone has decreased 18% in the last year, and yet the number of people who admit to being successfully scammed has increased 62% since 2024. Whether it's sharing trade secrets at work or falling for scam texts at home, the pattern is the same: familiarity with AI is creating dangerous blind spots.

The Confidence Trap

Often in a workplace setting, employees turn to AI to address a specific problem: looking for examples to round out a sales proposal, pasting an internal email to 'punch it up,' sharing nonfinal marketing copy for tone suggestions, or disclosing product road map details to a customer service bot to help answer a complex ticket. This behavior often stems from good intentions, whether that's trying to be more efficient, helpful, or responsive. But as the data shows, digital familiarity can create a false sense of security. The people who think they 'get AI' are the ones most likely to leak sensitive data through it, and the ones most likely to struggle to identify malicious content.

Every time an employee drops nonpublic context into a GenAI tool, they are—knowingly or not—transmitting business-sensitive data into a system that may log, store, or even use it to train future outputs. Not to mention, if a data leak were ever to occur, a hacker would be privy to a treasure trove of confidential information.

So what should businesses do? The challenge with this kind of data exposure is that traditional monitoring won't catch it. Because these tools are often used outside of a company's intranet—its internal software network—employees are able to input almost any data they can access. The uncomfortable truth is that you probably can't know exactly what sensitive information your employees are sharing with AI platforms. Unlike a phishing attack, where you can trace the breach, AI data sharing often happens in the shadows of personal accounts.

But that doesn't mean you should ban AI usage outright. Try to infer the scale of the problem with anonymous employee surveys. Ask: What AI tools are you using? For which tasks do you find AI most helpful? And what do you wish AI could do? While an employee may not disclose sharing sensitive information with a chatbot, understanding more generally how your team is using AI can identify potential areas of concern—and potential opportunities.

Instead of trying to track every instance retroactively, focus on prevention. A blanket AI ban isn't realistic and puts your organization at a competitive disadvantage. Instead, establish clear guidelines that distinguish between acceptable and prohibited data types. Set a clear red line on what can't be entered into public GenAI tools: customer data, financial information, legal language, and internal documents. Make it practical, not paranoid.

To encourage responsible AI use, provide approved alternatives. Create company-sanctioned AI workflows for everyday use cases, built on tools that don't retain data or use inputs for AI training. Make sure your IT teams vet all AI tools for proper data governance. This is especially important because different account tiers of AI tools have different data retention policies. The vetting process also helps employees understand the potential dangers of sharing sensitive data with AI chatbots.

Encourage employee training that addresses both professional and personal AI risks. Provide real-world examples of how innocent AI interactions inadvertently expose trade secrets, but also educate employees about AI-powered scams they might encounter outside of work. The same overconfidence that leads to workplace data leaks can make employees targets for sophisticated fraud schemes, potentially compromising both personal and professional security.

If you discover that sensitive information has been shared with AI platforms, act quickly, but don't panic. Document what was shared, when, and through which platform. Conduct a risk assessment that asks: How sensitive was the information? Could it compromise competitive positioning or regulatory compliance? You may need to notify affected parties, depending on the nature of the data. Then, use these incidents as learning opportunities. Review how the incident occurred and identify the necessary safeguards.

While the world of AI chatbots has changed since 2023, there is a lot we can learn from a situation Samsung experienced a few years ago, when employees in its semiconductor division shared source code, meeting notes, and test sequences with ChatGPT. This exposed proprietary software to OpenAI and leaked sensitive hardware testing methods. Samsung's response was swift: it restricted ChatGPT uploads to minimize the potential for sharing sensitive information, launched internal investigations, and began developing a company-specific AI chatbot to prevent future leaks.

While most companies lack the resources to build chatbots themselves, they can achieve similar protection by using an enterprise-grade account that specifically opts out of AI training. AI can bring massive productivity gains, but that doesn't make its usage risk-free. Organizations that anticipate and address this challenge will leverage AI's benefits while maintaining the security of their most valuable information. The key is recognizing that AI overconfidence poses risks both inside and outside the office, and preparing accordingly.
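The "red line" on prohibited data types described above can be enforced mechanically before a prompt ever leaves the company. As a minimal sketch, assuming a simple pattern-matching approach (the category names and regexes below are illustrative placeholders, not Sift's or any vendor's actual rules; a real deployment would use a vetted DLP tool), a pre-submission filter might look like:

```python
import re

# Hypothetical "red line" categories; each maps to a rough detection pattern.
# These regexes are deliberately simple and illustrative, not production-grade.
BLOCKED_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone number": re.compile(
        r"\b(?:\+?\d{1,3}[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"
    ),
    "card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "internal marker": re.compile(r"\b(?:CONFIDENTIAL|INTERNAL ONLY)\b", re.IGNORECASE),
}

def check_prompt(text: str) -> list[str]:
    """Return the names of blocked data types detected in a prompt."""
    return [name for name, pat in BLOCKED_PATTERNS.items() if pat.search(text)]

if __name__ == "__main__":
    prompt = "Punch up this email to jane.doe@example.com about our CONFIDENTIAL roadmap."
    violations = check_prompt(prompt)
    if violations:
        print("Blocked before submission:", ", ".join(violations))
```

A gateway like this would sit between employees and the public GenAI tool, rejecting or redacting prompts that trip a pattern; the harder categories the article mentions (legal language, road map details) need more than regexes, which is why vetting dedicated tooling matters.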

AI-Generated Scams Claim 62% More Victims Year-Over-Year Despite Declining Consumer Concern, New Sift Report Reveals

Yahoo

25-06-2025

  • Business
  • Yahoo

AI-Generated Scams Claim 62% More Victims Year-Over-Year Despite Declining Consumer Concern, New Sift Report Reveals

Digital Trust Index Exposes Dangerous Confidence Gap as 70% of Consumers Report That Scams Are Harder to Detect

SAN FRANCISCO, June 25, 2025 (GLOBE NEWSWIRE) -- Sift, the AI-powered fraud platform delivering identity trust for leading global businesses, today released its Q2 2025 Digital Trust Index, revealing a troubling disconnect between consumer confidence and actual vulnerability to AI-generated fraud. The report exposes a dangerous "confidence paradox" where scam sophistication is outpacing consumer awareness, creating unprecedented risks for businesses and their customers.

The Confidence Paradox: When Familiarity Breeds Vulnerability

Despite growing familiarity with GenAI, the data reveals a concerning trend: 27% of those targeted by GenAI scams have been successfully scammed, a 62% increase from 2024. This surge occurs even as consumer concern about AI fraud has dropped significantly—from 79% in 2024 to just 61% today, an 18-point decrease that signals dangerous complacency. Scam sophistication is outpacing consumer defenses. According to the Sift-commissioned survey, 70% of consumers say scams have become harder to detect in the past year. Yet paradoxically, overall fear of AI-powered fraud is declining, creating a perfect storm for cybercriminals.

Generational Divide: Digital Natives Most at Risk

The report reveals a striking generational paradox. Gen Z and Millennials—the demographics most comfortable with AI technology—report the highest confidence in identifying scams (52% and 44%, respectively) yet are successfully victimized at alarming rates (30% and 23%). In contrast, Gen X and Baby Boomers express lower confidence (30% and 13%) but demonstrate more cautious online behavior, resulting in lower scam success rates (19% and 12%).

Enterprise Risk: Consumer Data Practices Expose Businesses

Beyond individual fraud, the report uncovers significant enterprise security risks. Despite widespread privacy concerns, 31% of consumers admit to entering personal or sensitive information into GenAI tools. Among this group, the most commonly shared data includes email addresses (55%), phone numbers (49%), home addresses (44%), and financial information (33%). Most alarmingly, 14% admitted to sharing company trade secrets, creating dual exposure for both individuals and their employers.

Behavioral Patterns Reveal Cybercriminal Operations

Analysis of Sift's Global Data Network, which processes over 1 trillion events annually, reveals distinct behavioral signatures that differentiate fraudsters from legitimate users. Key findings include:

  • Fraudsters use 36% more payment methods than legitimate users
  • Criminal networks employ 20% fewer IP addresses, suggesting coordinated operations
  • Peak fraud activity occurs during late-night hours (10 p.m. to 5 a.m. local time), when many fraud teams are offline

The Business Imperative

"AI-generated scams and deepfakes are proliferating with speed and concerning sophistication, leaving even the most informed consumers at risk," said Kevin Lee, SVP of Customer Experience, Trust & Safety at Sift. "Businesses must fight fire with fire—using AI to secure identity trust at every customer touchpoint, which ultimately creates better consumer experiences, mitigates fraud, and fosters profitable growth."

The full findings from Sift's Q2 2025 Digital Trust Index are available here.

About Sift

Sift is the AI-powered fraud platform delivering identity trust for leading global businesses. Our deep investments in machine learning and user identity, a data network scoring 1 trillion events per year, and a commitment to long-term customer success empower more than 700 customers to grow fearlessly. Brands including DoorDash, Yelp, and Poshmark rely on Sift to unlock growth and deliver seamless consumer experiences. Visit us at and follow us on LinkedIn.
Media Contact: Victor White, VP, Corporate Marketing, Sift

Sift Once Again Secures #1 Spot in all Fraud Prevention Categories in G2's 2025 Summer Reports

Yahoo

24-06-2025

  • Business
  • Yahoo

Sift Once Again Secures #1 Spot in all Fraud Prevention Categories in G2's 2025 Summer Reports

Sift earns top spot in Fraud Detection, Risk-Based Authentication, and E-commerce Fraud Protection based on 500 customer reviews

SAN FRANCISCO, June 24, 2025 (GLOBE NEWSWIRE) -- Sift, the AI-powered fraud platform securing identity trust for leading global businesses, today announced its ranking in G2's 2025 Summer Reports, once again earning the #1 ranking across all fraud-related categories. This marks the second consecutive year that Sift has achieved the top ranking in Fraud Detection, E-Commerce Fraud Protection, and Risk-Based Authentication (RBA). G2 is the world's largest and most trusted software marketplace, and Sift's recognition is based on the reviews of 500 real Sift users, a 42% increase since the 2024 Summer Reports and 52% more reviews than the closest category competitor.

"Earning the #1 ranking across all fraud categories on G2 sends a clear message—Sift leads in fast, accurate fraud decisioning and seamless, secure user experiences," said Armen Najarian, CMO of Sift. "It's powerful validation that we're helping businesses grow fearlessly, without the friction and complexity of legacy fraud solutions."

"According to G2's 2024 Buyer Behavior Report, 69% of software buyers globally say they only engage a salesperson once they have arrived at their purchasing decision," said Sydney Sloan, CMO of G2. "As software buyers increasingly turn to trusted customer reviews to inform their purchasing decisions, they know they can rely on G2, the world's largest software marketplace. G2's quarterly Market Reports are rooted in the authentic voice of customers. Simply put, their feedback guides our rankings—including Sift's position in the G2 2025 Reports."

G2's quarterly reports rank the best products across thousands of reports by category, company size, geography, and report type. These reports serve as tailored guides for software buyers researching solutions that meet their specific business needs, informed by real user experience.

Highlights of recent Sift user reviews on G2 include:

  • "We've been using Sift for over 8 years now, and it's been nothing short of excellent. The platform is reliable, easy to work with, and constantly improves to stay ahead of new fraud trends. It's a tool we trust and highly recommend to any business serious about fraud protection."
  • "What I like best about Sift is its powerful machine learning capabilities that allow for real-time analysis of transaction patterns. The platform's ability to identify subtle fraud trends and provide actionable insights has been incredibly valuable in helping us prevent fraud and mitigate compliance risks efficiently. The user-friendly interface and customizable features also make it easier to tailor solutions to specific business needs, ensuring that I can stay ahead of potential threats."
  • "Sift is one of the best tools for fraud detection; it is straightforward, the data analytics keep getting better and better, and the information is very complete, which helps us make decisions with more assertiveness."
  • "What I like about Sift is how fast and automatic everything is. It helps us catch fraud in real time without messing up the experience for our customers. The interface is super easy to use, and it's clear why certain decisions are made, which helps our team act quickly. I also love how flexible the rules are—we can adjust them to fit exactly what our business needs."

Learn more about what real users have to say about Sift on G2's review page.

About Sift

Sift is the AI-powered fraud platform securing digital trust for leading global businesses. Our deep investments in machine learning and user identity, a data network scoring 1 trillion events per year, and a commitment to long-term customer success empower more than 700 customers to grow fearlessly. Brands including DoorDash, Yelp, and Poshmark rely on Sift to unlock growth and deliver seamless consumer experiences. Visit us at and follow us on LinkedIn.
About G2

G2 is the world's largest and most trusted software marketplace. More than 100 million people annually — including employees at all Fortune 500 companies — use G2 to make smarter software decisions based on authentic peer reviews. Thousands of software and services companies of all sizes partner with G2 to build their reputation and grow their business — including Salesforce, HubSpot, Zoom, and Adobe. To learn more about where you go for software, visit and follow us on LinkedIn.

Sorry, The Emoji-Over-Face Parents Might Be Right About Online Privacy

Yahoo

16-06-2025

  • Entertainment
  • Yahoo

Sorry, The Emoji-Over-Face Parents Might Be Right About Online Privacy

When you scroll through social media, it's not uncommon to see proud parents sharing snapshots of their children's lives, from first steps to messy birthday parties to back-to-school smiles. But lately, a different kind of post is popping up: one where the child's face is replaced with a well-placed emoji. These 'emoji-over-face' parents are often met with confusion or criticism, accused of being overly cautious or even paranoid. But in a digital age where every post can become permanent, searchable and exploitable, perhaps they're not just being careful — they're being smart.

That's the case for Lauren Flowerday, a branding strategist for authors and a first-time mom. She said she always knew that if she became a mother, she didn't want to share her child's face. 'It has been more of a gradual confirmation of a gut decision,' said Flowerday. 'And then I had a client in 2022, an influencer with over 200K followers, who took her child's pictures off her Instagram after a scary situation where her child was approached by a stranger, who was also a follower of their mom's Instagram.'

She adds that there are some things she can't control — like daycare login access and pediatrician messaging portals — but she can control showing his face and what personal information she shares. So, when it comes to posting pictures of her little one online, she shares photos of his tiny hands or feet, or a perfectly angled shot where just the back of his head is visible, or she will put a heart emoji over his face. 'But more recently, I've been thinking about not sharing him through pictures at all and even not sharing his name,' added Flowerday. 'I just want to make sure his identity is protected as much as possible.'

As with anything, there are risks worth considering when it comes to sharing your child online, explains Meera Khan, PsyD, clinical director and licensed clinical psychologist. 'In this case, it may violate a child's privacy, expose them unintentionally to malicious individuals, and introduce the presence of others into their social, cognitive, and emotional development,' she said. While social media outlets like Facebook and Instagram are ubiquitous and have become an integral part of our lives, it's natural to want to share images of our children on these platforms. However, this also exposes the child to public scrutiny much sooner than they would otherwise experience or be equipped to navigate developmentally.

Protecting children's faces may also help prevent online predators from identifying children. 'Parents should consider when sharing their children's information online that someone could access the child's personally identifiable information,' said Brittany Allen, senior trust & safety architect at Sift. 'Details such as your child's full name, birthdate, address, school, and even photos can be pieced together by cybercriminals to create a complete profile.' She adds that fraudsters can then use this information to open credit accounts, take out loans, or make purchases in your child's name, often going undetected for years because children typically do not have active credit reports. This risk has grown in recent years as parents increasingly share information online, making it easier for fraudsters to piece together different bits of information. Additionally, once information is online, it becomes difficult to control where it goes or who has access to it.

As is usually the case, there are two sides to every story. 'And, yes, there are some upsides to sharing your child's pictures online,' said Joseph Laino, Psy.D., psychologist and assistant director of clinical operations at the Sunset Terrace Family Health Center at NYU Langone. 'Social media has made it possible for family and friends to stay connected over time and distance in ways that had never been possible before.'
Sharing images of our children can be a way for us to document their milestones and share them with family and friends who don't live nearby and can't participate in their lives in person. The grandparent who lives across the country, the aunt who is overseas, the best friend who lives in another state — all these people stay connected with us so easily on social media and can remain an active part of our children's lives as a result.

Social media is relatively new, and psychologists agree that long-term research on how it affects a child's online identity is limited. However, they say they've seen it work both ways. Khan said she's noticed that children who had less protection online, or an earlier exposure to the online world, have a higher rate of anxiety symptoms and worry about scrutiny. These kids' parents often overshare or make their child feel like they need to perform for the camera, or even grow up conscious of cameras around them. The first generation of these young adults is finding themselves grappling with digital footprints that affect job searches, college admissions, and personal relationships. They've had to request content takedowns, confront parents about posts, or even face cyberbullying stemming from old photos or anecdotes.

By contrast, parents who choose not to share their children online often share pictures and updates with loved ones via text messages, and so have to work a little harder to stay connected. That extra effort can create a more authentic relationship, and one the child feels more connected to.

While there is no way to guarantee online safety, parents can take steps to increase it. Allen suggests the following:

  • Avoid posting your child's full name, birthdate, address, school name, or other identifying details.
  • Do not share images that reveal location, school uniforms, or personal items that could be used to identify or track your child.
  • Set your social media accounts to private and regularly review your friends or followers list. Only share photos with trusted individuals.
  • Turn off location services when posting photos to prevent your location from being automatically shared.
  • Consider freezing your child's credit to prevent unauthorized access to their accounts.
  • Teach your kids the basics around online safety, especially once they get to the age where they have their own device. Help them know how to look out for scams, and to never provide personally identifiable information to strangers.

'Remember our children rely on us to protect them, and they are too young to consent to sharing their images on social media,' said Laino. 'How will our children feel when they grow up, knowing these images are or were out there? And once the images are on the internet, it can be difficult to control what happens with them. If we, as parents, choose to share, we should do so responsibly.'

Sift Kaur Samra wins women's 3P bronze, her 2nd in two years in Munich

India Gazette

13-06-2025

  • Sport
  • India Gazette

Sift Kaur Samra wins women's 3P bronze, her 2nd in two years in Munich

Munich [Germany], June 13 (ANI): World record holder Sift Kaur Samra won bronze in the women's 50m rifle 3 positions (3P) on competition day three of the International Shooting Sport Federation (ISSF) World Cup (Rifle/Pistol) in Munich, giving India their second medal of the competition. The former world number one shot 453.1 in the final at the Olympic Shooting range to finish behind Switzerland's Emely Jaeggi, who won silver with 464.8. Norwegian ace Jeanette Hegg Duestad won gold with 466.9. India now has two bronzes from the competition.

Sift's second 3P bronze in Munich in as many years came on the back of a gold in the year's first Buenos Aires World Cup in April. In Munich on Thursday, she shot a characteristically strong and consistent qualification round with scores of 197, 199 and 196 in Kneeling, Prone and Standing respectively, to finish second in a top field. Agathe Girard of France pipped her to the top spot on the same score of 592, but with more shots in the inner 10 ring. While Olympic champion Chiara Leone of Switzerland missed out on the final, so did India's Ashi Chouksey, whose brilliant 589 gave her a ninth-place finish.

Known to be a strong standing shooter in 3P, Sift ended the second Prone series of 15 shots in fourth position, behind Paris silver medalist Sagen Maddalena. Duestad and Emely were in a battle of their own for gold and silver quite early in the 45-shot final. Sagen then faltered in the first series of Standing shots, and Sift gleefully accepted the window of opportunity to climb up to third, a position she clinically maintained with a series of mid to high 10s right through the penultimate 44th shot. On Friday, day four, the women's 10m Air Pistol final is on schedule, as is the men's 25m rapid-fire pistol.
Other Indian scores of the day:

  • Women's 3P: Shriyanka Sadangi 582 (43rd)
  • Men's 10m Air Rifle: Kiran Ankush Jadhav 631.7 (10th), 629.1 (39th), Sandeep Singh 628.3 (45th)
  • Men's 25m Rapid Fire Pistol (Day 1, ongoing): Anish 295, Vijayveer Sidhu 284. (ANI)
