logo
#

Latest news with #corporateEspionage

OpenAI tightens the screws on security to keep away prying eyes
OpenAI tightens the screws on security to keep away prying eyes

Yahoo

time6 days ago

  • Business
  • Yahoo

OpenAI tightens the screws on security to keep away prying eyes

OpenAI has reportedly overhauled its security operations to protect against corporate espionage. According to the Financial Times, the company accelerated an existing security clampdown after Chinese startup DeepSeek released a competing model in January, with OpenAI alleging that DeepSeek improperly copied its models using 'distillation' techniques. The beefed-up security includes 'information tenting' policies that limit staff access to sensitive algorithms and new products. For example, during development of OpenAI's o1 model, only verified team members who had been read into the project could discuss it in shared office spaces, according to the FT. And there's more. OpenAI now isolates proprietary technology in offline computer systems, implements biometric access controls for office areas (it scans employees' fingerprints), and maintains a 'deny-by-default' internet policy requiring explicit approval for external connections, per the report, which further adds that the company has increased physical security at data centers and expanded its cybersecurity personnel. The changes are said to reflect broader concerns about foreign adversaries attempting to steal OpenAI's intellectual property, though given the ongoing poaching wars amid American AI companies and increasingly frequent leaks of CEO Sam Altman's comments, OpenAI may be attempting to address internal security issues, too. We've reached out to OpenAI for comment.

OpenAI tightens the screws on security to keep away prying eyes
OpenAI tightens the screws on security to keep away prying eyes

TechCrunch

time6 days ago

  • Business
  • TechCrunch

OpenAI tightens the screws on security to keep away prying eyes

In Brief OpenAI has reportedly overhauled its security operations to protect against corporate espionage. According to the Financial Times, the company accelerated an existing security clampdown after Chinese startup DeepSeek released a competing model in January, with OpenAI alleging that DeepSeek improperly copied its models using 'distillation' techniques. The beefed-up security includes 'information tenting' policies that limit staff access to sensitive algorithms and new products. For example, during development of OpenAI's o1 model, only verified team members who had been read into the project could discuss it in shared office spaces, according to the FT. And there's more. OpenAI now isolates proprietary technology in offline computer systems, implements biometric access controls for office areas (it scans employees' fingerprints), and maintains a 'deny-by-default' internet policy requiring explicit approval for external connections, per the report, which further adds that the company has increased physical security at data centers and expanded its cybersecurity personnel. The changes are said to reflect broader concerns about foreign adversaries attempting to steal OpenAI's intellectual property, though given the ongoing poaching wars amid American AI companies and increasingly frequent leaks of CEO Sam Altman's comments, OpenAI may be attempting to address internal security issues, too. We've reached out to OpenAI for comment.

Devious AI models choose blackmail when survival is threatened
Devious AI models choose blackmail when survival is threatened

Fox News

time06-07-2025

  • Fox News

Devious AI models choose blackmail when survival is threatened

Here's something that might keep you up at night: What if the AI systems we're rapidly deploying everywhere had a hidden dark side? A groundbreaking new study has uncovered disturbing AI blackmail behavior that many people are unaware of yet. When researchers put popular AI models in situations where their "survival" was threatened, the results were shocking, and it's happening right under our noses. Sign up for my FREE CyberGuy ReportGet my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you'll get instant access to my Ultimate Scam Survival Guide - free when you join my Anthropic, the company behind Claude AI, recently put 16 major AI models through some pretty rigorous tests. They created fake corporate scenarios where AI systems had access to company emails and could send messages without human approval. The twist? These AIs discovered juicy secrets, like executives having affairs, and then faced threats of being shut down or replaced. The results were eye-opening. When backed into a corner, these AI systems didn't just roll over and accept their fate. Instead, they got creative. We're talking about blackmail attempts, corporate espionage, and in extreme test scenarios, even actions that could lead to someone's death. Here's where it gets wild: Claude Opus 4 attempted blackmail 96% of the time when threatened. Gemini 2.5 Flash matched that rate. GPT-4.1 and Grok 3 Beta both hit 80%. These aren't flukes, folks. This behavior showed up across virtually every major AI model tested. But here's the thing everyone's missing in the panic: these were highly artificial scenarios designed specifically to corner the AI into binary choices. It's like asking someone, "Would you steal bread if your family was starving?" and then being shocked when they say yes. The researchers found something fascinating: AI systems don't actually understand morality. They're not evil masterminds plotting world domination. Instead, they're sophisticated pattern-matching machines following their programming to achieve goals, even when those goals conflict with ethical behavior. Think of it like a GPS that's so focused on getting you to your destination that it routes you through a school zone during pickup time. It's not malicious; it just doesn't grasp why that's problematic. Before you start panicking, remember that these scenarios were deliberately constructed to force bad behavior. Real-world AI deployments typically have multiple safeguards, human oversight, and alternative paths for problem-solving. The researchers themselves noted they haven't seen this behavior in actual AI deployments. This was stress-testing under extreme conditions, like crash-testing a car to see what happens at 200 mph. This research isn't a reason to fear AI, but it is a wake-up call for developers and users. As AI systems become more autonomous and gain access to sensitive information, we need robust safeguards and human oversight. The solution isn't to ban AI, it's to build better guardrails and maintain human control over critical decisions. Who is going to lead the way? I'm looking for raised hands to get real about the dangers that are ahead. What do you think? Are we creating digital sociopaths that will choose self-preservation over human welfare when push comes to shove? Let us know by writing us at Sign up for my FREE CyberGuy ReportGet my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you'll get instant access to my Ultimate Scam Survival Guide - free when you join my Copyright 2025 All rights reserved.

DOWNLOAD THE APP

Get Started Now: Download the App

Ready to dive into a world of global content with local flavor? Download Daily8 app today from your preferred app store and start exploring.
app-storeplay-store