Latest news with #dataSecurity


Forbes
17 hours ago
- Business
- Forbes
This $101 Million Startup's AI Exposes Fraudsters Disguised As Employees
North Korean spies have found ways into U.S. companies, tricking employers into believing they're legitimate workers while they try to siphon off businesses' data. Everything seemed fine when one of Eran Barak's customers, a financial services business, hired a remote worker as its Salesforce admin. In the interview, the man seemed capable of doing what should've been routine work, and he passed all the background checks. Two weeks into the job, though, Barak says his AI software, dubbed MIND, spotted the employee sending highly sensitive company data up to his personal cloud account. The person who showed up for the job turned out not to be the person who'd been interviewed, Barak says. "It was a completely different individual who had 'bought' the job from a professional fraudster."

With so many new risks to businesses' data, from accidentally employing a fraudster, to losing data to North Korean spies with a similar modus operandi, to employees giving away company secrets to AI chatbots like ChatGPT, companies need new security tech to help them monitor and flag these risks. Barak, who cofounded Seattle-based cyber startup MIND in 2022, said his company's AI can prevent costly data leaks with an "autopilot" that autonomously identifies sensitive data and helps IT teams secure it, claiming it can reduce false alerts about data leaks "to nearly zero."

On Wednesday, MIND announced a $30 million Series A round led by Paladin Capital Group and Crosspoint Capital Partners at a $101 million valuation. That follows a $10 million round in September last year, when it came out of stealth.

Barak says MIND's AI is far better than a human at determining the risk of a given piece of data, whether that's a batch of credit card numbers or meeting minutes, and then telling IT what layers of security are protecting it, if any. That means the AI won't leave as many "blind spots" where sensitive data is left exposed, according to MIND's pitch. To put these insights into action, an AI agent sits on each employee's device and detects and blocks data leakages happening either in the browser or via another app. "We want to push what we call a small language model to the device itself, so we can really classify on the fly much more complex, sensitive data on the device," Barak told Forbes.

The MIND cofounders have a history in successful AI startups. Barak cofounded Hexadite, another security automation tool, which was sold to Microsoft for a reported $100 million in 2017. Hod Bin Noon, MIND's VP of research and development, was a director at Dazz, which was acquired by cloud security giant Wiz for $450 million last year, with Wiz itself set to be acquired by Google for $32 billion. MIND CTO Itai Schwartz was a senior engineer at Torq, which makes AI agents focused on cybersecurity. Barak "built one of the flagship security automation players," said Paladin investor Gibb Witham, who led the round. He'd been on the hunt for an AI-native company to solve the problems modern AI was creating, Witham said, and the MIND team's deep cybersecurity automation experience was a major draw.

Among MIND's chief competitors is another startup, CyberHaven, which hit a $1 billion valuation earlier this year on a $100 million raise. Both rely on AI to do much of the work of the IT security engineer while looking for leaks across an organization's networks. Not that Barak thinks AI is yet good enough to be relied on to automate all security tasks.
'It won't completely replace us,' he adds. 'But I can tell you that the depth and precision of the classification that we are doing thanks to the AI, it's unreal.'
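To make the on-device classification idea concrete, here is a deliberately tiny stand-in, not MIND's actual system (whose models and detection logic are not public): a Python check that flags credit-card-like numbers in outbound text using the Luhn checksum. The function names and thresholds are invented for illustration, and only the standard library is assumed.

```python
import re

def luhn_valid(digits: str) -> bool:
    """Return True if a digit string passes the Luhn checksum used by card numbers."""
    nums = [int(d) for d in digits]
    # Double every second digit from the right; subtract 9 when the result exceeds 9.
    for i in range(len(nums) - 2, -1, -2):
        nums[i] = nums[i] * 2
        if nums[i] > 9:
            nums[i] -= 9
    return sum(nums) % 10 == 0

# Card-like runs of 13-16 digits, allowing spaces or dashes between groups.
CARD_LIKE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def flag_outbound(text: str) -> list[str]:
    """Return card-like numbers found in outbound text that pass the Luhn check."""
    hits = []
    for match in CARD_LIKE.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 16 and luhn_valid(digits):
            hits.append(digits)
    return hits

if __name__ == "__main__":
    upload = "Syncing to personal drive: 4242-4242-4242-4242 plus Q3 meeting minutes."
    print(flag_outbound(upload))  # ['4242424242424242'], a well-known test card number
```

A real deployment has to classify far messier material (meeting minutes, contracts, source code), which is why Barak wants a small language model on the device rather than pattern rules; the gating idea, classify the data before it leaves the machine and then allow or block the transfer, is the same.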


Japan Times
3 days ago
- Business
- Japan Times
Southeast Asia joins Europe's calls to defend subsea cables
Defense officials from Europe and Southeast Asia called for stronger collaboration to protect the global network of subsea cables as concerns grow over threats to this vital infrastructure. Countries like Singapore and Malaysia are becoming key hubs for the roughly 600 fiber-optic cables that carry nearly all of the world's data, while a spike in cable damage in European waters, particularly in the Baltic Sea, has heightened fears about the network's vulnerability.

"We need to work together to defend the entire network," Singapore Defense Minister Chan Chun Sing said at the IISS Shangri-La Dialogue security forum on Sunday. "There's no point trying to defend the integrity and security of a submarine cable by looking at a point. We need both ends to be secure."

The issue came up repeatedly during the two-day summit in Singapore, with European officials keen to share what they've learned from the recent rise in subsea cable damage. Today the global undersea cable network stretches about 1.4 million kilometers, and with demand for data center power projected to surge as computing-intensive artificial intelligence platforms take hold, more cables are essential.

Protecting them has become a key imperative in Europe since Russia's full-scale invasion of Ukraine. Incidents in European waters have mostly involved tankers traveling to and from Russia, and in at least one case a Chinese vessel. Despite suspicions of sabotage, there has been no strong evidence pointing to deliberate attacks; the incidents are more likely accidents or careless behavior. Still, NATO countries have increased patrols in key areas to ensure the safety of undersea infrastructure. The European Union's top diplomat, Kaja Kallas, urged European and Asian partners to work together at sea to tackle covert "shadow fleets" of tankers and to review maritime security laws.

Protecting digital cable infrastructure is important for the Philippines too, Defense Secretary Gilberto Teodoro said in an interview, adding that his country, which is locked in a long-running dispute with China in the South China Sea, is working to build up its capabilities to address the threat. Thailand also called the issue a pressing concern, with Defense Minister Phumtham Wechayachai saying the country is collaborating with Singapore and Brunei. Undersea infrastructure, he added, faces risks not only from sabotage but also from accidents and natural disasters.


CTV News
6 days ago
- Business
- CTV News
Nova Scotia Power CEO one of 280K customers impacted by data breach
As many as 140,000 of the roughly 280,000 Nova Scotia Power customers affected by a recent data breach may have had their social insurance numbers stolen.


The Guardian
28-05-2025
- General
- The Guardian
What to do if your laptop is lost or stolen – tips for when the worst happens
Laptops are the workhorses of the world and can contain thousands of important documents, photos and treasured memories. Losing one can mean more than just downtime from work. Here's what to do in the event the worst happens:

- Try to locate it using Microsoft or Apple's Find My device service, lock it remotely and mark it as lost to help secure your data.
- Remove it as a trusted device for the services you use, including your Apple or Microsoft account, cloud accounts, shops and other browser-based apps.
- Change your passwords for key accounts, including your Apple or Microsoft account. Start with your email, to prevent thieves using it to gain access to your other accounts through password resets.
- Report the theft to the police (call 101 in the UK) and give them a description of the laptop: make, model, colour and serial number, which may be on the box or in your Apple or Microsoft account.
- Contact your insurance provider if you have cover; your laptop may come under some home contents policies.
- Tell your bank about any credit cards whose details may be stored on the laptop.
- Contact the laptop's manufacturer to report the theft so it can blacklist the serial number, which may prevent the machine being activated and used by someone else.

When you get a replacement laptop, either new or secondhand, there are a number of things you can do to help in the event it goes missing:

- Set a strong password or long PIN that cannot be easily guessed, set up and use any face or fingerprint scanners, and make sure the screen is set to lock after a short period, usually when the screen turns off.
- Turn on device encryption in your privacy and security settings if it is not already enabled.
- Set up two-step verification for your Microsoft or Apple account, as well as any others you use.
- Turn on Find My in Windows 10 or 11's security settings, or in iCloud settings on a Mac. This will help you locate a lost laptop, remotely lock it and mark it as lost with a message on its screen for anyone who finds it.
- Back up your data regularly. On a Windows laptop you can use Windows Backup to store files and settings on Microsoft's OneDrive cloud service (5GB free). On a Mac you can use Time Machine to back up to an external drive, or sync your apps and files with iCloud (also 5GB free). Alternatively, copy files to an external drive, but be sure to encrypt them first before backing up any sensitive documents; a minimal example follows after this list.
- Host important documents in the cloud. Microsoft and Apple have cloud services, but there are many others, including Google Drive, Dropbox and Proton Drive.
- Take note of your laptop's colour, model number and serial number, and store your proof of purchase somewhere safe.
- Consider putting the laptop on your home contents insurance, which may require adding it to the high-risk or high-value items list, particularly if you take it out of the home. Otherwise there is dedicated laptop or gadget insurance, with multiple cover levels including theft.
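For readers comfortable with a little scripting, here is one way to encrypt a sensitive file before it goes onto a backup drive. This is a minimal sketch, assuming Python 3 and the third-party cryptography package (pip install cryptography); the file paths are hypothetical placeholders, and full-disk encryption tools such as BitLocker or FileVault remain the simpler option for most people.

```python
# Minimal sketch: encrypt a file with a symmetric key before copying it
# to an external drive. Requires: pip install cryptography
from pathlib import Path
from cryptography.fernet import Fernet

def encrypt_file(source: Path, destination: Path, key: bytes) -> None:
    """Encrypt the source file with Fernet and write the result to destination."""
    token = Fernet(key).encrypt(source.read_bytes())
    destination.write_bytes(token)

if __name__ == "__main__":
    key = Fernet.generate_key()
    # Keep the key OFF the backup drive, e.g. in a password manager;
    # without it the encrypted file cannot be recovered.
    Path("backup.key").write_bytes(key)
    encrypt_file(Path("tax-return.pdf"), Path("/Volumes/Backup/tax-return.pdf.enc"), key)
```

To restore, read the key back in and call Fernet(key).decrypt() on the file's contents. The point of the design is that the drive and the key travel separately, so losing the drive does not mean losing the data's confidentiality.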


Geeky Gadgets
26-05-2025
- Business
- Geeky Gadgets
OpenAI Forced to Retain All Your ChatGPT Data After New Court Order
What happens when the promise of innovative technology collides with the realities of legal mandates and user trust? OpenAI, a leader in artificial intelligence innovation, now finds itself at the center of a heated debate over user privacy and data security. A recent court order has forced the company to retain all user-generated data, even conversations users believed were deleted. For millions who rely on AI tools like ChatGPT, this development raises unsettling questions: Is your private data truly private? And how do companies balance innovation with the ethical responsibility to protect user information? These questions aren't just hypothetical; they're reshaping how we think about the relationship between AI, privacy, and accountability.

In this in-depth overview, Matt Johansen unpacks the implications of OpenAI's new data retention requirements and what they mean for you as a user. From the risks of unauthorized access to the broader ethical dilemmas surrounding AI training and copyright infringement, this situation offers a rare glimpse into the complex challenges of navigating the intersection of technology, law, and trust. Whether you're a professional working with sensitive data or simply a curious user of AI tools, understanding these changes is critical to making informed decisions about how you engage with emerging technologies. The stakes are high, and the conversation is just beginning.

OpenAI Data Retention Order

What the Court Order Means for You

The court order requires OpenAI to preserve all output log data, even if users have deleted it. This directive directly conflicts with OpenAI's stated privacy policies and presents significant challenges for compliance with international regulations such as GDPR. Retaining this data increases the risk of unauthorized access or misuse, amplifying concerns about data security and user privacy.

For users, this means that any sensitive information shared with OpenAI's systems could remain stored indefinitely. This creates potential vulnerabilities, particularly for professionals in sectors like law, healthcare, or finance, where confidentiality is paramount. The indefinite storage of such data could expose it to breaches or other security risks, making it essential for users to carefully evaluate how they interact with AI tools.

The New York Times Lawsuit: Key Issues

The lawsuit filed by The New York Times accuses OpenAI of copyright infringement, alleging that ChatGPT uses its proprietary content without authorization. The Times claims that this practice undermines its business model by reducing user reliance on its services. Central to the case is the debate over whether OpenAI's use of copyrighted material constitutes fair use or whether it unfairly competes with the Times by repurposing its content for AI-generated outputs.

This legal battle underscores the growing tension between media companies and technology firms over content usage. The outcome of this case could set a precedent for how AI systems are trained and deployed, particularly regarding the use of copyrighted material. It also raises critical questions about the balance between innovation and the protection of intellectual property rights.

Privacy and Security Risks

The court-mandated data retention policy has intensified concerns about privacy and security.
If you use AI tools like ChatGPT, it is crucial to reconsider how you interact with them, especially when sharing sensitive or confidential information. Key risks associated with this policy include:

- Increased vulnerability to data breaches or unauthorized access, which could compromise sensitive information.
- Potential exposure of confidential data, particularly for industries like law, healthcare, or finance, where privacy is critical.
- Challenges in compliance with privacy laws such as GDPR, which emphasize user control over personal data and the right to have data deleted.

These risks highlight the importance of understanding how AI tools manage your data and whether their practices align with your privacy expectations. Users should exercise caution and consider limiting the sharing of sensitive information with AI systems until clearer safeguards are in place.

OpenAI's Position and Challenges

OpenAI has expressed concerns about the court order, describing the requirement to retain all user data as both technically burdensome and unnecessary for the lawsuit. The company has reiterated its commitment to privacy laws and ethical AI practices, emphasizing its efforts to balance user preferences with legal obligations. However, this situation illustrates the broader tension between technological innovation and regulatory compliance. As AI technologies continue to advance, companies like OpenAI face mounting pressure to navigate complex legal landscapes while maintaining user trust. This case serves as a reminder of the challenges organizations encounter when attempting to reconcile innovation with the need for transparency and accountability.

Broader Implications for AI and Data Practices

The legal challenges faced by OpenAI underscore the need for organizations using AI technologies to critically evaluate their data retention policies and privacy practices. Legal mandates, such as the one imposed on OpenAI, can override existing policies, forcing companies to adapt quickly to new requirements. For users and organizations, this means:

- Regularly reviewing vendor data management practices to ensure compliance with evolving legal and ethical standards.
- Anticipating potential changes to data retention strategies as regulatory landscapes shift.
- Recognizing the ongoing conflict between innovation, copyright law, and user privacy, and preparing for its implications.

The lawsuit also reflects broader tensions between media companies and technology firms, echoing past disputes over content usage. These conflicts highlight the need for clearer guidelines and frameworks to govern the use of proprietary content in AI training and deployment.

Historical Context: A Recurring Conflict

The issues raised in this lawsuit are part of a longstanding conflict between media companies and technology firms. For years, publishers have clashed with platforms like Google over content scraping and the redistribution of news articles, disputes that have often centered on the balance between providing access to information and protecting intellectual property rights. This historical context provides valuable insight into the challenges faced by OpenAI and other AI developers today. As AI systems become more sophisticated, the need for clear legal and ethical standards to govern their use becomes increasingly urgent.
The current lawsuit serves as a reminder of the ongoing struggle to reconcile the interests of content creators with the demands of technological innovation.

What's Next?

The case is set to proceed in court, and its outcomes could have far-reaching implications for AI data practices, copyright law, and user privacy. As a user, staying informed about updates to OpenAI's compliance measures is essential; these developments may influence how you interact with AI tools and the precautions you take when sharing sensitive information. Organizations may need to:

- Adapt their data retention strategies to align with new legal requirements and ensure compliance with privacy laws.
- Reassess their reliance on AI technologies in light of evolving regulatory landscapes and potential legal risks.
- Prepare for shifts in how AI systems are trained and deployed, particularly regarding the use of copyrighted material.

Key Takeaways

- Evaluate your data-sharing practices and vendor policies to ensure they comply with privacy laws and organizational standards (a small illustrative sketch follows below).
- Be prepared for potential changes to data retention strategies as legal and regulatory challenges evolve.
- Understand the broader tension between innovation, copyright law, and user privacy, and consider how these dynamics might affect your use of AI technologies.
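As one concrete way to limit what you share, prompts can be scrubbed of obvious personal identifiers before they ever reach a chatbot. The Python sketch below is a hypothetical illustration, not an OpenAI feature: the regexes are deliberately simple, real PII detection needs much more than pattern matching, and the function names are invented for this example.

```python
import re

# Hypothetical patterns for common identifiers; intentionally simplistic.
REDACTIONS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Replace recognizable identifiers with placeholder tags before the prompt leaves your machine."""
    for label, pattern in REDACTIONS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Email jane.doe@example.com a summary of claim 555-12-3456."
    print(redact(raw))  # Email [EMAIL] a summary of claim [SSN].
```

Client-side scrubbing of this kind does not address the retention order itself, which concerns data OpenAI already holds, but it does reduce what is at stake if stored conversations are ever exposed.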