Cato Networks secures $359 million in latest funding round

Fast Company | 30-06-2025
Israel's Cato Networks said on Monday it had raised $359 million in a funding round, valuing the cybersecurity firm at more than $4.8 billion, as investors bet on growing demand for artificial intelligence-driven security and networking solutions.
An uptick in sophisticated cyberattacks has prompted fears of operational disruptions among companies and an increase in investor interest in AI-powered cybersecurity providers.
The funding was led by Vitruvian Partners and ION Crossover Partners, along with existing investors Lightspeed Venture Partners and Acrew Capital, among others. The latest round brings Cato's total funding to more than $1 billion.
'With analysts projecting information-security outlays to climb at a double-digit clip through 2028, investors continue to treat cybersecurity and zero-trust plumbing as 'can't-cut' line items even as other tech niches cool,' said Michael Ashley Schulman, chief investment officer at Running Point Capital.
Cato Networks plans to use the new funds to enhance its AI-driven security capabilities, increase investment in research and development and expand its global footprint.
'Hefty private cash for Cato is less a one-off jackpot than a barometer that big-ticket appetite for cybersecurity is still running hot and that an exit wave through IPO, active secondaries, or acquisition may be warming up behind it,' Schulman said.
Reuters had reported last year, citing sources, that the company was gearing up for a potential 2025 IPO.
Cato, founded in 2015 by Shlomo Kramer and Gur Shatz, combines network services and security into a single cloud platform known as secure access service edge (SASE).
Its SASE solution helps businesses prevent threats, protect data and quickly respond to incidents.
The SASE market is projected to surge to $25 billion by 2027 from $7 billion in 2022, according to Gartner's 2023 report.
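For context, those Gartner figures imply a compound annual growth rate of roughly 29%. Here is a minimal sketch of the arithmetic in Python; the inputs are the figures cited above, but the calculation itself is ours, not Gartner's:

```python
# Implied compound annual growth rate (CAGR) for the SASE market,
# using the Gartner figures cited above: $7B in 2022 -> $25B in 2027.
start_value = 7.0    # market size in 2022, $ billions
end_value = 25.0     # projected market size in 2027, $ billions
years = 2027 - 2022  # five-year horizon

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # ~29.0% per year
```

A pace like that comfortably clears the "double-digit clip" for information-security outlays that Schulman describes above.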

Related Articles

Growing number of teens turn to AI for friends, new study shows — here's why experts are alarmed

New York Post | 17 minutes ago

It's not a glitch in the matrix: the youngest members of the iGeneration are turning to chatbot companions for everything from serious advice to simple entertainment.

[Photo caption: The age range for Generation Z is between 13 and 28, while Generation Alpha is between 0 and 12. InfiniteFlow]

In the past few years, AI technology has advanced so far that users go straight to machine models for just about anything, and Generations Z and Alpha are leading the trend.

Indeed, a May 2025 study by Common Sense Media looked into the social lives of 1,060 teens aged 13 to 17 and found that a startling 52% of adolescents across the country use chatbots at least once a month for social purposes.

Teens who used AI chatbots to exercise social skills said they practiced conversation starters, expressing emotions, giving advice, conflict resolution, romantic interactions and self-advocacy, and almost 40% of these users applied those skills in real conversations later on.

[Photo caption: Many AI chatbots have been critiqued for being overly sycophantic towards their flesh-and-blood conversation partners. Common Sense Media]

[Photo caption: Younger teens tend to be more trusting of AI companions, while older teens are better educated on the dangers of oversharing with AI. Common Sense Media]

Despite some potentially beneficial skill development, the study authors see the cultivation of antisocial behaviors, exposure to age-inappropriate content and potentially harmful advice given to teens as reason enough to caution against underage use.

'No one younger than 18 should use AI companions,' the study authors wrote in the paper's conclusion.

The real alarm bells began to ring when the data showed that 33% of users prefer to turn to AI companions over real people for serious conversations, and 34% said that a conversation with a chatbot had caused them discomfort, referring to both subject matter and emotional response.

'Until developers implement robust age assurance beyond self-attestation, and platforms are systematically redesigned to eliminate relational manipulation and emotional dependency risks, the potential for serious harm outweighs any benefits,' the study authors warned.

[Photo caption: 100 or more teens said AI chats were better than IRL connections. Common Sense Media]

Though AI use is certainly spreading among younger generations (a recent survey showed that 97% of Gen Z has used the technology), the Common Sense Media study found that 80% of teens said they still spend more time with real-life friends than with chatbots. Rest easy, parents: today's teens still prioritize human connections, despite popular belief.

However, people of all generations are cautioned against consulting AI for certain purposes. As The Post previously reported, AI chatbots and large language models (LLMs) can be particularly harmful to those seeking therapy, and they tend to endanger those exhibiting suicidal thoughts.

'AI tools, no matter how sophisticated, rely on pre-programmed responses and large datasets,' Niloufar Esmaeilpour, a clinical counselor in Toronto, previously told The Post. 'They don't understand the 'why' behind someone's thoughts or behaviors.'

Sharing personal medical information with AI chatbots can also have drawbacks, as the information they regurgitate isn't always accurate, and, perhaps more alarmingly, they are not HIPAA compliant.
Uploading work documents to get a summary can also land you in hot water, as intellectual property agreements, confidential data and other company secrets can be extracted and potentially leaked.

Your employees may be leaking trade secrets into ChatGPT

Fast Company | 17 minutes ago

Every CEO I know wants their team to use AI more, and for good reason: it can supercharge almost every area of their business and make employees vastly more efficient. Employee use of AI is a business imperative, but as it becomes more common, how can companies avoid major security headaches?

Sift's latest data found that 31% of consumers admit to entering personal or sensitive information into GenAI tools like ChatGPT, and 14% of those individuals explicitly reported entering company trade secrets. Other types of information that people admit to sharing with AI chatbots include financial details, nonpublic facts, email addresses, phone numbers, and information about employers. At its core, the data reveals that people are increasingly willing to trust AI with sensitive information.

This overconfidence with AI isn't limited to data sharing. The same comfort level that leads people to input sensitive work information also makes them vulnerable to deepfakes and AI-generated scams in their personal lives. Sift data found that concern about being scammed with AI has decreased 18% in the last year, yet the number of people who admit to being successfully scammed has increased 62% since 2024. Whether it's sharing trade secrets at work or falling for scam texts at home, the pattern is the same: familiarity with AI is creating dangerous blind spots.

The Confidence Trap

Often in a workplace setting, employees turn to AI to address a specific problem: looking for examples to round out a sales proposal, pasting an internal email to 'punch it up,' sharing nonfinal marketing copy for tone suggestions, or disclosing product road map details to a customer service bot to help answer a complex ticket. This behavior often stems from good intentions, whether that's trying to be more efficient, helpful, or responsive.

But as the data shows, digital familiarity can create a false sense of security. The people who think they 'get AI' are the ones most likely to leak sensitive data through it, and the most likely to struggle to identify malicious content. Every time an employee drops nonpublic context into a GenAI tool, they are, knowingly or not, transmitting business-sensitive data into a system that may log, store, or even use it to train future outputs. And if a data leak were ever to occur, a hacker would be privy to a treasure trove of confidential information.

So what should businesses do? The challenge is that traditional monitoring won't catch this kind of data exposure. Because these tools are often used outside of a company's intranet (its internal software network), employees are able to input almost any data they can access. The uncomfortable truth is that you probably can't know exactly what sensitive information your employees are sharing with AI platforms. Unlike a phishing attack, where you can trace the breach, AI data sharing often happens in the shadows of personal accounts.

But that doesn't mean you should ban AI usage outright. Try to infer the scale of the problem with anonymous employee surveys. Ask: What AI tools are you using? For which tasks do you find AI most helpful? And what do you wish AI could do? While an employee may not disclose sharing sensitive information with a chatbot, understanding more generally how your team is using AI can identify potential areas of concern, as well as potential opportunities.

Instead of trying to track every instance retroactively, focus on prevention. A blanket AI ban isn't realistic and puts your organization at a competitive disadvantage.
Instead, establish clear guidelines that distinguish between acceptable and prohibited data types. Set a clear red line on what can't be entered into public GenAI tools: customer data, financial information, legal language, and internal documents. Make it practical, not paranoid.

To encourage responsible AI use, provide approved alternatives. Create company-sanctioned AI workflows for everyday use cases that don't retain data, or that run only in tools that do not use any inputs for AI training. Make sure your IT teams vet all AI tools for proper data governance; this matters because different account tiers of the same AI tool can carry different data retention policies, and the vetting process helps employees understand the potential dangers of sharing sensitive data with AI chatbots.

Encourage employee training that addresses both professional and personal AI risks. Provide real-world examples of how innocent AI interactions can inadvertently expose trade secrets, but also educate employees about AI-powered scams they might encounter outside of work. The same overconfidence that leads to workplace data leaks can make employees targets for sophisticated fraud schemes, potentially compromising both personal and professional security.

If you discover that sensitive information has been shared with AI platforms, act quickly, but don't panic. Document what was shared, when, and through which platform. Conduct a risk assessment that asks: How sensitive was the information? Could it compromise competitive positioning or regulatory compliance? You may need to notify affected parties, depending on the nature of the data. Then, use these incidents as learning opportunities. Review how the incident occurred and identify the necessary safeguards.

While the world of AI chatbots has changed since 2023, there is a lot we can learn from a situation Samsung experienced a few years ago, when employees in its semiconductor division shared source code, meeting notes, and test sequences with ChatGPT. This exposed proprietary software to OpenAI and leaked sensitive hardware testing methods. Samsung's response was swift: it restricted ChatGPT uploads to minimize the potential for sharing sensitive information, launched internal investigations, and began developing a company-specific AI chatbot to prevent future leaks. While most companies lack the resources to build chatbots themselves, they can take a similar approach by using enterprise-grade accounts that are specifically opted out of AI training.

AI can bring massive productivity gains, but that doesn't make its usage risk-free. Organizations that anticipate and address this challenge will leverage AI's benefits while maintaining the security of their most valuable information. The key is recognizing that AI overconfidence poses risks both inside and outside the office, and preparing accordingly.
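As one illustration of the 'clear red line' guidance above, here is a minimal sketch in Python of a pre-submission filter that flags obvious sensitive patterns before text reaches a public GenAI tool. The pattern set and the check_before_submit helper are hypothetical examples, not part of any vendor's API, and real data-loss-prevention tooling is far more thorough:

```python
import re

# Hypothetical example patterns for a few of the "red line" categories
# named above. A real DLP policy would go much further (named-entity
# detection, document classification, fuzzy matching of known secrets).
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone number": re.compile(
        r"\b(?:\+?\d{1,2}[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"
    ),
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "internal marker": re.compile(
        r"(?i)\b(confidential|internal only|do not distribute)\b"
    ),
}

def check_before_submit(text: str) -> list[str]:
    """Return a human-readable warning for each red-line pattern found."""
    warnings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(text):
            warnings.append(f"Possible {label} detected; remove it before submitting.")
    return warnings

# Usage example: a draft an employee is about to paste into a chatbot.
draft = (
    "Punch up this email: contact jane.doe@example-internal.com, "
    "CONFIDENTIAL roadmap attached."
)
for warning in check_before_submit(draft):
    print(warning)
```

In practice, a filter like this would sit in a browser extension or an internal proxy in front of the approved AI tools, which is also a natural place to keep the usage logs that the incident-response steps above depend on.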

Gas Boom Grows, Solar Boom Slows Amid A Failing Energy Transition

Forbes | 18 minutes ago

[Photo caption: U.S. Energy Secretary Chris Wright speaks during the Semafor World Economy Summit 2025 at Conrad Washington on April 25, 2025 in Washington, DC. The summit, held April 23-25, gathered CEOs, government officials, financial leaders, and more for conversations on the state of the global economy.]

A pair of stories in recent days illustrates the rapidly shifting equation for the prospects of a real energy transition in the United States during Donald Trump's second presidency. Thanks in large part to the administration's radical rebalancing of federal energy policies, the momentum is shifting heavily in favor of traditional energy sources like oil, natural gas, and nuclear power as tax breaks and subsidies for renewables are systematically eliminated. The end result is an altered outlook on which forms of generation will boom into the future.

A Gas Generation Boom Driven By AI

In the July 24 issue of the Wall Street Journal's Climate and Energy newsletter, Ed Ballard writes that 'There's Never Been a Better Time to Be Selling Natural-Gas Turbines.' On the same day, Reuters published a piece by Nichola Groom headlined 'Boom fades for US clean energy as Trump guts subsidies.' Taken together, the stories detail a reversal of Biden-era fortunes for the respective industries that has come about more rapidly and comprehensively than anyone could have realistically imagined just six months ago.

This time last year, speculation ran rampant that a long backlog for sourcing natural gas turbines would limit the prospects for natural gas to provide a major share of the new power generation needed to meet rapidly rising electricity demand. But as the big tech companies in the AI industry, whose enormous data centers springing up across the country are the major driver of incremental demand, developed plans to secure their power needs, a consensus began to form that natural gas generation is the ideal solution for the coming decade for a variety of reasons.

[Photo caption: Outside view of Meta's newly completed Facebook data center in Eagle Mountain, Utah on July 18, 2024. The complex comprises five large buildings, each over four football fields long, totaling 2.4 million square feet. GEORGE FREY/AFP via Getty Images]

As that consensus began forming last summer, Ballard writes, prices for the turbines 'went through the roof.' But at the same time, the handful of big turbine manufacturers, including GE Vernova, Siemens Energy, and Mitsubishi Heavy Industries, developed and announced plans to expand existing facilities, build new ones, and increase their output of new turbines. Ballard notes that all three companies are in the process of expanding their U.S. operations, adding that 'GE Vernova looks the most convinced,' pointing to plans to expand the output of its Greenville, SC plant from 55 turbines per year to as many as 80, an increase of more than 40%. More expansion may ultimately be needed given the current backlog, with lead times as long as four years, but GE Vernova CEO Scott Strazik says his company will need more certainty around the AI industry's ultimate true generation needs before committing to more capital outlays.

Solar Boom Slows Amid D.C. Policy Shift

While gas generation is in a renaissance, Groom says the U.S. solar boom of recent years has suddenly stalled.
Indeed, the boom may already be fading amid decisions by an array of solar manufacturers to cancel planned new capital investments. 'Singapore-based solar panel manufacturer Bila Solar is suspending plans to double capacity at its new factory in Indianapolis,' writes Groom. She also points to decisions by both Canada-based Heliene and Norwegian solar wafer maker NorSun to re-evaluate or suspend planned new investments as federal policy shifts.

Groom also notes that even a pair of fully permitted solar facilities in Oklahoma now faces cancellation in the wake of the enactment of the One Big Beautiful Bill Act (OBBBA), which gradually repeals Biden-era tax breaks and subsidies for both the wind and solar industries over the coming few years. The President dealt another blow to solar's future with a July 7 executive order directing strict enforcement of OBBBA provisions by Treasury Secretary Scott Bessent. All told, according to energy researcher Rhodium Group, a total of $373 billion in clean energy investments is now at risk.

The pair of Oklahoma projects is likely to be joined by a rash of cancellations of planned solar and wind projects in the coming months, as developers determine they won't be able to meet the OBBBA's deadline of being placed in service by the end of 2027 in order to keep benefiting from the investment tax credit. Capital flight is also likely to become a rising problem as private equity and institutional investors reallocate capital to more profitable ventures with higher degrees of certainty. Some of that capital seems likely to end up being invested in gas generation capacity instead.

Where Do The Competing Booms Go From Here?

In a July 22 interview on Fox News' Special Report with Bret Baier, Energy Secretary Chris Wright said he and the administration are for 'everything that works. Anything that can deliver affordable, reliable, secure energy.' Prodded by Baier, Wright gave a dim assessment of the wind industry's future in the United States, saying the 'value of the energy [it generates] is very low, who knows when the wind's gonna blow, and there's been huge public opposition to onshore and offshore wind.'

But Wright's view of the solar industry's future was more positive: 'Solar is a different story. Solar is growing rapidly in the United States right now and I think it's got a future.' But, he added, 'The idea there was just it should have a commercial future not paid for by the taxpayers.'

Thus, the key for the solar industry to revive its boom times in the remaining 42 months of this second Trump presidency will be to develop sustainable business models that create solid, investable rates of return without major federal tax breaks or subsidies. That seems a major challenge given that, if such a model existed, it would likely have already been deployed.

Meanwhile, the natural gas industry will face challenges of its own. A slowing of the solar boom places added pressure on natural gas generation companies to mount a major, rapid expansion of new capacity in the parts of the country where it will be most needed. Some key states in which the AI industry is rapidly expanding, like Texas, have been and are likely to remain welcoming to gas generation. Policymakers in some other big AI states seem likely to need more convincing.
For an industry experiencing a current equipment procurement backlog, and whose infrastructure has suffered significant ill-timed failures in recent years (like the freeze-ups in Texas during 2021's Winter Storm Uri), the ability to sustain its current boom and grow it into the future is not a foregone conclusion. Much work remains to be done.
