
Cyberattacks reshape hiring priorities for tech executives
A recent survey examined the business forecasts of 1,000 C-Suite executives in cybersecurity or data center roles and revealed that nearly two-thirds (64%) view data breaches and ransomware attacks as the most significant threat for companies over the next decade.
In fact, more than half (56%) of companies have already defended against a hacking attempt, 43% have experienced a data breach and 14% have fallen victim to a successful hack.
Given recent high-profile hacks, security breaches and the ever-growing need for cyber vigilance, it's no surprise that 53% of executives see cybersecurity skills as the most in-demand for their future talent pipelines, especially since 43% 'often' seek to hire entry-level talent.
Conducted by Talker Research on behalf of Per Scholas, the survey found that adaptability and problem-solving (52%) as well as digital and technical skills (38%) also ranked among the top three most sought-after skill sets. Additionally, a notable 42% placed an emphasis specifically on understanding and developing AI.
As the AI landscape continues to evolve rapidly, another 48% of business leaders said that integrating AI and other emerging technologies is a top challenge, while 35% pointed to the growing rules and regulations surrounding AI, rounding out the three biggest challenges ahead.
However, adapting to these challenges is also top of mind for these leaders: nearly all (95%) say that increased awareness and use of AI has an impact on how they store data, and a further 87% believe AI has played an integral role in changing the challenges their business will face.
The survey also looked at the perspective of employees working in tech and found that of the 1,000 polled, only 48% believe that their company is 'very prepared' to prevent cybersecurity attacks.
Moreover, only about half of the employees surveyed (51%) are 'very aware' of their company's cybersecurity efforts. The good news? If given the opportunity, 88% said they would participate in additional training — with the average respondent willing to invest just under two hours per week, or 7.1 hours per month.
Still, 81% actively take steps to help protect sensitive data, such as keeping software and systems updated (73%), using two- or multi-factor authentication (65%) and regularly backing up their data (56%).
But that doesn't mean they're without concerns when handling this sensitive data.
Phishing attempts (52%), external threats like outside parties gaining access (42%) and lack of encryption or security measures overall (41%) remain top of mind.
Adding to those concerns, only 45% of employees are 'very aware' of where sensitive data is stored within their company.
'AI is finding its way into everything from day-to-day workloads to big-picture strategy, yet cybersecurity concerns remain front and center in the AI economy,' said Brittany Murrey, Executive Vice President, Talent Solutions at Per Scholas. 'Our research suggests employees are ready and willing to upskill in order to protect sensitive data, which is a crucial step. By offering comprehensive training and staying ahead of evolving threats, businesses can embrace AI innovations without sacrificing security.'
Data centers, facilities where servers and networking equipment are housed and operated at scale, have become essential for businesses managing vast amounts of customer and client data. It's no surprise that they're booming across the globe: nearly all (95%) business executives and 89% of employees believe they'll only become more integral to a growing economy.
As AI takes a more central role in daily life, 66% of all respondents see it as the primary driver of growth in the data center industry.
Despite this foresight, just over half (57%) of C-Suite leaders and only 37% of employees feel 'very confident' that the industry will be able to keep up with the rising demand for data centers over the next five years.
Both decision makers and workers agree that technical cybersecurity skills (82%) are crucial for data center professionals, while essential professional development skills like problem-solving (79%) and communication (61%) are also in high demand.
As organizations scale to meet this growing demand, they face challenges such as 'recruiting and retaining talent,' 'changes in regulatory compliance' and 'upskilling current talent.'
'Data centers have become the backbone of our increasingly digital economy, delivering the capacity and infrastructure that modern businesses rely on,' said Murrey. 'There's still a belief that only large enterprises need robust cybersecurity measures, but in reality, every organization — big or small — faces risks. Strengthening security practices, investing in the right talent, and building reliable data infrastructure will help ensure sustainable growth for everyone in this connected landscape.'
Survey methodology:
Talker Research surveyed 1,000 U.S. C-Suite and Direct Managers in Cyber Security and Data Center roles and 1,000 employed Americans working in tech; the survey was commissioned by Per Scholas and administered and conducted online by Talker Research between Feb. 26 and March 5, 2025.

Related Articles


GeekWire
Building an AI-first company: What these two business leaders learned from top experts
Adam Brotman, left, and Andy Sack, authors of the book 'AI First.' (Photo Courtesy Forum3)

This week on the GeekWire Podcast, our guests are Adam Brotman and Andy Sack, co-authors of AI First: The Playbook for a Future-Proof Business and Brand. Brotman was Starbucks' chief digital officer and later a co-CEO; Sack is a founder, investor, and longtime advisor to tech leaders. Together, they run Forum3, a Seattle-based company that helps brands with customer loyalty and engagement.

For their book, they interviewed experts including Bill Gates, Sam Altman, Reid Hoffman and Ethan Mollick, and spent time with companies and leaders that have seen early AI success. We talk about the shocking prediction that Altman gave them, how Moderna achieved 80% employee participation in an AI prompt contest, the CEO who supercharged sales by using AI to analyze call transcripts, and what businesses can do to roll out AI successfully. Listen below, and continue reading for my 5 top takeaways.

1. Leaders need their own 'holy shit' moment. AI has a better chance of being adopted when executives personally experience and use the technology themselves. 'It doesn't mean that the CEO has to become an expert in AI,' Brotman said, 'but they have to at least demonstrate that mindset, that curiosity, and a little bit of passion for what they don't know, and empower the organization to go ahead.'

2. Formalize AI efforts with a dedicated team. Instead of ad-hoc adoption, create an internal group to lead the charge. A good starting point is a cross-functional 'AI Council' or task force composed of passionate employees and at least one C-suite member. Brotman and Sack were challenged by Wharton professor Ethan Mollick to push companies even further, to establish internal 'AI Labs' to truly go all-in on experimentation.

3. Treat AI like an evolving intelligence, not static software. Unlike traditional technology implementations, AI capabilities change weekly. Companies need an 'always-on experimentation mindset' rather than a deploy-and-maintain approach. 'This is a new thing. This is not software,' Sack said. 'It's a being, an alien intelligence.'

4. Make AI adoption fun and experimental. Moderna succeeded by turning AI learning into a 'prompt-a-thon contest' with prizes, making employees feel comfortable with experimentation. This tapped into human psychology and removed the fear often associated with new technology. 'They really integrated the launch of that contest in the culture of the company,' Brotman said. 'The ROI has been off-the-charts in terms of productivity for them as a company.'

5. The transformation is happening faster than you think. When Brotman and Sack interviewed Altman, the OpenAI CEO casually dropped a bombshell prediction: 95% of marketing as we know it today will be done by artificial intelligence within three to five years. That shifted their thinking and approach to the book. As Brotman noted, 'If you look at how the technology has progressed since we've had that interview, it's right on schedule.'

AI First: The Playbook for a Future-Proof Business and Brand, by Adam Brotman and Andy Sack, is published by Harvard Business Review Press. Subscribe to GeekWire in Apple Podcasts, Spotify, or wherever you listen.


TechCrunch
Anthropic says some Claude models can now end ‘harmful or abusive' conversations
Anthropic has announced new capabilities that will allow some of its newest, largest models to end conversations in what the company describes as 'rare, extreme cases of persistently harmful or abusive user interactions.' Strikingly, Anthropic says it's doing this not to protect the human user, but rather the AI model itself. To be clear, the company isn't claiming that its Claude AI models are sentient or can be harmed by their conversations with users. In its own words, Anthropic remains 'highly uncertain about the potential moral status of Claude and other LLMs, now or in the future.' However, its announcement points to a recent program created to study what it calls 'model welfare' and says Anthropic is essentially taking a just-in-case approach, 'working to identify and implement low-cost interventions to mitigate risks to model welfare, in case such welfare is possible.' This latest change is currently limited to Claude Opus 4 and 4.1. And again, it's only supposed to happen in 'extreme edge cases,' such as 'requests from users for sexual content involving minors and attempts to solicit information that would enable large-scale violence or acts of terror.' While those types of requests could potentially create legal or publicity problems for Anthropic itself (witness recent reporting around how ChatGPT can potentially reinforce or contribute to its users' delusional thinking), the company says that in pre-deployment testing, Claude Opus 4 showed a 'strong preference against' responding to these requests and a 'pattern of apparent distress' when it did so. As for these new conversation-ending capabilities, the company says, 'In all cases, Claude is only to use its conversation-ending ability as a last resort when multiple attempts at redirection have failed and hope of a productive interaction has been exhausted, or when a user explicitly asks Claude to end a chat.' 
Anthropic also says Claude has been 'directed not to use this ability in cases where users might be at imminent risk of harming themselves or others.' When Claude does end a conversation, Anthropic says users will still be able to start new conversations from the same account, and to create new branches of the troublesome conversation by editing their responses. 'We're treating this feature as an ongoing experiment and will continue refining our approach,' the company says.


Forbes
If You Get This Voicemail, Your Google Account Is Under Attack
Google has confirmed that user accounts are under attack. And while the spiraling threat from infostealers is now out of control, almost 40% of 'successful intrusions' come from phishing attacks that steal usernames and passwords. While Gmail accounts are prized above all, once a hacker has control of your Google account credentials they can access all the Google services you use, as well as any other third-party apps and services that rely on your Google credentials to sign in.

Old-school phishing attacks using poorly written messages and emails are being replaced by nicely written, grammatically correct lures courtesy of AI. And when you click a link through to a fake login page, it's now a perfect replica of the real thing. Attacks even include hijacks of Google's own suspicious sign-in warnings and its 'no-reply' email addresses, plus the exploitation of legitimate infrastructure such as Forms, Sites and even Translate. If it's out there, one bad actor or another will try to use it.

Courtesy of Redditor anuraggawande, there's even a malicious voicemail doing the rounds — or at least a voicemail notification. 'I received an email claiming I had a "New Voicemail Notification". The email included a big "Listen to Voicemail" button.' The link 'used a legitimate Microsoft Dynamics domain to host the initial page, instantly boosting credibility.' But 'after solving the captcha, the site redirected to a Gmail login clone hosted on the same malicious domain' rather than to Google's own site. 'The page was pixel-perfect, ready to steal credentials.'

Typically, this attack looked professional and harmless, but just as with any message, email or notification from Google or any other provider, you should never log into any website or platform from a link. Always use your regular methods of entry to access your accounts — apps or websites. The other critical advice is to use a non-SMS form of two-factor authentication and to add a passkey.
As Google explains, 'passkeys offer users a convenient and secure authentication experience across websites and apps. Unlike passwords, which can be guessed, stolen, or forgotten, passkeys are unique digital credentials tied to a user's device.'
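The 'never log in from a link' advice exists because lookalike hostnames defeat casual inspection. A minimal sketch of the underlying idea, with an illustrative (hypothetical) allowlist: only an exact hostname match is accepted, since a substring check would wave through the attacker's domain described above.

```python
from urllib.parse import urlsplit

# Illustrative allowlist for this sketch; a real deployment would manage this centrally.
TRUSTED_LOGIN_HOSTS = {"accounts.google.com"}

def is_trusted_login_url(url: str) -> bool:
    """Accept a login URL only on an exact hostname match.
    Substring tests are unsafe: 'accounts.google.com.evil.example'
    contains the legitimate name but belongs to the attacker."""
    host = (urlsplit(url).hostname or "").lower()
    return host in TRUSTED_LOGIN_HOSTS

print(is_trusted_login_url("https://accounts.google.com/signin"))               # True
print(is_trusted_login_url("https://accounts.google.com.evil.example/signin"))  # False
```

This is also why passkeys resist phishing: the browser performs an equivalent origin check automatically, so a credential bound to the real domain simply will not work on a pixel-perfect clone.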