Latest news with #SandraJoyce


Axios
2 days ago
- Business
- Axios
How North Korea's IT army is hacking the global job market
Nearly every Fortune 500 company is hiding the same uncomfortable secret: they have hired a North Korean IT worker.

Why it matters: Despite how widespread the issue is, few companies are willing to talk publicly about it. Experts say reputational risk, legal uncertainty, and embarrassment all contribute to the silence — which in turn makes the problem harder to solve. Dozens of resumes, LinkedIn profiles, and fraudulent identity documents shared with Axios lay bare the scale and sophistication of the scams.

The big picture: For North Korea, this is a precious revenue stream that evades American sanctions — capitalizing on the wealth of high-paying remote worker roles in the U.S. to route cash back to Pyongyang. In the past two years, companies and their security partners have begun to grasp the scale of the problem — and now, they're sounding the alarm about where it's headed next. "They've been stealing intellectual property and then working on the projects themselves," Michael "Barni" Barnhart, principal investigator at DTEX Systems, told Axios. "They're going to use AI to magnify exponentially what they're already doing — and what they're doing now is bad."

Between the lines: It sounds easy to simply weed out North Korean job applicants. But some of the world's biggest firms have found it devilishly difficult. That's because the North Korean operation has become as complex as a multinational corporation. It involves several North Korean government offices, dozens of China-based front companies and Americans willing to facilitate the fraud. And the undercover North Korean IT workers are often exceptional at their jobs — at least until they start stealing sensitive data or extorting companies that try to fire them. Google Threat Intelligence VP Sandra Joyce recalled the response of one employer when told they likely had a North Korean fraudster on staff: "You guys better be right, because that is my best guy."
The groups running the show

North Korea has invested years into building up its remote IT labor force, providing training not just for remote job fraud but also corporate espionage and IP theft. Workers are selected and trained at elite institutions such as Kim Chaek University of Technology and the University of Sciences in Pyongsong — some with specializations in software development, AI or cryptography. Research from DTEX shows that the most advanced worker scams are often coordinated with units like APT 45, a notorious government hacking group known for infiltrating companies, running scams and laundering money. Other participants in the scheme include the Lazarus Group, which typically leads the regime's cryptocurrency hacks and has positioned insiders within crypto companies, and Research Center 227, a new AI research unit inside North Korea's intelligence agency.

The intrigue: Cybersecurity companies have been discovering and naming new groups running these hacks, with names like Jasper Sleet, Moonstone Sleet and Famous Chollima.

The scale

Driving the news: Nine security officials who spoke with Axios all said they've yet to meet a Fortune 500 company that hasn't inadvertently hired a North Korean IT worker. Google told reporters at the RSA Conference in May that it had seen North Koreans applying to its jobs. SentinelOne and others have said the same. KnowBe4, a cybersecurity training company, admitted last year that it hired a North Korean IT worker. A smaller cryptocurrency startup told the WSJ that it accidentally had North Korean workers on its payroll for almost two years.

In one case, Sam Rubin, senior vice president of Palo Alto Networks' Unit 42 consulting and threat intelligence team, told Axios that within 12 hours of a large client posting a new job, more than 90% of the applicants were suspected to be North Korean workers. "If you hire contract IT workers, this has probably happened to you," Rubin said.
The intrigue: Even small-to-mid-sized companies that rely on remote IT talent or outsource their IT needs to a consulting firm have encountered this problem, said Adam Meyers, senior vice president of counter adversary operations at CrowdStrike. CrowdStrike has investigated more than 320 incidents where North Korean operatives landed jobs as remote software developers, according to the company's annual threat hunting report published earlier this month.

How it works

Getting a job at a U.S. company — and going undetected — is a team effort that involves several North Korean IT workers, China-based companies and even a handful of Americans. Some of the North Korean workers are even stationed in China and other nearby countries to keep suspicions low.

First, the workers identify potential identities they can assume. Those are often stolen from a real person, or even from a dead U.S. citizen. To pull off this deception, they create fake passports, Social Security cards and utility bills. Many of them use the same recognizable tablecloth in the background of fake ID photos, Meyers said. For instance, in a December indictment of 14 North Koreans, the workers were found using stolen identities to apply to dozens of jobs.

Second, the workers find open jobs in software development, technical support and DevOps posted on Upwork, Fiverr, LinkedIn, and third-party staffing platforms. Much of this is streamlined through AI tools that help track and manage their job applications. Many of them will use AI tools to help generate passable resumes and LinkedIn profiles, according to Trevor Hilligoss, senior vice president at SpyCloud Labs.

"There's a hierarchy: There's a group of people who are the interviewers, and they're the ones with the really good English specialties," Hilligoss told Axios. "When they get hired, that gets turned over to somebody that's a developer." Those developers will often juggle several jobs and multiple different personas.
Zoom in: Job interviews would seem like the obvious time to catch a fraudulent application. But the "applicants" — whether they're using their real faces and voices or AI-enabled personas — are practiced interviewers with the skills necessary to complete technical coding assignments. In multiple cases, hiring managers only realized something was wrong weeks later, when employees looked or behaved differently than they had during the interview, Barnhart said.

After landing the job, the developers step in and request that their company laptop be shipped to a U.S. address — often citing a last-minute move or family emergency. That address often belongs to an American accomplice, who typically operates what's known as a "laptop farm." These facilitators are told to install specific remote desktop software onto the laptops so the North Korean worker can operate the laptop from abroad. In July, the FBI said it executed searches of 21 premises across 14 states that were known or suspected laptop farms, seizing 137 laptops.

Then there's the challenge of ensuring the salaries actually reach the North Korean regime. That often requires the facilitators to forward the paychecks to front companies across China or funnel them through cryptocurrency exchanges. In a report published in May, researchers at Strider Technologies identified 35 China-based companies linked to helping North Korean operations.

Challenges

Hiring processes are so siloed that it's difficult for managers to see all the signs of fraud until the North Korean workers start their roles, Kern said. Even if a company suspects something is wrong, the forensic signals can be subtle and scattered. Security teams may detect unusual remote access tools or strange browser behavior. HR might notice recycled references or resumes that reuse the same phone number. But unless those insights are pooled together, they rarely raise alarms.
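To make the pooling point concrete, here is a minimal, purely illustrative sketch of the kind of correlation the experts describe: weak signals that HR and security teams each dismiss in isolation trip an alert only once they are merged per employee. All identifiers, field names and the threshold are hypothetical, not drawn from any vendor's actual tooling.

```python
# Illustrative sketch: pooling weak fraud indicators from separate teams
# so that co-occurring signals raise one combined alert.
from collections import defaultdict

def pool_signals(hr_flags, security_flags, threshold=2):
    """Merge per-employee flags from HR and security systems and
    return only the employees whose combined flag count reaches
    the threshold."""
    combined = defaultdict(list)
    for source in (hr_flags, security_flags):
        for employee_id, flags in source.items():
            combined[employee_id].extend(flags)
    # Any single flag is easy to dismiss; several together are not.
    return {emp: flags for emp, flags in combined.items()
            if len(flags) >= threshold}

# Hypothetical inputs mirroring the signals named in the article.
hr = {"emp-101": ["reused reference phone number"]}
sec = {"emp-101": ["unapproved remote-desktop tool installed"],
       "emp-202": ["unusual browser behavior"]}

alerts = pool_signals(hr, sec)
# emp-101 trips the combined threshold; emp-202's lone signal does not.
```

The design point is the one the analysts make: neither team's data alone crosses the alert line, so the correlation has to happen across organizational silos.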
"There's not one giant red flag to point to," said Sarah Kern, a leading North Korea analyst at Sophos' Counter Threat Unit. "It is multiple technical forensic aspects and then such a human aspect of small things to pick up on that aren't necessarily going to be in telemetry data from an endpoint detection standpoint."

Yes, but: Even when these workers are detected, they're not easy to fire. Many of them are so talented that managers are reluctant to believe they could actually be in North Korea, Alexandra Rose, director at Sophos' Counter Threat Unit, told Axios. If these workers are caught, employers then face a litany of problems: Some workers will download sensitive internal data and extort the companies for a hefty sum in a last-ditch effort to bleed the company of whatever money they can. Some workers have filed legal complaints, including workers' compensation claims, Barnhart said. In one case, Barnhart said, a worker being fired tried to claim domestic violence protections just to buy time.

"There is a lot of focus on companies that cybersecurity shouldn't just be for the CISO," Rose said. "You want a bit of that security feel throughout the company, and this is the kind of case that really demonstrates why that is."

The bottom line: Some companies also hesitate to report these incidents, fearing they could be penalized for unknowingly violating U.S. sanctions — even though law enforcement officials have said they're more interested in cooperation than prosecution.

What's next

Right now, the operations are predominantly focused on making money for North Korea's regime.

Threat level: But the hacking groups involved are evolving into something more sophisticated and dangerous — including by potentially building their own AI models and feeding in sensitive U.S. company data. That's a particular concern in the defense sector.
Barnhart says his teams have seen North Korean IT workers increasingly studying information about AI technologies, drone manufacturing and other defense contract work.

What to watch: As U.S. companies become more alert, North Korean IT workers are shifting their focus abroad, seeking employment at companies elsewhere and setting up laptop farms throughout Europe — suggesting the operation is only ramping up, not slowing down.
Yahoo
19-04-2025
- Business
- Yahoo
China 'has completed its journey to cyber superpower' - and Google security expert explains why threats could get even worse in years to come
With businesses of all sizes facing a range of cybersecurity threats on a daily basis, the need for strong and intelligent threat protection has never been more crucial. At its recent Google Cloud Next 25 event, the company was understandably keen to tout its cybersecurity leadership, unveiling a range of new tools and services, with AI unsurprisingly playing a major supporting role. To find out more about what threats businesses should really be worried about, and to learn more about Google's own security priorities, I spoke to Sandra Joyce, Vice President of Google Threat Intelligence Group, at the event.

Cyber threats can now originate from any country, but Joyce highlights the sheer number of possible risks coming from "the big four": Russia, Iran, North Korea and, most notably, China.

China is "probably the biggest (threat)...they're getting so hard to detect," Joyce declares. "They have, I would say, completed their journey to cyber superpower status."

"There's likely a capability we haven't seen, but certainly espionage is first and foremost China's big lever to pull," Joyce explained. "Their capabilities are increasing in ways that are very concerning," she says, highlighting the recent Salt Typhoon attacks against critical US infrastructure as evidence of the nation's growing strength in cyber operations.

"We're looking at a major increase in capability," Joyce says. "They're leveraging what we're calling the visibility gap and concentrating their efforts on those areas where endpoint detection and response solutions (EDRs) don't traditionally operate, like firewalls and edge devices."

Joyce notes that her team used to be able to detect Chinese threat actors "pretty easily" via the infrastructure being used — however, the criminals have now switched to using rented hardware, which is refreshed every 30 days and operated from small offices.
Given the scale of these threats, I ask Joyce what role Google itself has to play in the wider security space going forward — is it a first response system, a protector, or should it take the first strike?

"That is the goal," she says. "We do take direct action, especially if they're touching the Google infrastructure — but we have a lot of options to take action…more and more, some of the creative thinking we have is, how do we disrupt this type of activity — within the laws that govern this type of activity."

Working with law enforcement is a key method, she notes, but Google Cloud also takes direct action on the infrastructure itself, and partners with other organizations for coordinated takedowns. "There's a lot of ways we can disrupt and do the right thing," she says, highlighting the company's responsibility to protect not only Google's products and people but its customers too: "The more we know about the threats, the more we can do."

I also ask Joyce about the role of AI in cybersecurity, given that it has transformed so many other areas of the business world over the past few years. The company announced several AI-enabled security services and tools at Cloud Next 25, most notably Google Unified Security (GUS), a combined platform for firms to access all their security tools in a single location, as well as several security-focused AI agents.

Joyce says the potential impact is "fascinating…this is now the modern way people are going to expect to be able to interact with data." She notes that threat detection, analysis and mitigation will all see a huge boost from AI, speeding up processes that used to take months to a matter of days, all enabled by natural language prompts that make the tools easy for all workers to use.

"I don't think that we have an excuse to not lead in this space," she adds, "because we have the technology, we have the expertise, we have the recipe to make something incredible."

Wall Street Journal
29-01-2025
- Politics
- Wall Street Journal
Hackers From China, Iran and Elsewhere Are Using Google's Gemini to Sharpen Their Attacks
Hackers linked to China, Iran and other foreign governments are using new AI technology to bolster their cyberattacks against U.S. and global targets, according to U.S. officials and new security research. In the past year, dozens of hacking groups in more than 20 countries turned to Google's Gemini chatbot to assist with malicious code writing, hunts for publicly known cyber vulnerabilities and research into organizations to target for attack, among other tasks, Google's cyber-threat experts said.

While Western officials and security experts have warned for years about the potential malicious uses of AI, the findings released Wednesday from Google are some of the first to shed light on how exactly foreign adversaries are leveraging generative AI to boost their hacking prowess. This week, the China-built AI platform DeepSeek upended international assumptions about how far along Beijing might be in the AI arms race, creating global uncertainty about a technology that could revolutionize work, diplomacy and warfare.

Groups with known ties to China, Iran, Russia and North Korea all used Gemini to support hacking activity, the Google report said. They appeared to treat the platform more as a research assistant than a strategic asset, relying on it for tasks intended to boost productivity rather than to develop fearsome new hacking techniques. All four countries have generally denied U.S. hacking allegations.

"AI is not yet a panacea for threat actors and may actually be a far more important tool for defenders," said Sandra Joyce, vice president of threat intelligence at Google. "The real impact here is they are gaining some efficiency. They can operate faster and scale up."

Current and former U.S.
officials said they think foreign hacking units are turning to other chatbots as well. Last year, OpenAI also revealed some information about five foreign hacking groups using ChatGPT and said it had disabled the accounts associated with them. That research likewise found that cyberattackers weren't using ChatGPT for generating significant or novel cyberattacks.

A Google spokeswoman said the company terminated accounts linked to malicious activity outlined in its report but declined to disclose how many accounts in total were disrupted. The company found that a range of sophisticated hacking groups—also known as advanced persistent threats—were using Gemini, but that Chinese and Iranian groups had relied on the tool the most. More than 20 China-linked groups and at least 10 Iran-linked groups were seen using Gemini, Google said, making them easily the most active countries seeking to use the chatbot.

Iranian groups, which exhibited the heaviest overall use, pursued an array of goals on Gemini, including research into defense organizations to target with hacking attempts and generation of content in English, Hebrew and Farsi to be used in phishing campaigns. China was the next most frequent user of Gemini, the report said, with hacking groups linked to Beijing also conducting reconnaissance on targets in addition to attempting to learn more about specific hacking tactics, including how to exfiltrate data, evade detection and escalate privileges once inside a network.

In North Korea, hackers used Gemini to draft cover letters for research jobs, likely in support of the regime's efforts to have its spies hired for remote technology jobs to earn what U.S. officials have said is hundreds of millions in revenue to support its nuclear weapons program. Russia, meanwhile, used the platform relatively sparingly and for mostly mundane coding-related tasks.

Laura Galante, director of the U.S.
Cyber Threat Intelligence Integration Center during the Biden administration, said the new details published by Google were generally consistent with the findings of U.S. intelligence agencies on how adversaries are seeking to weaponize generative AI. "They're using Gemini to get a leg up in crafting their victim lists and probably improving the effectiveness of the human-directed parts of their operations," Galante said. She added that large-language models didn't appear to be "a game changer in terms of the scale of compromises or enabling new tactics or novel operations—but these are still the relatively early days."

Despite modest uses of generative AI so far, both the U.S. and China see AI technologies as pivotal to future supremacy. The possibility that China's DeepSeek is rivaling top-tier AI models for a fraction of the cost sent shock waves through Silicon Valley and Washington this week. Unlike Google, DeepSeek's creators have released their product's source code, making its misuse harder to track and virtually impossible to prohibit. DeepSeek's low cost could have significant national-security implications, too. For years, senior U.S. intelligence officials have warned that China and other adversaries are racing to develop and deploy AI systems to support—and in some cases supplant—their existing military and intelligence objectives.

In a blog post Wednesday, Kent Walker, Google's chief legal officer, said continued export controls on U.S. chips were needed and urged the U.S. government—including the military and spy agencies—to update the procurement process to make it easier to adopt AI services. "America holds the lead in the AI race—but our advantage may not last," Walker said.