Latest news with #Jakkal


Time of India
6 hours ago
- Politics
- Time of India
'Midnight Blizzard', 'Cozy Bear' and more... How Microsoft, Google and other tech companies plan to untangle weird hacker nicknames
Microsoft, Google, CrowdStrike and Palo Alto Networks have announced that they will create a public glossary for state-sponsored hacking groups and cybercriminals. The goal is to reduce the confusion caused by the numerous unofficial nicknames for these entities. Microsoft and CrowdStrike expressed hopes of involving other industry partners and the US government in this effort to identify threat actors. "We do believe this will accelerate our collective response and collective defense against these threat actors," stated Vasu Jakkal, corporate vice president at Microsoft Security.

Why it matters for the US government and researchers

Cybersecurity companies have long assigned coded names to hacking groups because attributing digital attacks can be difficult, and researchers need a way to track their adversaries. These names range from the functional, like "APT1" (Mandiant) or "TA453" (Proofpoint), to more colorful aliases such as "Earth Lamia" (Trend Micro) or "Equation Group" (Kaspersky). CrowdStrike's evocative names, like "Cozy Bear" for Russian hackers and "Kryptonite Panda" for Chinese groups, have been particularly popular, leading others to adopt similar styles. For example, Secureworks (now owned by Sophos) began using "Iron Twilight" in 2016 for the Russian hackers previously known as "TG-4127". Microsoft also recently changed its naming convention from element-themed names like "Rubidium" to weather-themed ones such as "Lemon Sandstorm" or "Sangria Tempest".

"But the same actor that Microsoft refers to as Midnight Blizzard might be referred to as Cozy Bear, APT29, or UNC2452 by another vendor. Our mutual customers are always looking for clarity. Aligning the known commonalities among these actor names directly with peers helps to provide greater clarity and gives defenders a clearer path to action," Jakkal said.

The proliferation of these unique aliases has created overload. A 2016 US government report on hacking attempts against the election caused confusion by using 48 different nicknames for various Russian hacking groups and malicious programs, including "Sofacy," "Pawn Storm," and "Tsar Team." Michael Sikorski, CTO for Palo Alto's threat intelligence unit, called the initiative a "game-changer," noting, "Disparate naming conventions for the same threat actors create confusion at the exact moment defenders need clarity." Adam Meyers, CrowdStrike's senior vice president of Counter Adversary Operations, highlighted an early success: he reported that the initiative has already helped his analysts link a group Microsoft named "Salt Typhoon" with CrowdStrike's "Operator Panda."
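The glossary described above amounts to a shared alias table: each vendor keeps its own name, but the names are linked to a single actor. Below is a minimal sketch of what such a mapping could look like in Python, using only the alias groupings reported in this article (Midnight Blizzard/Cozy Bear/APT29/UNC2452 and Salt Typhoon/Operator Panda); the data structure and function names are illustrative assumptions, not the initiative's actual format.

```python
# Minimal sketch of a cross-vendor threat-actor alias table.
# Only the alias groupings reported in the article are used; the
# structure itself is an assumption, not the real glossary format.

GLOSSARY: dict[str, set[str]] = {
    "midnight-blizzard": {"Midnight Blizzard", "Cozy Bear", "APT29", "UNC2452"},
    "salt-typhoon": {"Salt Typhoon", "Operator Panda"},
}

# Invert the table so any vendor alias resolves to one shared entry.
ALIAS_INDEX = {
    alias.lower(): actor
    for actor, aliases in GLOSSARY.items()
    for alias in aliases
}

def resolve(alias: str) -> set[str]:
    """Return every known alias for the actor behind the given name."""
    actor = ALIAS_INDEX.get(alias.lower())
    return GLOSSARY[actor] if actor else set()

print(resolve("Cozy Bear"))
# {'Midnight Blizzard', 'Cozy Bear', 'APT29', 'UNC2452'} (set order may vary)
```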
Yahoo
16-04-2025
- Business
- Yahoo
AI is making online shopping scams harder to spot
Online scams are nothing new, but artificial intelligence is making it easier than ever to dupe people. What used to take a scammer days to create now takes only minutes. A new report from Microsoft highlights the scale of the problem: the company says it took down almost 500 malicious web domains last year and stopped approximately 1.6 million bot signup attempts every hour.

"Last year we were tracking 300 unique nation-state and financial crime groups. This year, we're tracking 1,500," Vasu Jakkal, corporate vice president of Microsoft Security, told CBS News Confirmed.

The company attributes much of the rise in this type of crime to generative AI, which has streamlined the process of making a website. "You can just buy a kit off the web," Jakkal explained. "It's an assembly line. Someone builds the malware. Someone builds the infrastructure. Someone hosts the website."

Jakkal explained that AI isn't just helping set up fraudulent sites; it also helps make them more believable. She said scammers use generative AI to create product descriptions, images, reviews and even influencer videos as part of a social engineering strategy to dupe shoppers into believing they're scrolling through a legitimate business, when in reality they're being lured into a digital trap.

Another tactic outlined in Microsoft's report is domain impersonation. Jakkal said scammers make a near-perfect copy of a legitimate website's address, sometimes changing just a single letter, to trick consumers into giving up money and information.

As well as raising awareness of these scams, the company is introducing new tools to help safeguard its customers. Microsoft's web browser, Microsoft Edge, now features typo and domain impersonation protection, which prompts users to check the website's URL if the program suspects there may be a misspelling. The browser also uses machine learning to block potentially malicious sites before consumers reach the homepage.

"We're trying to combat at every place where we see there's a potential of someone being vulnerable to a fraud attempt," Jakkal said. The idea is to put checks and balances in place so people are able to pause and reevaluate, she said.

Scott Shackelford, executive director at the Center for Applied Cybersecurity Research at Indiana University, commended Microsoft for being one of the most proactive companies in fraud prevention, but said more action needed to come from both the private and public sectors. "Having the backing of big tech as part of this kind of public-private partnership would be a really great way to show that they do take it seriously."

No matter where you're browsing, CBS News Confirmed compiled some tips to spot sham sites.

Tips to stay safe while shopping online

- Be wary of impulse buying: Scammers will try to use pressure tactics like "limited-time" deals and countdown timers to get you to shop fast. Take a moment to pause and make sure the site you're on is the real deal.
- Check for typos in the URL: Some scam sites will try to mimic real companies, but since they don't own the domain, it's common to see a URL that's just slightly off from what you'd expect.
- Don't rely on social media links: If you're going from an app to a shopping site, close out of the page that opens automatically and try to find it independently on a web browser.
- Check the reviews: Fraudulent sites will use fake reviews to make the products seem real. Watch out for similar phrases or wording in the reviews, or an overwhelming number of five-star ratings.
- Use a credit card: This allows you to dispute the payment or claim fraud if it turns out the deal really was too good to be true.
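Microsoft hasn't published how Edge's typo protection works. As a rough illustration of the single-letter trick Jakkal describes, the sketch below flags any domain within one edit of a known-good domain; the domain list, threshold and function names are assumptions for the example, not Edge's actual algorithm.

```python
# Illustrative sketch of single-letter domain-impersonation detection.
# KNOWN_GOOD, the threshold and all names are assumptions for the
# example; this is not Microsoft Edge's actual implementation.

KNOWN_GOOD = {"microsoft.com", "amazon.com", "paypal.com"}

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via the classic dynamic-programming table."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(
                prev[j] + 1,                # delete ca
                curr[j - 1] + 1,            # insert cb
                prev[j - 1] + (ca != cb),   # substitute ca -> cb
            ))
        prev = curr
    return prev[-1]

def impersonation_target(domain: str) -> str | None:
    """Return the legitimate domain this one sits a single edit away from."""
    for good in KNOWN_GOOD:
        if domain != good and edit_distance(domain, good) == 1:
            return good
    return None

print(impersonation_target("micros0ft.com"))  # microsoft.com
print(impersonation_target("example.org"))    # None
```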


Sky News
27-03-2025
- Business
- Sky News
Hacking by criminals and spies has reached 'unprecedented complexity', Microsoft says
A surge in hacking attempts by criminals, fraudsters and spy agencies has reached a level of "unprecedented complexity" that only artificial intelligence will be able to combat, according to Microsoft.

"Last year we tracked 30 billion phishing emails," says Vasu Jakkal, vice president of security at the US-based tech giant. "There's no way any human can keep up with the volume."

In response, the company is launching 11 AI cybersecurity "agents" tasked with identifying and sifting through suspicious emails, blocking hacking attempts and gathering intelligence on where attacks may originate. With around 70% of the world's computers running Windows software and many businesses relying on its cloud computing infrastructure, Microsoft has long been a prime target for hackers.

Unlike an AI assistant that might answer a user's query or book a hair appointment, an AI agent is a computer programme that autonomously interacts with its environment to carry out tasks without direct input from a user.

In recent years, there has been a boom in marketplaces on the dark web offering ready-made malware programmes for carrying out phishing attacks, as well as the potential for AI to write new malware code and automate attacks. This has led to what Ms Jakkal describes as a "gig economy" for cybercriminals worth $9.2trn (£7.1trn). She says Microsoft has seen a five-fold increase in the number of organised hacking groups, whether state-backed or criminal. "We are facing unprecedented complexity when it comes to the threat landscape," says Ms Jakkal.

The AI agents, some created by Microsoft and others made by external partners, will be incorporated into Microsoft's portfolio of AI tools called Copilot and will primarily serve its customers' IT and cybersecurity teams rather than individual Windows users. Because an AI can spot patterns in data and screen inboxes for dodgy-looking emails far faster than a human IT manager, specialist cybersecurity firms, and now Microsoft, have been launching "agentic" AI models to keep increasingly vulnerable users safe online.

But others in the field are deeply concerned about unleashing autonomous AI agents across a user's computer or network. In an interview with Sky News last month, Meredith Whittaker, CEO of messaging app Signal, said: "Whether you call it an agent, whether you call it a bot, whether you call it something else, it can only know what's in the data it has access to, which means there is a hunger for your private data and there's a real temptation to do privacy-invading forms of AI."

Microsoft says its release of multiple cybersecurity agents ensures each AI has a very defined role, only allowing it access to data that is relevant to its task. It also applies what it calls a "zero trust framework" to its AI tools, which requires the company to constantly assess whether agents are playing by the rules they were programmed with.

A roll-out of new AI cybersecurity software by a company as dominant as Microsoft will be closely watched. Last July, a tiny error in the software code of an application made by cybersecurity firm CrowdStrike instantly crashed around 8.5 million computers worldwide running Microsoft Windows, leaving users unable to restart their machines. The incident, described as the largest outage in the history of computing, affected airports, hospitals, rail networks and thousands of businesses including Sky News, some of which took days to recover.
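Microsoft hasn't detailed how the "very defined role" scoping above is enforced. As a minimal sketch of the idea, the snippet below gives a hypothetical agent access to exactly one data source and denies everything else; the agent name, source names and enforcement mechanism are all assumptions, not Microsoft's implementation.

```python
# Minimal sketch of per-agent data scoping in the spirit of the
# "defined role" design described above. All names are hypothetical.

from dataclasses import dataclass

@dataclass(frozen=True)
class AgentScope:
    name: str
    allowed_sources: frozenset

PHISHING_TRIAGE = AgentScope(
    name="phishing-triage",
    allowed_sources=frozenset({"mail.quarantine"}),
)

def read_source(agent: AgentScope, source: str) -> str:
    # Zero-trust-style check: deny anything outside the declared scope.
    if source not in agent.allowed_sources:
        raise PermissionError(f"{agent.name} may not read {source}")
    return f"<records from {source}>"

print(read_source(PHISHING_TRIAGE, "mail.quarantine"))  # allowed
# read_source(PHISHING_TRIAGE, "hr.records")  # raises PermissionError
```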


Axios
24-03-2025
- Business
- Axios
Microsoft injects AI agents into security tools
Microsoft said Monday it will soon roll out 11 new AI agents for its security-focused Copilot aimed at offloading some of the most repetitive tasks that bog down cybersecurity teams.

Why it matters: Microsoft is the latest major vendor to embed autonomous AI agents directly into its security suite in an effort to reduce burnout for cyber pros and boost efficiency through AI-powered automation.

The big picture: Security professionals have long hoped that AI could help close the cybersecurity workforce gap and ease analyst burnout.
- The U.S. only has enough cyber professionals to fill 83% of the available cyber jobs, according to federal data.
- Security teams spend about three hours a day just responding to alerts, with some teams seeing more than 4,400 alerts daily, according to research from Vectra AI.
- While many legacy cybersecurity vendors have released AI copilots or assistants, only a small group have rolled out agents that can take autonomous action.

Zoom in: Starting next month, Microsoft will make six of its own new agents and five agents from partner companies available for preview in Security Copilot, which is already integrated into all of Microsoft's security tools.
- Each agent focuses on a different task: one specifically combs through potential phishing emails, while another can craft notification letters to send to different regulators after a data breach.
- Customers can configure each agent's level of access and autonomy, including whether the agent acts under its own identity (with a unique username and password) or as an extension of a human account.
- Each agent also has a map of its thinking so human users can review its decisions, and even override or correct its selections.

Case in point: If an agent wrongly flags a training email as phishing, the security team can label it a false positive and instruct the agent not to flag messages from that vendor again.

Between the lines: Microsoft says the new agents are a direct response to customer feedback.
- Agents are "an inflection point for us," Vasu Jakkal, corporate VP of security at Microsoft, told Axios at a media preview event on Thursday. "Copilot was more like question-answer, and (customers) always asked us 'Well, we would like it to one-click and get that done.'"
- Microsoft first made Security Copilot widely available last year, and Jakkal said customers quickly began asking for more autonomous functionality.
- Partners rolling out agents in Copilot include OneTrust, Aviatrix, BlueVoyant, Tanium and Fletch.

What they're saying: "There's just opportunity everywhere," Dorothy Li, corporate VP of Microsoft Security Copilot, told Axios. "These are the [tasks] that had the highest amount of pain, most volume and where agents can make the most impact today and that's where we chose to start."
- Microsoft also anticipates that it will roll out more security agents in the near future, Li added.

The intrigue: Microsoft also relied on an internal generative AI red team to pressure test the new agents for potential security risks. The red team worked closely with product teams throughout the entire development lifecycle, said Victoria Westerhoff, director of AI safety and security red teaming at Microsoft.
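The "case in point" above describes a human-in-the-loop override. Below is a toy sketch of that feedback loop, with a placeholder keyword heuristic standing in for the agent's real classifier; every name and rule here is hypothetical and is not Security Copilot's actual API.

```python
# Toy sketch of the analyst-override loop from the "case in point":
# a false-positive label suppresses future flags from that sender.
# The heuristic and all names are hypothetical assumptions.

SUPPRESSED_SENDERS: set = set()

def triage_email(sender: str, subject: str) -> str:
    if sender in SUPPRESSED_SENDERS:
        return "allow"
    # Placeholder heuristic standing in for the agent's real classifier.
    if "verify your password" in subject.lower():
        return "flag-as-phishing"
    return "allow"

def mark_false_positive(sender: str) -> None:
    """Analyst override: stop flagging mail from this sender."""
    SUPPRESSED_SENDERS.add(sender)

msg = ("training@vendor.example", "Drill: verify your password")
print(triage_email(*msg))        # flag-as-phishing
mark_false_positive(msg[0])
print(triage_email(*msg))        # allow
```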