Latest news with #ProjectIre

Business Standard
2 days ago
Tech Wrap Aug 7: Samsung soundbars, Copilot Vision in moto ai, Instagram
Samsung soundbars launched. Motorola phones get Copilot Vision AI support. Instagram rolls out new features. GTA Online update. Microsoft's Project Ire. Google Gemini.

BS Tech, New Delhi

Samsung launches new soundbars with AI sound optimisation

Samsung has unveiled its 2025 lineup of soundbars in India, adding several new models such as the premium HW-Q990F and convertible HW-QS700F. These soundbars are equipped with AI sound optimisation to fine-tune audio in real time, enhanced bass control to avoid distortion, an active voice amplifier suited for flexible setups, and a built-in gyro sensor that adjusts audio output based on placement.

Motorola has incorporated Microsoft's Copilot Vision AI into its 'moto ai' platform. The addition delivers a more advanced, camera-focused AI experience to selected Motorola devices in specific markets. Motorola says the move represents a stronger integration of Microsoft's Copilot tools and further solidifies the collaboration between the two firms.

Instagram is introducing a range of new tools to make the app more personal and social. These updates include a repost function for resharing posts, a map feature allowing optional location sharing, and a new 'Friends' tab inside the Reels section. The features are intended to boost content discovery and help users stay connected with friends and favourite creators in real time.

Rockstar Games, the American video game company, has rolled out a new update for GTA Online featuring the Community Race Series and Community Combat Series. The latest update offers exclusive rewards to highlighted creators. In addition, players will now find expanded tools for job creation, along with tips from Rockstar on how to get featured in the game.

Microsoft has presented a prototype AI tool named Project Ire, which can independently reverse-engineer software and detect threats like malware without human involvement. The tech giant revealed details of the project in a blog post, calling it a significant advancement in automating software analysis and threat detection through AI.

Google is positioning Gemini's newly launched guided learning mode as a supportive study tool for students rather than a means to simply obtain answers. The US tech company is promoting the idea that students should prioritise grasping core concepts and developing a deeper understanding of subjects instead of relying solely on quick solutions.

The forthcoming Pixel 10 series from Google is expected to introduce new AI-powered imaging features built around its Gemini model. According to Android Headlines, the lineup might include tools like "Camera Coach" within the Camera app and "Conversational Photo Editing" in Google Photos, which would allow image enhancement through text-based inputs.

OpenAI, the American artificial intelligence firm, is likely to announce its new model GPT-5 today. A teaser shared on X (formerly Twitter) on Wednesday stated: 'LIVE5TREAM THURSDAY 10AM PT,' with the letter 'S' swapped out for a '5', indicating a possible reference to GPT-5. The event is set to be livestreamed at 10:30 PM IST.

Google is said to be expanding its AI Mode feature to Android tablets, following its earlier launch on smartphones. As noted by 9To5Google, the feature appears in Google app version 16.30 beta and gives tablet users access to the same Gemini-driven AI search capabilities already available on mobile devices.

Business Standard
2 days ago
Project Ire: Know about Microsoft's AI agent to detect malicious software
Microsoft's Project Ire is an AI-powered agent that can reverse-engineer unknown software, analyse its behaviour, and autonomously classify it as malicious or benign, without human intervention.

New Delhi

Microsoft has unveiled a prototype AI agent called Project Ire that can autonomously reverse-engineer software and identify cybersecurity threats like malware, without any human input. The company shared details of this research project in a recent blog post, calling it a step forward in using AI to analyse and classify software more efficiently.

What is Microsoft's Project Ire?

Project Ire is a prototype developed by researchers from Microsoft Research, Microsoft Defender Research, and Microsoft Discovery & Quantum. It is designed to act like a digital analyst that can inspect unknown software, understand how it works, and determine whether it is harmful. The system is built on the same underlying framework as Microsoft's earlier Discovery platform. It uses large language models (LLMs) and a set of advanced tools that specialise in reverse engineering, the process of taking apart a software program to figure out what it does.

How does it work?

Microsoft said that its Defender products currently scan over a billion devices every month for threats. But when software looks suspicious, it often requires a security expert to investigate. That process is slow and difficult, and it leads to analyst burnout, especially since it involves combing through countless alerts and making judgment calls without clear right answers.

That is where Project Ire comes in. Unlike many other AI systems used in cybersecurity, it is not just reacting to known threats. It makes informed decisions based on complex signals, even when there is no obvious answer. For instance, some programmes might include reverse-engineering protection not because they are malicious, but simply to guard their intellectual property.

Project Ire attempts to solve this by working like a smart agent. It starts by scanning a file using automated tools that identify its type, structure, and anything unusual. Then it reconstructs how the software works internally, mapping out its functions and flow using tools like Ghidra and angr. From there, the AI model digs deeper. It calls on a variety of tools through an application programming interface (API) to inspect specific parts of the code, summarise key functions, and build a detailed 'chain of evidence' that explains every step it took to reach a conclusion. At the end of the process, the system generates a final report and classifies the file as either benign or malicious. It can even cross-check its findings against expert-validated data to reduce errors.

How will Microsoft use Project Ire?

In tests using real-world malware data from Microsoft Defender, Project Ire was able to correctly identify many malicious files while keeping false alarms to a minimum: just four per cent false positives, according to Microsoft. Thanks to this strong performance, Microsoft says it will begin integrating the technology into its Defender platform under the name 'Binary Analyzer'. The goal is to scale the system to work quickly and accurately across all types of software, even those it has never seen before. Ultimately, Microsoft wants Project Ire to become capable of detecting brand-new malware directly from memory, at a large scale.
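To make the workflow described above more concrete, below is a minimal, hypothetical Python sketch of an agent-style pipeline that triages a file, reconstructs its behaviour, collects findings into a chain of evidence, and issues a verdict. All names and the toy decision rule are invented for illustration; this is not Microsoft's code, and a real system would drive tools such as Ghidra and angr where the stubs appear.

```python
# A minimal, hypothetical sketch of an agent-style malware triage pipeline.
# Every name here (Evidence, triage, reconstruct_control_flow,
# summarise_functions, classify) is invented for illustration; this is
# not Microsoft's implementation.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Evidence:
    step: str      # which analysis stage produced this entry
    finding: str   # human-readable summary of what was observed

@dataclass
class AnalysisReport:
    verdict: str                                   # "malicious" or "benign"
    chain_of_evidence: List[Evidence] = field(default_factory=list)

def triage(sample: bytes) -> Evidence:
    # Stage 1: identify file type, structure, and anything unusual (stubbed).
    kind = "PE executable" if sample[:2] == b"MZ" else "unknown"
    return Evidence("triage", f"file identified as {kind}")

def reconstruct_control_flow(sample: bytes) -> Evidence:
    # Stage 2: a real agent would drive decompilers such as Ghidra or
    # symbolic-execution tools such as angr here; this is a stub.
    return Evidence("control-flow", "recovered function graph and call flow (stub)")

def summarise_functions(sample: bytes) -> List[Evidence]:
    # Stage 3: call specialised analysis tools through an API and summarise
    # key functions; a canned observation stands in for real output.
    return [Evidence("function-summary",
                     "routine found that disables security logging (stub)")]

def classify(sample: bytes) -> AnalysisReport:
    # Stages 1-3 accumulate a chain of evidence; stage 4 turns it into a verdict.
    chain = [triage(sample), reconstruct_control_flow(sample), *summarise_functions(sample)]
    # Toy decision rule; the real system weighs the evidence and can
    # cross-check expert-validated data before committing to a classification.
    suspicious = any("disables security" in e.finding for e in chain)
    return AnalysisReport("malicious" if suspicious else "benign", chain)

if __name__ == "__main__":
    report = classify(b"MZ\x90\x00")   # tiny stand-in for a real binary
    print(report.verdict)
    for entry in report.chain_of_evidence:
        print(f"[{entry.step}] {entry.finding}")
```

The point the sketch tries to show is that the verdict is never produced in isolation: every stage appends to the chain of evidence, so an analyst can audit how the classification was reached.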


News18
3 days ago
Microsoft's New AI System Can Alert Users About Malware Threats And Fight Them
Microsoft's new AI system has been built to fight dangerous malware threats and block them before they can cause massive damage.

Microsoft is the latest tech giant to deploy AI to fight the malware threat and protect its users. The company's new AI system, called Project Ire, aims to change how cybersecurity attacks are handled and to stop them before they even make a dent. Hackers are relying on AI to evolve their modes of attack, so it was inevitable that the likes of Google and Microsoft would put AI to work to fight back.

Microsoft's AI Weapon: How It Fights The Malware

Microsoft has developed the AI system to understand the nature of a piece of malware and trace its origin so the threat can be neutralised at its core. In one test, Project Ire produced a threat report accurate enough to block an advanced malware sample. Microsoft says the tool's malware detections were 98 per cent accurate, with a 2 per cent false positive rate. Using AI to build new tools that generate videos and images is all very well, but a more consequential purpose of AI is to strengthen existing defence mechanisms and future-proof them against evolved attacks in which AI controls most of the actions.

Tackle The Next-Gen Threats

Using AI to fight AI seems to be the right focus here, and Microsoft is clearly working to ensure its internal security tools are equipped and up to date with the latest attack trends. These updates from Microsoft come days after Google claimed to have thwarted a major cyberattack with the help of its own comparable AI tools. Google has built a new AI agent that it claims stopped a major cyberattack by analysing its footprint and blocking its path before it could cause destruction. Google CEO Sundar Pichai said the AI agent, called Big Sleep, was deployed to detect and stop such cyberattacks from affecting billions of users, and it appears to have recently done just that.

AI agents are making lives easier for people, allowing them to delegate tasks and work effectively. It seems these agents are now doing much more than that. The new agent was developed by Google's DeepMind and Project Zero teams, and it has been trained to look for active security issues that are unknown to the industry and to researchers. We will probably need even more advanced AI systems to tackle these next-gen threats, and the tech giants appear to be responding.

First Published: August 07, 2025, 14:25 IST


Time of India
3 days ago
Microsoft creates AI-powered 'self-defending software': What is it and how it works
Microsoft has developed an advanced AI system that can reverse-engineer and identify malicious software without any human help. Named Project Ire, this prototype system automatically dissects software files to figure out what they do and whether they are dangerous, a process typically performed by human security experts, the company said.

How Microsoft's Project Ire works

Project Ire's approach is a departure from existing security tools that scan for known threats. According to Microsoft, the AI 'automates what is considered the gold standard in malware classification: fully reverse engineering a software file without any clues about its origin or purpose.' This deep, autonomous analysis is crucial as both security defenders and hackers increasingly use AI to their advantage, according to the company.

The company said that in a recent test, Project Ire demonstrated its accuracy by creating a threat report that was strong enough to automatically block an advanced piece of malware. According to Microsoft, early tests show the AI is highly accurate: when the system identified a file as malicious, it was correct 98% of the time, with a false positive rate of just 2%.

Part of a Larger Security Initiative

The development of Project Ire is part of a broader company-wide push on security. Following a series of high-profile vulnerabilities, Microsoft has made security its top priority through its Secure Future Initiative. Like Google's 'Big Sleep' AI, which focuses on discovering vulnerabilities in code, Project Ire is part of a new wave of AI systems designed to address cybersecurity threats in novel ways. The system, developed by teams across Microsoft Research, Microsoft Defender, and Microsoft Discovery & Quantum, will now be used internally to help speed up threat detection across Microsoft's own security tools.


Hans India
3 days ago
Microsoft's AI Agent ‘Project Ire' Can Independently Detect and Block Malware with High Accuracy
In a significant leap toward AI-driven cybersecurity, Microsoft has introduced Project Ire, a powerful artificial intelligence agent capable of independently detecting and blocking malware. Designed to function with minimal human oversight, the tool leverages advanced reverse-engineering techniques to inspect software, assess its intent, and determine its threat level, all without relying on prior knowledge of the codebase.

The innovation comes at a time when security teams are grappling with alert fatigue and an overwhelming volume of threats. 'This kind of work has traditionally been done manually by expert analysts, which can be slow and exhausting,' Microsoft stated in its official blog post. By removing much of the manual load, Project Ire promises both speed and scalability in enterprise threat detection.

Unlike conventional AI security tools that often struggle with ambiguity in malware traits, Project Ire approaches the challenge with a distinctive methodology. Microsoft has equipped the agent with the ability to build a detailed 'chain of evidence', a step-by-step record of its decision-making process. This audit trail allows cybersecurity professionals to verify its conclusions, enhancing both transparency and trust in automated systems.

The agent starts by identifying a file's type and structure, then reconstructs its control flow using decompilation tools like Ghidra and symbolic execution frameworks such as angr. It integrates various analytical tools via API to summarize the function of each code block, gradually building the chain of logic that supports its final verdict.

In terms of performance, the results are compelling. During internal testing, Project Ire was tasked with analyzing a set of Windows drivers containing both safe and malicious files. The AI accurately classified 90% of them, with a precision score of 0.98 and a recall of 0.83. Only 2% of safe files were mistakenly flagged, a relatively low false positive rate in the cybersecurity domain. Microsoft then challenged the AI with a tougher dataset of nearly 4,000 complex and previously unreviewed software files, of the kind typically reserved for manual inspection. Even in this scenario, Project Ire maintained a precision score of 0.89 and limited false positives to just 4%.

A standout achievement came when Project Ire became the first reverse engineer at Microsoft, human or AI, to compile sufficient evidence to warrant the autonomous blocking of an advanced persistent threat (APT) malware sample. That malware has since been neutralized by Microsoft Defender.

The project is a collaborative effort involving Microsoft Research, Microsoft Defender Research, and Microsoft Discovery & Quantum. As cyber threats become more sophisticated and persistent, tools like Project Ire are expected to become essential components of modern digital defense frameworks, offering faster, more consistent, and less labor-intensive threat mitigation. With Project Ire, Microsoft is not just enhancing its security toolkit; it is redefining what AI can accomplish in malware defense.
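As a quick aid to reading the figures above, the short Python sketch below computes precision, recall, accuracy, and false positive rate from a confusion matrix. The counts are invented for illustration only and are not Microsoft's test data; they are simply chosen so that the resulting numbers land near the ones quoted: precision around 0.98, recall around 0.83, roughly 90% accuracy, and a roughly 2% false positive rate.

```python
# Illustrative only: invented counts chosen so the derived metrics land near
# the figures quoted in the article. These are NOT Microsoft's test data.

tp = 415   # malicious files correctly flagged as malicious
fn = 85    # malicious files missed (classified benign)
fp = 8     # benign files wrongly flagged as malicious
tn = 492   # benign files correctly classified as benign

precision = tp / (tp + fp)                   # of everything flagged, how much was truly malicious
recall = tp / (tp + fn)                      # of all malicious files, how many were caught
accuracy = (tp + tn) / (tp + tn + fp + fn)   # share of all files classified correctly
false_positive_rate = fp / (fp + tn)         # share of benign files wrongly flagged

print(f"precision           = {precision:.2f}")            # ~0.98
print(f"recall              = {recall:.2f}")               # ~0.83
print(f"accuracy            = {accuracy:.2f}")             # ~0.91
print(f"false positive rate = {false_positive_rate:.3f}")  # ~0.016 (~2%)
```

The distinction matters because "correct 98% of the time when it flags a file" (precision) and "only 2% of safe files wrongly flagged" (false positive rate) measure different things, which is why both figures can be reported for the same evaluation.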