Microsoft says its new AI Agent can spot and block malware on its own


India Today · 4 days ago
Microsoft has unveiled a new artificial intelligence system that can independently detect and block malware without any human assistance. Called Project Ire, the prototype agent is designed to reverse-engineer software files and determine whether they are safe or harmful, marking a major step forward in cybersecurity.

According to Microsoft's blog post, Project Ire can fully analyse a software file even with no prior information about the file's source or purpose. It uses decompilers and other advanced tools to scan the code, understand its behaviour, and decide whether it poses a risk. The tool is the result of a joint effort between Microsoft Research, Microsoft Defender Research, and Microsoft Discovery & Quantum.

'This kind of work has traditionally been done manually by expert analysts, which can be slow and exhausting,' Microsoft explained. Security researchers often suffer from alert fatigue and burnout, making it hard to maintain consistency across large-scale malware detection.

Project Ire stands out from other AI security tools because malware classification is particularly difficult to automate. There is no clear-cut way for a machine to verify its decisions, and many traits of malicious software can also appear in legitimate programs. This makes it hard to train a system that is both accurate and reliable.
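Microsoft has not published Project Ire's code, but the workflow it describes (triage the file, reconstruct its behaviour, accumulate findings, then decide) can be sketched in miniature. Every name and detail below is hypothetical; real analysis would call actual decompilers rather than the stubs used here:

```python
from dataclasses import dataclass, field

@dataclass
class EvidenceChain:
    """Step-by-step trace of how a verdict was reached (hypothetical)."""
    steps: list = field(default_factory=list)

    def record(self, tool: str, finding: str):
        self.steps.append((tool, finding))

def analyse(file_bytes: bytes) -> tuple[str, EvidenceChain]:
    chain = EvidenceChain()
    # 1. Triage: identify file type and structure.
    kind = "PE driver" if file_bytes[:2] == b"MZ" else "unknown"
    chain.record("triage", f"file identified as {kind}")
    # 2. Reconstruct control flow (Ghidra/angr in the real system;
    #    stubbed out here with placeholder function names).
    functions = ["DriverEntry", "HideProcess"]
    chain.record("control-flow", f"{len(functions)} functions recovered")
    # 3. Summarise suspicious functions, adding each result to the chain.
    suspicious = [f for f in functions if f.startswith("Hide")]
    for f in suspicious:
        chain.record("summarise", f"{f} manipulates kernel structures")
    # 4. Verdict follows from the accumulated evidence, and the chain
    #    lets a human analyst audit every step afterwards.
    verdict = "malicious" if suspicious else "benign"
    return verdict, chain

verdict, chain = analyse(b"MZ\x90\x00")
print(verdict)  # malicious
for tool, finding in chain.steps:
    print(f"  [{tool}] {finding}")
```

The point of the structure is the audit trail: the verdict is never returned without the list of recorded findings that produced it, which is what lets experts verify the agent's conclusion after the fact.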
To tackle this, Microsoft equipped Project Ire with a system that builds what it calls a 'chain of evidence': a step-by-step trace showing how the agent reached its conclusion. This audit trail allows human experts to later verify its findings and improves accountability in case of errors.

Project Ire's analysis begins by triaging the file's type and structure, then reconstructing its control flow using tools like Ghidra and angr. It can then call different tools through an API to summarise each code function, adding the results to its evidence chain.

Microsoft tested the agent in two key evaluations. In one trial, it analysed a dataset of Windows drivers, some malicious, others safe. The AI correctly identified 90 per cent of the files, with only 2 per cent of the safe files wrongly flagged as threats. This gave Project Ire a precision of 0.98 and a recall of 0.83.

In a tougher real-world test, Microsoft gave the AI nearly 4,000 complex files that had not yet been reviewed by any other automated systems. These files were meant for manual inspection by experts. Even under these conditions, Project Ire achieved a high precision of 0.89, with a false positive rate of just 4 per cent.

In fact, Project Ire was the first reverse engineer, human or machine, at Microsoft to produce a malware detection case strong enough to justify automatic blocking of an advanced persistent threat (APT) sample. That malware has now been neutralised by Microsoft Defender.
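The reported driver-trial figures hang together arithmetically. Microsoft has not published the raw confusion matrix, but illustrative counts (say, 100 malicious and 100 benign drivers) reproduce them:

```python
# Illustrative counts only; chosen to match the figures in the article,
# not taken from Microsoft's evaluation data.
tp = 83   # malicious files correctly flagged
fn = 17   # malicious files missed
fp = 2    # benign files wrongly flagged
tn = 98   # benign files correctly passed

precision = tp / (tp + fp)   # of everything flagged, how much was malware
recall    = tp / (tp + fn)   # of all malware, how much was caught
fpr       = fp / (fp + tn)   # share of safe files wrongly flagged

print(f"precision {precision:.2f}, recall {recall:.2f}, FPR {fpr:.0%}")
# precision 0.98, recall 0.83, FPR 2%
```

High precision with lower recall is the trade-off described in the article: when the agent does flag a file it is almost always right (which is what justifies automatic blocking), at the cost of missing some malware.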

Related Articles

Microsoft Edge Gets a Major AI Upgrade with New Copilot Mode

Time of India · 2 hours ago

The browser wars are heating up, but this time it's all about AI! Microsoft has introduced "Copilot Mode" for its Edge browser, an experimental feature that uses "agentic AI" to turn your browser into a proactive personal assistant. In this video, we break down what Copilot Mode is, how it works, and how it can help you compare products, summarize articles, and even navigate the web hands-free. We'll also cover the important details on privacy and how you can try this new feature for yourself. Read More

Do You Know Any AI Vegan? What Is It? Is It Even Possible? The Concept Explained

News18 · 4 hours ago

AI veganism is abstaining from using AI systems due to ethical, environmental, or wellness concerns, or avoiding harm to AI systems, especially if they might one day be sentient.

Even as the world goes gaga over artificial intelligence (AI) and how it could change the way the world and jobs function, some people are refraining from using it. They are the AI vegans. Why? What are their reasons? AI veganism, explained.

What is AI veganism? The term refers to applying the principles of veganism to AI: either abstaining from using AI systems due to ethical, environmental, or personal wellness concerns, or avoiding harming AI systems, especially if they might one day be sentient. Some view AI use as potentially exploitative, paralleling harm to animals via farming.

Is AI so bad that we need to abstain from it? Here's what studies show:

• A 2024 Pew study showed that a fourth of K-12 teachers in the US thought AI was doing more harm than good.
• A Harvard study from May found that generative AI, while increasing workers' productivity, diminished their motivation and increased their levels of boredom.
• A Microsoft Research study found that people who were more confident in using generative AI showed diminished critical thinking.
• Time reports growing concerns over a phenomenon labelled AI psychosis, where prolonged interaction with chatbots can trigger or worsen delusions in vulnerable individuals, especially those with pre-existing mental health conditions.
• A study by the Center for Countering Digital Hate found that ChatGPT frequently bypasses its safeguards, offering harmful, personalised advice, such as suicide notes or instructions for substance misuse, to simulated 13-year-old users in over half of monitored interactions.
• Research at MIT revealed that students using LLMs like ChatGPT to write essays demonstrated weaker brain connectivity, lower linguistic quality, and poorer retention compared with peers relying on their own thinking.
• A study from Anthropic and Truthful AI found that AI models can covertly transmit harmful behaviours to other AIs using hidden signals; these actions bypass human detection and challenge conventional safety methods.
• A global report chaired by Yoshua Bengio outlines key threats from general-purpose AI (job losses, terrorism facilitation, uncontrolled systems, and deepfake misuse) and calls for urgent policy attention.
• AI contributes substantially to global electricity and water use, and could add up to 5 million metric tons of e-waste by 2030, perhaps accounting for 12% of global e-waste volume.
• Studies estimate AI may demand 4.1–6.6 billion cubic metres of water annually by 2027, comparable to the UK's total usage, while exposing deeper inequities in AI's extraction and pollution impacts.
• A BMJ Global Health review argues that AI could inflict harm through increased manipulation and control, weaponisation, and labour obsolescence, and, at the extreme, pose existential risks if self-improving AGI develops unchecked.

What is the basis of the concept?

• Ethical concerns: Many AI models are trained on creative work (art, writing, music) without consent from the original creators. Critics argue this is intellectual theft or unpaid labour.
• Potential future AI sentience: Some fear that sentient AI might eventually emerge, and that using it today could normalise treating it as a tool rather than a being with rights.
• Environmental impact: AI systems, especially large language models, consume massive resources, contributing to carbon emissions and water scarcity.
• Cognitive and psychological health: Some believe overuse of AI weakens our ability to think, write, or create independently. The concern is about mental laziness, or 'outsourcing' thought.
• Digital overwhelm: AI makes everything faster and more accessible, sometimes too fast, leading to burnout, distraction, or dopamine addiction.
• Social and cultural disruption: AI threatens job markets, especially in creative fields, programming, and customer service.

Why might remaining an AI vegan be tough?

• AI is deeply embedded in many systems, from communication to healthcare, making total abstinence unrealistic for most.
• Current AI lacks consciousness, so overlaying moral concerns meant for animals onto machines may distract from real human and animal rights issues.
• Potential overreach: Prioritising hypothetical sentient AI ethics could divert attention from pressing societal challenges.

Location: New Delhi, India. First Published: August 10, 2025, 18:08 IST

Sam Altman on Elon Musk: All day he does is tweeting, 'how much OpenAI sucks, our model is bad and ...

Time of India · 5 hours ago

OpenAI CEO Sam Altman has bluntly responded to recent attacks from Tesla CEO Elon Musk, saying he doesn't spend much time thinking about the xAI founder. Speaking in an interview with CNBC's Squawk Box, Altman dismissed Musk's repeated criticisms of OpenAI and its newly launched GPT-5.

OpenAI recently unveiled its latest AI model, GPT-5. The company claims the model offers advancements in accuracy, speed, reasoning, and maths capabilities. After the launch of GPT-5, Microsoft CEO Satya Nadella announced full integration of GPT-5 across the Microsoft ecosystem. Responding to Nadella's post, Musk said, 'OpenAI is going to eat Microsoft alive.'

Sam Altman responds to Elon Musk's comment about GPT-5

During the CNBC interview, Andrew Ross Sorkin asked Altman for his views on Musk's comment that 'OpenAI is going to eat Microsoft alive'. 'You knew I'd asked the question. I think you knew I'd asked the question. You probably saw Elon yesterday. He said, quote, OpenAI will eat Microsoft alive, and then Satya responding to that. What do you think when you read that?' asked Sorkin.

Replying to the question, Altman said, 'You know, I don't think about him that much.' Sorkin then said, 'I'm not sure what he means except to say that he thinks in the grand scheme of the partnership, that, ultimately, you'll have more power and more influence and more leverage over them than they'll have over you.'

To this, Altman said that Musk was just tweeting all day about how much OpenAI sucks and how its model is bad.
'I thought he was most -- I mean, I -- someone was -- I thought he was just like tweeting all day about how much like OpenAI sucks and our model is bad and, you know, not being a good company and all of that. So, I don't know how you square those two things,' said Altman.

The remarks come amid escalating tensions between the two former collaborators, who co-founded OpenAI in 2015 before parting ways over disagreements about the company's direction. Musk has since launched his own AI venture, xAI, and recently claimed that OpenAI would 'eat Microsoft alive' following the tech giant's integration of GPT-5 across its platforms.

GPT-5 launched, free for all

OpenAI says GPT-5 is the company's 'best model yet for coding and agentic tasks.' The model comes in three sizes (gpt-5, gpt-5-mini, and gpt-5-nano) so developers can balance performance, cost, and speed. In the API, GPT-5 is the reasoning model that powers ChatGPT's top performance. A separate non-reasoning version, called gpt-5-chat-latest, will also be available.

Sam Altman said GPT-5 is a major leap from GPT-4 and a 'pretty significant step' toward Artificial General Intelligence (AGI). 'GPT-5 is really the first time that I think one of our mainline models has felt like you can ask a legitimate expert, like a PhD-level expert, anything... We wanted to make it available in our free tier for the first time,' he said.
