Latest news with #MicrosoftResearch


News18
2 days ago
- News18
Do You Know Any AI Vegan? What Is It? Is It Even Possible? The Concept Explained
AI veganism is abstaining from using AI systems due to ethical, environmental, or wellness concerns, or avoiding harm to AI systems, especially if they might one day be sentient. Even as the world goes gaga over artificial intelligence (AI) and how it could change the way the world and jobs function, there are some who refrain from using it. They are the AI vegans. Why do they do it? What are their reasons? AI veganism explained.

What is AI veganism? The term refers to applying the principles of veganism to AI, either by abstaining from using AI systems due to ethical, environmental, or personal wellness concerns, or by avoiding harming AI systems, especially if they might one day be sentient. Some view AI use as potentially exploitative, paralleling the harm done to animals through farming.

Is AI so bad that we need to abstain from it? Here's what studies show:
- A 2024 Pew study showed that a fourth of K-12 teachers in the US thought AI was doing more harm than good.
- A Harvard study from May found that generative AI, while increasing workers' productivity, diminished their motivation and increased their levels of boredom.
- A Microsoft Research study found that people who were more confident in using generative AI showed diminished critical thinking.
- Time reports growing concern over a phenomenon labelled AI psychosis, where prolonged interaction with chatbots can trigger or worsen delusions in vulnerable individuals, especially those with preexisting mental health conditions.
- A study by the Center for Countering Digital Hate found that ChatGPT frequently bypasses its safeguards, offering harmful, personalised advice, such as suicide notes or instructions for substance misuse, to simulated 13-year-old users in over half of monitored interactions.
- Research at MIT revealed that students using LLMs like ChatGPT to write essays demonstrated weaker brain connectivity, lower linguistic quality, and poorer retention compared to peers relying on their own thinking.
- A study from Anthropic and Truthful AI found that AI models can covertly transmit harmful behaviours to other AIs using hidden signals; these actions bypass human detection and challenge conventional safety methods.
- A global report chaired by Yoshua Bengio outlines key threats from general-purpose AI, including job losses, terrorism facilitation, uncontrolled systems, and deepfake misuse, and calls for urgent policy attention.
- AI contributes substantially to global electricity and water use, and could add up to 5 million metric tons of e-waste by 2030, perhaps accounting for 12% of global e-waste volume.
- Studies estimate AI may demand 4.1–6.6 billion cubic metres of water annually by 2027, comparable to the UK's total usage, while exposing deeper inequities in AI's extraction and pollution impacts.
- A BMJ Global Health review argues that AI could inflict harm through increased manipulation and control, weaponisation, and labour obsolescence, and, at the extreme, pose existential risks if self-improving AGI develops unchecked.

What is the basis of the concept?
- Ethical concerns: Many AI models are trained on creative work (art, writing, music) without consent from the original creators. Critics argue this is intellectual theft or unpaid labour.
- Potential future AI sentience: Some fear that sentient AI might eventually emerge, and that using it today could normalise treating it as a tool rather than a being with rights.
- Environmental impact: AI systems, especially large language models, consume massive resources, contributing to carbon emissions and water scarcity.
- Cognitive and psychological health: Some believe overuse of AI weakens our ability to think, write, or create independently. The concern is about mental laziness or 'outsourcing' thought.
- Digital overwhelm: AI makes everything faster and more accessible, sometimes too fast, leading to burnout, distraction, or dopamine addiction.
- Social and cultural disruption: AI threatens job markets, especially in creative fields, programming, and customer service.

Why might remaining an AI vegan be tough?
- AI is deeply embedded in many systems, from communication to healthcare, making total abstinence unrealistic for most.
- Current AI lacks consciousness, so overlaying moral concerns meant for animals onto machines may distract from real human and animal rights issues.
- Potential overreach: prioritising hypothetical sentient AI ethics could divert attention from pressing societal challenges.


Time of India
5 days ago
- Time of India
Principles over processors: How 'AI Veganism' fights AI's threat to human skills in an automated future
Understanding AI Veganism | Ethical Concerns | Environmental Impact | Concerns Over Human Capability

While many industries and tech enthusiasts promote artificial intelligence as the next major leap in technological progress, a segment of the population is actively resisting its adoption. This reluctance is not limited to concerns over job displacement or professional changes; it also stems from ethical, environmental, and personal reasons.

New technologies typically follow an adoption curve in which hesitant users eventually embrace them. However, research indicates that AI may not fit this pattern. Some individuals who might otherwise be early adopters are choosing to avoid AI entirely, leading experts to compare the trend to veganism, where abstention is based on enduring principles rather than temporary preferences.

AI veganism refers to the conscious choice to abstain from using AI tools and systems, much like how dietary vegans avoid animal-derived products. Studies suggest that this choice is not just a phase but a long-term stance, often rooted in values that are not easily altered over time. A key factor in AI avoidance is algorithmic aversion, the tendency for people to trust human judgment over algorithmic decision-making even when algorithms perform better. This scepticism extends into everyday areas, such as preferring human advice in dating or creative work.

One of the most prominent reasons behind AI veganism is opposition to how AI models are trained. Studies have shown that when people learn that content creators did not consent to their work being used for AI training, they are more likely to avoid such tools. This issue was at the forefront of the 2023 Writers Guild of America and SAG-AFTRA strikes, where unions demanded legal protections to prevent companies from using creative works without permission or payment. Many independent and freelance creators remain without such protections.

Another motivation for AI abstention mirrors the environmental arguments made by traditional vegans. AI's growing demand for computing power significantly increases the consumption of electricity and water, particularly for cooling data centers. Research has indicated that efficiency improvements in AI systems may not reduce total energy use due to the rebound effect, where better efficiency encourages greater overall consumption. Studies have also shown that increased awareness of AI's environmental footprint influences how people choose to engage with the technology.

Some people avoid AI out of fear it may negatively affect mental sharpness. A Microsoft Research study found that frequent users of generative AI displayed lower critical thinking skills. Similarly, a Cambridge University survey revealed that students sometimes rejected AI tools over worries about becoming overly dependent and intellectually lazy.

Just as veganism has created niche markets for plant-based products, AI veganism could encourage businesses to market themselves as AI-free. Privacy-focused companies such as DuckDuckGo and Mozilla have already shown that catering to specific user values can sustain a loyal customer base.

Business Standard
6 days ago
- Business Standard
Project Ire: Know about Microsoft's AI agent to detect malicious software
Microsoft's Project Ire is an AI-powered agent that can reverse engineer unknown software, analyse its behaviour, and autonomously classify it as malicious or benign, without human intervention.

New Delhi: Microsoft has unveiled a prototype AI agent called Project Ire that can autonomously reverse-engineer software and identify cybersecurity threats like malware, without any human input. The company shared details of this research project in a recent blog post, calling it a step forward in using AI to analyse and classify software more efficiently.

What is Microsoft's Project Ire?
Project Ire is a prototype developed by researchers from Microsoft Research, Microsoft Defender Research, and Microsoft Discovery & Quantum. It's designed to act like a digital analyst that can inspect unknown software, understand how it works, and determine if it's harmful or not. The system is built on the same underlying framework as Microsoft's earlier Discovery platform. It uses large language models (LLMs) and a set of advanced tools that specialise in reverse engineering, the process of taking apart a software program to figure out what it does.

How does it work?
Microsoft said that its Defender products currently scan over a billion devices every month for threats. But when software looks suspicious, it often requires a security expert to investigate. That process is slow, difficult, and prone to burnout, especially since it involves combing through countless alerts and making judgment calls without clear right answers. That's where Project Ire comes in. Unlike many other AI systems used in cybersecurity, this one is not just reacting to known threats. It makes informed decisions based on complex signals, even when there is no obvious answer. For instance, some programmes might include reverse engineering protection not because they're malicious, but simply to guard their intellectual property.

Project Ire attempts to solve this by working like a smart agent. It starts by scanning a file using automated tools that identify its type, structure, and anything unusual. Then it reconstructs how the software works internally, mapping out its functions and flow using tools like Ghidra and angr. From there, the AI model digs deeper. It calls on a variety of tools through an application programming interface (API) to inspect specific parts of the code, summarise key functions, and build a detailed 'chain of evidence' that explains every step it took to reach a conclusion. At the end of the process, the system generates a final report and classifies the file as either benign or malicious. It can even cross-check its findings against expert-validated data to reduce errors.

How will Microsoft use Project Ire?
In tests using real-world malware data from Microsoft Defender, Project Ire was able to correctly identify many malicious files while keeping false alarms to a minimum: just four per cent false positives, according to Microsoft. Thanks to this strong performance, Microsoft says it will begin integrating the technology into its Defender platform under the name 'Binary Analyzer'. The goal is to scale the system to work quickly and accurately across all types of software, even those it has never seen before. Ultimately, Microsoft wants Project Ire to become capable of detecting brand-new malware directly from memory, at a large scale.
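Microsoft describes the workflow only at a high level, so the Python sketch below is purely illustrative: it shows how an agent loop of that general shape (triage, control-flow reconstruction, per-function summarisation through a tool API, a running chain of evidence, and a final verdict) might be organised. Every name here, including the ToolAPI class and its methods, is hypothetical; this is not Microsoft's implementation, only a reading of the published description.

```python
# Illustrative sketch of an agent loop shaped like the one described above.
# All names (ToolAPI, triage, reconstruct_control_flow, ...) are hypothetical;
# this is NOT Microsoft's code, only a reading of the public description.
from dataclasses import dataclass, field

@dataclass
class Evidence:
    step: str      # which stage produced this finding
    finding: str   # human-readable summary of what was observed

@dataclass
class Report:
    verdict: str                                   # "malicious" or "benign"
    chain_of_evidence: list = field(default_factory=list)

class ToolAPI:
    """Stand-in for the tool interface the agent would call, e.g. decompilers
    such as Ghidra or symbolic execution frameworks such as angr."""
    def triage(self, binary: bytes) -> dict:
        return {"file_type": "PE32 driver", "anomalies": []}              # placeholder
    def reconstruct_control_flow(self, binary: bytes) -> list:
        return ["DriverEntry", "ReadRegistryKeys", "OpenNetworkSocket"]   # placeholder
    def summarize_function(self, name: str) -> str:
        return f"{name}: behaviour summary produced by an LLM"            # placeholder

def analyze(binary: bytes, tools: ToolAPI) -> Report:
    evidence = []

    # 1. Triage: identify file type, structure, and anything unusual.
    info = tools.triage(binary)
    evidence.append(Evidence("triage", f"type={info['file_type']}"))

    # 2. Reconstruct how the software works internally (control flow / call graph).
    functions = tools.reconstruct_control_flow(binary)

    # 3. Summarise each function via the tool API, extending the chain of evidence.
    for fn in functions:
        evidence.append(Evidence("function-summary", tools.summarize_function(fn)))

    # 4. Decide. A real system would weigh the evidence with a model; here we
    #    flag one crude example signal so the sketch runs end to end.
    suspicious = any("NetworkSocket" in e.finding for e in evidence)
    return Report("malicious" if suspicious else "benign", evidence)

if __name__ == "__main__":
    report = analyze(b"\x4d\x5a...", ToolAPI())
    print(report.verdict, f"({len(report.chain_of_evidence)} evidence entries)")
```

The point of the sketch is the audit trail: each stage appends to the chain of evidence, so the final verdict can later be checked step by step, which is the property the article highlights.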


Hans India
6 days ago
- Hans India
Microsoft's AI Agent 'Project Ire' Can Independently Detect and Block Malware with High Accuracy
In a significant leap toward AI-driven cybersecurity, Microsoft has introduced Project Ire, a powerful artificial intelligence agent capable of independently detecting and blocking malware. Designed to function with minimal human oversight, the tool leverages advanced reverse engineering techniques to inspect software, assess its intent, and determine its threat level—all without relying on prior knowledge of the codebase. The innovation comes at a time when security teams are grappling with alert fatigue and the overwhelming volume of threats. 'This kind of work has traditionally been done manually by expert analysts, which can be slow and exhausting,' Microsoft stated in its official blog post. By removing much of the manual load, Project Ire promises both speed and scalability in enterprise threat detection. Unlike conventional AI security tools that often struggle with ambiguity in malware traits, Project Ire approaches the challenge with a unique methodology. Microsoft has equipped the agent with the ability to build a detailed 'chain of evidence'—a step-by-step record of its decision-making process. This audit trail allows cybersecurity professionals to verify conclusions, enhancing both transparency and trust in automated systems. The agent starts by identifying the file's type and structure, followed by reconstructing its control flow using decompiling tools like Ghidra and symbolic execution frameworks such as angr. It integrates various analytical tools via API to summarize the function of each code block, gradually building its chain of logic that supports the final verdict. In terms of performance, the results are compelling. During internal testing, Project Ire was tasked with analyzing a set of Windows drivers containing both safe and malicious files. The AI accurately classified 90% of them, with a precision score of 0.98 and a recall of 0.83. Only 2% of safe files were mistakenly flagged—a relatively low false positive rate in the cybersecurity domain. Microsoft then challenged the AI with a tougher dataset of nearly 4,000 complex and previously unreviewed software files, typically reserved for manual inspection. Even in this scenario, Project Ire demonstrated remarkable efficiency, maintaining a precision score of 0.89 and limiting false positives to just 4%. A standout achievement occurred when Project Ire became the first reverse engineer—human or AI—within Microsoft to compile sufficient evidence to warrant the autonomous blocking of an advanced persistent threat (APT) malware sample. That malware has since been neutralized by Microsoft Defender. The project is a collaborative effort involving Microsoft Research, Microsoft Defender Research, and Microsoft Discovery & Quantum. As cyber threats become more sophisticated and persistent, tools like Project Ire are expected to become essential components of modern digital defense frameworks, offering faster, more consistent, and less labor-intensive threat mitigation. With Project Ire, Microsoft is not just enhancing its security toolkit—it's redefining what AI can accomplish in the world of malware defense.
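For readers unfamiliar with the metrics quoted above, the short Python snippet below shows how precision, recall, false-positive rate, and accuracy are conventionally computed from a confusion matrix. The counts used are made-up placeholders chosen only to make the formulas concrete and to land near the figures reported for the Windows-driver test; Microsoft has not published the underlying tallies.

```python
# Standard classification metrics, with made-up counts for illustration only
# (Microsoft has not released the raw confusion-matrix numbers).
tp, fp, fn, tn = 83, 2, 17, 98   # hypothetical: caught malware / clean flagged / malware missed / clean passed

precision = tp / (tp + fp)                 # of files flagged malicious, how many really were
recall    = tp / (tp + fn)                 # of truly malicious files, how many were flagged
fpr       = fp / (fp + tn)                 # share of clean files wrongly flagged
accuracy  = (tp + tn) / (tp + fp + fn + tn)

print(f"precision={precision:.2f} recall={recall:.2f} fpr={fpr:.2%} accuracy={accuracy:.2%}")
# With these placeholder counts: precision~0.98, recall=0.83, fpr=2.00%, accuracy~90.5%
```

Read this way, the reported numbers mean that almost everything Project Ire flagged was genuinely malicious (precision 0.98), it caught about five in six of the malicious drivers (recall 0.83), and it wrongly flagged only about 2% of the clean ones.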


India Today
7 days ago
- India Today
Microsoft says its new AI Agent can spot and block malware on its own
Microsoft has unveiled a new artificial intelligence system that can independently detect and block malware, without any human assistance. Called Project Ire, this prototype agent is designed to reverse-engineer software files and determine whether they are safe or harmful, marking a major step forward in cybersecurity.

According to Microsoft's blog post, Project Ire can fully analyse a software file even if it has no prior information about the file's source or purpose. It uses decompilers and other advanced tools to scan the code, understand its behaviour, and decide whether it poses a risk. The tool is the result of a joint effort between Microsoft Research, Microsoft Defender Research, and Microsoft Discovery & Quantum.

'This kind of work has traditionally been done manually by expert analysts, which can be slow and exhausting,' Microsoft explained. Security researchers often suffer from alert fatigue and burnout, making it hard to maintain consistency across large-scale malware analysis.

Project Ire stands out from other AI security tools because malware classification is particularly difficult to automate. There is no clear-cut way for a machine to verify its decisions, and many traits of malicious software can also appear in legitimate programs. This makes it hard to train a system that is both accurate and reliable. To tackle this, Microsoft equipped Project Ire with a system that builds what it calls a 'chain of evidence', a step-by-step trace showing how the agent reached its conclusion. This audit trail allows human experts to later verify its findings and improves accountability in case of errors.

Project Ire's analysis begins with triaging the file type and structure, then reconstructing its control flow using tools like Ghidra and angr. It can then call different tools through an API to summarise each code function, adding the results to its evidence log.

Microsoft tested the agent in two key evaluations. In one trial, it analysed a dataset of Windows drivers, some malicious, others safe. The AI correctly identified 90 per cent of the files, with only 2 per cent of the safe files wrongly flagged as threats. This gave Project Ire a precision score of 0.98 and a recall of 0.83.

In a tougher real-world test, Microsoft gave the AI nearly 4,000 complex files that had not yet been reviewed by any other automated systems. These files were meant for manual inspection by experts. Even under these conditions, Project Ire achieved a high precision score of 0.89, with a false positive rate of just 4 per cent.

In fact, Project Ire was the first reverse engineer, human or machine, at Microsoft to produce a malware detection case strong enough to justify automatic blocking of an advanced persistent threat (APT) sample. That malware has now been neutralised by Microsoft Defender.