
First-ever AI malware 'LameHug' hides in ZIP files to hack Windows PCs
CERT-UA says that the attacks are from the Russian threat group APT28. Written in the popular coding language Python, LameHug uses APIs from Hugging Face and is powered by Qwen2.5-Coder-32B-Instruct, an open-source large language model developed by Alibaba Cloud, to generate and send commands.
As is the case with AI chatbots like Gemini, ChatGPT and Perplexity, the large language model can convert instructions given in natural language into executable code or shell commands. In emails sent to Ukrainian government authorities while impersonating ministry officials, the group hid the payload delivering the LameHug malware in a ZIP archive containing files named 'AI_generator_uncensored_Canvas_PRO_0.9.exe' and 'image.py'.
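For illustration only, the general pattern described here (a Python script sends a natural-language instruction to a hosted model and receives a runnable command in return) can be sketched with Hugging Face's public inference client. This is a minimal, hypothetical sketch, not LameHug's actual code; the model ID is the one named in the report, while the prompt, placeholder token and example output are assumptions.

# Minimal, hypothetical sketch (not LameHug's code): asking a hosted LLM to
# translate a natural-language instruction into a single shell command via the
# Hugging Face Inference API.
from huggingface_hub import InferenceClient

# Placeholder token; a real call needs a valid Hugging Face API token.
client = InferenceClient(model="Qwen/Qwen2.5-Coder-32B-Instruct", token="hf_...")

response = client.chat_completion(
    messages=[
        {"role": "system", "content": "Reply with one Windows cmd command and nothing else."},
        {"role": "user", "content": "Show basic information about this computer."},
    ],
    max_tokens=64,
)

command = response.choices[0].message.content.strip()
print(command)  # e.g. "systeminfo"; a malicious loader would hand this string to the shell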
The malware used commands that allowed APT28, the threat group that sent these emails, to extract information about the infected Windows PC and search for text and PDF documents stored in the Documents, Downloads and Desktop folders. This information was then sent to a remotely controlled server, but it remains unclear how the LLM-powered attack was carried out.
According to a recently issued advisory on the threat intelligence sharing platform IBM X-Force Exchange, this is the first documented case of malware using LLMs to write executable commands, which 'allows threat actors to adapt their practice during a compromise without needing new payloads, potentially making the malware harder to detect by security software or static analysis tools.' The news comes after security analysis firm Check Point said it had discovered new malware, called Skynet, that evades detection by AI tools.

Related Articles


Time of India, an hour ago
The unexpected risk Ivy League students face using ChatGPT in class
How Ivy League universities in the US are managing ChatGPT and AI misuse.

The rapid adoption of artificial intelligence tools such as ChatGPT has introduced new challenges in US higher education, especially within Ivy League universities. These prestigious institutions are grappling with how to integrate AI in learning while maintaining academic integrity. As reported by Forbes, the use of generative AI presents both opportunities and risks for students, educators and the institutions themselves. Ivy League schools have not imposed blanket rules on AI use, but instead emphasise the autonomy of individual instructors to determine policies in their courses. This approach reflects the complexity of AI's impact on learning outcomes and academic honesty, and it places responsibility on students to understand when and how AI can be used appropriately.

Instructor and course autonomy define AI policies

Princeton University's official policy states that if generative AI is permitted by the instructor, students must disclose its use but not cite it as a source, since AI is considered an algorithm rather than an author. The policy further advises students to familiarise themselves with departmental regulations regarding AI use, as reported by Forbes. Similarly, Dartmouth College allows instructors to decide whether AI tools can be used based on intended learning outcomes. This decentralised system means that students cannot assume uniformity in AI policies across courses, even within the same institution. A student permitted to use AI for brainstorming in one class might find it prohibited in another. This variation extends to disciplines; STEM courses may allow wider use of AI tools, while humanities departments like English often restrict AI to preserve critical thinking and originality.

AI misuse is considered academic dishonesty

Several Ivy League schools, including the University of Pennsylvania and Columbia University, have clearly stated that misuse of generative AI constitutes academic dishonesty. According to Forbes, students who improperly use AI may face disciplinary measures similar to those for plagiarism. Understanding how AI functions is critical for students to make informed decisions about its use. They need to be able to evaluate AI-generated content critically, identify hallucinations or inaccuracies, and disclose AI assistance when allowed. As Forbes reports, schools also emphasise the importance of respecting intellectual property rights, warning against uploading confidential or proprietary information to AI platforms without proper protections in place.

Policies are evolving alongside AI technology

Given the fast pace of AI development, Ivy League institutions regularly review and update their AI guidelines. Columbia University notes that guidance on generative AI use is expected to evolve as experience with the technology grows, as reported by Forbes. Faculty are encouraged to experiment with new pedagogical methods and adapt their course policies to reflect changing realities. Students preparing for collegiate study are advised to develop technological literacy and critical thinking skills to navigate these shifting policies successfully. The indiscriminate use of AI tools may hinder their ability to demonstrate independent thought, a quality highly valued by Ivy League admissions and faculty alike. In summary, Ivy League students face the unexpected risk of navigating a complex and evolving landscape of AI policies.
While generative AI offers powerful tools for learning, misuse or overreliance can lead to academic consequences. Awareness, transparency and critical engagement with AI are essential to avoid these risks, as these elite US institutions continue to balance innovation with academic standards.


Time of India, 2 hours ago
Chatbot culture wars erupt as bias claims surge
For much of the last decade, America's partisan culture warriors have fought over the contested territory of social media — arguing about whether the rules on Facebook and Twitter were too strict or too lenient, whether YouTube and TikTok censored too much or too little and whether Silicon Valley tech companies were systematically silencing right-wing voices.

Those battles aren't over. But a new one has already started. This fight is over artificial intelligence, and whether the outputs of leading AI chatbots like ChatGPT, Claude and Gemini are politically biased.

Conservatives have been taking aim at AI companies for months. In March, House Republicans subpoenaed a group of leading AI developers, probing them for information about whether they colluded with the Biden administration to suppress right-wing speech. And this month, Missouri's Republican attorney general, Andrew Bailey, opened an investigation into whether Google, Meta, Microsoft and OpenAI are leading a 'new wave of censorship' by training their AI systems to give biased responses to questions about President Trump. On Wednesday, Trump himself joined the fray, issuing an executive order on what he called 'woke AI'. 'Once and for all, we are getting rid of woke,' he said in a speech.

Republicans have been complaining about AI bias since at least early last year, when a version of Google's Gemini AI system generated historically inaccurate images of the American founding fathers, depicting them as racially diverse. That incident drew the fury of online conservatives, and led to accusations that leading AI companies were training their models to parrot liberal ideology.

Since then, top Republicans have mounted pressure campaigns to try to force AI companies to disclose more information about how their systems are built, and tweak their chatbots' outputs to reflect a broader set of political views. Now, with the White House's executive order, Trump and his allies are using the threat of taking away lucrative federal contracts — OpenAI, Anthropic, Google and xAI were recently awarded Defense Department contracts worth as much as $200 million — to try to force AI companies to address their concerns. The order directs federal agencies to limit their use of AI systems to those that put a priority on 'truth-seeking' and 'ideological neutrality' over disfavoured concepts like diversity, equity and inclusion.


Time of India, 2 hours ago
Master the Prompt, Master the Future: Why This Skill Is a Game Changer
In an artificial intelligence-driven world, prompt engineering is the secret sauce that powers next-generation productivity, creativity, and innovation. What was once a niche skill hiding in the backrooms of AI research labs and among front-line innovators is now becoming a fundamental competence across businesses; marketing, journalism, software coding, and customer experience are just some examples. The age of AI has arrived. It's not coming. And prompt engineering is the language we need to learn.

So what is prompt engineering? Simply put, it's the deliberate design of input questions (or "prompts") to direct AI models, particularly generative models such as ChatGPT, Claude, Gemini, or Midjourney, to generate extremely relevant, effective, or innovative responses. But don't get it wrong: this is not necessarily asking smarter questions. It's a matter of mastering the subtlety of context, tone, limitations, and ordering; basically learning how to communicate with machine logic in human terms.

Why should we care? Because AI is only as intelligent as the prompts it's provided. Garbage in, garbage out. Whether you're teaching an AI to abstract legal documents, create marketing materials, create an image, or provide customer support, the distinction between a mediocre result and a breakthrough solution often rests on how well you design the prompt. In today's hybrid human-AI workflow, the prompt is your steering wheel.

Organisations are waking up to it. Innovative companies are already upskilling squads with prompt engineering playbooks. Job titles are changing. Titles such as "Prompt Designer" or "AI Interaction Specialist" are appearing on job boards everywhere, and with salaries to prove it. LinkedIn has even introduced prompt engineering as a skill you can highlight. It's no longer exclusive to techies; it's becoming a universal skill.

Decision-makers, analysts, and even CEOs are adopting this as a meta-skill, a force multiplier that boosts decision-making, productivity, and ideation. Need a career competitive advantage? Learn to prompt. Wish to be remarkable in your next pitch deck or client meeting? Speak AI fluently with prompt engineering.

But here's the catch: the space is still open. There is no set rulebook yet, and that's the opportunity. Prompt engineering is more art than science right now. It favours curiosity, experimentation, and iteration. That means people who begin now can help define its standards, best practices, and even the ethics of AI-human collaboration.

Just as Excel was the 2000s' must-have skill and coding ruled the 2010s, prompt engineering is emerging as the 2020s' power skill. It's not about automating humans; it's about augmenting them. The people who can prompt with precision, creativity, and purpose will be driving the next revolution of digital change.
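As a concrete illustration of what mastering context, tone, limitations, and ordering can mean in practice, the sketch below contrasts a vague prompt with a structured one. It is a hypothetical example; the field names and wording are assumptions, not an established standard.

# Illustrative sketch only: a naive prompt versus a structured prompt that
# spells out role, context, task, constraints and output format.
def build_prompt(role, context, task, constraints, output_format):
    """Assemble a structured prompt from its parts."""
    constraint_lines = "\n".join(f"- {c}" for c in constraints)
    return (
        f"You are {role}.\n"
        f"Context: {context}\n"
        f"Task: {task}\n"
        f"Constraints:\n{constraint_lines}\n"
        f"Output format: {output_format}"
    )

naive_prompt = "Write something about our new product."

structured_prompt = build_prompt(
    role="a marketing copywriter for a B2B software company",
    context="We launch an invoicing tool for freelancers next month.",
    task="Draft a three-sentence product announcement for the newsletter.",
    constraints=["Plain language, no jargon", "Mention the launch month", "End with a call to action"],
    output_format="One paragraph of exactly three sentences.",
)

print(naive_prompt)
print("---")
print(structured_prompt)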