
Kaspersky warns AI-generated passwords expose users to attacks
Kaspersky has issued a warning regarding the use of large language models (LLMs) such as ChatGPT, Llama, and DeepSeek for password generation, citing unpredictable security weaknesses that could make users vulnerable to cyberattacks.
The increased prevalence of online accounts has led to a surge in password re-use and reliance on predictable combinations of names, dictionary words, and numbers. According to Kaspersky, many people are seeking shortcuts by using AI-based tools like LLMs to create passwords, assuming that AI-generated strings offer superior security due to their apparent randomness.
However, concerns have been raised over the actual strength of these passwords. Alexey Antonov, Data Science Team Lead at Kaspersky, examined passwords produced by ChatGPT, Llama, and DeepSeek and discovered notable patterns that could compromise their integrity.
"All of the models are aware that a good password consists of at least 12 characters, including uppercase and lowercase letters, numbers and symbols. They report this when generating passwords," says Antonov.
Antonov observed that DeepSeek and Llama sometimes produced passwords built from dictionary words with letters swapped for similar-looking digits and symbols, such as S@d0w12, M@n@go3, and B@n@n@7 for DeepSeek, and K5yB0a8dS8 and S1mP1eL1on for Llama. He noted: "Both of these models like to generate the password 'password': P@ssw0rd, P@ssw0rd!23 (DeepSeek), P@ssw0rd1, P@ssw0rdV (Llama). Needless to say, such passwords are not safe."
He explained that the technique of substituting certain letters with numbers, while appearing to increase complexity, is well-known among cybercriminals and can be easily breached using brute force methods. According to Antonov, ChatGPT produces passwords which initially appear random, such as qLUx@^9Wp#YZ, LU#@^9WpYqxZ and YLU@x#Wp9q^Z, yet further analysis reveals telling consistencies.
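To see why, consider how small the search space of such substitutions really is. The sketch below (an illustration of the attack idea, not Kaspersky's tooling) enumerates common "leet" swaps for a single dictionary word:

```python
from itertools import product

# Common letter -> look-alike substitutions (an illustrative subset)
LEET = {"a": "a@4", "e": "e3", "i": "i1!", "o": "o0", "s": "s$5"}

def leet_variants(word: str):
    """Yield every variant of `word` under the substitution table."""
    pools = [LEET.get(ch, ch) for ch in word.lower()]
    for combo in product(*pools):
        yield "".join(combo)

variants = list(leet_variants("password"))
print(len(variants))  # 54 variants -- a rounding error for a cracking rig
```

Cracking tools ship with rule sets exactly like this, so each dictionary word contributes only a handful of "complex-looking" candidates.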
"However, if you look closely, you can see patterns. For example, the number 9 is often encountered," Antonov said.
Examining 1,000 passwords generated by ChatGPT, he found that certain characters, such as x, p, l and L, appeared with much higher frequency, which is inconsistent with true randomness. Similar patterns were observed for Llama, which favoured the # symbol and particular letters. DeepSeek showed comparable tendencies in password generation habits.
"This doesn't look like random letters at all," Antonov commented when reviewing the symbol and character distributions.
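A quick check anyone can reproduce (a sketch, not Antonov's actual analysis) makes the bias concrete: the three ChatGPT samples quoted above turn out to be permutations of the same 12 characters.

```python
from collections import Counter

# The three ChatGPT-generated examples quoted in the article
samples = ["qLUx@^9Wp#YZ", "LU#@^9WpYqxZ", "YLU@x#Wp9q^Z"]

# A cryptographically random generator drawing 12 characters from ~90 printable
# symbols would almost never reuse an identical character set across samples.
charsets = {frozenset(p) for p in samples}
print(len(charsets))  # 1 -> all three passwords use exactly the same characters

counts = Counter("".join(samples))
print(counts.most_common(3))  # every character appears exactly three times
```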
Moreover, the LLMs often omitted special characters or digits: 26% of ChatGPT's passwords, 32% of Llama's, and 29% of DeepSeek's were affected. DeepSeek and Llama also occasionally generated passwords shorter than the 12-character minimum generally recommended for security.
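The composition rules the models themselves recite can be checked mechanically. A minimal sketch, assuming the 12-character, four-class policy cited in the article:

```python
import string

def meets_policy(pw: str) -> bool:
    """At least 12 characters, with lower, upper, digit and symbol present."""
    return (
        len(pw) >= 12
        and any(c.islower() for c in pw)
        and any(c.isupper() for c in pw)
        and any(c.isdigit() for c in pw)
        and any(c in string.punctuation for c in pw)
    )

print(meets_policy("P@ssw0rd1"))     # False -- only 9 characters
print(meets_policy("qLUx@^9Wp#YZ"))  # True -- composition passes, pattern aside
```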
These weaknesses, including pronounced character patterns and inconsistent composition, potentially enable cybercriminals to target common combinations more efficiently, increasing the likelihood of successful brute force attacks.
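Back-of-the-envelope arithmetic shows why such bias matters (illustrative numbers, not Kaspersky's figures): if an attacker learns that a generator effectively draws from roughly 30 favoured characters rather than the full ~94 printable set, the 12-character search space collapses dramatically.

```python
import math

full = 94 ** 12    # 12 chars drawn uniformly from 94 printable symbols
biased = 30 ** 12  # same length, but only ~30 characters actually used

print(round(math.log2(full)))    # ~79 bits of entropy
print(round(math.log2(biased)))  # ~59 bits -- 20 bits cheaper to brute-force
print(full // biased > 500_000)  # True: the keyspace shrinks ~900,000-fold
```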
Antonov referenced the findings of a machine learning algorithm he developed in 2024 to assess password strength, stating that almost 60% of all tested passwords could be deciphered in under an hour using contemporary GPUs or cloud-based cracking services. When applying similar tests to AI-generated passwords, the results were concerning: "88% of DeepSeek and 87% of Llama generated passwords were not strong enough to withstand attack from sophisticated cyber criminals. While ChatGPT did a little better with 33% of passwords not strong enough to pass the Kaspersky test."
Addressing the core problem, Antonov remarked: "The problem is LLMs don't create true randomness. Instead, they mimic patterns from existing data, making their outputs predictable to attackers who understand how these models work."
In light of these findings, Kaspersky recommends individuals and organisations use dedicated password management software instead of relying on LLMs. According to Kaspersky, dedicated password managers employ cryptographically secure generators, providing randomness with no detectable patterns and storing credentials safely in encrypted vaults accessible via a single master password.
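In Python, for instance, the kind of cryptographically secure generation Kaspersky describes can be sketched with the standard secrets module (a minimal illustration, not any particular password manager's implementation):

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 16) -> str:
    """Draw each character independently from the OS CSPRNG."""
    while True:
        pw = "".join(secrets.choice(ALPHABET) for _ in range(length))
        # Redraw until all four character classes are present,
        # without biasing any individual position.
        if (any(c.islower() for c in pw) and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)
                and any(c in string.punctuation for c in pw)):
            return pw

print(len(generate_password()))  # 16
```

Unlike an LLM's sampled text, every character here comes from the operating system's cryptographic randomness source, so no character-frequency analysis of past outputs helps an attacker.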
Password management software, Kaspersky notes, often provides additional features such as auto-fill, device synchronisation, and breach monitoring to alert users should their credentials appear in data leaks. These measures aim to reduce the risk of credential theft and the impact of data breaches by encouraging strong, unique passwords for each service.
Kaspersky emphasised that while AI is useful for numerous applications, password creation is not among them due to its tendency to generate predictable, pattern-based outputs. The company underlines the need to use reputable password managers as a first line of defence in maintaining account security and privacy in the digital era.

Related Articles


Techday NZ
5 hours ago
New book explores AI's transformative role in vocational learning
Researchers from Ara Institute of Canterbury have contributed to a new book examining the role of artificial intelligence (AI) in vocational education and training. The publication, titled AI in Vocational Education and Training, was released by Springer Nature and draws on the efforts of a group of educators and researchers from Ara Institute of Canterbury and Otago Polytechnic.

Broader perspective

Rather than focusing primarily on academic integrity concerns, the book examines how AI technologies, particularly generative AI tools, can be leveraged to improve student learning outcomes and educational practices. The editor of the collection, Dr Selena Chan, is an Education Developer at Ara and a recipient of the Ako Aotearoa Prime Minister's Supreme Award for excellence in tertiary teaching. Dr Chan stated that the work serves as a practical guide for vocational education and training (VET) providers interested in introducing AI in their teaching and learning. "It also addresses ethical concerns, ensuring AI supports learning rather than undermines academic integrity," she said.

The widespread introduction of natural language AI chatbots, such as ChatGPT, has ignited public discussion about plagiarism risks in education. Dr Chan, however, argues that the book encourages a shift in focus, highlighting AI's considerable value in promoting active learning, encouraging students to engage in critical thought, and supporting independent analysis.

Range of disciplines

The contributors examine AI integration within a range of vocational subjects, including construction management, graphic design, nursing, and business.
The publication also dedicates attention to a project focused on adapting AI to benefit neurodivergent learners. Dr Chan explained that educational developers, learning designers, tutors, and senior students collaborated to create AI-assisted learning activities aimed not only at engaging students but also at helping them become more independent in their analysis and improving their practical application skills. "I wanted to acknowledge the invaluable input of ākonga (students) in the research, without whom the work would not have been completed," Dr Chan said. She also recognised the contribution of Scott Klenner, who is both Ara's Research Manager and the Director of Rangahau, Research and Postgraduate Studies at Otago Polytechnic. "Scott's guidance on inclusivity, rangahau (Māori research) and the incorporation of mātauranga Māori (Māori knowledge) in research design was invaluable."

Collaboration between institutes

Scott Klenner commented on the joint effort between Ara and Otago Polytechnic, describing it as evidence of their leadership in applying AI to the local vocational education sector. "[The publication highlights] our academics' te hinengaro me te ringa mahi (thinking and work) for producing leading research, with an international publisher, exploring the most significant evolution of education this century."

The editors describe the collection as a resource offering case studies, practical frameworks and guidelines for both educators and policymakers seeking to adapt vocational education to contemporary demands in Aotearoa New Zealand. Dr Chan's background includes extensive publications in vocational education and technology-enhanced learning, and she serves as Academic Capability Leader and co-editor of the International Journal of Training Research.


Techday NZ
a day ago
ChatGPT leads enterprise AI, but model diversity is surging
New Relic has published its first AI Unwrapped: 2025 AI Impact Report, presenting data from 85,000 businesses on enterprise-level adoption and usage trends in artificial intelligence models.

ChatGPT's leading role

The report reveals that developers are overwhelmingly favouring OpenAI's ChatGPT for general-purpose AI tasks. According to the findings, more than 86% of all large language model (LLM) tokens processed by New Relic customers involved ChatGPT models.

Nic Benders, Chief Technical Strategist at New Relic, stated, "AI is rapidly moving from innovation labs and pilot programmes into the core of business operations. The data from our 2025 AI Impact Report shows that while ChatGPT is the undisputed dominant model, developers are also moving at the 'speed of AI,' and rapidly testing the waters with the latest models as soon as they come out. In tandem, we're seeing robust growth of our AI monitoring solution. This underscores that as AI is ingrained in their businesses, our customers are realising they need to ensure model reliability, accuracy, compliance, and cost efficiency."

The report highlights that enterprises have been quick to adopt OpenAI's latest releases. ChatGPT-4o and ChatGPT-4o mini emerged as the primary models in use, with developers making near-immediate transitions between versions as new capabilities and improvements are launched. Notably, there has been an observed pattern of rapid migration from ChatGPT-3.5 Turbo to ChatGPT-4.1 mini since April, indicating a strong developer focus on performance improvements and features, often taking precedence over operational cost savings.

Broadening model experimentation

The findings also suggest a trend toward greater experimentation, with developers trying a wider array of AI models across applications. While OpenAI remains dominant, Meta's Llama ranked second in terms of LLM tokens processed among New Relic customers.
There was a 92% increase in the number of unique models used within AI applications in the first quarter of 2025, underlining growing interest in open-source, specialised, and task-specific solutions. This diversification, although occurring at a smaller scale compared to OpenAI models, points to a potentially evolving AI ecosystem.

Growth in AI monitoring

As the diversity of model adoption increases, the need for robust AI monitoring solutions has also grown. Enterprises continue to implement unified platforms to monitor and manage AI systems, with New Relic reporting a sustained 30% quarter-over-quarter growth in the use of its AI Monitoring solution since its introduction last year. This growth reflects a drive among businesses to address concerns such as reliability, accuracy, compliance, and cost as AI systems become more embedded in day-to-day operations.

Programming languages trends

The report notes that Python has solidified its status as the preferred programming language for AI applications, recording nearly 45% growth in adoption since the previous quarter, with another language following closely behind in both volume of requests and adoption rates. Java, meanwhile, has experienced a significant 34% increase in use for AI applications, suggesting a rise in production-grade, Java-based LLM solutions within large enterprises.

Research methodology details

The AI Unwrapped: 2025 AI Impact Report's conclusions are drawn from aggregated and de-identified usage statistics from active New Relic customers. The data covers activity from April 2024 to April 2025, offering a representative view of current AI deployment and experimentation trends across a substantial commercial user base.


Scoop
2 days ago
Marketing Agencies Urged To Pivot As Meta Moves Toward Fully Automated Advertising By 2026
Press Release – Alexanders Digital Marketing

With Meta announcing its ambition to fully automate advertising campaigns by 2026 using artificial intelligence, social media marketing agencies are quaking in their boots and being urged to rethink their role in a fast-evolving digital landscape. According to a recent Reuters report, Meta is investing heavily in AI systems that will plan, purchase, and optimise ad campaigns with minimal human input, generating 30-40% better results at 10% of the cost, which could potentially wipe out much of the creative industry around social. The announcement signals a dramatic acceleration toward a future where media buying and ad creative are machine-led.

This shift is already being felt across the industry. AI tools like ChatGPT, Canva, and Meta's own Advantage+ are allowing small and mid-sized businesses to produce marketing content and run campaigns in-house, reducing their reliance on traditional agencies for execution.

'Clients no longer need an agency to write every post, design every banner, or set up every ad campaign,' said Rachel Alexander, founder of Alexanders, Christchurch's first digital marketing agency. 'They have Canva, ChatGPT, HeyGen, MidJourney & Meta automation. What they need now is someone to help them make sense of it all,' she said.

Agencies that once focused on deliverables like social posts and Google Ads are now being challenged to step into a new role: strategic enablers, helping clients convert leads into customers. A recent YouTube vlog, 'Meta just killed the creative industry: The 2026 Automation Apocalypse', by Julia McCoy, CEO at First Movers and an AI thought leader, describes this well.
'Agencies must pivot from being tactical executors to strategic advisors, bringing clarity, structure, and prioritisation to an increasingly overwhelming landscape,' said McCoy.

'Business marketers need to think of their agency as a marketing generalist doctor: diagnosing weak points, recommending tailored treatments, and coaching internal teams through implementation,' said Alexander.

With many SMEs building internal marketing teams and experimenting with DIY tools, the opportunity for agencies lies in offering higher-value services such as sales enablement, CRM integration, AI content workflows, and conversion strategy.

'It's less about deliverables, more about direction. Less about content calendars, more about conversion journeys… The marketing agency of the future is less like a factory and more like a consultancy,' said McCoy.

Alexander said she is mindful, but not anxious, because the firm has always been a hybrid between a marketing consultancy and a marketing agency. For New Zealand agencies looking to adapt, this means embracing AI rather than competing with it, and reasserting their value as interpreters, integrators, and insight-driven advisors. 'AI has been disruptive technology, but being agile is the key to success. It's helped us survive for 28 years. Time to pivot again!' said Alexander.