
Passwords you should never use revealed as millions are urged to change 'weak' phrases
MILLIONS of people are being urged to rethink their online security after experts revealed the full list of passwords you should never use.
Weak, lazy, and predictable passwords are putting countless accounts at risk, and cybersecurity researchers warn the time to change them is now.
A major analysis by the team at Cybernews, who combed through 19 billion leaked passwords, uncovered the worst offenders — phrases and numbers so common that hackers can guess them in seconds.
Neringa Macijauskaitė, an information security expert at Cybernews, said: "We're facing a widespread epidemic of weak password reuse."
At the top of the danger list is the classic '123456' — a password so lazy it's practically handing over the keys.
Other shockingly common choices include '123456789,' 'password,' 'qwerty,' and '111111.'
These simple strings are often the first combinations hackers try when launching attacks.
Here's the full list of passwords you should never use:
123456
123456789
qwerty
password
12345
qwerty123
1q2w3e
12345678
111111
1234567890
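One simple defence a website (or a cautious user) can apply is to reject anything on a known-weak list before accepting it. As a minimal sketch, using the ten passwords above as the blocklist:

```python
# Minimal weak-password check: reject anything on a known-weak list.
# The set below is just the ten passwords from this article; real
# blocklists contain millions of entries from breach dumps.
WEAK_PASSWORDS = {
    "123456", "123456789", "qwerty", "password", "12345",
    "qwerty123", "1q2w3e", "12345678", "111111", "1234567890",
}

def is_weak(password: str) -> bool:
    """Return True if the password appears on the weak list (case-insensitive)."""
    return password.lower() in WEAK_PASSWORDS
```

Checking case-insensitively matters because attackers' wordlists routinely include capitalised variants like "Password" alongside the lowercase originals.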
But it's not just numbers and default words causing problems.
Names are another major weak spot.
Cybernews found that around 8 per cent of passwords included popular names, making them an easy target.
For example, 'Ana' appeared in 179 million leaked passwords, while pop culture names like 'Mario' (9.6 million), 'Batman' (3.9 million), 'Thor' (6.2 million), 'Joker' (3.1 million), and 'Elsa' (2.9 million) were shockingly common.
Curse words are also widely used and appear in hacker wordlists.
Among the leaked data, researchers found millions of passwords containing terms like 'ass' (165 million), 'f**k' (16 million), 's**t' (6.5 million), 'dick' (3.2 million), and 'b***h' (3.2 million).
Food names, football teams, and locations are equally risky.
They may feel personal and easy to remember, but they're exactly the kind of terms attackers are primed to exploit.
Another massive problem is password reuse.
The average person now manages roughly 100 passwords across around 200 online accounts, according to password manager company NordPass.
Using the same password across multiple sites increases the chances of a domino effect, where one data breach can compromise all your accounts.
'If you reuse passwords across multiple platforms, a breach in one system can compromise the security of other accounts,' Macijauskaitė warned.
Individuals must take immediate action to strengthen their online security.
Users are strongly advised to stop relying on lazy, weak, or predictable passwords and instead create strong, unique passwords for each of their accounts.
Additionally, enabling two-factor authentication wherever available is an important extra layer of protection that significantly reduces the risk of unauthorised access.
For those who struggle to manage multiple complex passwords, the use of a reputable password manager is highly recommended.
Such tools can securely store and generate unique passwords, helping users maintain robust security across all platforms without the burden of memorising each one.
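Generating a strong random password is also easy to do yourself. A minimal sketch using Python's `secrets` module, which draws from a cryptographically secure random source (unlike the ordinary `random` module):

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password from letters, digits and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))
```

A 16-character password drawn from this 94-symbol alphabet has far more possible combinations than any wordlist attack can cover, which is exactly why unique generated passwords beat memorable phrases.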
Taking these steps is essential to safeguarding personal and financial information in today's increasingly digital world.