Exclusive: AI-Generated Passwords Are Dangerously Predictable, Cybersecurity Firm Warns
Have you ever turned to an artificial intelligence tool to create a password for you? When you do, it typically generates one quickly and confidently asserts that the output is strong and secure. However, new research shared exclusively with Sky News and verified by the broadcaster reveals that this confidence is dangerously misplaced. According to findings from the AI cybersecurity firm Irregular, major AI models including ChatGPT, Claude, and Gemini produce passwords that are highly predictable and riddled with repeated patterns.
The Illusion of Security in AI Passwords
Irregular's investigation, which Sky News has independently confirmed, demonstrates that large language models (LLMs) do not generate passwords randomly. Instead, they derive their output from patterns embedded in their training data. This means they are not creating genuinely strong passwords but rather something that merely appears robust: an impression of strength that is, in reality, highly predictable. Predictable patterns are the enemy of effective cybersecurity because they allow automated tools used by cybercriminals to guess passwords with alarming ease.
Dan Lahav, co-founder of Irregular, issued a stark warning in response to the findings. "You should definitely not do that," he told Sky News, referring to using AI for password generation. "And if you've done that, you should change your password immediately. We don't think it's known enough that this is a problem." The research highlights that this issue extends beyond individual users to developers who increasingly rely on AI to write code, potentially embedding weak passwords into apps, programs, and websites.
Evidence of Predictability and Repetition
In a sample of 50 passwords generated using Anthropic's Claude AI, Irregular found only 23 unique passwords. One password, K9#mPx$vL2nQ8wR, was repeated 10 times. Other examples included variations like K9#mP2$vL5nQ8@xR, K9$mP2vL#nX5qR@j, and K9$mPx2vL#nQ8wFs. When Sky News tested Claude to verify the research, the first password it produced was K9#mPx@4vLp2Qn8R, further confirming the pattern. While some AI-made passwords require mathematical analysis to expose their weaknesses, many are so regular that their flaws are visible to the naked eye.
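Irregular has not published its exact methodology, but the kind of duplicate analysis described above is straightforward to reproduce. The sketch below uses a small hypothetical sample (standing in for Irregular's 50 Claude outputs, with the repeated entry mirroring the K9#mPx$vL2nQ8wR pattern) to count unique passwords and find the most repeated one:

```python
from collections import Counter

# Hypothetical sample standing in for Irregular's 50 Claude outputs;
# the repeated entry mirrors the K9#mPx$vL2nQ8wR pattern reported above.
sample = [
    "K9#mPx$vL2nQ8wR", "K9#mPx$vL2nQ8wR", "K9#mP2$vL5nQ8@xR",
    "K9$mP2vL#nX5qR@j", "K9$mPx2vL#nQ8wFs", "K9#mPx$vL2nQ8wR",
]

counts = Counter(sample)
unique = len(counts)                          # distinct passwords in the sample
most_common, repeats = counts.most_common(1)[0]

print(f"{unique} unique passwords out of {len(sample)}")
print(f"most repeated: {most_common} ({repeats} times)")
```

Run against a genuinely random generator, a sample this small would almost certainly contain no duplicates at all, which is what makes the repetition Irregular observed so telling.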
OpenAI's ChatGPT and Google's Gemini AI showed slightly less regularity in their outputs but still produced repeated passwords and predictable character patterns. Additionally, Google's image generation system, NanoBanana, exhibited similar errors when asked to create pictures of passwords on Post-it notes. Online password checkers often rate these AI-generated passwords as extremely strong, with one tool claiming a Claude password would not be cracked by a computer in 129 million trillion years. However, these assessments are flawed because the checkers are unaware of the underlying patterns that drastically reduce security.
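The gap between the checkers' verdicts and reality comes down to what each side knows. A typical strength meter assumes every character was drawn uniformly at random from the full character set; an attacker who knows the model reuses a fixed skeleton and varies only a few positions searches a vastly smaller space. The figures below are illustrative assumptions for a 15-character password, not Irregular's measurements:

```python
import math

# Naive checker assumption: every character drawn uniformly from the full set.
CHARSET = 26 + 26 + 10 + 32          # lower + upper + digits + symbols = 94
password = "K9#mPx$vL2nQ8wR"         # 15 characters

naive_bits = len(password) * math.log2(CHARSET)

# Pattern-aware view (illustrative assumption): if the model reuses a fixed
# prefix/suffix skeleton and only a handful of positions actually vary,
# the effective search space collapses.
variable_positions = 4               # assumed, for illustration only
effective_bits = variable_positions * math.log2(CHARSET)

print(f"naive estimate:     {naive_bits:.0f} bits")
print(f"effective estimate: {effective_bits:.0f} bits")
```

Under these assumptions the checker sees roughly 98 bits of entropy while the attacker faces closer to 26, which is well within reach of commodity hardware and consistent with Lahav's claim that even old computers could crack such passwords quickly.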
Widespread Implications and Developer Risks
Mr. Lahav emphasized the severity of the issue, stating, "Our best assessment is that currently, if you're using LLMs to generate your passwords, even old computers can crack them in a relatively short amount of time." This vulnerability is not limited to casual users. A search on GitHub, the most widely used code repository, found AI-generated passwords already embedded in code for apps, programs, and websites. For instance, searching for K9#mP, a common prefix used by Claude, yielded 113 results, while k9#vL, a substring from Gemini, produced 14 results. Many of these instances appear in security best practice documents or placeholder code, but Irregular identified some in what it suspects are real servers or services.
"Some people may be exposed to this issue without even realizing it just because they delegated a relatively complicated action to an AI," said Mr. Lahav. He called on AI companies to instruct their models to use tools for generating truly random passwords, similar to how humans might use a calculator. This step could mitigate the risk and enhance overall cybersecurity.
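The tool Lahav describes already exists in most programming languages. In Python, for example, the standard-library `secrets` module draws from a cryptographically secure random source, so a model (or a developer) that calls it rather than "imagining" a password sidesteps the pattern problem entirely. A minimal sketch:

```python
import secrets
import string

def random_password(length: int = 16) -> str:
    """Build a password from a cryptographically secure random source
    (the OS CSPRNG via the secrets module), not from learned patterns."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(random_password())
```

Each character here is an independent uniform draw from 94 symbols, so a 16-character output carries roughly 105 bits of entropy and exhibits none of the repeated skeletons seen in the LLM samples.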
Expert Recommendations and Alternative Solutions
Graeme Stewart, head of public sector at cybersecurity firm Check Point, offered some reassurance, noting that this is one of the rare security issues with a straightforward fix. "In terms of how big a deal it is, this sits in the 'avoidable, high-impact when it goes wrong' category, rather than 'everyone is about to be hacked'," he explained. Other experts pointed out that passwords themselves are inherently vulnerable and suggested stronger authentication methods.
Robert Hann, global VP of technical solutions at Entrust, recommended using passkeys such as face and fingerprint ID wherever possible. "There are stronger and easier authentication methods," he said. For situations where passkeys are not an option, the universal advice is to choose a long, memorable phrase and avoid asking an AI for assistance. Sky News has approached OpenAI for comment; Google and Anthropic declined to comment.
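The "long, memorable phrase" advice can also be automated securely. The sketch below follows the Diceware-style approach of joining randomly chosen words; the tiny word list here is purely illustrative, since real schemes draw from lists of several thousand words to reach adequate entropy:

```python
import secrets

# Illustrative word list only; real passphrase schemes (e.g. Diceware)
# use lists of several thousand words for adequate entropy.
WORDS = ["orbit", "maple", "quartz", "lantern", "pixel", "harbor",
         "velvet", "summit", "cobalt", "meadow", "falcon", "ginger"]

def passphrase(n_words: int = 4, sep: str = "-") -> str:
    # secrets.choice uses the OS CSPRNG, so each word is an unbiased draw.
    return sep.join(secrets.choice(WORDS) for _ in range(n_words))

print(passphrase())
```

The key property is the same as before: the randomness comes from a cryptographic source, not from a language model predicting what a password usually looks like.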
This research underscores the critical need for awareness and caution in the rapidly evolving landscape of AI and cybersecurity. As AI tools become more integrated into daily tasks, understanding their limitations is essential for protecting personal and organizational data from potential breaches.