The rise of AI-powered coding tools has made life easier for developers — but it’s also introduced new security threats. One of the biggest issues lies in AI “hallucinations” — the tendency of AI assistants to invent package names that don’t actually exist.
Researchers have previously noted that code-generation tools often suggest using nonexistent libraries. According to a recent study, about 5.2% of packages recommended by commercial AI models are fabricated. For open-source models, that number jumps to 21.7%.
Normally, trying to run code that imports a fake package results in an error. But attackers quickly realized how to exploit this weakness. All they need to do is upload a malicious package with the same hallucinated name to a public repository like PyPI or npm. The next time an AI assistant recommends that name, the malware could be automatically downloaded when the user installs dependencies.
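To make the mechanics concrete, here is a minimal sketch of that failure-then-install flow. The package name fastapi_jwt_guard is invented here purely for illustration; the point is only that the import keeps failing until something, published by anyone, sits under that exact name on the public index.

    # Sketch of what happens when AI-generated code imports a hallucinated package.
    # "fastapi_jwt_guard" is a made-up name used only for this example.
    try:
        from fastapi_jwt_guard import JWTGuard  # suggested by an assistant; no such project
    except ModuleNotFoundError:
        # The reflexive fix is `pip install fastapi-jwt-guard`, which asks PyPI for
        # whatever project currently owns that name. If an attacker has pre-registered
        # it, their package (and any install-time hooks) lands on the developer's machine.
        print("Package missing. Verify the name on PyPI before installing it.")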
Studies show that these hallucinated names aren’t entirely random. Some fake names appear repeatedly, while others may never show up again. For example, in repeated testing of the same prompts, 43% of hallucinated package names were consistent across all runs, while 39% were seen only once.
This tactic has been dubbed “slopsquatting” — a variation of the better-known “typosquatting”, where users mistype package names or URLs and accidentally download malicious versions. Seth Larson, a security developer at the Python Software Foundation, says the full extent of the problem is still unknown. He urges developers to double-check any AI-suggested packages before installing them.
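One lightweight way to follow that advice is to query PyPI's public JSON API (https://pypi.org/pypi/&lt;name&gt;/json) before installing anything an assistant suggests. The sketch below is illustrative rather than a complete check: existence alone proves little, since attackers register hallucinated names too, so it also prints a few signals (release count, author, homepage) that still deserve a human look. The second lookup uses a deliberately nonexistent, made-up name.

    # Rough pre-install check for an AI-suggested package name, using PyPI's JSON API.
    # A 404 means the name does not exist; a hit is a starting point for review, not proof.
    import json
    import urllib.request
    from urllib.error import HTTPError

    def inspect_package(name: str) -> None:
        url = f"https://pypi.org/pypi/{name}/json"
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                data = json.load(resp)
        except HTTPError as err:
            if err.code == 404:
                print(f"{name}: not found on PyPI, possibly a hallucinated name")
                return
            raise
        info = data["info"]
        releases = data.get("releases", {})
        print(
            f"{name}: {len(releases)} release(s), "
            f"author: {info.get('author') or 'unknown'}, "
            f"homepage: {info.get('home_page') or 'n/a'}"
        )

    inspect_package("requests")                       # long-established, real package
    inspect_package("surely-not-a-real-package-123")  # hypothetical nonexistent name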
There are many reasons a developer might attempt to install a nonexistent package — from simple typos to internal company naming conventions that clash with public names.
Feross Aboukhadijeh, founder of the security company Socket, notes that developers now rely so heavily on AI suggestions that they often skip basic verification. This has fueled the spread of phantom packages: names that sound real and are often backed by convincing descriptions, fake GitHub repositories, and even fraudulent blog posts that create an illusion of legitimacy.
Another problem is AI-generated content in search engine summaries. Google, for example, may end up recommending fake or malicious packages simply by reproducing the text from their own npm or PyPI pages, without any fact-checking or verification.
In one recent case, Google’s AI-generated “Overview” snippet mistakenly recommended a malicious package named @async-mutex/mutex instead of the legitimate async-mutex. Another incident involved a hacker going by the alias “_Iain”, who posted detailed guides on dark web forums for building botnets using malicious npm packages — and used ChatGPT to automate much of the process.
The Python Software Foundation is actively addressing this issue by building new tools to protect the PyPI repository from malicious uploads. They encourage developers to carefully inspect package names and contents before installation. For larger organizations, they recommend using mirrored repositories with pre-vetted libraries to reduce the risk of supply chain attacks.
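In the Python ecosystem, that last recommendation usually comes down to pip configuration. The snippet below is only a sketch: the index URL is a placeholder for an organization's own vetted mirror, not a real service.

    # /etc/pip.conf (system-wide) or ~/.config/pip/pip.conf (per user)
    [global]
    index-url = https://pypi.mirror.internal.example/simple
    # Deliberately no extra-index-url: anything absent from the vetted mirror
    # simply fails to install instead of falling back to the public index.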
As the use of AI in development continues to grow, so does the importance of human oversight — because even the smartest code assistant can be tricked into delivering malicious code straight to your project.