A hacker used AI to create ransomware that evades antivirus detection
Vibe coding is all the rage among enthusiasts who use large language models (or “AI”) in place of conventional software development, so it’s not shocking that vibe coding has now been used to power ransomware, too. One security research firm says it has spotted the first example of ransomware powered and enabled by an LLM: specifically, a model from ChatGPT maker OpenAI.
In a blog post featuring researcher Anton Cherepanov, ESET Research says it has detected a piece of malware “created by the OpenAI gpt-oss:20b model.” PromptLock, an otherwise fairly standard ransomware package, includes embedded prompts that it sends to a locally stored LLM. Because LLM outputs vary from run to run, producing unique, non-repeated results for each prompt, the generated code can evade standard antivirus setups, which are designed to search for specific signatures.
ESET elaborates in a Mastodon post, spotted by Tom’s Hardware. PromptLock uses Lua scripts to inspect files on a local system, encrypt them, and send sensitive data to a remote computer. It appears to be searching for Bitcoin information specifically, and thanks to the wide-open nature of the OpenAI model and the Ollama API, it can work on Windows, Mac, and Linux. Because gpt-oss:20b is a lightweight, open-weight AI model that can run on local PC hardware, it doesn’t need to call back to more elaborate systems like ChatGPT, so OpenAI itself can’t simply block it.
It’s written in Go and relies on Lua scripts, a scripting language that would be familiar to anyone making games in, say, Roblox. The point is that PromptLock may well have been created by someone with little to no experience in conventional programming. And though the output is variable, the prompts themselves are static, so Cherepanov says that “the current implementation does not pose a serious threat” despite its novelty.
“Script kiddies are now prompt kiddies,” said one Mastodon user in reply.