Researchers flag code that uses AI systems to carry out ransomware attacks

Researchers at cybersecurity firm ESET claim to have identified the first piece of AI-powered ransomware in the wild.
The malware, called PromptLock, essentially functions as a hard-coded prompt injection attack on a large language model, causing the model to assist in carrying out a ransomware attack.
Written in the Go programming language, the malware sends its requests through Ollama, an open-source tool that exposes an API for running large language models locally, and uses a local copy of OpenAI’s open-weights gpt-oss:20b model to execute tasks.
Those tasks include inspecting local filesystems, exfiltrating files and encrypting data on Windows, macOS and Linux devices using the 128-bit SPECK cipher.
According to senior malware researcher Anton Cherepanov, the code was discovered Aug. 25 by ESET on VirusTotal, an online repository for malware analysis. Beyond knowing that it was uploaded somewhere in the U.S., he had no further details on its origins.
“Notably, attackers don’t need to deploy the entire gpt-oss-20b model within the compromised network,” he said. “Instead, they can simply establish a tunnel or proxy from the affected network to a server running Ollama with the model.”
ESET believes the code is likely a proof of concept, noting that functionality for a feature that destroys data appears unfinished. Cherepanov told CyberScoop that ESET has yet to see evidence of the malware being deployed by threat actors in the company’s telemetry.
“Although multiple indicators suggest the sample is a proof-of-concept (PoC) or work-in-progress rather than fully operational malware deployed in the wild, we believe it is our responsibility to inform the cybersecurity community about such developments,” the company said on X.
In screenshots provided by ESET, the ransomware code embeds instructions to the LLM: telling it to generate malicious Lua scripts, asking it to check file contents for personally identifiable information, and directing it to use its “analysis mode” to generate a ransom note based on what the program judged a ransomware actor might write.
It also provided a sample Bitcoin address – which appears to be the known address of the cryptocurrency’s anonymous creator Satoshi Nakamoto – to use when demanding payment.
It’s a novel example of exploiting weaknesses in the prompting process to induce an AI program to carry out the core functions of ransomware: locking files, stealing data, threatening and extorting victims and extracting payment.
Researchers in AI security are increasingly highlighting the risk to businesses and organizations that deploy AI “agents” on their networks, noting that these programs must be granted high-level administrative access to do their jobs, are vulnerable to prompt injection attacks and can be turned against their owners.
Because the malware relies on scripts generated by AI, Cherepanov said one difference between PromptLock and other ransomware “is that indicators of compromise (IoCs) may vary from one execution to another.”
“Theoretically, if properly implemented, this could significantly complicate detection and make defenders’ jobs more difficult,” he noted.