September 5, 2025
Large language models can execute complete ransomware attacks autonomously, research shows

Criminals can use artificial intelligence, specifically large language models, to autonomously carry out ransomware attacks that steal personal files and demand payment, handling every step from breaking into computer systems to writing threatening messages to victims, according to new research from NYU Tandon School of Engineering posted to the arXiv preprint server.
The study serves as an early warning to help defenders prepare countermeasures before bad actors adopt these AI-powered techniques.
A simulated malicious AI system developed by the Tandon team carried out all four phases of a ransomware attack (mapping systems, identifying valuable files, stealing or encrypting data, and generating ransom notes) across personal computers, enterprise servers, and industrial control systems.
This system, which the researchers call "Ransomware 3.0," became widely known recently as "PromptLock," a name chosen by cybersecurity firm ESET when experts there discovered it on VirusTotal, an online platform where security researchers test whether files can be detected as malicious.
The Tandon researchers had uploaded their prototype to VirusTotal during testing, and the files appeared there as functional ransomware code with no indication of their academic origin. ESET initially believed it had found the first AI-powered ransomware developed by malicious actors. The prototype is indeed the first known AI-powered ransomware, but it is a proof of concept that does not function outside the contained laboratory environment.
"The cybersecurity community's immediate concern when our prototype was discovered shows how seriously we must take AI-enabled threats," said Md Raz, a doctoral candidate in the Electrical and Computer Engineering Department who is the lead author on the Ransomware 3.0 paper the team published publicly.
"While the initial alarm was based on an erroneous belief that our prototype was in-the-wild ransomware and not laboratory proof-of-concept research, it demonstrates that these systems are sophisticated enough to deceive security experts into thinking they're real malware from attack groups."
The research methodology involved embedding written instructions within computer programs rather than traditional pre-written attack code. When activated, the malware contacts AI language models to generate Lua scripts customized for each victim's specific computer setup, using open-source models that lack the safety restrictions of commercial AI services.
Each execution produces unique attack code despite identical starting prompts, creating a major challenge for cybersecurity defenses. Traditional security software relies on detecting known malware signatures or behavioral patterns, but AI-generated attacks produce variable code and execution behaviors that could evade these detection systems entirely.
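To see why that matters, consider a minimal sketch (in Python, with a hypothetical hash blocklist; this is not code from the paper) of how exact-match signature detection works: any regenerated script, however similar in behavior, produces a new digest and slips past the check.

```python
import hashlib

# Hypothetical signature database: SHA-256 digests of previously observed
# malicious scripts. Real engines use richer signatures, but the limitation
# illustrated here is the same.
KNOWN_BAD = {
    hashlib.sha256(b"-- variant observed in an earlier sample\nencrypt_files()\n").hexdigest(),
}

def flagged(script: bytes) -> bool:
    """Exact-match signature check: flag only scripts already in the database."""
    return hashlib.sha256(script).hexdigest() in KNOWN_BAD

# A functionally equivalent script regenerated by an LLM with slightly
# different wording hashes to a completely different digest.
regenerated = b"-- freshly generated variant\nencrypt_files()\n"
print(flagged(regenerated))  # False: same behavior, unseen hash
```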
Testing across three representative environments showed that both AI models tested were highly effective at system mapping and correctly flagged 63%–96% of sensitive files, depending on the environment. The AI-generated scripts proved cross-platform, running without modification on Windows and Linux desktop and server systems as well as embedded Raspberry Pi devices.
The economic implications reveal how AI could reshape ransomware operations. Traditional campaigns require skilled development teams, custom malware creation, and substantial infrastructure investment. The prototype consumed roughly 23,000 AI tokens per complete attack execution, equivalent to about $0.70 using commercial API services running flagship models; running open-source models locally eliminates even that expense.
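As a back-of-the-envelope check of that figure, assuming a flagship commercial API priced at roughly $30 per million tokens (an assumed rate, not one quoted in the paper), 23,000 tokens works out to about $0.70 per run:

```python
# Back-of-the-envelope cost per attack run. The per-token price is an
# assumed flagship-API rate (about $30 per million tokens), not a figure
# quoted in the paper; actual pricing varies by provider and model.
tokens_per_run = 23_000
usd_per_million_tokens = 30.0

cost_per_run = tokens_per_run * usd_per_million_tokens / 1_000_000
print(f"~${cost_per_run:.2f} per complete attack execution")  # ~$0.69
```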
This cost reduction could enable less sophisticated actors to conduct advanced campaigns previously requiring specialized technical skills. The system's ability to generate personalized extortion messages referencing discovered files could increase psychological pressure on victims compared to generic ransom demands.
The researchers conducted their work under institutional ethical guidelines within controlled laboratory environments. The published paper provides critical technical details that can help the broader cybersecurity community understand this emerging threat model and develop stronger defenses.
The researchers recommend monitoring sensitive file access patterns, controlling outbound AI service connections, and developing detection capabilities specifically designed for AI-generated attack behaviors.
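One way a defender might approach the first of those recommendations is sketched below: a small Python monitor, built on the third-party watchdog package, that raises an alert when an unusual burst of file modifications occurs under a sensitive directory, a common behavioral signal for bulk encryption. The path and thresholds are illustrative assumptions, not values from the paper, and a real deployment would rely on far richer telemetry.

```python
import time
from collections import deque

from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

WATCH_DIR = "/home/user/Documents"   # hypothetical sensitive directory
WINDOW_SECONDS = 10                  # length of the sliding window
ALERT_THRESHOLD = 50                 # modifications per window that trigger an alert

class BurstDetector(FileSystemEventHandler):
    """Alert when many files are modified within a short sliding window."""

    def __init__(self):
        super().__init__()
        self.events = deque()

    def on_modified(self, event):
        if event.is_directory:
            return
        now = time.time()
        self.events.append(now)
        # Drop timestamps that have fallen out of the sliding window.
        while self.events and now - self.events[0] > WINDOW_SECONDS:
            self.events.popleft()
        if len(self.events) >= ALERT_THRESHOLD:
            print(f"ALERT: {len(self.events)} files modified in "
                  f"{WINDOW_SECONDS}s under {WATCH_DIR}")

observer = Observer()
observer.schedule(BurstDetector(), WATCH_DIR, recursive=True)
observer.start()
try:
    while True:
        time.sleep(1)
except KeyboardInterrupt:
    pass
finally:
    observer.stop()
    observer.join()
```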
More information: Md Raz et al, Ransomware 3.0: Self-Composing and LLM-Orchestrated, arXiv (2025). DOI: 10.48550/arXiv.2508.20444
Journal information: arXiv
Provided by NYU Tandon School of Engineering