
In short
- Google has identified five malware families that interrogate LLMs to generate or hide malicious code.
- A DPRK-affiliated group tracked as UNC1069 used Gemini to locate wallet data, generate scripts for accessing encrypted storage, and craft phishing content.
- Google says it has disabled the accounts and tightened security around model access.
Google has warned that several new malware families are now using large language models at runtime to modify or generate code, marking a new phase in how state-linked and criminal actors deploy artificial intelligence in live operations.
In a report released this week, the Google Threat Intelligence Group (GTIG) said it had tracked at least five distinct AI-enabled malware families, some of which have already been used in active attacks.
The newly identified malware families “dynamically generate malicious scripts, obscuring their own code to evade detection,” while also using AI models “to create malicious features on demand” rather than having them hard-coded into malware packages, the group said.
Each variant uses an external model such as Gemini or Qwen2.5-Coder at runtime to generate or obfuscate code, a method GTIG calls “just-in-time code creation.”
The technique represents a shift from traditional malware design, in which malicious logic is hardcoded into the binary. By outsourcing parts of its functionality to an AI model, the malware can continually rewrite itself to evade detection systems.
Two of the malware families, PROMPTFLUX and PROMPTSTEAL, demonstrate how attackers are integrating AI models directly into their operations.
The GTIG technical brief describes how PROMPTFLUX runs a “Thinking Robot” process that calls Gemini’s API every hour to rewrite its own VBScript code, while PROMPTSTEAL, linked to the Russian APT28 group, uses the Qwen model hosted on Hugging Face to generate Windows commands on demand.
GTIG also identified activity by a North Korean group known as UNC1069 (Masan) that abused Gemini.
Google’s research unit describes the group as “a North Korean threat actor known to conduct cryptocurrency theft campaigns using social engineering,” with notable use of “language related to computer maintenance and credential collection.”
According to Google, the group’s prompts to Gemini included requests for instructions on locating wallet application data, generating scripts to access encrypted storage, and composing multilingual phishing content aimed at crypto exchange employees.
These activities, the report added, appeared to be part of a broader effort to build code capable of stealing digital assets.
Google said it has already disabled the accounts linked to these activities and implemented new security measures to limit model abuse, including refined prompt filters and stricter monitoring of API access.
The findings could point to a new attack surface where malware interrogates LLMs at runtime to locate wallet storage, generate customized exfiltration scripts, and craft highly credible phishing lures.
Declutter has reached out to Google about how this new attack model could change its approach to threat modeling and attribution, but has not yet received a response.

