According to The Next Web, a small group of threat researchers at ESET disclosed something on February 19 that made the security community collectively hold its breath. At first glance, the code glowing on their monitors in Košice, Slovakia, looked like standard Android trash — the usual routines, the usual shady ambitions. But digging deeper into the execution flow, the analysts found a ghost in the machine.
This wasn’t a blind script firing off pre-programmed commands into the void. It was malware that could actually pause, survey the device it was inhabiting, and reason about what to do next. As of early 2026, that distinction — between executing and thinking — is the one that keeps security researchers up at night.
They called it PromptSpy. And it represents a fundamental rupture in how we need to think about digital threats. Decades of defending against mindless automatons — code that follows a script and nothing more — left us reasonably well-equipped. What we weren’t equipped for is fighting entities that can perceive their environment and consult an artificial intelligence for directions.
Why Android Has Always Been a Hacker’s Migraine
To grasp why PromptSpy lands so hard, you first have to understand why attacking Android has historically been such a headache for threat actors. The ecosystem is famously fragmented — sprawling, really, in a way that defies easy generalization.
Thousands of device models roam the wild. A Samsung Galaxy operates differently than a Google Pixel, which operates differently than a Motorola. Different screen sizes, customized UI skins, entirely different OS versions running simultaneously across the installed base. For a hacker trying to write a malicious app that automatically taps a specific button to grant itself permissions, that fragmentation is a wall. A hard-coded coordinate that lands on “Allow” on one handset might hit dead space on another.
Traditional malware attacks this problem with brute force — sprawling, clunky scripts stuffed with if-then statements, gambling on where interface elements might live. Brittle by design. The moment a manufacturer pushes a UI refresh, the whole thing collapses.
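The brittleness is easy to see in miniature. Here is a toy sketch — every device name, coordinate, and tolerance below is invented for illustration — of what happens when a tap coordinate tuned on one handset meets the rest of the Android fleet:

```python
# Conceptual sketch of why hard-coded UI automation breaks across
# Android devices. Layouts and coordinates are invented for illustration.

# Where the "Allow" button actually sits on three hypothetical handsets.
ACTUAL_ALLOW_BUTTON = {
    "vendor_a": (540, 1600),
    "vendor_b": (540, 1720),   # taller screen, button shifted down
    "vendor_c": (360, 1500),   # custom UI skin moves it entirely
}

HARDCODED_TAP = (540, 1600)    # tuned against vendor_a only

def tap_hits_button(tap, button, tolerance=40):
    """A tap 'lands' if it falls within `tolerance` px of the button center."""
    return abs(tap[0] - button[0]) <= tolerance and abs(tap[1] - button[1]) <= tolerance

results = {d: tap_hits_button(HARDCODED_TAP, pos) for d, pos in ACTUAL_ALLOW_BUTTON.items()}
print(results)  # the script only works on the one device it was tuned for
```

One layout refresh from any vendor, and the hard-coded tap lands on dead space — which is exactly the wall PromptSpy was built to walk around.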
PromptSpy doesn’t bother. It sidesteps the entire puzzle of Android’s chaotic version history by outsourcing its vision to Google’s own generative AI model, Gemini. Which is, depending on your disposition, either audacious or darkly elegant.
When PromptSpy lands on a phone, it takes a snapshot of the current screen — buttons, text labels, layout, the full structural hierarchy of the interface. Then, right at runtime, it ships that package off to Gemini.
The malware essentially asks the AI: I am looking at this exact screen right now. Give me step-by-step instructions on where to swipe and what to tap so I can keep myself pinned in the recent-apps list.
Gemini processes the image, reads the context, and sends the instructions back. The malware executes them. Neat. Terrifying. Both.
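No technical details of PromptSpy's client code are public, but the observe-query-act loop ESET describes can be sketched in the abstract. Everything below is invented for illustration — the function names, the data shapes, and especially the "model," which here is a local stub standing in for the round-trip a real sample would make to a remote AI service:

```python
# Abstract sketch of an observe -> query -> act control loop.
# All names, structures, and the stubbed "model" are invented for
# illustration; no real model API is called here.

def capture_screen_state():
    """Stand-in for reading the current UI hierarchy (buttons, labels, layout)."""
    return {"screen": "recent_apps", "elements": ["pin_button", "dismiss_all"]}

def query_model(state, goal):
    """Stub for a cloud-model round-trip: send a screen description plus a
    natural-language goal, get back a parsed list of concrete UI actions."""
    if goal == "stay_pinned" and "pin_button" in state["elements"]:
        return [("tap", "pin_button")]
    return [("wait", 1)]

def execute(actions):
    """Stand-in for dispatching taps and swipes on the device."""
    return [f"performed {verb} on {target}" for verb, target in actions]

state = capture_screen_state()
plan = query_model(state, goal="stay_pinned")
log = execute(plan)
print(log)  # ['performed tap on pin_button']
```

The structural point is that nothing in the loop is hard-coded to a screen layout: the "what to tap" decision is deferred until runtime, which is what makes the behavior adaptive rather than scripted.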
How a Piece of Malware Learned to Call for Help Mid-Attack
Look, the idea of a virus phoning a cloud-based AI for tech support on how to burrow deeper into your phone sounds like third-rate science fiction. But in practice, when you actually trace the execution flow, it’s exactly what’s happening — on real devices, right now.
The specific technique is all about persistence. Getting a malicious app onto a phone is the easy part. Keeping it there is where threat actors have traditionally struggled. Users grow suspicious. They swipe apps away, dig into settings, hit uninstall. PromptSpy turns to its AI-generated instructions to constantly manipulate the interface — keeping itself active, blocking uninstallation attempts behind invisible screen overlays that the user can’t easily pierce.
Beyond that novel trick, the rest of the payload reads like a greatest-hits compilation of cyber threats. It carries a Virtual Network Computing (VNC) module — which, for the uninitiated, hands a remote operator a live feed of your screen and lets them steer your phone as if it were in their own hands. Lock screen data gets captured. Activity gets recorded. Screenshots fire off silently in the background.
But the AI integration is the beating heart of the operation. By reading the device’s state in real time and adapting to it, the malware stops behaving like a blunt instrument. It starts behaving like a remote operator with situational awareness — and that shift changes everything about how defenders need to respond.
“We are no longer looking at code that just follows instructions. We are looking at code that queries models trained on language and context to decide its next move. The era of static malware is quietly ending.”
— ESET Threat Research Team
Caught Early — But That’s Not the Reassuring Part
Before you hurl your phone into the nearest body of water, some grounding is warranted. PromptSpy is not currently torching the global mobile infrastructure.
Telemetry from ESET shows this threat hasn’t spread widely in the wild. The analyzed samples appear to be part of a tightly targeted campaign aimed at users in Argentina. Language clues embedded in the code, combined with the specific distribution vectors the attackers used, point toward something closer to a proof of concept than a mass-scale outbreak. Play Protect — Android’s built-in security scanner — reportedly flags known samples on contact. Unless you’re pulling apps from unverified third-party forums, your immediate exposure is, in most cases, negligible.
That’s the reassuring part. Here’s the part that isn’t.
A proof of concept is exactly that: proof. It proves a specific, highly advanced attack vector has crossed out of the theoretical and into the real. The people writing these packages aren’t tinkering with dusty old code — they are actively stress-testing how large language models can be weaponized to dismantle our most basic security assumptions. The low infection rate today is almost beside the point. The methodology is what matters.
Six Months From Desktop to Mobile — The Timeline Should Alarm You
We are sitting in the first quarter of 2026, and the dominant conversation in tech is still about how to control and defend against artificial intelligence. This discovery didn’t arrive in isolation.
Just last August — barely six months before PromptSpy surfaced — ESET uncovered PromptLock, widely recognized as the first AI-driven ransomware to hit desktop systems. Now the same philosophy has migrated to mobile. Six months. That’s the development cycle we’re apparently dealing with now.
There are two distinct lenses through which to view this trajectory.
First, the raw technical reality. For years, the feedback loop PromptSpy exploits — reading a UI, interpreting it, reacting dynamically — was the exclusive domain of benign software. Automated testing suites used it. Accessibility tools built for visually impaired users relied on it. Injecting a large language model into a malicious runtime control loop, though, rewrites the math for defenders entirely. Security systems are architected to detect known patterns, known signatures, known behaviors. How do you build a detection layer against an app that invents its attack pattern on the fly by querying a third-party AI? Per the AV-TEST Institute, millions of new malicious Android packages were registered last year alone. If even a sliver of those start leaning on dynamic AI reasoning, the volume of unpredictable threat variations will overwhelm signature-based detection — not gradually, but abruptly.
Then there’s the symbolic weight of what just happened.
For a long time, we consoled ourselves with the idea that AI was just very fast autocomplete. Understanding context — distinguishing a “Settings” menu from a “Banking” app purely by looking at a screen — felt like a distinctly human capability. PromptSpy punctures that comfort. It demonstrates that generative AI has crossed into offensive territory requiring genuine contextual reasoning. Not pattern-matching. Reasoning. That’s a different animal.
The cat-and-mouse dynamic of cybersecurity just received a significant hardware upgrade — on the mouse’s side.
The Quiet Machinery Behind “Smart” Malware
We crave a dramatic narrative. We want our cyber threats to look like movie hackers in dark rooms, furiously typing while green text cascades down a monitor. The reality is quieter. Stranger, arguably.
PromptSpy doesn’t reinvent what malware wants. Data theft, screen surveillance, locking users out — those ambitions are as old as the category itself. What it rewires is the survival strategy. By learning to read its environment and adapt in real time, it stops being a static artifact and becomes a reactive agent. That’s not a minor refinement. That’s a category shift.
Worth pausing on something here, too: the attackers didn’t build their own AI. Why would they? Building and hosting a large language model capable of complex image recognition and spatial reasoning demands enormous computational infrastructure — millions of dollars, serious engineering talent, and ongoing maintenance. It’s far cheaper to hijack an API call to an existing commercial model and let someone else’s billion-dollar investment do the heavy lifting. Gemini, in this scenario, becomes an unwitting accomplice. And the hands-on reality is that this kind of API abuse is genuinely difficult to shut down cleanly, because — as Google’s own safety team has acknowledged — the malware’s requests can look nearly indistinguishable from a legitimate accessibility tool asking for help parsing a screen.
Can Google block the malware from using Gemini? They can, and they do — AI providers constantly refresh their safety filters to intercept and reject malicious prompts. The problem is that threat actors typically disguise requests through careful prompt engineering, wrapping the malicious query in the language of something harmless. To the model, it’s a reasonable ask. To the device owner, it’s a quiet catastrophe.
Adaptive. Observant. Patient.
Those aren’t words we’ve historically applied to malware. They are now.
What This Actually Changes for the Rest of Us
For the vast majority of users, the practical guidance remains familiar: keep Google Play Protect enabled, stick to apps from the official Play Store, and treat unsolicited APK files from unfamiliar sources with the suspicion they deserve. Known variants of PromptSpy get flagged automatically. Your immediate risk, in most scenarios, is low.
But the broader implication — the one worth sitting with — is that the tools we built to make computers understand us are now being turned against us by our own software. The accessibility APIs that help a visually impaired user navigate their phone. The generative AI that interprets screen context to answer a question. These capabilities don’t know who’s invoking them or why. They just work. And when they work in service of a malicious runtime loop, the results are, in practice, quite difficult to distinguish from legitimate behavior until the damage is done.
As we push deeper into 2026, the question isn’t whether malware will keep getting smarter. That answer is already settled. The real question — the uncomfortable one — is whether our defenses can develop the same capacity for contextual reasoning that our adversaries are now borrowing from the cloud. Static signatures won’t cut it against an agent that writes its own playbook mid-execution.
The malware isn’t just running anymore. It’s watching. And it’s asking questions.
Reporting draws from multiple verified sources. The editorial angle and commentary are our own.
