State-Sponsored Hackers Turn Google’s AI Into Full-Spectrum Attack Assistant

State-sponsored hacking groups are exploiting Google’s Gemini AI across every stage of their cyber operations, from reconnaissance to post-compromise activity.

Published: February 12, 2026

By Kristian McCann

State-sponsored hackers are systematically exploiting Google’s Gemini AI model throughout every phase of cyberattacks, from initial reconnaissance through post-compromise operations, according to a new assessment from Google’s Threat Intelligence Group (GTIG).

The report reveals that advanced persistent threat (APT) groups from China, Iran, North Korea, and Russia are leveraging the large language model to enhance their offensive capabilities, while cybercriminals are increasingly integrating AI tools into their malicious operations.

The findings mark a significant evolution in how sophisticated adversaries are weaponizing publicly available AI systems. Rather than developing proprietary tools from scratch, these groups are effectively outsourcing portions of their attack development to commercial AI platforms, accelerating their operations while reducing costs and technical barriers.

How Threat Actors Are Weaponizing Gemini AI

Although GTIG noted that no APT or information operations actors have achieved breakthrough capabilities that fundamentally alter the existing threat landscape, the developments represent a concerning trend.

In one particularly alarming finding, threat actors have deployed more than 100,000 prompts against Gemini in large-scale model extraction attempts designed to replicate its reasoning capabilities across multiple languages.

Google’s investigation reveals that Chinese threat actors, including APT31 and Temp.HEX, have used expert cybersecurity personas to direct Gemini in automating vulnerability analysis and generating targeted testing plans within fabricated scenarios. In one documented case, a China-based actor tested Hexstrike MCP tooling while directing the model to analyze Remote Code Execution (RCE) techniques, web application firewall (WAF) bypass methods, and SQL injection test results against specific U.S.-based targets.

Another Chinese threat group frequently leveraged Gemini for code debugging, technical research, and guidance on intrusion capabilities.

The Iranian adversary APT42 deployed Google’s language model to enhance social engineering campaigns and accelerate the development of custom malicious tools through debugging, code generation, and exploitation technique research.

North Korean and Russian actors similarly incorporated Gemini into their operational workflows for tasks ranging from target profiling and open-source intelligence gathering to phishing lure creation, translation, coding assistance, and vulnerability testing.

Beyond state-sponsored groups, cybercriminals are also demonstrating increased sophistication in AI integration, and two malware families highlight the trend. HonestCue, a proof-of-concept framework discovered in late 2025, uses the Gemini API to dynamically generate C# code for second-stage malware payloads, which are then compiled and executed in memory. CoinBait, a React single-page application disguised as a cryptocurrency exchange, contains artifacts indicating its development was accelerated with AI code-generation tools, potentially including the Lovable AI platform, based on client library usage patterns.

Cybercriminals have also weaponized generative AI in ClickFix campaigns delivering the AMOS information-stealing malware for macOS. These operations lure users through malicious ads appearing in search results for common troubleshooting queries, prompting victims to execute harmful commands.

Google has also detailed the mechanics behind the large-scale model extraction and distillation attempts noted earlier: organizations with authorized API access methodically query Gemini with more than 100,000 prompts to replicate its decision-making processes and reproduce its functionality in competing models. This practice amounts to intellectual property theft and threatens the AI-as-a-service business model.
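The report does not describe how such activity is detected, but the pattern it implies, enormous query volume combined with unusually uniform prompts, lends itself to a simple aggregate heuristic. The sketch below is a minimal illustration under those assumptions; the `QueryRecord` structure, the template normalization, and both thresholds are invented for the example and do not represent Google's detection logic.

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical per-account usage record; the field names are
# assumptions for this sketch, not any real provider's telemetry.
@dataclass
class QueryRecord:
    account_id: str
    prompt: str

def prompt_template(prompt: str) -> str:
    """Crude normalization so near-identical templated prompts collapse
    together: lowercase everything and mask digits."""
    return "".join("#" if ch.isdigit() else ch.lower() for ch in prompt)

def flag_extraction_like(records: list[QueryRecord],
                         min_volume: int = 50_000,
                         max_diversity: float = 0.05) -> set[str]:
    """Flag accounts with extreme query volume whose prompts collapse to
    very few distinct templates. Both thresholds are arbitrary
    illustrative values, not tuned detection parameters."""
    by_account: dict[str, list[str]] = {}
    for record in records:
        by_account.setdefault(record.account_id, []).append(record.prompt)

    flagged: set[str] = set()
    for account, prompts in by_account.items():
        if len(prompts) < min_volume:
            continue
        templates = Counter(prompt_template(p) for p in prompts)
        if len(templates) / len(prompts) < max_diversity:
            flagged.add(account)
    return flagged
```

A real provider would fold in many more signals, such as timing, language mix, and output length, but the volume-plus-uniformity pattern matches the behavior described in the report: over 100,000 systematic prompts from accounts that otherwise hold legitimate API access.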

Google’s Response and Defensive Measures

Google has responded to identified threats by disabling accounts and infrastructure associated with documented malicious activity. The company has enhanced its classifiers and underlying models, enabling them to refuse assistance with similar attack patterns moving forward. These targeted defenses aim to prevent abuse while maintaining legitimate use cases for the platform.
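Google does not publish the internals of these defenses, but the layered pattern it describes, disabling offending accounts at the infrastructure level while hardening classifiers that refuse attack-pattern requests, can be sketched generically. Everything below, including the `misuse_score` stub and the threshold value, is a hypothetical illustration of that layering, not Google's implementation.

```python
# Purely hypothetical sketch of layered abuse gating in front of an
# LLM serving endpoint: account-level enforcement plus a per-request
# misuse classifier. None of this reflects Google's actual systems.

BLOCKED_ACCOUNTS: set[str] = set()   # accounts disabled for documented abuse
REFUSE_THRESHOLD = 0.9               # arbitrary illustrative cutoff

def misuse_score(prompt: str) -> float:
    """Stand-in for a trained misuse classifier. A real system would score
    intent and context, not match keywords as this toy version does."""
    suspicious = ("waf bypass", "sql injection payload", "build me an exploit")
    return 1.0 if any(s in prompt.lower() for s in suspicious) else 0.0

def call_model(prompt: str) -> str:
    """Placeholder for the normal model serving path."""
    return f"(model response to {len(prompt)}-character prompt)"

def handle_request(account_id: str, prompt: str) -> str:
    # Infrastructure-level action: disabled accounts never reach the model.
    if account_id in BLOCKED_ACCOUNTS:
        return "ERROR: account disabled"
    # Classifier-level action: refuse requests that look like attack support.
    if misuse_score(prompt) >= REFUSE_THRESHOLD:
        return "I can't help with that."
    return call_model(prompt)
```

The two checks mirror the two responses the report describes: account and infrastructure takedowns for documented bad actors, and improved model-side classifiers that refuse similar attack patterns going forward.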

“We are committed to developing AI boldly and responsibly, which means taking proactive steps to disrupt malicious activity by disabling projects and accounts associated with bad actors, while continuously improving our models to make them less susceptible to misuse,” the report stated.

The company emphasizes that it designs AI systems with robust security measures and safety guardrails, regularly testing them to improve their resilience.

The observed activity represents an evolution of existing techniques rather than a revolutionary change in adversary capabilities. However, Google expects malware operators to continue integrating AI into their toolsets as these technologies become more accessible and sophisticated.

The Path Forward for Security

The weaponization of commercial AI platforms like Gemini underscores the dual-use nature of advanced language models and the ongoing challenge of balancing innovation with security. As threat actors demonstrate increasing sophistication in leveraging these tools, the cybersecurity industry faces a critical inflection point in how AI systems are designed, deployed, and defended.

From state-sponsored espionage operations to financially motivated cybercrime, adversaries are systematically incorporating AI capabilities to reduce development costs, accelerate operations, and lower technical barriers to entry.

Now, adversaries can delegate complex tasks, from vulnerability research to payload generation, to a mainstream AI assistant built for everyday users. This is not a story of hackers discovering an obscure exploit; it is nation-state actors and cybercriminals folding a public AI tool into every phase of their operations, turning what was once a labor-intensive process into an AI-assisted workflow and making the future cyber landscape far more perilous.
