- In early 2025, Check Point Research identified a cyber attack campaign exploiting the popularity of the generative AI service Kling AI. The attack began with deceptive social media ads leading to a fake website designed to trick users into downloading malicious files.
- The attack used fake Facebook pages and ads to distribute a malicious file which ultimately led to the execution of a remote access Trojan (RAT), granting attackers remote control of the victim’s system and the ability to steal sensitive data.
- The malware deployed in this campaign featured advanced evasion techniques, including file masquerading to disguise harmful executable files as harmless media files, and extensive anti-analysis methods to avoid detection.
- Check Point’s threat emulation and Harmony Endpoint offer robust protection against the techniques and threats outlined in this campaign, ensuring defense against malicious files, remote access tools, and targeted social engineering attacks.
As generative AI continues to capture global attention, threat actors are quick to exploit AI’s capabilities and popularity. From deepfake scams to impersonation attacks, the rising trust in AI-powered platforms has created new openings for cyber criminals. In early 2025, Check Point Research began tracking a sophisticated threat campaign that capitalized on this trend, specifically by impersonating Kling AI, a widely used image and video synthesis tool with over 6 million users.
This campaign, propagated through false Facebook advertisements and spoofed pages, ultimately directed users to a counterfeit website designed to deliver a malicious payload. In this blog, we break down the tactics used in this campaign and examine how attackers are leveraging the credibility of generative AI services to deceive users and spread malware.
For a comprehensive understanding of the impersonation attack, read Check Point Research’s publication here.
From Fake Ads to Fake Downloads
The attack begins with fake advertisements on social media. Since early 2025, our team has identified around 70 sponsored posts that falsely promote the popular AI tool Kling AI. These ads come from convincing but fraudulent Facebook pages designed to look like the real company.
Clicking on one of these ads leads users to a fake website that closely mimics Kling AI’s actual interface. Just like the real tool, the site invites users to upload images and click a “Generate” button to see AI-powered results. However, instead of delivering an image or video, the site offers a download—one that appears to be an archive file containing a new AI-generated media file.
The downloaded file is made to look like a harmless image, complete with a name like Generated_Image_2025.jpg and even a familiar image icon. However, behind this seemingly harmless appearance lies something dangerous: the file is a disguised program intended to compromise the user’s system. This technique—known as filename masquerading—is a common tactic used by threat actors to trick users into launching malicious software.
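To illustrate the masquerading idea, the sketch below compares a file's claimed image extension against its leading magic bytes. The signature table, the check itself, and the example usage are a generic triage heuristic, not logic taken from this campaign's samples.

```python
# Minimal masquerading check: flag files whose extension claims an image
# but whose header says otherwise. Signatures here are illustrative.
from pathlib import Path

IMAGE_SIGNATURES = {
    ".jpg":  [b"\xff\xd8\xff"],
    ".jpeg": [b"\xff\xd8\xff"],
    ".png":  [b"\x89PNG\r\n\x1a\n"],
    ".gif":  [b"GIF87a", b"GIF89a"],
}
EXECUTABLE_SIGNATURES = [b"MZ", b"\x7fELF"]  # Windows PE / Linux ELF headers


def looks_masqueraded(path: str) -> bool:
    """Return True if the file's image extension does not match its header."""
    p = Path(path)
    ext = p.suffix.lower()
    if ext not in IMAGE_SIGNATURES:
        return False
    with p.open("rb") as f:
        header = f.read(16)
    # Header matches the claimed image type: nothing suspicious here
    if any(header.startswith(sig) for sig in IMAGE_SIGNATURES[ext]):
        return False
    return True


def has_executable_header(path: str) -> bool:
    """True if the file starts with a Windows PE or Linux ELF header."""
    with Path(path).open("rb") as f:
        header = f.read(16)
    return any(header.startswith(sig) for sig in EXECUTABLE_SIGNATURES)


if __name__ == "__main__":
    name = "Generated_Image_2025.jpg"  # file name mentioned in the report
    if Path(name).exists():
        verdict = "suspicious" if looks_masqueraded(name) else "header matches extension"
        print(f"{name} -> {verdict} (executable header: {has_executable_header(name)})")
```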
Once opened, the program quietly installs itself and ensures it restarts automatically every time the computer is turned on. It also checks for any signs that it is being watched or analyzed by cyber security tools and tries to avoid detection.
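The post does not spell out which auto-start mechanism the installer uses; one common choice on Windows is a registry Run key. A minimal defensive sketch, assuming a Windows host, that simply lists Run-key entries so unexpected ones can be reviewed:

```python
# Minimal sketch: enumerate Windows Run keys, a common auto-start location
# abused for persistence. Assumes a Windows host; the campaign's exact
# persistence mechanism is not specified in this post.
import winreg

RUN_KEYS = [
    (winreg.HKEY_CURRENT_USER,  r"Software\Microsoft\Windows\CurrentVersion\Run"),
    (winreg.HKEY_LOCAL_MACHINE, r"Software\Microsoft\Windows\CurrentVersion\Run"),
]


def list_run_entries():
    """Yield (subkey, name, command) for every auto-start entry found."""
    for hive, subkey in RUN_KEYS:
        try:
            with winreg.OpenKey(hive, subkey) as key:
                index = 0
                while True:
                    try:
                        name, value, _ = winreg.EnumValue(key, index)
                        yield subkey, name, value
                        index += 1
                    except OSError:
                        break  # no more values under this key
        except FileNotFoundError:
            continue


if __name__ == "__main__":
    for subkey, name, command in list_run_entries():
        print(f"{subkey}\\{name} -> {command}")
```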
Stage 2: Silent Takeover with Remote Access Tools
After the initial fake file is opened, a second, more serious threat is activated. This stage installs a remote access Trojan (RAT), malware that allows attackers to take control of the victim’s computer from a distance.
Each version of this tool is slightly altered to avoid detection, but all include a hidden configuration file that connects back to the attackers’ server. These files also contain campaign names like “Kling AI 25/03/2025” or “Kling AI Test Startup,” suggesting ongoing testing and updates by the threat actors.
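A common first triage step for samples like these is to pull printable strings from the binary and look for campaign labels of the kind described above. The sketch below assumes a hypothetical sample file name and an illustrative marker list:

```python
# Minimal triage sketch: extract printable ASCII strings from a sample and
# flag ones resembling the campaign labels mentioned in the report.
# The sample path and marker list are illustrative assumptions.
import re
from pathlib import Path

MARKERS = [b"Kling AI", b"Startup", b"Test"]  # labels reported in the embedded configs


def printable_strings(data: bytes, min_len: int = 6):
    """Return ASCII runs of at least min_len characters, like the Unix strings tool."""
    return re.findall(rb"[\x20-\x7e]{%d,}" % min_len, data)


def flag_markers(path: str):
    """Yield decoded strings from the file that contain any known marker."""
    data = Path(path).read_bytes()
    for s in printable_strings(data):
        if any(marker in s for marker in MARKERS):
            yield s.decode("ascii", errors="replace")


if __name__ == "__main__":
    for hit in flag_markers("sample.bin"):  # hypothetical sample name
        print(hit)
```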
Once in place, the malware begins monitoring the system—especially web browsers and extensions that store passwords or other sensitive data—giving attackers the ability to steal personal information and maintain long-term access.
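As a defensive illustration, the sketch below enumerates Chromium-style "Login Data" databases, the kind of browser credential stores such malware typically reads; the paths are common Windows defaults, not indicators from this campaign.

```python
# Minimal sketch: locate Chromium-style "Login Data" credential databases that
# infostealers commonly target, so access to them can be audited.
# Browser paths are common Windows defaults, not indicators from this campaign.
import os
from pathlib import Path

LOCAL_APPDATA = Path(os.environ.get("LOCALAPPDATA", ""))

BROWSER_USER_DATA = {
    "Chrome": LOCAL_APPDATA / "Google" / "Chrome" / "User Data",
    "Edge":   LOCAL_APPDATA / "Microsoft" / "Edge" / "User Data",
}


def find_login_databases():
    """Yield (browser, path) for every profile's Login Data file found on disk."""
    for browser, root in BROWSER_USER_DATA.items():
        if not root.exists():
            continue
        for login_db in root.glob("*/Login Data"):
            yield browser, login_db


if __name__ == "__main__":
    for browser, db in find_login_databases():
        print(f"{browser}: {db}")
```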
A Familiar Playbook: Tracing the Campaign
While the exact identity of the attackers remains unknown, evidence strongly suggests links to Vietnamese threat actors. Facebook-based scams and malware campaigns are a known tactic among groups from the region—especially those focused on stealing personal data.
In this case, our analysis revealed multiple clues pointing in that direction. Similar campaigns themed around AI tools have previously contained Vietnamese-language terms within the malware code. Consistent with that pattern, we found several references—such as debug messages—in Vietnamese in this latest campaign as well.
These findings align with broader trends observed by other security researchers investigating similar Facebook malvertising efforts.
Defending Against the New Face of AI-Themed Threats
As generative AI tools grow in popularity, cyber criminals are finding new ways to exploit that trust. This campaign, which impersonated Kling AI through fake ads and deceptive websites, demonstrates how threat actors are combining social engineering with advanced malware to gain access to users’ systems and personal data.
With tactics ranging from file masquerading to remote access and data theft, and signs pointing to Vietnamese threat groups, this operation fits into a broader trend of increasingly targeted and sophisticated social media-based attacks.
To help organizations stay protected, Check Point threat emulation and Harmony Endpoint offer comprehensive coverage across attack methods, file types, and operating systems—effectively blocking the threats outlined in this report. As always, proactive threat detection and user awareness remain essential in defending against evolving cyber threats.
For a comprehensive understanding of the impersonation attack, read Check Point Research’s publication here.