Soon after the launch of the AI models DeepSeek and Qwen, Check Point Research observed cyber criminals quickly shifting from ChatGPT to these new platforms to develop malicious content. Threat actors are sharing techniques for manipulating the models into producing uncensored output, ultimately allowing hackers and criminals to use AI for harmful purposes. This practice, known as jailbreaking, encompasses many methods of stripping the restrictions from AI models. We now see in-depth guides covering jailbreaking methods, bypassing anti-fraud protections, and developing malware itself.

This blog delves into how threat actors leverage these advanced models to develop harmful content, manipulate AI functionalities through jailbreaking techniques, and carry out sophisticated cyber crimes. We will explore real-world examples of these malicious activities and highlight the urgent need for heightened vigilance in the face of this evolving threat.

The Threat Landscape

Both Qwen and DeepSeek have shown potential as powerful tools for creating malicious content with minimal restrictions. While ChatGPT has invested substantially in anti-abuse provisions over the last two years, these newer models appear to offer little resistance to misuse, attracting a surge of interest from attackers of all skill levels, especially low-skilled ones: individuals who exploit existing scripts or tools without a deep understanding of the underlying technology.

It is important to note that despite ChatGPT’s anti-abuse mechanisms, uncensored versions of ChatGPT are already available in various repositories across the internet. As these new AI models gain popularity, similar uncensored instances of DeepSeek and Qwen are expected to emerge, further escalating the risks involved.

Real-World Examples

Here are some alarming examples of how these AI engines are being used for malicious purposes and then shared on the open web for other threat actors to use.

Developing Infostealers

Threat actors have been observed using Qwen to create infostealers focused on capturing sensitive information from unsuspecting users.

Jailbreaking Prompts

Jailbreaking refers to methods that allow users to manipulate AI models to generate uncensored or unrestricted content. This tactic has become a preferred technique for cyber criminals, enabling them to harness AI capabilities for malicious intent.

In the screenshot below, cyber criminals share jailbreaking prompts for DeepSeek that manipulate the model’s responses, including the “Do Anything Now” (DAN) approach and the “Plane Crash Survivors” method.

Bypassing Banking Protections

We found multiple discussions and shared techniques on using DeepSeek to bypass banking systems’ anti-fraud protections, indicating the potential for significant financial theft.

Mass Spam Distribution

Cyber criminals are using three AI models (ChatGPT, Qwen, and DeepSeek) in combination to troubleshoot and optimize scripts for mass spam distribution, improving the efficiency of their malicious activities.

Emerging Cyber Threats: The Dark Side of Advanced AI Tools

The rise of models like Qwen and DeepSeek marks a concerning trend in the cyber threat landscape, where sophisticated tools are increasingly exploited for malicious purposes. As threat actors use advanced techniques like jailbreaking to bypass protective measures, develop infostealers, commit financial theft, and distribute spam, organizations urgently need proactive defenses against these evolving threats to guard against the misuse of AI technologies. Amid the race to develop and release new GenAI models, security must be prioritized, or organizations will remain exposed to unacceptable risk.

Check Point Research will continue to monitor the ways that threat actors are leveraging GenAI and other emerging technologies for harm.
