For too long, the narrative around AI in cyber security has focused on its defensive capabilities. While AI is revolutionizing how organizations protect themselves – bringing unprecedented speed, accuracy, and automation – it’s crucial to acknowledge the other side of the coin. Cyber criminals are quickly embracing AI, using large language models (LLMs) and advanced agentic AI to craft more potent and elusive attacks.

Consider the rise of malicious LLMs like WormGPT and the more recent Xanthorox AI. These aren’t just theoretical threats; they’re platforms designed for nefarious purposes. WormGPT, based on the GPT-J model, was marketed as a “blackhat” alternative, offering features tailored for malicious activities and reportedly trained on malware-related data. While its creators have ceased operations, the genie is out of the bottle. We’re seeing a steady stream of offensive AI tools, including BurpGPT, PentestGPT, FraudGPT, and PassGPT, with plans for even more sophisticated models like “Evil-GPT-Web3.”

Xanthorox AI, emerging in Q1 2025, represents a significant leap. Unlike its predecessors, it’s an autonomous, modular system built from scratch, operating entirely offline for enhanced anonymity and resilience. Its five specialized AI models – Xanthorox Coder, V4 Model, Xanthorox Vision, Xanthorox Reasoner Advanced, and the Fifth Coordination Module – work in concert to automate malware development, reconnaissance, social engineering, and coordinated attacks without external oversight. This is not just an LLM; it’s an agentic AI, signifying a logical and concerning shift in the cyber criminal’s toolkit.

AI-Enhanced Phishing Is Breaking Brand Trust

The impact of these AI developments on brand protection is particularly acute in the realm of phishing attacks. Threat actors are already using prompt injection and jailbreaking techniques to manipulate legitimate LLMs into generating convincing phishing content. This means phishing attacks are being launched faster, with greater frequency, and with an alarming degree of personalization.

In 2024, a staggering 67.4% of global phishing incidents involved AI tactics, with the finance industry among the top targets. This isn’t just about volume; it’s about sophistication. AI enables attackers to craft highly personalized and convincing campaigns, including spear-phishing, deepfakes, and advanced social engineering techniques.

Emails Are Now Flawless, Fast, and Focused

One of the most immediate impacts is on phishing emails themselves. Gone are the days when grammatical errors and spelling mistakes were clear indicators of a scam. AI-generated emails are often indistinguishable from legitimate corporate communications, and LLMs dramatically accelerate both their creation and their personalization, allowing attacks to be launched and scaled rapidly. Research from 2021 showed that, even with older AI, spear-phishing emails generated by AI achieved a 60% click-through rate. A more recent study from 2024 found that fully AI-generated phishing emails achieved a 54% click-through rate in a human subject study, a 350% increase over the arbitrary phishing emails used as a control.

A chilling real-world example occurred in February 2024, when the Hungarian branch of a European retail company lost €15.5 million in a business email compromise (BEC) attack. The attackers used generative AI to create emails that perfectly mimicked the tone, style, and formatting of prior corporate correspondence, targeting financial staff with urgent money transfer requests. These emails were error-free and contextually accurate, bypassing traditional filters and highlighting the effectiveness of AI-enhanced BEC attacks.

Deepfakes Are Becoming the Ultimate Impersonation Tool

Beyond sophisticated emails, deepfakes have added an entirely new dimension to phishing. These synthetic media, created using deep learning, can fabricate realistic images, audio, or video to impersonate individuals, fake voice messages, or simulate video calls. What was once limited to highly skilled individuals is now easily accessible, with Deloitte reporting that 25.9% of executives have experienced one or more deepfake incidents.

Consider these “in the wild” examples:

  • UAE (2020): A bank manager lost approximately $35 million after falling victim to an AI-driven phishing attack. Threat actors used deepfake voice technology to impersonate a company director, whose voice was cloned from publicly available audio samples. Spoofed emails from the “director” and a “lawyer” lent further legitimacy.
  • Hong Kong (January 2024): A multinational firm suffered a $25 million loss in a deepfake video scam. Attackers used AI-generated deepfake videos to impersonate the company’s CFO and other employees during a video conference call, exploiting group dynamics to dispel a finance worker’s doubts.
  • UK (May 2024): In an unsuccessful but alarming attempt, attackers used a fake WhatsApp account, AI-generated voice cloning, and manipulated YouTube footage to impersonate a CEO during a Microsoft Teams meeting, aiming to trick an agency leader into setting up a new business entity to solicit funds and personal details.

AI Analysis Is Boosting Reconnaissance

AI’s role in phishing isn’t limited to content generation and deepfakes. AI-powered data analysis allows attackers to harvest and analyze vast datasets from social media, public records, and breached databases at unprecedented speeds. This facilitates highly targeted spear-phishing campaigns tailored to specific individuals or organizations.

AI can predict victim behaviors and optimize attack timing. For instance, AI can analyze communication patterns within an organization to determine the ideal moment to deploy a phishing email mimicking a CEO’s tone and style, significantly increasing success rates.

As AI models become more powerful, threat actors are effectively receiving the same upgrades for highly targeted reconnaissance. Advanced features in models like Grok 3 and ChatGPT 4.0 allow for rapid analysis of public information. This means AI can forecast high-value opportunities, crafting campaigns that exploit trends before organizations can implement defenses.

In another “in the wild” example from early 2024, a major Indian financial institution was targeted by an AI-driven phishing campaign that compromised sensitive customer data. Attackers used NLP and generative AI to craft spear-phishing emails mimicking the CEO’s writing style, incorporating official formatting and terminology gleaned from LinkedIn and corporate websites. These emails directed top managers to a fraudulent internal portal, granting attackers access to financial databases. More broadly, financial phishing attacks in India rose 175% in H1 2024, with AI playing a significant role.

We’re witnessing a significant shift, where AI is no longer just a tool for defense but also a potent weapon in the hands of attackers. Our latest ebook, “AI & Speed in Cyber,” delves into this critical shift, highlighting how AI is impacting brand protection, malware infections, and the relentless surge of phishing attacks.

Download it here.
