The double-edged sword of artificial intelligence in cyber security

EXECUTIVE SUMMARY:

The pace of change in the field of artificial intelligence (AI) is difficult to overstate. In the past few years, we have seen a groundbreaking revolution in the capabilities of AI and the ease with which it can be used by ordinary businesses and individuals.

ChatGPT, a form of generative AI that leverages large language models (LLMs) to generate original human-like content, already has 180 million regular users and has had 1.4 billion visits to its website so far in 2023. Businesses are leveraging this kind of technology to automate tasks, run predictive analytics, derive insights for decision-making and even hire new employees – and that’s just scratching the surface.

For businesses and individuals, AI is nothing short of a game-changer. But where game-changing technologies can be used for good, they can also be used for evil. While AI is being utilized to enhance cyber security operations, improve network monitoring and sharpen threat detection, it is also being leveraged by threat actors to advance their attack capabilities and methods. As we approach 2024, the race between offensive and defensive AI has never been closer or more important.

As detailed in Check Point’s 2023 Mid-Year Cyber Security Report, cyber criminals are harnessing AI to create more sophisticated social engineering tactics. By leveraging generative AI, they can create more convincing phishing emails, develop malicious macros in Office documents, produce code for reverse shell operations and much more. Even more concerning is that AI can be used to scale these operations more easily, allowing threat actors to target more victims in a shorter space of time. Artificial intelligence, then, is both a protector and a threat.

AI for – and against – cyber security

This year has witnessed AI's profound influence on both the offensive and defensive sides of cyber security. It has emerged as a potent tool in defending against sophisticated cyber attacks, significantly improving threat detection and analysis. AI-driven systems excel at identifying anomalies and detecting previously unseen attack patterns, mitigating potential risks before they escalate. For instance, AI algorithms can apply real-time threat intelligence to monitor networks continuously and defend accurately against threats as they emerge, reducing the occurrence of false positives.
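
To make the anomaly-detection idea concrete, here is a minimal sketch using one common unsupervised approach, an isolation forest, trained on traffic assumed to be benign. It assumes scikit-learn is available, and every feature name and value is illustrative; this is not Check Point's implementation, just a simplified instance of the technique.

```python
# A minimal sketch of AI-assisted anomaly detection on network traffic,
# assuming scikit-learn. All feature names and values are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical per-connection features: bytes out, bytes in,
# duration in seconds, distinct destination ports contacted.
benign = rng.normal(loc=[500, 1500, 2.0, 3.0],
                    scale=[100, 300, 0.5, 1.0],
                    size=(1000, 4))

# Train on traffic assumed to be benign; contamination is the
# fraction of connections we expect the model to flag as outliers.
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(benign)

# A suspicious connection: a huge outbound transfer touching many ports.
suspicious = np.array([[50000, 200, 0.1, 120]])
print(model.predict(suspicious))  # -1 = anomaly, 1 = normal
```

In practice, production systems enrich such features with live threat intelligence and tune the flagging threshold carefully, which is exactly the false-positive trade-off described above.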

However, the same AI that fortifies our defences is also being weaponized by cyber adversaries. Tools such as ChatGPT have been manipulated by malicious actors to create new malware, accelerate social engineering tactics and produce deceptive phishing emails that can withstand even the most stringent scrutiny. Such advancements underscore the cyber arms race, in which defence mechanisms are continually challenged by innovative offensive strategies.

With deepfake video and voice-cloning capabilities now within reach, we should expect AI-powered social engineering tactics to become even more sophisticated. As if the waters weren’t murky enough, ChatGPT can also be used to spread misinformation, and the risk of 'hallucinations,' where AI chatbots fabricate details to answer user queries convincingly, makes it increasingly difficult to see tools like these purely as a force for good.

The democratization of AI

One of the things that has made ransomware such a prevalent threat to businesses around the world is the rise of Ransomware-as-a-Service, or RaaS. This refers to organized groups that operate almost like legitimate businesses, creating ransomware tools and selling intelligence around exploits and vulnerabilities to the highest bidder. That means even less experienced threat actors, or those with limited resources, can still orchestrate sophisticated attacks against large targets; all they need to do is buy access to the right tools.

Just as RaaS amplified the capabilities of threat actors by democratizing malicious software and making it more accessible, AI-as-a-Service (AIaaS) is amplifying capabilities around artificial intelligence. The democratization of AI tools, such as ChatGPT and Google Bard, has made them accessible to a broader audience.

While these tools hold immense potential for business and society, they are also being exploited for malicious purposes. For instance, Russian-affiliated cyber criminals have already bypassed OpenAI's geo-fencing restrictions and used generative AI platforms to craft sophisticated phishing emails, keylogging malware and even basic ransomware code. In a white hat exercise, Check Point achieved similar results with Google’s Bard AI, convincing the platform through a series of prompts to help create keylogging or ransomware code, something any user with even a little knowledge could achieve.

Regulatory challenges

The evolving landscape of AI presents a host of regulatory challenges that underscore the importance of a well-rounded framework to govern its application. The ethical considerations at the forefront of these issues centre on fairness, accountability and transparency. AI systems – in particular generative AI – are susceptible to inherent biases that could perpetuate or even exacerbate existing human prejudices. For instance, decision-making AI in hiring or lending could unfairly favour certain demographics over others, necessitating regulatory oversight to ensure equitable practices.
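
To show one concrete check an auditor or regulator might run against such a system, here is a minimal sketch of a demographic parity comparison. The group labels and decisions are entirely made up for illustration; real fairness audits use many more metrics than this single one.

```python
# A minimal sketch of a demographic parity check on binary decisions
# (e.g. hire / no-hire). Groups and outcomes are illustrative only.
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],
    "group_b": [0, 1, 0, 0, 1, 0, 0, 1],
}

# Selection rate per group: the share of positive decisions.
rates = {group: sum(d) / len(d) for group, d in decisions.items()}
print(rates)  # {'group_a': 0.75, 'group_b': 0.375}

# A large gap in selection rates is one signal of potential bias.
gap = abs(rates["group_a"] - rates["group_b"])
print(f"selection-rate gap: {gap:.2f}")  # 0.38
```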

As AI becomes integral to various sectors, the potential for adverse outcomes, be it in cyber security, privacy infringements or misinformation campaigns, escalates. Regulatory frameworks are needed to ensure that the development and deployment of AI technologies adhere to a collective standard. This matters for 'good' AI, but regulation isn’t something nefarious actors typically worry about, an asymmetry that could further widen the gap in the race between defensive and offensive AI.

Securing the present and the future

Amidst the focus on AI's future potential, it is crucial not to overlook the basics. Fundamental security practices, such as patching vulnerabilities, running regular scans and shoring up endpoints, remain essential. While it is tempting to invest all efforts in the threats of the future, addressing present-day challenges is equally important.
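
As one small example of the 'regular scans' mentioned above, here is a minimal sketch of a routine internal port check using only the Python standard library. The host address and port list are placeholders, and this stands in for, rather than replaces, a full vulnerability scanner; run it only against systems you own or are authorized to test.

```python
# A minimal sketch of a routine internal port check, standard library only.
# Host and port list are illustrative placeholders.
import socket

HOST = "192.0.2.10"  # example address (TEST-NET-1); replace with a host you own
COMMON_PORTS = [22, 80, 443, 3389, 8080]

for port in COMMON_PORTS:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(1.0)
        # connect_ex returns 0 when the port accepts a connection
        if sock.connect_ex((HOST, port)) == 0:
            print(f"port {port} is open")
```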

As AI continues to evolve, its capabilities will undoubtedly expand, serving both defenders and attackers. While AI-driven solutions are enhancing defence mechanisms, cyber criminals are also harnessing AI to refine their tactics. This reciprocal relationship between AI and cyber security suggests that as one side advances, the other will adapt and innovate in response.

This article was originally published by the World Economic Forum and has been reprinted with permission.
