
It’s up to us to determine if generative AI helps or harms our world

Reprinted with permission from the World Economic Forum

There has been much discussion about the promise and hype of artificial intelligence (AI). I think we can all agree that AI is a disruptive technology, with the potential to drastically improve our lives via personalized medicine, safer transportation and so much more. And it has great potential to help the cybersecurity industry accelerate the development of new protection tools and validate some aspects of secure coding. However, the introduction of this new technology also carries a potential for abuse and global harm. Ultimately, it is up to us to determine the role that AI will play in our lives. It’s a conversation that must happen now.

In November 2022, OpenAI released a new AI model called ChatGPT (Generative Pre-trained Transformer), which interacts in a conversational way, enabling people to ask questions and receive answers. ChatGPT is extremely popular, garnering over a million users in the five days following its launch. Many people used it to write poetry or create new recipes. But others had more nefarious ideas. Researchers quickly discovered that it is not only possible, but easy, to use ChatGPT to create malicious emails and code that can be used to hack organizations. And, in fact, a mere few weeks after its release, it is already being used for this exact purpose, with even novices creating malicious files and putting us all at risk.

Generative AI may increase cyberattacks

Why does this matter? The world experienced a 38% increase in cyberattacks in 2022 compared to 2021. The average organization was attacked 1,168 times per week. Education and healthcare were two of the most targeted industries, with attacks bringing hospitals and schools to a standstill: physicians were unable to treat patients, and children were sent home from closed schools. We may now see an exponential rise in cyberattacks due to generative AI models.

 

Image: Check Point Research

To its credit, OpenAI has invested a tremendous amount of effort to stop abuse of its AI technology, writing, “while we’ve made efforts to make the model refuse inappropriate requests, it will sometimes respond to harmful instructions or exhibit biased behavior.” But, unfortunately, ChatGPT is struggling to stop the production of dangerous code.

To illustrate this point, researchers compared ChatGPT and Codex, another AI-based system that translates natural language to code. A full infection flow was created under a single restriction: the researchers wrote no code themselves; the AI did all the work. Assembling the pieces produced a phishing attack: an email weaponized with a malicious Excel file whose macros, if executed, download a reverse shell. In a reverse shell, the compromised machine opens an outbound connection back to the attacker; in other words, the attacker operates as the listener and the victim as the initiator of the attack.
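To make that “reverse” direction concrete, here is a minimal, deliberately harmless Python sketch. All names and values are hypothetical (a localhost address and an arbitrary port, not anything from the research described above), and no shell is spawned; the two sides simply exchange a greeting. The point is the connection direction: because the victim dials out, the traffic can slip past firewalls that only scrutinize inbound connections.

```python
import socket
import threading
import time

# Hypothetical local-demo values -- for illustration only.
ATTACKER_HOST, PORT = "127.0.0.1", 4444

def attacker_listener():
    """Attacker side: does nothing but wait for the callback."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((ATTACKER_HOST, PORT))
        srv.listen(1)
        conn, addr = srv.accept()           # the victim, not the attacker, opens the connection
        with conn:
            print(f"callback from {addr}")  # on the victim, this looks like ordinary outbound traffic
            conn.sendall(b"connected")

def victim_initiator():
    """Victim side: initiates an outbound TCP connection to the listener.
    A real reverse shell would wire this socket to a command interpreter;
    this sketch only receives a one-word greeting."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((ATTACKER_HOST, PORT))
        print(cli.recv(1024).decode())

if __name__ == "__main__":
    threading.Thread(target=attacker_listener, daemon=True).start()
    time.sleep(0.2)  # give the listener a moment to bind before connecting
    victim_initiator()
```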

Taking it a step further, the team at Check Point Research recently uncovered instances of cybercriminals using ChatGPT to develop malicious tools. In some instances, these hackers relied entirely on AI for development, while others simply used it to greatly accelerate the creation of malicious code. From malware and phishing to a script that can encrypt a victim’s machine automatically without any user interaction, and even the creation of an illicit marketplace, it is concerning to see how quickly cybercriminals are adopting AI and using it in the wild for their disruptive purposes.

Intersection of AI, cybersecurity and safety

So, can generative AI be used for harm? Yes. Can AI be used to arm cybercriminals to shortcut phishing attacks? Yes. When will this become a true threat to society? Soon.

This month at the World Economic Forum’s Annual Meeting, we have an opportunity for leaders from government, AI, business and cybersecurity to come together to discuss the urgent intersection of AI, cybersecurity and safety. We should not aim to stifle innovation, but rather to ensure that there are safeguards in place.

AI is already leading to scientific breakthroughs. It’s helping to detect financial fraud and build climate resilience. It’s a tool for us to use to improve and advance many areas of our lives. This includes safety and cybersecurity. By incorporating AI into a unified, multi-layered security architecture, cybersecurity solutions can provide an intelligent system that not only detects – but actively prevents – advanced cyberattacks.

The time to have these conversations is now. Just as many have advocated for the importance of diverse data and engineers in the AI industry, so must we bring expertise from psychology, government, cybersecurity and business into the AI conversation. It will take open discussion and shared perspectives between cybersecurity leaders, AI developers, practitioners, business leaders, elected officials and citizens to determine a plan for thoughtful regulation of generative AI. All voices must be heard. Together, we can surely tackle this threat to public safety, critical infrastructure and our world. We can turn generative AI from foe to friend.

 
