
The latest AI predictions for 2024 from an industry expert

EXECUTIVE SUMMARY:

In this highly informative and engaging interview, Check Point expert Sergey Shykevich spills the tea on the trends that he and his threat intelligence team are currently seeing. You’ll get insights into what’s happening with AI and malware, you’ll find out how nation-state hackers could manipulate generative AI algorithms, and you’ll get a broader sense of what to keep an eye on as we move into 2024.

Plus, Sergey tackles the intellectual brain-teaser of whether AI can express creativity (and the implications for humans). Let’s dive right in:

To help our audience get to know you, would you like to share a bit about your background in threat intelligence?

I’ve been in threat intelligence for 15 years. I spent 10 years in military intelligence (in various positions, mostly related to cyberspace intelligence), and I’ve been in the private sector for around 6 years.

These last two years have been at Check Point, where I serve as the Threat Intelligence Group Manager for Check Point Research.

Would you like to share a bit about the cyber trends that you’ve seen across this year, especially as they relate to AI?

Yes. We have seen several trends; I would say there are three or four main ones.

How can AI be leveraged to counter some of the threats that we’re seeing and that we’ll see into the future?

On the phishing and impersonation side, I think AI is being used, and will mostly be used, to identify specific patterns or anomalies within email content, which is no easy job for these tools. Most of the phishing content created by AI is quite good, especially now that the data is pulled directly from the internet (e.g., by the latest version of ChatGPT). AI-based solutions can much better identify suspicious attachments and links, and can stop attacks in their initial stages.
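To make that idea concrete, here is a minimal Python sketch of this kind of initial-stage screening: parsing a raw email, pulling out links and attachments, and flagging obviously suspicious ones with simple heuristics. The extension list, the raw-IP-address check, and the sample message are illustrative assumptions, not a description of how any vendor’s detection actually works.

```python
# A minimal sketch of early-stage email screening: parse a raw message,
# then flag suspicious attachments and links with simple heuristics.
# The extension list and the raw-IP check are illustrative assumptions.
import re
from email import message_from_string

SUSPICIOUS_EXTENSIONS = {".exe", ".js", ".scr", ".iso", ".vbs"}  # assumed list
URL_PATTERN = re.compile(r"https?://[^\s\"'<>]+")

def screen_email(raw: str) -> list[str]:
    """Return human-readable findings for one raw RFC 822 email."""
    msg = message_from_string(raw)
    findings = []
    for part in msg.walk():
        # Attachments with executable-style extensions are a classic red flag.
        filename = part.get_filename()
        if filename and any(filename.lower().endswith(ext) for ext in SUSPICIOUS_EXTENSIONS):
            findings.append(f"suspicious attachment: {filename}")
        # Scan text bodies for links that point at raw IP addresses.
        if part.get_content_type() in ("text/plain", "text/html"):
            payload = part.get_payload(decode=True)
            text = payload.decode("utf-8", errors="replace") if payload else ""
            for url in URL_PATTERN.findall(text):
                if re.match(r"https?://\d{1,3}(\.\d{1,3}){3}", url):
                    findings.append(f"link to raw IP address: {url}")
    return findings

if __name__ == "__main__":
    sample = (
        "From: it-support@example.com\n"
        "Subject: Password reset required\n"
        "Content-Type: text/plain\n\n"
        "Verify your account at http://203.0.113.7/login now.\n"
    )
    print(screen_email(sample))
```

Real products layer many more signals (sender reputation, sandbox detonation, ML scoring) on top of rules like these, but the staging is the same: inspect the message before the user ever clicks.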

But of course, the best way to counter AI-based phishing threats, as they exist right now, is still to avoid clicking on links and attachments.

Most cyber criminals aim to get people to take further action: to fill out a form, or to engage in some other activity that helps them. I think that a big thing AI can do is identify where a specific phishing email leads, or what is attached to it.

Of course, there’s also the possibility of using AI and ML to examine the emails a person receives and assess whether they look like phishing, based on the typical emails that person receives day to day. That’s another possible use case for AI, but I think AI is more often used for what I mentioned before: phishing attack assessment.
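As a toy illustration of that per-user baseline idea, the Python sketch below trains a simple text classifier on a handful of made-up “typical” and “phishing” messages and scores a new one. The data, the features, and the model choice (scikit-learn’s TF-IDF plus Naive Bayes) are assumptions for demonstration only; production systems use far larger corpora and much richer signals.

```python
# A toy baseline classifier: learn what a user's "typical" mail looks like
# versus known phishing, then score an incoming message. The training data
# here is made up for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical examples of a user's normal mail vs. phishing attempts.
normal = [
    "Meeting moved to 3pm, agenda attached",
    "Quarterly report draft for your review",
]
phishing = [
    "Urgent: verify your account or it will be suspended",
    "You have won a prize, click here to claim it",
]

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(normal + phishing, ["normal"] * len(normal) + ["phishing"] * len(phishing))

new_email = "Your mailbox is full, click here to verify your password"
print(model.predict([new_email])[0])        # expected: "phishing"
print(model.predict_proba([new_email])[0])  # class probabilities
```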

Could our cyber crime-fighting AI be turned against us?

In theory, yes. I think that this is more of an issue for the big, well-known AI models like ChatGPT; there are a lot of theoretical concerns about how these companies protect their models (or fail to).

There are really two main concerns here. 1) Will unauthorized people have access to our search queries and what we submit? 2) Manipulation, a topic about which there is even more concern than the first: someone could manipulate a model to provide very biased, one-sided coverage of a political issue. There are very significant concerns in this regard.

And I think everyone who develops AI or generative AI models that will be widely used needs to protect them from hacking and the like.

We haven’t seen such examples, and I don’t have proof that this is happening, but I would assume that big nation-state actors, like Russia and China, are exploring methods for manipulating AI algorithms.

If I were on their side, I would investigate how to do this, because by hacking and changing models, you could influence hundreds of millions of people.

We should definitely think more about how we protect generative AI, from data integrity to user privacy and the rest.

Do you think that AI brings us closer to understanding human intelligence? Can AI be creative?

It’s an interesting set of questions. ChatGPT and Bing now offer a variety of models that can be used. Some of these are defined as ‘strict’ models, while others are defined as ‘creative’ models.

I am not sure that it really helps us understand human intelligence. I think it may put more questions before us than answers because, as I mentioned previously, 99.999% of people who use AI engines don’t really understand how they work.

In short, AI raises more questions and concerns than it provides understanding of human intelligence and human beings.

For more AI insights from Sergey Shykevich, click here. Lastly, to receive timely cyber security insights and cutting-edge analyses, please sign up for the cybertalk.org newsletter.
