Sergey Shykevich has a military intelligence background and is an experienced cyber threat intelligence manager with a proven track record of success.

He has profound knowledge of cyber and fraud-oriented threats, and wide experience in building
intelligence deliverables and products for companies of different sizes across various industries.

In this expert interview, Check Point’s Sergey Shykevich delves into the trouble with the lack of transparency in AI models, the potential for algorithmic manipulation, and who should be held to account.

Shykevich also highlights must-know insights for business leaders, making recommendations as to how leaders can create an environment that accommodates generative AI technologies and new AI-based tools.

The interview also touches on the topic of deepfakes, their prevalence and the need for both regulation and anti-deepfake technologies. Finally, we get a glimpse into what this Threat Intelligence Group Manager is working on right now! Don’t miss this!

The need for “explainable AI models” (models that provide interpretable and explainable results) is a growing concern. How is this being addressed and/or how would you like to see this addressed?

I think it’s a very interesting question and a major discussion point. I think that the big thing with ChatGPT and why it’s been such a success in the last year, leading everyone to jump on the AI hype-train, is that it’s really easy to use without knowing what’s there – without knowing how it works.

The models that ChatGPT and similar tools are based on are not new. It’s not as though OpenAI completely invented new models of generative AI, or anything like that. But they’ve made it easy to use, with a simple user interface. And because it’s so easy to use, everyone can use it – so they do.

But, as you mentioned, on the negative side, it’s unclear what influences the output of these tools. Maybe someone could influence the output from a political or adversarial perspective, or something along those lines.

In the media, there have been several news stories pertaining to how generative AI simply invents events. In one case, I saw complaints about how these tools invented the sexual assault of a student, as perpetrated by a professor at a college. The materials even included a link to an alleged Washington Post story about the incident. However, all of this was fictitious; it was falsely created; none of it ever happened.

The Guardian even published an editors’ note that in some cases, ChatGPT has been used to create fake Guardian stories. ChatGPT can “absorb” a writer’s style and “learn” about the topics that he/she usually covers, and can then invent a whole article, including a link.

I really think that it falls on the vendors to bear some responsibility and maybe even to offer some type of disclosure. For example, I think that we all wish that they would put in some kind of guardrails around pedophilia and explicit content. And I think it’s the vendors’ responsibility to tell people, ‘we’re doing some kind of algorithmic manipulation of the data in order to ensure that we avoid this kind of content.’

I think that at this point, that’s maybe the best thing to do.

What is something about AI that you think business leaders don’t realize?

I think that the biggest thing is that generative AI is here and it will not disappear. It’s impossible to go back at this point.

From many organizations, we hear ‘oh, is there a way to completely block access to generative AI platforms on the network level?’ And I do understand, because there are a lot of concerns now about information leakage, for example, such as when people inside companies (and we’ve seen examples of this) have submitted source code or sensitive PII.

As a company, Check Point tries to give organizations that want to block it the ability to do so, but it’s becoming increasingly difficult, because generative AI is an inherent part of accessing the internet. Blocking access to generative AI means blocking access to Bing, for example.

I think everyone should understand this and consider the best ways to mitigate the risk. Much of it comes back to awareness, among employees and across organizations: what people can do with the technology, what they should do with it, and what shouldn’t be done with it.

A hacker could deepfake our colleagues. What are your predictions around deepfakes and anti-deepfake technologies?

I think that in terms of deepfakes, the technologies were already there, and deepfakes may be one of the only areas in which AI was already widely used before the AI ‘boom’ of this last year.

Some deepfakes from a year or more ago might not have been that creative or produced at such scale, but the technology was already there. At this point, it’s very difficult to differentiate a real photo from a fake one.

So in terms of deepfakes, I think that there should be some regulations or maybe some watermarking – those may be the best possible solutions – because again, as I mentioned previously, AI is here to stay. It’s not going to disappear. People will use it to create pictures, videos and the like.

Of course, there’s also an educational dimension to deepfakes, but education alone is never enough. The vendors, at least the responsible ones, need to provide some way to differentiate deepfakes from real content.

In terms of combating deceptive, defamatory or dangerous deepfakes, I assume it’s possible with AI. But I must say that, even at this point, the AI engines that exist (OpenAI, for example, launched an engine intended to let users see whether or not content was generated with AI) are far from accurate. So, this stuff isn’t easy to identify automatically.

I assume that there will be some improvement. I am not sure that it will be perfect.

How can we protect our privacy as automated systems continuously track us?

I think, currently, the moment that you turn on your phone or computer, or step out of your house, you are exposed. There is just no way around it.

In modern Western countries, there is just no way to go without the internet. I think that we should be careful and thoughtful about what information we, ourselves, submit to these platforms, whether it’s our sensitive information or that of the company or organization that we work for.

To answer the question, yes, we should be careful and thoughtful about it, but I think that avoiding privacy intrusions is completely impossible.

If it’s not top secret, in general, what kinds of cool research are you working on at the moment?

So right now, because of the war in the Middle East between Israel and Hamas, we are focusing our efforts on cyber threat actors who are targeting this region of the world.

We’re trying to understand and map these actors – whether they’re nation-state actors or hacktivists.

Very similar to what we saw in Ukraine, what starts in war zones can quickly spread all over the world. These weeks, our focus is on trying to understand these types of cyber threats and trends.

For more insights from threat intelligence expert Sergey Shykevich, please see CyberTalk.org's past coverage. Lastly, to receive timely cyber security insights and cutting-edge analyses, please sign up for the cybertalk.org newsletter.
