Keely Wilkins has been in the technology industry for nearly thirty years. She has worked in corporate, higher education, medical, MSSP, and VAR organizations. Keely earned her MS in Cybersecurity from Florida Institute of Technology.

Keely joined Check Point Software Technologies three years ago as a Security Engineer in Virginia and was recently inducted into the Office of the CTO Evangelist Guild.

OVERVIEW:

In this interview, Keely discusses the impact of social engineering techniques and how the “blast radius” will continue to grow as emerging technology tips the scales in favor of malicious actors (a.k.a. criminals).

Can you give us a brief explanation of social engineering?

Social engineering is a fancy term for manipulation. No one wants to feel manipulated or vulnerable. People want to feel secure and strong. On a recent episode of the Darknet Diaries podcast, a guest referred to people as exploitable, unpatched vulnerabilities. That description dehumanizes the individual and categorizes them as just another threat vector. Maybe our egos need that layer of abstraction to acknowledge ourselves as actionable vulnerabilities.

Call it what you will, the objective of social engineering and manipulation is the same: to get a person to act in a way that’s contrary to their natural inclinations.

To your point, people aren't comfortable being perceived as vulnerable. What do the statistics tell us about social engineering?

It's important to take a step back to gain a broader perspective. Social engineering is the art of hacking humans. Specific to cybersecurity, the value a person holds as an exploit is their privileged access to data, a system, a network, a building, or their proximity to leadership. If the bad actor (a.k.a. criminal) succeeds in persuading their target to click a link that delivers malicious code, or to otherwise enable unauthorized access, bad things can happen very quickly.

Verizon's DBIR states that 82% of all breaches in 2021 involved humans. To put that in context, the research examined 4,110 confirmed breaches; 82% of that is roughly 3,370 breaches with a person in the loop. That doesn't make us the weakest link; it makes us a predictable target. That should make us feel uncomfortable and heighten our awareness.
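As a quick back-of-envelope check of that arithmetic (a sketch in Python; the numbers are just the figures cited above, not additional DBIR data):

```python
# Back-of-envelope check of the DBIR figures cited above.
confirmed_breaches = 4110      # confirmed breaches examined in the report
human_element_rate = 0.82      # share of breaches involving the human element

print(round(confirmed_breaches * human_element_rate))  # -> 3370
```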

Social engineering is the precursor to most other attacks; it gets the door open.

What techniques are used to persuade us to act contrary to our training and best interests? How can we turn the tables and become less predictable?

Persuasion is a performance art. It's calculated like a science and delivered like a song. The key is making the target feel something: trust, urgency, kinship, authority, disdain, power, stress, or pity. Once the target is emotionally invested, the game is afoot.

There are automated tools that use publicly available information to craft very personal and compelling stories that appeal to specific targets. For example, deepfakes and influence bots are designed to target groups of people of similar ideology and steer them toward false information that can drive them to extremism.

Then, there are more traditional social engineers who prefer to engage their targets in face-to-face conversation to gain trust. They don't have to convert your beliefs; they just have to divert your attention long enough to get you to click a link, transfer a file, share some information, or open a door.

Becoming less predictable is rooted in knowing yourself and being mindful of your triggers.

  • Be impolite if you feel suspicious of someone's behavior
  • Independently confirm the information being presented to you
  • If you feel rushed into taking action on something important, take a breath and think it through

Tell us more about automated social engineering. Does it go beyond deepfakes and influence bots?

Automation tools let the hacker personalize attacks using publicly available information without having to manually comb through the target's social media accounts, the accounts of their family and friends, public records, and so on. OSINT (open-source intelligence) gathering supplies the raw material for these automated social engineering attacks.
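To make that concrete, here is a minimal, purely illustrative Python sketch of how scraped public data points can be slotted into a personalized pretext. Every name and field below is hypothetical; real toolkits do this across thousands of targets at once.

```python
# Illustrative only: how automation can merge public (OSINT) data points
# into a personalized lure. All names and fields here are hypothetical.
from string import Template

# Data points an attacker might scrape from public profiles and records.
profile = {
    "name": "Alex",
    "employer": "Example Corp",
    "manager": "J. Smith",
    "recent_event": "the quarterly all-hands",
}

PRETEXT = Template(
    "Hi $name, $manager asked me to follow up after $recent_event. "
    "Could you review the $employer expense summary at the link below?"
)

print(PRETEXT.substitute(profile))
```

This is why independently confirming requests matters: the personal details in a message prove only that the information was public, not that the sender is who they claim to be.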

Deepfakes and influence bots can be compared to what we traditionally call propaganda. They are often devised and distributed by nation-states or extremists trying to divert attention, sway support, promote dissent, and basically trick people into believing whatever nonsense supports their objective. They present altered images, voices, and text to change the narrative of an event to support their own agenda. The easiest way to neutralize this type of social engineering is to independently confirm what is being presented.

Now, imagine all of these techniques amped up and weaponized such that they are not easily detectable. So far, being impolite and rational can get most of us out of trouble with social engineering. What happens when you're being targeted by an AI (artificial intelligence) that can replicate the voice of a human so precisely that your ear cannot distinguish the AI from the real person's voice? Remember, social engineering is predicated on getting you to feel something. There may be subtle nuances between the soundwave produced by the AI and the voice spoken by your loved one, but you won't hear them.
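That "you won't hear it" point can be made concrete: tools compare voices as numeric embeddings rather than by ear. The sketch below is a hypothetical illustration in Python; the four-number vectors are made-up stand-ins for the speaker embeddings a real system would derive from audio with a trained model.

```python
# Hypothetical illustration: how a voice-verification tool might compare
# two clips. The vectors are made-up stand-ins for speaker embeddings.
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two speaker-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

genuine = [0.21, 0.83, 0.40, 0.35]  # embedding of the real voice (made up)
cloned  = [0.20, 0.85, 0.38, 0.33]  # embedding of the AI clone (made up)

# Similar enough to fool an ear, but the residual difference is measurable.
print(f"similarity: {cosine_similarity(genuine, cloned):.4f}")
```

A human ear rounds that residual difference away; software doesn't have to.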

Combine that AI voice replication capability with an advanced language modeling AI so savvy that one of its own engineers declared it to be sentient (alive), and the art of social engineering just took a wrong turn down a dark alley. To be fair, there are good uses for this technology. But I see the challenge of maintaining the purity of the good intent, as well as the ease with which these technologies can be weaponized. These are emerging technologies intended to manipulate how we hear information. They are intended to elicit emotion and prompt action. This is a machine built to hack humans.

How do we respond to AI social engineering?

All of the same suggestions for non-AI social engineering hold true here, at least for now.

Another test you can employ if you suspect you're encountering an AI-replicated voice is to add some spontaneity: introduce a comical or illogical divergence into the conversation. I haven't met an AI yet that has decent comic timing.

There is also a glimmer of hope in the form of AI ethics committees, which set the guidelines for how AI engines are to be designed. The two emerging technologies mentioned in this article have come under quite a bit of public and professional scrutiny. The path forward is not set in stone. If your organization has an AI ethics committee, talk to them, become part of the conversation, and use your power of persuasion on the side of good.

This topic is evolving. Watch for more articles in the near future.

For more information on Deepfakes and Influence Bots, see the 2021 CPX presentation by Micki Boland titled "Cyber Warfare 2021: Next level $#@! You need to know about today's cyber warfare" – available on YouTube.
