Micki Boland is a global cyber security warrior and evangelist with Check Point’s Office of the CTO. Micki has over 20 years of experience in ICT, cyber security, emerging technology, and innovation. Micki’s focus is helping customers, system integrators, and service providers reduce risk through the adoption of emerging cyber security technologies. Micki is an ISC2 CISSP and holds a Master of Science in Technology Commercialization from the University of Texas at Austin, and an MBA with a global security concentration from East Carolina University.
Are deepfakes the 21st century’s answer to photoshopping? Deepfake technology has steadily progressed and proliferated over the past decade.
Deepfakes rely on a form of artificial intelligence (known as deep learning) to create fabricated digital narratives; they can turn people into talking puppets, and convincingly depict events that never happened. In short, they mean that we can no longer trust our eyes.
In this article, knowledgeable and highly accomplished industry expert Micki Boland provides insightful predictions around deepfakes and discusses how to prevent them from negatively affecting your business operations.
Take steps to prevent deepfakes from undermining your enterprise. Learn more in the interview that follows:
In brief, based on everything that you’ve seen this year, what concerns you most regarding the business implications of deepfakes?
Ultimately, everyone in the business world, especially executive leadership, should know the risks associated with the malicious use of deepfakes for CEO impersonation and financial fraud. These uses can lead to brand manipulation and harm, and they present very real threats to all organizations. Take these threats seriously.
What are your predictions around how deepfake technology will evolve in 2024?
1. We will see an increasing number of attempts by cyber criminals to exploit deepfakes, particularly voice fakes, for financial gain and extortion.
These will increasingly be used for deep social engineering and for impersonating CEOs and other C-suite officers, board members, and financial transaction owners.
I think it is highly probable that we will see direct malicious attacks using deepfake video against a publicly traded organization in 2024, with the intent to derail, disrupt, destabilize, or financially harm the organization and its shareholders.
2. We will see an increase in deepfake videos that impersonate private sector executives and authority figures in government.
These will be created by hacktivists, nation states and proxy actors for disruption, destabilization and disinformation, fomenting division and ultimately leading to election interference.
We will also see an increase in the use of deepfake video technology to generate non-consensual pornography and child exploitation material.
3. We will see acceleration of synthetically generated content shared on the internet and virally disseminated via social media platforms. Media groups, advertisers, big tech platforms, news bots, and social engineering bots will rapidly spread this synthetic content.
It will become increasingly challenging to distinguish content created by humans from content generated by AI.
What are best practices for deepfake prevention? How can organizations avoid being duped by deepfakes?
Right now, it is very difficult to detect synthetically generated content, whether deepfake video, audio, images, or news. And the technology is only getting better.
ChatGPT and other large language models will accelerate the creation and distribution of deepfakes. Three key focus areas can help keep organizations and people from being defrauded or manipulated by deepfake technologies:
1. Training, awareness, and preparation are absolutely crucial, both around organizational use and consumption of generative AI and around malicious deepfake content.
This awareness training must be delivered to all employees within enterprises and organizations, with special attention to the C-level and board, officers, and organizational financial custodians.
Business processes for financial transaction management, including separation of duties and “chain of custody” protocols, must be reviewed and shored up as soon as possible.
Organizations need to prepare for incident handling and response, and they need a communications plan in case deepfakes are used against the organization.
2. Create guardrails defining which groups within your organization can use generative AI, and for what purposes. If your marketing department uses generative AI to create synthetic videos, ensure that the right people are using the approved tools and platforms. Your organization can create a launch pad that provides access to organizationally approved generative AI platforms, and cyber security solutions providers can enforce generative AI policies on security gateways (a minimal sketch of such a policy check appears after this list).
3. It is unlikely that detection technologies will be able to keep up with the rapid acceleration of generative AI tools and platforms used to create deepfakes. And detection requires a further step: determining whether the deepfake content is malicious, illicit, or benign.
These challenges will demand the development of zero trust principles around generative AI models (see the NIST AI Risk Management Framework), as well as clear identification, labeling, and authenticity verification mechanisms for AI-generated content posted to social media platforms.
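To make the guardrails in point two concrete, here is a minimal, purely illustrative Python sketch of a group-to-platform allowlist check. The group and platform names are hypothetical assumptions, and in practice such a policy would be enforced on a security gateway or web proxy rather than in application code:

```python
# Hypothetical guardrail sketch: which teams may reach which generative AI
# platforms. All names below are made up for illustration only.

# Policy table: group -> set of approved platforms.
POLICY = {
    "marketing": {"approved-video-studio", "approved-image-studio"},
    "engineering": {"approved-code-assistant"},
}

def is_request_allowed(group: str, platform: str) -> bool:
    """Allow traffic only from an authorized group to an approved platform."""
    return platform in POLICY.get(group, set())

print(is_request_allowed("marketing", "approved-video-studio"))   # True
print(is_request_allowed("marketing", "unvetted-deepfake-site"))  # False: not approved
print(is_request_allowed("intern-pool", "approved-video-studio")) # False: group not authorized
```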
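On the authenticity verification point, the sketch below shows one core idea behind signed content provenance: a publisher signs the media bytes together with a label declaring AI involvement, and consumers verify both against the publisher’s public key. This is a hedged illustration using the Python cryptography library, not the API of any particular standard, though provenance efforts such as C2PA rest on the same principle:

```python
# Minimal provenance sketch: sign media plus an AI-involvement label,
# then verify both. The label format is hypothetical.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Publisher side: generate a keypair and sign label + media together.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

media_bytes = b"...video or image bytes..."
label = b"ai-generated:true;tool:approved-internal-platform"
signature = private_key.sign(label + media_bytes)

# Consumer side: accept content only if label and media verify intact.
def is_authentic(label: bytes, media: bytes, sig: bytes) -> bool:
    try:
        public_key.verify(sig, label + media)
        return True
    except InvalidSignature:
        return False

print(is_authentic(label, media_bytes, signature))                  # True
print(is_authentic(b"ai-generated:false", media_bytes, signature))  # False: label tampered
```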
Is there anything else that you would like to share with the C-level thought leadership audience around this topic?
Connect with Check Point or your cyber security partner. Take steps now to train yourselves and your people. Train people to be aware of, and discerning about, the content that they receive, as well as the use of social engineering bots to disseminate deepfakes.
Include malicious deepfake brand damage information in your incident response and incident handling (IR/IH) plans and your communications plans. Once a deepfake is unleashed on social media, it is difficult to stop its proliferation. Think outside the box in your tabletop exercises and identify the ways in which a malicious executive deepfake could cause your organization the most harm. Prepare for this.
To mitigate the use of deepfake technologies for financial fraud, ensure that there are human validations and authentications at every step of the financial transaction process, with separation of duties and multiple humans holding the “nuke codes” for authorizing financial transactions.
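As one illustration of that principle, the minimal Python sketch below enforces dual control: a wire transfer executes only after a required number of distinct, pre-authorized approvers sign off. The roles and the approval threshold are hypothetical assumptions, not a prescribed implementation:

```python
# Dual-control sketch: no single person can release funds alone.
from dataclasses import dataclass, field

AUTHORIZED_APPROVERS = {"cfo", "controller", "treasury_lead"}  # hypothetical roles
REQUIRED_APPROVALS = 2

@dataclass
class WireTransfer:
    amount: float
    destination: str
    approvals: set = field(default_factory=set)

    def approve(self, approver: str) -> None:
        if approver not in AUTHORIZED_APPROVERS:
            raise PermissionError(f"{approver} is not an authorized approver")
        self.approvals.add(approver)  # a set: the same person cannot count twice

    def execute(self) -> None:
        if len(self.approvals) < REQUIRED_APPROVALS:
            raise PermissionError("separation of duties: more approvals required")
        print(f"Releasing {self.amount:,.2f} to {self.destination}")

transfer = WireTransfer(amount=250_000, destination="vendor-escrow")
transfer.approve("cfo")
transfer.approve("controller")  # a second, distinct approver is required
transfer.execute()
```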
Be sure to add or expand AI-based threat intelligence and proactive threat blocking within your organization.
Last, but not least, your HR teams need to understand that deepfake technologies have been used by prospective “job candidates” in the hiring process, including in “face swapping” incidents. Take proper investigative steps to ensure that candidates are indeed who they say they are.
Discover additional thought leadership insights around deepfakes here.