By Shira Landau, Editor-in-Chief, CyberTalk.org.

Enterprises and individuals have adopted generative AI at an impressive rate. In 2024, generative AI is projected to reach 77.8 million users worldwide — an adoption rate more than double that of smartphones and tablets over a comparable time frame.

While the integration of generative AI into work environments offers coveted agility and productivity gains, such benefits remain tenuous without the right workforce (and societal) structures in place to support AI-driven growth.

It nearly goes without saying — generative AI introduces a new layer of complexity into organizational systems. Effective workplace transformation — one that enables people to use generative AI for efficiency and productivity gains — depends on our ability to secure the technology, our people, and our processes.

In the second half of 2024, CISOs and cyber security teams can facilitate the best possible generative AI-based business outcomes by framing discussions and focal points around the following:

5 ways generative AI will impact CISOs and security teams

1. Expanded responsibilities. It should have been written on a neon sign… Generative AI will add new ‘to-dos’ to CISOs’ (already extensive) list of responsibilities. Only 9% of CISOs say that they are currently prepared to manage the risks associated with generative AI.

New generative AI-related responsibilities will include data security and privacy, access control, model integrity and security, and user training, among other things.

2. AI governance. As generative AI’s footprint expands within enterprises, cyber security leaders must develop comprehensive governance frameworks to mitigate corresponding risks.

This includes addressing the potential for “shadow generative AI” — the unsanctioned use of generative AI tooling. Shadow generative AI poses challenges that parallel those associated with shadow IT.

To build a strategic AI governance plan for your organization, start with an assessment of your organization’s unique needs and generative AI use-cases.

3. User training. Successful AI governance hinges on effective user awareness and training initiatives. Currently, only 17% of organizations have fully trained their teams on the risks around generative AI.

Prioritize generative AI awareness programs to communicate acceptable and unacceptable use-cases. This ultimately minimizes the potential for painful cyber security stumbles.

4. The dual-use dilemma. This concept refers to the notion that generative AI technologies can be applied for both beneficial and malicious ends.

The overwhelming majority of CISOs (70%) believe that generative AI will lead to an imbalance in “firepower,” enabling cyber criminals to wreak havoc on organizations at an unprecedented rate.

Will AI-generated phishing emails achieve higher click-through rates and fuel a higher volume of attacks? No one knows yet. In the interim, CISOs are advised to proactively update and upgrade cyber security technologies.

5. AI in security tooling. Just over a third of CISOs currently use AI — either extensively, or peripherally — within cyber security functions. However, within the next 12 months, 61% of CISOs intend to explore opportunities for generative AI implementation in security processes and protocols.

If your organization is currently assessing AI-based cyber security threat prevention technologies, see how Check Point’s Infinity AI Copilot can advance your initiatives. Learn more here.

Also, be sure to check out this CISO’s Guide to AI. Lastly, to receive cyber security thought leadership articles, groundbreaking research and emerging threat analyses each week, subscribe to the CyberTalk.org newsletter.
