Main Highlights:

  • Security professionals express cautious optimism about the potential of generative AI to bolster cybersecurity defenses, acknowledging its ability to enhance operational efficiency and threat response.

  • Organizations are proactively developing governance structures for generative AI, recognizing the importance of establishing robust policies and enforcement mechanisms to mitigate associated risks.

  • Generative AI is predicted to become a key factor in cybersecurity purchasing decisions by the end of 2024, with applications expected to be pervasive across security operations, reflecting a shift toward more AI-integrated cybersecurity solutions.

 

As the digital landscape evolves, so does the domain of cybersecurity, now standing on the brink of a transformative era powered by generative AI. Research recently conducted by TechTarget’s Enterprise Strategy Group, sponsored by Check Point, unveils compelling insights and statistics that underscore the critical role of generative AI in shaping the future of cybersecurity. To gather data for this report, TechTarget’s Enterprise Strategy Group conducted a comprehensive online survey of IT and cybersecurity professionals from private- and public-sector organizations in North America between November 6, 2023, and November 21, 2023. To qualify for this survey, respondents were required to be involved with supporting and securing, as well as using, generative AI technologies.

The main objectives of this research were:

  • Identify current usage of and plans for generative AI.
  • Establish how generative AI influences the balance of power between cyber-adversaries and cyber-defenders.
  • Determine how organizations are approaching generative AI governance, policies, and policy enforcement.
  • Monitor how organizations will apply generative AI for cybersecurity use cases.

eBook – Generative AI for Cybersecurity – ESG Sponsored Research – 2024

Here’s an exploration of the findings:

Generative AI Has a Foothold Today and Will Be Pervasive by the End of 2024

92% of respondents agree that machine learning has improved the efficacy and efficiency of cybersecurity technologies.

The Adoption Paradox: While 87% of security professionals recognize the potential of generative AI to enhance cybersecurity defenses, there’s a palpable sense of caution. This stems from the understanding that the same technologies can also be leveraged by adversaries to orchestrate more sophisticated cyber-attacks.

 

Strategic Governance and Policy Development: An impressive 75% of organizations are not just passively observing but are actively developing governance policies for the use of generative AI in cybersecurity. This proactive approach indicates a significant shift towards embedding AI within the cybersecurity fabric, ensuring that its deployment is both effective and responsible.

 

Investment and Impact: The research predicts a pivotal trend: by the end of 2024, generative AI will influence cybersecurity purchasing decisions for more than 60% of organizations. This statistic is a testament to the growing confidence in AI’s capabilities to revolutionize security operations, from threat detection to incident response.

 

Operational Efficiency and Threat Response: One of the standout statistics from the research is that 80% of surveyed security teams expect generative AI to significantly improve operational efficiency. Moreover, 65% expect it to shorten their threat response times, underscoring the technology’s potential not just to augment but to actively accelerate security workflows.

 

Challenges and Concerns: Despite the optimism, the research also sheds light on prevailing concerns. Approximately 70% of respondents highlighted the challenge of integrating generative AI into existing security infrastructures, while 60% pointed out the risks associated with potential biases and ethical considerations.

GenAI Balance of Power Skews Toward Cyber-adversary Advantage

Of course, cyber-adversaries also have access to open GenAI applications and have the technical capability to develop their own LLMs. WormGPT and FraudGPT are early examples of LLMs designed for use by cybercriminals and hackers. Will cyber-adversaries use and benefit from LLMs? More than three-quarters of survey respondents (76%) not only believe they will, but also feel that cyber-adversaries will gain the biggest advantage (over cyber-defenders) from generative AI innovation. Alarmingly, most security professionals believe that cyber-adversaries are already using GenAI and that adversaries always gain an advantage with new technologies. Respondents also believe that GenAI could lead to an increase in threat volume, as it makes it easier for unskilled cyber-adversaries to develop more sophisticated attacks. Security and IT pros are also concerned about deepfakes and automated attacks.

Conclusion: Navigating the New Frontier

The research by ESG illuminates the complex yet promising horizon of generative AI in cybersecurity. It presents a narrative of cautious optimism, where the potential for innovation is balanced with a keen awareness of the challenges ahead. As organizations navigate this new frontier, the insights from this study serve as a beacon, guiding the development of strategies that are not only technologically advanced but also ethically grounded and strategically sound.

In essence, the future of cybersecurity, as painted by this research, is not just about embracing generative AI but doing so in a manner that is thoughtful, responsible, and ultimately transformative.

