By Manuel Rodriguez. With more than 15 years of experience in cyber security, Manuel Rodriguez is currently the Security Engineering Manager for the North of Latin America at Check Point Software Technologies, where he leads a team of high-level professionals whose objective is to help organizations and businesses meet cyber security needs. Manuel joined Check Point in 2015 and initially worked as a Security Engineer, covering Central America, where he participated in the development of important projects for multiple clients in the region. He had previously served in leadership roles for various cyber security solution providers in Colombia.

Technology evolves very quickly. We often see innovations that are groundbreaking and have the potential to change the way we live and do business. Although artificial intelligence is not necessarily new, the release of ChatGPT in November of 2022 gave the general public access to a technology we know as Generative Artificial Intelligence (GenAI). In a short time, people and organizations realized it could help them gain a competitive advantage.

Over the past year, organizational adoption of GenAI has nearly doubled, showing the growing interest in embracing this kind of technology. This surge isn't a temporary trend; it is a clear indication of the impact GenAI is already having and that it will continue to have in the coming years across various industry sectors.

The surge in adoption

Recent data reveals that 65% of organizations are now regularly using generative AI, with overall AI adoption jumping to 72% this year. This rapid increase shows the growing recognition of GenAI’s potential to drive innovation and efficiency. One analyst firm predicts that by 2026, over 80% of enterprises will be utilizing GenAI APIs or applications, highlighting the importance that businesses are giving to integrating this technology into their strategic frameworks.

Building trust and addressing concerns

Although adoption is increasing very fast in organizations, the percentage of the workforce with access to this kind of technology is still relatively low. A recent Deloitte survey found that 46% of organizations provide approved Generative AI access to 20% or less of their workforce. When asked for the reason behind this, the main answer concerned the balance of risk and reward. Aligned with that, 92% of business leaders see moderate to high-risk concerns with GenAI.

As organizations scale their GenAI deployments, concerns increase around data security, quality, and explainability. Addressing these issues is essential to generate confidence among stakeholders and ensure the responsible use of AI technologies.

Data security

The adoption of Generative AI (GenAI) in organizations comes with various data security risks. One of the primary concerns is the unauthorized use of GenAI tools, which can lead to data integrity issues and potential breaches. Shadow GenAI, where employees use unapproved GenAI applications, can lead to data leaks, privacy issues and compliance violations.

Clearly defining the GenAI policy in the organization and having appropriate visibility and control over the shared information through these applications will help organizations mitigate this risk and maintain compliance with security regulations. Additionally, real-time user coaching and training has proven effective in altering user actions and reducing data risks.
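As an illustration of what such visibility and control could look like in practice, here is a minimal sketch of an outbound-prompt policy check. The approved host, the sensitive-data patterns, and the coaching messages are all hypothetical assumptions for the example, not a description of any specific product:

```python
import re

# Hypothetical policy: one approved GenAI application, plus patterns
# that should never leave the organization inside a prompt.
APPROVED_APPS = {"chat.internal.example.com"}
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # US SSN-like identifier
    re.compile(r"\b\d{16}\b"),             # 16-digit card-like number
]

def check_prompt(app_host: str, prompt: str) -> tuple[bool, str]:
    """Return (allowed, coaching_message) for an outbound GenAI prompt."""
    if app_host not in APPROVED_APPS:
        # Shadow GenAI: the destination is not a sanctioned application.
        return False, f"{app_host} is not an approved GenAI application."
    for pattern in SENSITIVE_PATTERNS:
        if pattern.search(prompt):
            # Real-time user coaching instead of a silent block.
            return False, "Prompt appears to contain sensitive data; please remove it."
    return True, "OK"
```

A real deployment would sit inline (for example at a secure web gateway) and use far richer detection than these two patterns, but the shape of the decision, approved destination plus content inspection plus a coaching message, is the same.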

Compliance and regulations

Compliance with data privacy regulations is a critical aspect of GenAI adoption. Non-compliance can lead to significant legal and financial repercussions. Organizations must ensure that their GenAI tools and practices adhere to relevant regulations, such as GDPR, HIPAA, CCPA and others.

Visibility, monitoring and reporting are essential for compliance, as they provide the necessary oversight to ensure that GenAI applications are used appropriately. Unauthorized or improper use of GenAI tools can lead to regulatory breaches, making it imperative to have clear policies and governance structures in place. Intellectual property challenges also arise from generating infringing content, which can further complicate compliance efforts.

To address these challenges, organizations should establish a robust framework for GenAI governance. This includes developing a comprehensive AI ethics policy that defines acceptable use cases and categorizes data usage based on organizational roles and functions. Monitoring systems are essential for detecting unauthorized GenAI activities and ensuring compliance with regulations.
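One concrete form such monitoring can take is scanning existing telemetry, such as web-proxy logs, for traffic to known GenAI services that are not on the approved list. The host names and the approved set below are hypothetical assumptions for the sketch:

```python
# Hypothetical monitoring sketch: flag "shadow GenAI" usage by scanning
# web-proxy log entries for known GenAI hosts that are not sanctioned.
KNOWN_GENAI_HOSTS = {"chat.openai.com", "gemini.google.com", "claude.ai"}
APPROVED_HOSTS = {"chat.openai.com"}  # assumption: one sanctioned tool

def find_shadow_usage(log_entries):
    """log_entries: iterable of (user, host) pairs from the proxy log.

    Returns {user: sorted list of unapproved GenAI hosts they reached}.
    """
    findings = {}
    for user, host in log_entries:
        if host in KNOWN_GENAI_HOSTS and host not in APPROVED_HOSTS:
            findings.setdefault(user, set()).add(host)
    return {user: sorted(hosts) for user, hosts in findings.items()}
```

The output of a report like this is what feeds the governance process: it tells the organization who needs coaching, which tools the workforce actually wants, and where policy and reality have drifted apart.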

Specific regulations for GenAI

Several specific regulations and guidelines have been developed, or are in the works, to address the unique challenges posed by GenAI. Some are more focused on the development of new AI tools, while others, such as the California GenAI Guidelines, focus on procurement and use. Examples include:

EU AI Act: This landmark regulation aims to ensure the safe and trustworthy use of AI, including GenAI. It includes provisions for risk assessments, technical documentation standards, and bans on certain high-risk AI applications.

U.S. Executive Order on AI: Issued in October of 2023, this order focuses on the safe, secure, and trustworthy development and use of AI technologies. It mandates that federal agencies implement robust risk management and governance frameworks for AI.

California GenAI Guidelines: The state of California has issued guidelines for the public sector's procurement and use of GenAI. These guidelines emphasize the importance of training, risk assessment, and compliance with existing data privacy laws.

Department of Energy GenAI Reference Guide: This guide provides best practices for the responsible development and use of GenAI, reflecting the latest federal guidance and executive orders.

Recommendations

To effectively manage the risks associated with GenAI adoption, organizations should consider the following recommendations:

Establish clear policies and training: Develop and enforce clear policies on the approved use of GenAI. Provide comprehensive training sessions on ethical considerations and data protection to ensure that all employees understand the importance of responsible AI usage.

Continuously reassess strategies: Regularly reassess strategies and practices to keep up with technological advancements. This includes updating security measures, conducting comprehensive risk assessments, and evaluating third-party vendors.

Implement advanced GenAI security solutions: Deploy advanced GenAI solutions to ensure data security while maintaining comprehensive visibility into GenAI usage. Traditional DLP solutions based on keywords and patterns are not enough. GenAI solutions should give proper visibility by understanding the context without the need to define complicated data-types. This approach not only protects sensitive information, but also allows for real-time monitoring and control, ensuring that all GenAI activities are transparent and compliant with organizational and regulatory requirements.

Foster a culture of responsible AI usage: Encourage a culture that prioritizes ethical AI practices. Promote cross-department collaboration between IT, legal, and compliance teams to ensure a unified approach to GenAI governance.

Maintain transparency and compliance: Ensure transparency in AI processes and maintain compliance with data privacy regulations. This involves continuous monitoring and reporting, as well as developing incident response plans that account for AI-specific challenges.

By following these recommendations, organizations can take full advantage of the benefits of GenAI while effectively managing the associated data security and compliance risks.