
AI data leaks are reaching crisis level: Take action

By Marco Eggerling, Field CISO, EMEA

Much speculation surrounds how ChatGPT and similar technologies will change how we live and work. As a growing number of employees casually adopt AI-based tools like ChatGPT to boost productivity, security professionals worry that trade secrets or sensitive data entered into these tools could be exposed.

Since ChatGPT's public launch, at least 30% of knowledge workers have tried the tool in the workplace and 4.9% have pasted data into it. To allay privacy and data security fears, several well-known organizations have taken decisive action, blocking employee access to ChatGPT altogether.
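For organizations that go the blocking route, enforcement typically happens at the web proxy or secure web gateway, which checks each outbound request against a blocklist. The following minimal Python sketch illustrates the idea; the domain list and the is_blocked helper are illustrative assumptions, not any vendor's actual implementation.

```python
# Minimal sketch of an egress blocklist check for AI chat services.
# The domain list and helper name are illustrative assumptions; real
# gateways manage such lists centrally and keep them up to date.
BLOCKED_AI_DOMAINS = {"chat.openai.com", "chatgpt.com"}

def is_blocked(host: str) -> bool:
    """Return True if the requested host matches a blocked AI domain."""
    host = host.lower().rstrip(".")
    return any(host == d or host.endswith("." + d) for d in BLOCKED_AI_DOMAINS)

# A proxy would consult this check on every outbound request
for request_host in ("chat.openai.com", "intranet.example.com"):
    verdict = "DENY" if is_blocked(request_host) else "ALLOW"
    print(f"{verdict}: {request_host}")
```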

However, many organizations have simply issued vague warnings about the security dangers of generative AI-based services. Many more feel as though they're navigating in the dark and have yet to issue any guidance on conversational AI tools.

Data leaks at crisis levels

As noted above, the biggest privacy concern for most companies when it comes to generative AI is the inadvertent disclosure of sensitive information. A well-meaning employee may use ChatGPT to summarize meeting notes and, in doing so, accidentally share future business plans with the tool.

Independent researchers estimate that 11% of what employees enter into ChatGPT is confidential and that the average company leaks data into ChatGPT hundreds of times per week.

Because ChatGPT integrates with a variety of third-party plugins, the risks around leaked data are even higher than some might assume. Further, OpenAI states that it may disclose users' personal information to unspecified third parties, without informing users, in order to meet business objectives.

Adding to concerns, in late March, OpenAI announced that it had to take ChatGPT offline temporarily to fix a bug that allowed some users to see the titles of other users' chat histories. A future bug could theoretically expose the contents of users' private conversations, and the consequences of that could be far more damaging.

Bringing your team on board

Pandora's box has been opened. Rather than trying to turn back the clock, we must realize the opportunities offered by conversational AI.

The first step in reducing conversational AI data leaks and in protecting confidential information consists of working with stakeholders to create realistic expectations around generative AI use.

Stakeholders need to assess their organization's risk tolerance in relation to widespread use of tools like ChatGPT. They should then consider where they would and would not want generative AI used, along with the technology's strengths and limitations in those contexts.

Then, teams need to develop training, policies, and safeguards for users. You may want to assign specific IT and/or cyber security staff members to oversee AI-focused data privacy and protection initiatives.

Practical ways to prevent data loss

When it comes to the actual tactics that your IT and/or cyber security staff can implement in order to prevent chatbot-related data leaks, you might start with the following, if you haven't done so already:

1. Cyber security awareness. Education can have a tremendous impact on the safety and security of corporate data. Cyber security research indicates that fewer than 1% of workers are responsible for 80% of ChatGPT-related data compromises.

Employees need to know that, just as a company would not trust Google Translate to accurately translate its marketing materials for new geographies, it cannot fully trust ChatGPT's data privacy practices or its output. The stakes are too high.

Analyst firm Gartner suggests that employees who use ChatGPT should treat any data they input into the tool as if they were posting it to a public site, such as a social network or a public blog. Employees should ask themselves whether the data they're about to input into ChatGPT is appropriate for external eyes; the sketch below shows one way that question could be built into a workflow.
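As a purely illustrative sketch, Gartner's "treat it as public" test could be wired into a simple command-line wrapper around a chatbot client. The wrapper, the confirm_public_posting function, and its wording are hypothetical examples, not features of any real product.

```python
# Minimal sketch of the "treat it as public" test as a pre-send nudge.
# The wrapper, function name, and wording are hypothetical examples.
def confirm_public_posting(prompt: str) -> bool:
    """Ask the user to apply the 'would I post this publicly?' test."""
    print("You are about to send the following to an external AI service:")
    print(f"  {prompt}")
    answer = input("Would you be comfortable posting this publicly? [y/N] ")
    return answer.strip().lower() == "y"

prompt = "Summarize these meeting notes about our unannounced product line."
if confirm_public_posting(prompt):
    print("Sending prompt...")  # the real chatbot call would go here
else:
    print("Prompt withheld. Remove confidential details before retrying.")
```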

2. Data loss prevention (DLP). Preemptively protect your business from the unintentional loss of valuable and sensitive information. A DLP solution not only gives you visibility into how users handle data, but can also coach end-users on proper data handling practices without involving IT/security teams.

DLP solutions can detect sensitive content in outbound messages and block conversations that contain it, along the lines of the sketch below. Administrators can then review the flagged findings.
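To make the idea concrete, here is a minimal Python sketch of the kind of pre-filter a DLP control applies to outbound prompts. The regex patterns, category names, and submit_to_chatbot wrapper are illustrative assumptions, not any vendor's actual implementation; production DLP relies on far richer detection than a handful of regexes.

```python
# Minimal sketch of a DLP-style pre-filter for chatbot prompts.
# Patterns and names are illustrative assumptions only.
import re

SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "AWS access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "internal project tag": re.compile(r"\bPROJECT-[A-Z]{3,}\b"),  # hypothetical naming scheme
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the categories of sensitive data detected in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def submit_to_chatbot(prompt: str) -> None:
    findings = scan_prompt(prompt)
    if findings:
        # Block the conversation and coach the user, without involving IT
        print("Blocked: prompt appears to contain " + ", ".join(findings) + ".")
        print("Please remove sensitive data before using external AI tools.")
        return
    print("Prompt passed DLP checks; forwarding to the chatbot...")
    # The actual chatbot API call is omitted from this sketch.

submit_to_chatbot("Email alice@example.com the PROJECT-ATLAS roadmap summary")
```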

3. Adjusting BYOD policies. If your enterprise does not permit employees to use ChatGPT on corporate devices, your employees may be using it on personally owned devices instead. It's entirely plausible that employees are opening data in the Teams app on their phones, then copying and pasting it into the ChatGPT app.

From the standpoint of responsible data management, enterprises need to review existing policies around both BYOD and ChatGPT.

4. Enterprise license with OpenAI. If your organization permits employees to use ChatGPT, consider negotiating an enterprise license with the company. Here's why: in the event that OpenAI's security fails, OpenAI caps its liability at $100 USD. An enterprise license can afford you stronger security commitments and higher liability caps.

In conclusion

Take a proactive approach. Prevent data loss to generative AI tools like ChatGPT, Bard, and other emerging technologies. Get your teams on the same page, apply the guidelines above, and prevent generative AI-related data compromises.

This novel technology requires management via new strategies and tactics. With expert guidance, blue flame thinking, and the support of your security team, your workforce's use of generative AI tools can deliver strong business outcomes.

Lastly, subscribe to the CyberTalk.org newsletter for executive-level interviews, analyses, reports and more each week. Subscribe here.
