Love it or hate it, large language models (LLMs) like ChatGPT and other AI tools are reshaping the modern workplace. As AI becomes a critical part of daily work, establishing guardrails and deploying monitoring for these tools is essential.
That’s where Check Point’s Harmony SASE comes in. We’ve already talked about Browser Security and the clipboard control feature to help define what types of information can’t be shared with LLMs.
For monitoring these services, our GenAI Security service shows exactly which AI tools your team is using, who is using them, what kind of information they're sharing, and how risky that sharing is for the company.
This is critical information for an organization to have. For example, it's easy to upload a spreadsheet to an LLM and ask it to analyze the data. Whether it's survey results or expense reports from the past quarter, these tools can quickly crunch the numbers and return insights.
Once data is shared with an LLM, however, it may be stored as training data or inadvertently shared with another user as part of a response. Even without direct misuse, uploading raw survey data to an external service could violate company data handling policies or even breach compliance regulations like GDPR (General Data Protection Regulation), CCPA (California Consumer Privacy Act) or HIPAA (Health Insurance Portability and Accountability Act), all of which strictly govern the use of personally identifiable information (PII).
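To make the PII risk concrete, here is a minimal sketch of the kind of pattern-based detection a data-loss-prevention layer might apply to outbound prompts. The patterns and function names are illustrative assumptions, not Check Point's implementation; production DLP engines use far richer detection than two regexes.

```python
import re

# Hypothetical patterns for illustration only; real DLP detection
# covers many more PII categories and formats.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def find_pii(prompt: str) -> list[str]:
    """Return the PII categories detected in an outbound prompt."""
    return [name for name, pattern in PII_PATTERNS.items()
            if pattern.search(prompt)]

print(find_pii("Analyze this list: jane.doe@example.com, SSN 123-45-6789"))
# → ['email', 'ssn']
```

Even this toy example shows why visibility matters: a single pasted spreadsheet row can carry several regulated data types at once.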
For these reasons it’s critical to have visibility into the information your workforce is sharing with generative AI tools.
The GenAI Security Dashboard
GenAI Security shows you all the essential information you need to know about how AI is being used in your organization. Check Point can monitor your workforce’s actions on more than 300 services to deliver these insights.
The top of the dashboard presents three summary panels that break down AI usage by application (e.g., ChatGPT, Claude, Google Gemini, Microsoft Copilot), use cases (e.g., code generation, data analysis), and types of sensitive information shared.
Below that section is a set of smaller tiles displaying the total number of GenAI user sessions across the company, the number of applications in use by your team, and a summary of sensitive prompts grouped by the type of information they're sharing.
The heart of the GenAI Security dashboard is the per-interaction summary. You can filter these interactions based on clickable elements in the tiles above, or by using the filtering tools just above the detailed summaries.
Each interaction includes key details such as session risk level, the GenAI service used, a description of shared sensitive information, the use case, the type of data shared, the user’s name, and the interaction’s date and time.
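The per-interaction record described above can be pictured as a simple structure. This sketch and its field names are assumptions for illustration, not Check Point's actual schema; it just shows how the dashboard's clickable filtering maps onto the fields each interaction carries.

```python
from dataclasses import dataclass
from datetime import datetime

# Illustrative shape only; field names are assumptions, not a real API.
@dataclass
class GenAIInteraction:
    risk_level: str     # e.g. "low", "medium", "high"
    service: str        # e.g. "ChatGPT", "Claude"
    description: str    # summary of the sensitive information shared
    use_case: str       # e.g. "data analysis", "code generation"
    data_type: str      # e.g. "PII", "source code"
    user: str
    timestamp: datetime

def filter_by_risk(interactions: list[GenAIInteraction], level: str) -> list[GenAIInteraction]:
    """Mimic the dashboard's filter-by-risk-level behavior."""
    return [i for i in interactions if i.risk_level == level]
```

Filtering by any other field (service, use case, user) works the same way, which is why the dashboard can pivot the same interaction list so many ways.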
In addition to monitoring, companies need clear policies defining which tools are approved and what types of information can be shared with external AI tools.
There may also be times when you don't want your team using certain AI tools at all, whether because a tool didn't pass your company's vendor risk assessment or simply because it seems too risky.
In those cases, Check Point's SASE has you covered with robust web filtering rules that block users from accessing undesirable websites, whether they're in the office or working remotely. The rules are granular, meaning you can apply them to the entire company or only to specific teams.
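Conceptually, granular filtering of this kind boils down to matching a request's category against an ordered rule list scoped to teams. The rule format and names below are an illustrative sketch, not Check Point's configuration syntax.

```python
# Toy model of granular, first-match-wins block rules.
# Categories, team names, and the rule format are illustrative assumptions.
RULES = [
    {"category": "genai", "teams": {"finance"}, "action": "block"},
    {"category": "genai", "teams": None, "action": "allow"},  # None = all teams
]

def evaluate(category: str, team: str) -> str:
    """Return the action of the first rule matching this category and team."""
    for rule in RULES:
        if rule["category"] == category and (rule["teams"] is None or team in rule["teams"]):
            return rule["action"]
    return "allow"  # default-allow when no rule matches

print(evaluate("genai", "finance"))      # → block
print(evaluate("genai", "engineering"))  # → allow
```

Here the finance team is blocked from GenAI sites while everyone else is allowed; scoping the first rule to the whole company would flip that to an organization-wide block.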
If your concern is about employees using the tool during work hours, you can leverage our hybrid service to apply GenAI web filtering rules only when users are connected to the company network, ensuring flexibility.
With GenAI Security, Check Point’s SASE doesn’t just help you monitor and control AI usage—it ensures your business remains secure.
Ready to see how Check Point’s SASE can safeguard your organization from GenAI risks? Book a personalized demo today to explore how we can protect your business.