Check Point Blog

AI, ChatGPT and separating fact from fiction

Cindi Carter is a Field CISO for the Americas region at Check Point.

Navigate the fast-moving world of artificial intelligence (AI) and ChatGPT with agility and confidence, leveraging impactful cyber security expertise. In this interview, Cindi Carter, Field CISO Americas for Check Point, provides comprehensive insights into relevant privacy, security and governance concerns. Explore unique perspectives from a respected industry voice, and gain insight into how you can advance your business's cyber resilience.

What should CISOs know about ChatGPT and business security? 

There are growing concerns about the privacy implications of using OpenAI’s ChatGPT. Although ChatGPT data is amassed from what are considered publicly available sources, it’s the integration of the end user’s input that puts sensitive business and personal data at risk.

Even when the data that's used to feed ChatGPT's language learning model is publicly available, its use can compromise what is known as contextual integrity. This is a fundamental principle in legal discussions of privacy. Contextual integrity requires that individuals’ information is not revealed outside of the context in which it was originally produced.

Moreover, the data ChatGPT was trained on can be proprietary or copyrighted…

Lastly, it’s important to recognize that the unprecedented growth of ChatGPT may make the platform uniquely vulnerable, as its creators rush to keep up with demand. Case in point: A bug introduced from an open source software library that ChatGPT uses on its platform resulted in ChatGPT users being shown chat data that belonged to other people. The same incident also exposed payment information for some “ChatGPT Plus” subscribers, including names, emails, billing addresses, card expiration dates and the last four digits of the card used to subscribe to the service.

What are businesses getting right in relation to artificial intelligence?

Artificial intelligence is a technology that is transforming every walk of life. It is a wide-ranging tool that enables people and businesses to rethink how we integrate information, analyze data, and use the resulting insights to improve decision making.

AI is being integrated with and deployed into a variety of sectors, including finance, national security, health care, criminal justice, transportation, and smart cities. There are numerous examples where AI already is making an impact on the world and augmenting human capabilities in significant ways, and that raises important questions for society, the economy, and governance.

What are businesses getting wrong or where are there misinterpretations? 

A general apprehension has followed artificial intelligence throughout its history, and that apprehension is no different with ChatGPT. Critics have been quick to raise alarms over this technology, and now even those closest to it are proceeding with caution.

For example, if you interact with ChatGPT through a third-party platform or service, such as a chatbot hosted on a website or a voice assistant device, that platform may collect personal information (IP address or device information). In short, the security considerations are not strictly limited to the ChatGPT platform itself; there are third-party considerations as well.

What should CISOs communicate to employees, execs, boards?

Educate the aforementioned groups – and CISOs are not solely responsible for this communication. Just as with digital transformation, cloud adoption, and other technological advances that have enriched businesses, executive leadership should determine the business objectives for ChatGPT’s use, provide governance, and support the security recommendations necessary to prevent unintended consequences of its use.

ChatGPT is blocked in a number of countries, including China, Iran, North Korea and Russia. From a geopolitical standpoint, this isn’t too surprising. What is surprising is that Italy banned ChatGPT over privacy concerns. What do you think about this?

This is a politically, socially, and economically charged discussion around the entire world – kind of like the COVID-19 pandemic, but for very different reasons. The data collection used to train ChatGPT can be considered problematic. No one was asked whether or not OpenAI could use our data.

This is a clear violation of privacy, especially when data is sensitive and can be used to identify us, our family members, or our location. I’ve often talked about the dichotomy in healthcare, “keep it private, share it with everyone,” as clinicians share critical data to make important decisions pertaining to the health outcomes of their patients. The potential benefits of ChatGPT notwithstanding, there is still the human connection at stake.

For more business and security insights pertaining to ChatGPT, please see CyberTalk.org's past coverage and be sure to read our eBook – here. Lastly, please sign up for our newsletter, which delivers top-tier cyber security content straight to your inbox every week – here.
