Vinicius Alves

How to avoid exposing our company's information when using ChatGPT

Updated: Aug 20

Image: a businessman working alongside an AI-powered robot assistant (created by Lexica.ai)

Here, we will show you in which cases this can happen and what you can do about it.

Since ChatGPT appeared on the scene, many people have smartly incorporated it into their workflow. The artificial intelligence (AI) chatbot has become a great ally in the office because it can help schedule tasks, compose emails, draft summaries, analyze data, and generate graphs. But the enthusiasm for using this OpenAI-developed tool to increase productivity comes with a caveat: some users may expose confidential data about themselves or their company without realizing it. The reason? ChatGPT, by design, doesn't know how to keep secrets.


ChatGPT at work: a double-edged sword

Let's take the following situation as an example. You work at a company, and you have to attend a meeting in a couple of hours. At the meeting, you will discuss the company's business strategy for the coming year. To better prepare your remarks, you decide to note down the most important points of the annual planning document. The document in question contains confidential information that, if leaked, could have negative consequences for your company. Some sections analyze the competition, and others mention products that haven't been launched yet. But you have little time to read it, so you summarize it using ChatGPT.


Image: a table created by ChatGPT

You upload the PDF to the chatbot, you get the key points you need, and your intervention in the meeting is a success. Not only were you perfectly informed about your specific area, but you also had a global view of where the company you belong to is heading. But now that information could fall into the wrong hands. When we use ChatGPT for the first time, a pop-up window warns us not to share confidential information. Sometimes, however, we click OK without fully understanding what that means: OpenAI employees can see the content, and it can even be used to improve the chatbot.


The ChatGPT pop-up window with usage suggestions.

This is detailed in the data usage policy of OpenAI, the company led by Sam Altman. OpenAI makes it clear that it can use user prompts, responses, images, and files to improve the performance of the underlying model, i.e., GPT-3.5, GPT-4, GPT-4o, or future versions. The way to improve the models is to train them with more information, so that when someone asks a question about a topic, they can give a more accurate answer. So, unless you have taken some of the precautions we will see below, you could be training the model with confidential data. The danger, however, is not just that ChatGPT will leak trade or other secrets. Once the data is on OpenAI's servers, it can be viewed by company employees or authorized "trusted service providers" for various reasons. There are four scenarios in which others may view your chatbot activity history:

To investigate security incidents.
To provide you with the assistance you have requested.
To respond to legal issues.
To improve model performance.

This scenario has led companies such as Samsung to take steps to prevent sensitive data from being leaked. For example, the use of chatbots for certain tasks has been limited, and corporate versions have been implemented that promise not to use chat data for training.
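Beyond limiting chatbot use, another precaution a company can take is scrubbing obvious identifiers from a document before anyone pastes it into a chatbot. Here is a minimal sketch in Python; the patterns, the `CODE_NAMES` list, and the `redact` function are illustrative assumptions, not part of any official tool, and a real deployment would need far broader coverage:

```python
import re

# Illustrative patterns for common identifiers; real tooling would need many more.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

# Hypothetical internal code names that should never leave the company.
CODE_NAMES = ["Project Falcon", "Orion-X"]

def redact(text: str) -> str:
    """Replace sensitive substrings with placeholder tags before
    the text is sent to an external chatbot."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    for name in CODE_NAMES:
        text = text.replace(name, "[INTERNAL]")
    return text

print(redact("Contact ana@acme.com about Project Falcon at +1 555-123-4567."))
# → Contact [EMAIL] about [INTERNAL] at [PHONE].
```

A filter like this is only a first line of defense; it reduces accidental exposure but cannot catch confidential information expressed in plain prose, such as an unreleased product description.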


How to improve data security in ChatGPT

Users and businesses have two specific alternatives available to them that OpenAI promises will protect their sensitive data: disable the model enhancement option with conversations or use one of the enterprise versions of ChatGPT. Let's take a closer look at how to use each. If you use ChatGPT or ChatGPT Plus and want to prevent your conversations from being used to train OpenAI models, try this:

1 - Open ChatGPT from a computer.
2 - Click on your profile picture and then click on Settings.
3 - Click on Data Controls and find the Improve the model for everyone option.
4 - Make sure that the Improve the model for everyone switch is turned off.

If you work in a professional environment using the paid ChatGPT Enterprise or ChatGPT Team solutions, your data is not used to train OpenAI models in any case. In addition, it is protected by encryption at rest (AES-256) and in transit (TLS 1.2+).

Don't forget, your chats are still visible

Even when using some of the paid professional tools mentioned above, there are cases where people outside your company can view conversations. In the case of ChatGPT Enterprise, OpenAI employees can access conversations to resolve issues, to retrieve user conversations (provided they have your permission), or when required by the courts. In the case of ChatGPT Team, OpenAI employees can access conversations "to provide engineering support," investigate potential abuse, and ensure legal compliance.

This is also where "specialized external contractors" come into play, who will be able to view conversations in the event of abuse or misuse. In all cases, the OpenAI employees or external agents who can view ChatGPT users' conversations are subject to confidentiality and security obligations.

What do you think: should companies stop using ChatGPT, or can we trust that this data is stored where nobody will see it?



