Using Security Controls to Mitigate Privacy Risk When Employees Use ChatGPT

Author: Oluwafemi Adeyemo Adeleke, CISA, CISM, CRISC, CDPSE, CGEIT, CCISO
Date Published: 13 December 2023

The artificial intelligence (AI) chatbot ChatGPT, created by OpenAI, was released for public use in late 2022 and has become the most widely adopted and fastest-growing consumer application in history, attracting 100 million active users within two months of its launch.1 It is both a large language model (LLM) and a generative AI application that uses machine learning (ML) algorithms and deep learning processes to analyze large volumes of data from various databases in the public domain. As an LLM, the chatbot is trained on large quantities of text data and uses both supervised and unsupervised learning models to provide responses to users.

Figure 1

ChatGPT is notable for its ability to generate human-like responses across different use cases, tailored to the context of the user’s input. Figure 1 shows some of its potential uses.

Enterprises are rapidly adopting ChatGPT to optimize existing service offerings and roll out new ones. It can improve productivity and efficiency through automation, creating value by providing a competitive advantage. In addition, employees are using ChatGPT to perform complex tasks, even without formal authorization from their employers.2

The widespread adoption of ChatGPT has created security and privacy concerns. Although ChatGPT provides innumerable advantages to enterprises and individuals, its security and privacy implications should be carefully evaluated. In addition, if employees use the AI chatbot, employers should implement effective information security controls and data protection measures and then monitor and assess the design and operating effectiveness of those controls to ensure compliance with security policies, contractual obligations and privacy regulations.

Security and Privacy Risk Associated With ChatGPT

Users need to ask ChatGPT relevant questions to generate the desired content. When users input data into ChatGPT, by implication, they are providing data that can be used by OpenAI. Under ChatGPT’s terms of use, OpenAI is authorized to use inputted content and ChatGPT’s output to maintain, develop and optimize its services, and users are responsible for ensuring that content provided to the chatbot complies with applicable laws and regulations.

OpenAI does not use content submitted to or received from its application programming interface (API) to develop or maintain its services, but it may use content from services that are not API-based, in line with its terms of use. Users of non-API services must formally opt out if they do not want OpenAI to use their content for service maintenance and improvement. It is noteworthy that opting out of data usage for service improvement does not prevent data breaches. ChatGPT is currently hosted in the United States, and OpenAI may use personal information provided to it to maintain or analyze services, develop new solutions and services, and conduct research, in line with its privacy policy. OpenAI may also disclose personal information to third parties without explicit notice to data subjects, in line with its privacy policy related to the disclosure of personal information. Organizations should evaluate these privacy policies and terms of use.
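For enterprises that are able to route employee use through the API rather than the consumer web interface, this distinction can be operationalized in code. The following is a minimal sketch using the openai Python client (v1.x); the model name, prompt, and environment-variable setup are illustrative assumptions, not a prescribed configuration:

```python
# pip install openai  (v1.x client)
# Assumes the OPENAI_API_KEY environment variable is set.
from openai import OpenAI

client = OpenAI()

# Per OpenAI's terms at the time of writing, content sent through the API
# is not used to develop or improve its services by default, unlike content
# entered into the consumer web interface.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[{"role": "user", "content": "Draft a vendor risk checklist."}],
)
print(response.choices[0].message.content)
```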

While interacting with ChatGPT, employees may unintentionally share classified information such as confidential business data, trade secrets and personal information about customers, including health-related information, and the security of this information may not be guaranteed. Also, because ChatGPT uses ML algorithms and deep learning processes to analyze large volumes of data in the public domain, employees may receive proprietary information as content output, thereby creating a legal risk for their employers.

If an employee inputs data related to an employer’s business strategy, IT strategy or marketing strategy into ChatGPT, there is a risk that the content generated for the employee may be used to produce similar content for competitors; this information could have significant commercial value and influence market share. A survey conducted in January 2023 by Fishbowl, an online professional networking application, found that 68 percent of employees who were using ChatGPT were doing so without the knowledge of their line managers and employers.3

In March 2023, OpenAI took ChatGPT offline because of a software bug that allowed some active users to see other users’ chat histories. OpenAI believes the same bug may also have exposed payment-related information, allowing some active users to view other users’ names, email addresses, credit card types, credit card expiration dates, billing addresses, and the last four digits of card numbers; full credit card numbers were not exposed. According to OpenAI, the software bug has been addressed, and additional software security control measures have been implemented.4


Samsung Electronics recorded three breaches of its confidential information in March 2023, when employees entered semiconductor equipment measurement data, software program code, and highlights of a corporate meeting into the AI-powered chatbot.5 This information may have become an integral part of ChatGPT’s learning database and thus could be accessible to all users. It is worth mentioning that OpenAI advises users not to share sensitive information when using the ChatGPT platform because it is unable to delete specific prompts from a user’s conversation history. Research conducted by Cyberhaven found that 11 percent of the content employees provided to ChatGPT included sensitive data; in addition, it found 199 incidents of confidential business information being uploaded to the platform per 100,000 employees surveyed in 2023, as well as 173 occurrences of customer data being uploaded.6

Mitigating Security and Privacy Risk When Using ChatGPT

Figure 2

ChatGPT may give enterprises a competitive advantage and speed the performance of certain tasks, but enterprises that choose to adopt AI technology are expected to implement effective security controls to preserve the confidentiality, integrity and availability (CIA) of their information. To reduce the attack surface, enterprises should identify which business functions require the use of generative AI such as ChatGPT, including the corresponding use cases and specific processes, and only those functions should be permitted to use AI chatbots. It may be expedient for enterprises to conduct in-depth risk assessments of each use case prior to implementation. This would enable them to identify potential risk factors, determine corresponding risk ratings, and put mitigation controls in place to enhance digital trust (a simple scoring sketch follows the list below). In addition to a security risk assessment, several other assessments should be conducted (figure 2):

  • Data privacy impact assessment (DPIA)
  • Data transfer impact assessment (DTIA)
  • Business impact analysis (BIA)
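As a simple illustration of how use case risk ratings might be recorded, the sketch below models a minimal risk register entry using a conventional likelihood-times-impact score; the scales, field names, and threshold are illustrative assumptions, not a prescribed methodology:

```python
from dataclasses import dataclass

@dataclass
class UseCaseRisk:
    """One entry in a hypothetical generative AI use case risk register."""
    use_case: str
    risk_factor: str
    likelihood: int  # 1 (rare) to 5 (almost certain) -- illustrative scale
    impact: int      # 1 (negligible) to 5 (severe)   -- illustrative scale

    @property
    def rating(self) -> int:
        """Conventional qualitative score: likelihood multiplied by impact."""
        return self.likelihood * self.impact

    def needs_mitigation(self, threshold: int = 12) -> bool:
        """Flag entries at or above an illustrative risk appetite threshold."""
        return self.rating >= threshold

entry = UseCaseRisk("Marketing copy drafting", "Disclosure of campaign strategy", 4, 4)
print(entry.rating, entry.needs_mitigation())  # 16 True
```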

Figure 3

Conducting a DPIA allows an enterprise to ascertain whether the existing data protection agreement is adequate for data processing related to new use cases. Benchmarking the existing DTIA against the potential data transfer in new use cases ensures that legal and regulatory requirements have been adequately evaluated for compliance with local data protection regulations. More important, when interaction with the AI chatbot involves the use of personal data, the rights of data subjects need to be respected, in line with the data protection regulations applicable to the location where the data are being collected.

In addition, the existing BIA must be evaluated to ensure that the impact of new use cases on the enterprise, employees, partners, parties to contractual obligations, and other stakeholders is carefully considered. For example, enterprises might need to rethink or reinvent existing information security practices and controls with the aim of making the use of ChatGPT more secure. Some of the practices that must be updated are shown in figure 3.

Security Awareness

Security awareness training should include acceptable and prohibited uses of ChatGPT and other generative AI platforms so that employees understand the potential risk and their responsibilities for the safe use of the platform. Enterprises should prioritize targeted training for users of AI chatbots; however, they also need to provide inclusive awareness training that educates nonusers of generative AI on how to protect their data and on how and why its use may be prohibited for work purposes. The security and privacy risk attributed to employees’ use of ChatGPT calls for employers and digital trust practitioners to reinvent security awareness training, increase its frequency, define metrics for measuring its effectiveness, and monitor those metrics as a means of achieving continuous improvement.

Policy and Process Updates

Enterprises need to update their security policies to address risk factors that are specific to the use of ChatGPT and other generative AI platforms. Specifically, they should:

  • Update the acceptable use policy to include the use of generative AI or define and implement a policy on the use of generative AI for targeted users and general staff.
  • Update third-party vendor policies.
  • Update the incident response plan.

Figure 4

Enterprises should consider defining and implementing policies on the acceptable use of generative AI, including ChatGPT, in the workplace (figure 4). Providing employees with clear guidelines gives them more confidence when using the AI platform. Defining and implementing policies that are specific to the acceptable use of generative AI reduces the likelihood of privacy and security breaches occurring.

OpenAI has no control over what information employees input into ChatGPT, so employers must ensure that their employees use the platform in a manner that supports the enterprise’s objectives, while putting digital trust at the center of all interactions with the AI chatbot. Highlighting the acceptable use of ChatGPT on the application’s landing page and requiring employees to agree to the policy before using the application reminds them of their security obligations. Each time an employee agrees to proceed, the acceptance can be logged for record-keeping and investigative purposes, as illustrated in the sketch below.
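As a minimal sketch of such logging, the hypothetical Python example below appends a timestamped record each time a user acknowledges the acceptable use policy; the field names and log location are assumptions, and in practice the record would be written to a tamper-evident store:

```python
import json
import time
from pathlib import Path

# Hypothetical append-only log of acceptable use policy acknowledgements.
LOG_PATH = Path("aup_acknowledgements.jsonl")

def record_policy_acceptance(user_id: str, policy_version: str) -> None:
    """Append a timestamped record each time a user accepts the policy."""
    event = {
        "user_id": user_id,
        "policy_version": policy_version,
        "accepted_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    }
    with LOG_PATH.open("a", encoding="utf-8") as log:
        log.write(json.dumps(event) + "\n")

record_policy_acceptance("jdoe", "genai-aup-1.2")  # illustrative values
```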

Although enterprises may define and implement security measures aimed at reducing the risk posed by using ChatGPT, third-party service suppliers that are not subject to similar policies may suffer data breaches, and enterprises may suffer breaches through them. Enterprises need to ensure that suppliers disclose whether ChatGPT or any other generative AI platform is among the tools and processes they use to deliver services. Such suppliers should be subject to security due diligence, and their contract security schedules should be updated to include provisions related to the use of generative AI software. Enterprises may need to update data processing agreements with third parties to specify whether suppliers may process enterprise data with ChatGPT. Nondisclosure agreements (NDAs) with third-party suppliers may also need to be updated accordingly. The time to evaluate third-party suppliers to determine whether and to what extent they use generative AI is now. Enterprises should have legally binding contracts and NDAs in place to preserve the CIA of their data.

Enterprises also must update existing computer security and incident response plans to include how to respond to unforeseen data breaches caused by employees’ use of ChatGPT, and they should conduct periodic tabletop exercises to test the plans.

Technical Security Controls

Figure 5

With respect to technical controls, enterprises may consider implementing those depicted in figure 5.

Disabling the upload feature in the ChatGPT application limits the information employees can provide to the platform, thus reducing the attack surface.

Although the API version of ChatGPT appears to be safer, not all enterprises will be able to implement it. Enterprises that use the web-based version should consider denying the use of generative AI by default and, at the group policy level, allowing access only to those whose job function requires it. Moreover, organizations can also deploy ChatGPT within their own internal domain.
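In practice, default-deny enforcement is typically configured in the web proxy, firewall, or directory group policy rather than in application code. The Python sketch below illustrates only the decision logic such a control implements; the domain list and group name are hypothetical:

```python
# Hypothetical default-deny check a forward proxy or CASB plugin might apply:
# only members of an approved directory group may reach generative AI domains.
GENAI_DOMAINS = {"chat.openai.com", "chatgpt.com"}
APPROVED_GROUP = "genai-approved-users"  # hypothetical AD/IdP group name

def is_request_allowed(user_groups: set[str], destination_host: str) -> bool:
    """Deny generative AI destinations by default; allow only approved users."""
    if destination_host not in GENAI_DOMAINS:
        return True  # not a generative AI destination; out of scope here
    return APPROVED_GROUP in user_groups

# A user outside the approved group is blocked; an approved user is not.
print(is_request_allowed({"staff"}, "chat.openai.com"))                          # False
print(is_request_allowed({"staff", "genai-approved-users"}, "chat.openai.com"))  # True
```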


In addition, enterprises should consider implementing role-based access control (RBAC) to limit employees’ access to ChatGPT. Only those employees whose jobs require use of the platform should have access to it, and such access should be granted only after employees have undergone security awareness training specific to the platform and have demonstrated an understanding of the acceptable use policy. Enterprises may also consider integrating ChatGPT with their single sign-on (SSO) and multifactor authentication (MFA) systems. ChatGPT and other generative AI platforms can be integrated with existing security information and event management (SIEM) monitoring, with triggers set to alert the monitoring team if authorized users share classified information when interacting with the AI platform. Finally, enterprises should conduct periodic security and privacy audits of ChatGPT use to identify policy violations and emerging security and privacy risk factors.
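As an illustration of the SIEM trigger described above, a correlation rule might pattern-match outbound prompts for confidentiality markers or payment card numbers and raise an alert. The following Python sketch uses hypothetical patterns, and the print statement stands in for forwarding an event to the SIEM:

```python
import re

# Hypothetical patterns a SIEM correlation rule might apply to outbound prompts.
SENSITIVE_PATTERNS = {
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "confidential_marker": re.compile(
        r"\b(confidential|trade secret|internal only)\b", re.IGNORECASE
    ),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data patterns matched in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

def alert_if_sensitive(user_id: str, prompt: str) -> None:
    """Emit an alert event for the monitoring team when a pattern matches."""
    hits = scan_prompt(prompt)
    if hits:
        print(f"ALERT: user={user_id} matched patterns={hits}")  # forward to SIEM in practice

alert_if_sensitive("jdoe", "Here is the card 4111 1111 1111 1111 for the refund")
```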

Conclusion

Although enterprises can derive immense competitive advantages when they adopt ChatGPT and other generative AI platforms, they can also expose themselves to significant privacy risk, including legal and regulatory infractions. Enterprises must conduct risk assessments on each use case they intend to implement, and they should implement applicable security controls to mitigate privacy and regulatory risk. In addition, security practitioners must reinvent and innovate existing security controls to preserve their organization’s digital trust when leveraging ChatGPT and other generative AI platforms for competitive business advantage.

Endnotes

1 Hu, K.; “ChatGPT Sets Record for Fastest-Growing User Base—Analyst Note,” Reuters, 2 February 2023, http://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/
2 Fishbowl, “Seventy Percent of Workers Using ChatGPT at Work Are Not Telling Their Boss; Overall Usage Among Professionals Jumps to 43%,” 1 February 2023, http://www.fishbowlapp.com/insights/70-percent-of-workers-using-chatgpt-at-work-are-not-telling-their-boss/
3 Ibid.
4 OpenAI; “March 20 ChatGPT Outage: Here’s What Happened,” 24 March 2023, http://openai.com/blog/march-20-chatgpt-outage
5 Doo-yong, J.; “Concerns Turned Into Reality... As Soon as Samsung Electronics Unlocks ChatGPT, 'Misuse' Continues,” Economist, 30 March 2023, http://economist.co.kr/article/view/ecn202303300057?s=31
6 Coles, C.; “Eleven Percent of Data Employees Paste Into ChatGPT Is Confidential,” Cyberhaven, 28 February 2023, http://www.cyberhaven.com/blog/4-2-of-workers-have-pasted-company-data-into-chatgpt/

OLUWAFEMI ADEYEMO ADELEKE | CISA, CISM, CGEIT, CRISC, CDPSE

Is director and principal consultant at 3DMA Consulting Limited and a freelance cybersecurity consultant. His practice focuses on third-party cybersecurity risk management; cybersecurity transformation; cybersecurity governance, risk and compliance (GRC); and operational and cybersecurity resilience. He can be reached at oluwafemi.adeleke@3dmaconsult.com.
