Lessons Learned From a Controversial Terms of Service Update

Author: Safia Kazi, CIPT
Date Published: 17 August 2023

In mid-August 2023, privacy communities around the Internet were abuzz with criticism of a terms of service (ToS) update regarding artificial intelligence (AI) that Zoom, a video communication platform, had made in March. The updated ToS stated:

10.4 Customer License Grant. You agree to grant and hereby grant Zoom a perpetual, worldwide, non-exclusive, royalty-free, sublicensable, and transferable license and all other rights required or necessary to redistribute, publish, import, access, use, store, transmit, review, disclose, preserve, extract, modify, reproduce, share, use, display, copy, distribute, translate, transcribe, create derivative works, and process Customer Content and to perform all acts with respect to the Customer Content: (i) as may be necessary for Zoom to provide the Services to you, including to support the Services; (ii) for the purpose of product and service development, marketing, analytics, quality assurance, machine learning, artificial intelligence, training, testing, improvement of the Services, Software, or Zoom’s other products, services, and software, or any combination thereof; and (iii) for any other purpose relating to any use or other act permitted in accordance with Section 10.3. If you have any Proprietary Rights in or to Service Generated Data or Aggregated Anonymous Data, you hereby grant Zoom a perpetual, irrevocable, worldwide, non-exclusive, royalty-free, sublicensable, and transferable license and all other rights required or necessary to enable Zoom to exercise its rights pertaining to Service Generated Data and Aggregated Anonymous Data, as the case may be, in accordance with this Agreement.

Notwithstanding the above, Zoom will not use audio, video or chat Customer Content to train our artificial intelligence models without your consent.1

Based on the criticism Zoom received, it updated its ToS several days later to indicate that “Zoom does not use any of your audio, video, chat, screen sharing, attachments or other communications-like Customer Content (such as poll results, whiteboard and reactions) to train Zoom or third-party artificial intelligence models.”2 Seeing an enterprise respond to consumers’ privacy concerns is refreshing, but there are lessons to be learned from the initial change to the ToS.

Zoom is not alone in training AI with consumer content or ineffectively communicating how personal data may be used. Microsoft’s privacy statement lists the following broad uses of personal data:

  • Provide our products, which includes updating, securing, and troubleshooting, as well as providing support. It also includes sharing data, when it is required to provide the service or carry out the transactions you request.
  • Improve and develop our products.
  • Personalize our products and make recommendations.
  • Advertise and market to you, which includes sending promotional communications, targeting advertising, and presenting you with relevant offers.3

It is unclear which products Microsoft improves and develops using personal data. Its privacy statement also specifies, “Our automated methods often are related to and supported by our manual methods. For example, to build, train, and improve the accuracy of our automated methods of processing (including artificial intelligence or AI), we manually review some of the predictions and inferences produced by the automated methods against the underlying data from which the predictions and inferences were made.”4

It is unclear why Zoom recently became the focus of so much scrutiny for its privacy practices while other enterprises doing something similar have escaped the same criticism. Although Zoom is used as the example here, many enterprises would benefit from exploring the issues with Zoom’s previous ToS and implementing the lessons learned.

Issues With the Initial Update

There were numerous privacy-related concerns associated with Zoom’s initial ToS update. One of the main concerns was that the purpose for collecting customer content was extremely broad. Per Zoom’s previous ToS, customer content could be used for machine learning or AI purposes. The ToS indicated that audio, video and chat would not be used to train AI models without consent, but what is consent in this context? Is consent an active and explicit opt-in, or is it granted simply by accepting Zoom’s long and broad ToS? Forcing users to accept a long, all-or-nothing ToS diminishes trust and does not provide consumers with any meaningful way to express privacy preferences.5

Another consent-related concern was that privacy preferences may have been shown only to account owners, not to all meeting participants. Attending a Zoom meeting does not require having a Zoom account. And even if all participants saw the ToS before a meeting, they may not have had the ability to decline Zoom’s excessive data collection. Those who are required to use Zoom for work meetings may have felt pressured to use the platform despite their concerns, and this issue is not specific to Zoom.

In addition, some medical providers leverage Zoom for telehealth services. Zoom has distinct business associate agreements (BAAs) with healthcare providers, but Zoom indicated that, for healthcare and education customers, it “will not use customer content, including education records or protected health information, to train our artificial intelligence models without your consent.”6 This consent would have been provided to Zoom by the healthcare provider, not the patient. Training AI on data from healthcare accounts covered by BAAs should not be possible at all. Healthcare providers may not be savvy from a technical privacy perspective and might inadvertently share their patients’ protected health information (PHI).

Communication around Zoom’s update was also unclear. It took many users months to notice the change to the ToS. To be fair, no enterprise can force consumers to read a ToS, but when Zoom received criticism about the update, it published a blog post that did not clarify how customer data could be used for AI training purposes. It emphasized, “For AI, we do not use audio, video, or chat content for training our models without customer consent,”7 but it did not address what customer consent looks like. It was not until a few days later that Zoom revised its ToS (and the accompanying blog post) to indicate that the organization would not use consumer content to train its AI models. Zoom should be commended for listening to consumer concerns and revising its data processing practices accordingly.

The blog post initially included a screenshot showing how account owners and administrators could opt in to Zoom’s generative AI features, but opting in to generative AI functionality is entirely different from consenting to data collection for the purpose of training AI. The blog post conflated agreeing to use generative AI tools with consenting to data collection, and the two are not the same. And while all meeting participants would be notified if Zoom’s AI features were running, their only options would be to accept the features or leave the meeting,8 which does not offer attendees a meaningful choice.

Lessons Learned

Enterprises looking to change the way they process data or the scope of data collected must notify consumers. To ensure that data subjects are protected, any changes made should be in alignment with privacy by design (PbD) principles. The principles of “visibility and transparency: keep it open,” and “respect for user privacy: keep it user-centric”9 are especially important. Enterprises should state exactly what consent looks like and how consumers can exercise their privacy-related rights. Note that enterprises may have specific consent-related obligations depending on applicable privacy laws and regulations (e.g., under the EU General Data Protection Regulation [GDPR], data subjects have the right to withdraw their consent).10
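To make an active, explicit opt-in concrete, the following is a minimal, hypothetical sketch (written in Python, with invented names such as ConsentRecord and ai_model_training) of how purpose-specific consent could be recorded so that it defaults to “not granted,” is turned on only by an explicit user action and can be withdrawn at any time. It is an illustration of the principle, not Zoom’s or any vendor’s actual implementation.

    from dataclasses import dataclass
    from datetime import datetime, timezone
    from typing import Optional

    @dataclass
    class ConsentRecord:
        """One explicit, purpose-specific consent decision for a data subject."""
        subject_id: str
        purpose: str                              # e.g., "ai_model_training"
        granted_at: Optional[datetime] = None
        withdrawn_at: Optional[datetime] = None

        def grant(self) -> None:
            # Consent is recorded only when the user takes an explicit action.
            self.granted_at = datetime.now(timezone.utc)
            self.withdrawn_at = None

        def withdraw(self) -> None:
            # Withdrawal should be as easy as granting (see GDPR Article 7(3)).
            self.withdrawn_at = datetime.now(timezone.utc)

        @property
        def is_active(self) -> bool:
            return self.granted_at is not None and self.withdrawn_at is None

    # Usage: consent defaults to "not granted"; accepting a ToS does not flip it.
    record = ConsentRecord(subject_id="user-123", purpose="ai_model_training")
    assert not record.is_active      # no silent opt-in
    record.grant()
    assert record.is_active
    record.withdraw()
    assert not record.is_active      # withdrawal is honored

Recording consent per purpose in this way also gives an enterprise an auditable trail of exactly when consent was granted or withdrawn for each type of processing.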

When new, large-scale changes are made to a ToS or privacy notice, enterprises should ensure that messaging is consistent across the organization. Product managers, marketing teams and social media teams should be informed of the changes being made and how to speak to them accurately and clearly. Zoom received questions about what providing consent looked like, but the answers it provided focused on how to opt out of Zoom’s generative AI features, not how to opt out of sharing data for AI training purposes. It is possible that those who wrote blog posts and answered questions about this issue did not have the technical know-how to understand the difference between collecting data to train AI and using an AI tool. A brief frequently asked questions (FAQ) document for staff can help ensure that all enterprise representatives are speaking accurately and consistently. Although blog posts and supplementary materials may be useful, the ToS is ultimately what consumers are agreeing to, so it is imperative that the content of those documents is in alignment.

A ToS document or privacy notice should explain what data are collected and why. Enterprises must know exactly how data will be used and be able to relay that information to consumers; if they cannot explain exactly how the data will be used, they should not collect them.

In addition, some information, if improperly disclosed, can cause more harm than other information. For example, credit card numbers and health information may be more detrimental if breached than email addresses. Zoom previously would have allowed healthcare providers to consent to sharing data for the purpose of training AI, but these providers deal with health-related information, which is highly sensitive and, depending on the jurisdiction, may be subject to stringent regulatory requirements. Enterprises must understand the data they collect and determine whether they must prohibit data sharing practices for certain types of data or certain categories of consumers or accounts.
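As one hedged illustration of prohibiting such sharing, the short Python sketch below checks an account’s classification and the data type before any content is allowed into an AI training pipeline. The account and data categories (such as healthcare_baa and protected_health_information) are invented for illustration and would need to map to an enterprise’s own data classification scheme.

    # Hypothetical policy gate: regulated accounts and sensitive data types are
    # excluded from AI training outright; everything else still needs explicit opt-in.
    EXCLUDED_ACCOUNT_TYPES = {"healthcare_baa", "education_ferpa"}
    EXCLUDED_DATA_TYPES = {"protected_health_information", "payment_card_data"}

    def allowed_for_ai_training(account_type: str, data_type: str, explicit_opt_in: bool) -> bool:
        """Return True only if this content may enter an AI training dataset."""
        if account_type in EXCLUDED_ACCOUNT_TYPES:
            return False               # BAA-covered accounts are excluded regardless of consent
        if data_type in EXCLUDED_DATA_TYPES:
            return False               # sensitive data types never flow to training
        return explicit_opt_in         # all other content still requires an explicit opt-in

    # Example: a telehealth account cannot opt its content into training at all.
    assert not allowed_for_ai_training("healthcare_baa", "video_transcript", True)
    assert allowed_for_ai_training("standard", "chat_message", True)
    assert not allowed_for_ai_training("standard", "chat_message", False)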

There are numerous privacy-related concerns associated with AI.11 Enterprises that want to use consumer data to train AI algorithms need to consider the implications of the specific data used for training. For example, transcripts of conversations between a therapist and patient might introduce biases and prejudice against certain mental health conditions if those conversations are used to train AI.

Conclusion

Consumers may not read every ToS for every service provider they interact with, but enterprises are still obligated to have clear and up-to-date ToSs in place. By practicing PbD, communicating clearly with data subjects, and knowing and being able to explain the purposes for which data are collected, enterprises can empower consumers and build trust with them.

Endnotes

1 Zoom, “Zoom Terms of Service,” Internet Archive Wayback Machine, 7 August 2023
2 Zoom, “Zoom Terms of Service,” 11 August 2023
3 Microsoft, “Microsoft Privacy Statement,” August 2023
4 Ibid.
5 Kazi, S.; “Educated and Empowered Data Subjects: A Privacy Prerequisite,” ISACA Industry News, 9 March 2022
6 Hashim, S.; “How Zoom’s Terms of Service and Practices Apply to AI Features,” Zoom Blog, Internet Archive Wayback Machine, 7 August 2023
7 Ibid.
8 Ibid.
9 Cavoukian, A.; Operationalizing Privacy by Design: A Guide to Implementing Strong Privacy Practices, Information and Privacy Commissioner, Canada, December 2012
10 Wolford, B.; “What Are the GDPR Consent Requirements?” GDPR EU
11 Tang, A.; “Making AI GDPR Compliant,” ISACA® Journal, vol. 5, 2019

Safia Kazi, CSX-F, CIPT

Is a privacy professional practices principal at ISACA®. In this role, she focuses on the development of ISACA’s privacy-related resources, including books, white papers and review manuals. Kazi has worked at ISACA for 9 years, previously working on the ISACA® Journal and developing the award-winning ISACA Podcast. In 2021, she was a recipient of the AM&P Network’s Emerging Leader award, which recognizes innovative association publishing professionals under the age of 35.