Problems With Using Record Count as a Proxy for Risk

Author: Jack Freund, Ph.D., CISA, CISM, CRISC, CGEIT, CDPSE, Chief Risk Officer, Kovrr
Date Published: 14 September 2020

Developing a cyberrisk appetite can be a daunting task. Many organizations struggle to determine how much risk they have, how much is acceptable and how to measure risk at all. When faced with these problems, many practitioners default to measuring things that are within arm’s reach or that appear novel or significant based on someone’s experience. This “works” to the extent that following a process and taking measurements alleviates our concerns and gives us a feeling of assurance. Indeed, research has shown that we tend to believe that following any process at all, regardless of how good it really is, will yield a good result.

In response, many organizations gravitate toward record count as one of those values that is both easy to measure and seemingly meaningful. It looks straightforward: industry reports regularly quote records lost and cost per record, and using record count as a denominator offers a quick way to express risk in economic terms. While it is true that a loss of 1 million records is generally worse than a loss of 100 records, these five questions can help identify serious problems with this approach:

  • How many lost records are acceptable? Whose? The first problem is an ethical concern. It is one thing to admit that, on a long enough timeline, your organization will be subject to a data breach of some non-zero record count. It is quite another to say that losing 1,000 records is not a major concern. Is this something you are willing to tell your customers?
  • How many records are too many? If your record count risk appetite metric represents a breach number, you have a lagging appetite measurement problem. Say, for example, that the record count breach number is 1 million. You will not know whether you have exceeded your appetite until after you have had an incident. Good luck trying to manage appetite in arrears.
  • How will you operationalize? What should an organization do in response to an appetite measure that says 1 million records? Should you cap your customers at less than 1 million to stay under the appetite? Should you split your customer database into separate systems so that each contains fewer than 1 million records? Each action you might take in response to such a metric is more ludicrous than the last.
  • What about economies of scale? When you have a data breach, there will be some fixed costs and some variable costs. In general, when fixed costs are spread over a greater number of records, the per-record cost decreases. A simple example: the same fixed US$100,000 investigation retainer spread across 1,000 records versus 1 million records yields a very different per-record figure (US$100 versus US$0.10). Cynically, this metric implies that if you are going to have a breach, it is more cost-effective to have a larger one than a smaller one. Clearly this is not an attitude we want to encourage in our organizations.
  • Which customers are most valuable? Every organization has a concept of key customers. These can be long-term customers, highly profitable customers, high-profile customers or customers whose executives are on your board. The opposite is also true, with some marketing professionals advocating for dividing your customer base into so-called “angel and demon customers.” Saying that a set number of records is acceptable to lose also means that you would be okay with losing these key customers, which clearly is not true.
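The economies-of-scale point above can be shown in a few lines. This sketch uses only the fixed US$100,000 retainer from the example; real breach costs would also include variable per-record components, which are omitted here for simplicity.

```python
# Sketch of the economies-of-scale problem: the same fixed investigation
# retainer spread over more records yields a smaller per-record cost,
# perversely making larger breaches look "cheaper" per record.

FIXED_COST = 100_000  # fixed breach cost in US$ (the retainer from the example)

def cost_per_record(records: int) -> float:
    """Fixed cost divided by record count."""
    return FIXED_COST / records

for n in (1_000, 1_000_000):
    print(f"{n:>9,} records -> US${cost_per_record(n):,.2f} per record")
    # 1,000 records -> US$100.00 per record
    # 1,000,000 records -> US$0.10 per record
```

A per-record metric built this way rewards scale: every additional record compromised lowers the headline number, which is exactly the wrong incentive.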

The biggest problem with a records-based view of appetite is that it differs from how other risk disciplines manage risk. For example, there is no practical way to transfer or reduce risk by limiting the number of records you hold in order to stay under the limit (outside of divesting parts of the organization). Yes, cyberinsurance policies use record count as one of the metrics for computing premiums, but they use plenty of others as well (they use this number to estimate what their maximum payout event may be and how likely it is to occur). While record count is convenient, it is not the most actionable measure and will likely not reflect an organization's core values. Instead, one should have the difficult but meaningful conversation within the organization about establishing a range of monetary losses that represents a cyberincident and reflects how much the organization would be able to absorb. It will force the organization to shift from thinking it wants zero cyberrisk to determining how much it could handle, and how much it is willing to spend to avoid such a loss.
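A monetary appetite of the kind described above is directly actionable: modeled scenario losses can be compared against it. The following is a minimal sketch; the appetite ceiling, the scenario names and the loss figures are all hypothetical assumptions, not values from the article.

```python
# Minimal sketch of a monetary (rather than record-count) appetite check.
# All dollar figures below are hypothetical, chosen only for illustration.

APPETITE_CEILING = 5_000_000  # hypothetical: largest single-incident loss
                              # the organization judges it could absorb (US$)

# Hypothetical modeled loss estimates for two scenarios.
scenario_losses = {
    "ransomware outage": 7_500_000,
    "lost laptop": 250_000,
}

for scenario, loss in scenario_losses.items():
    status = "EXCEEDS appetite" if loss > APPETITE_CEILING else "within appetite"
    print(f"{scenario}: US${loss:,} ({status})")
```

Unlike a record-count threshold, this comparison can be made before an incident occurs, and the gap between a scenario's modeled loss and the ceiling indicates how much mitigation spend is worth considering.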

Jack Freund, Ph.D., CISA, CRISC, CISM, CGEIT, CSX-P, CDPSE, is head of cyberrisk methodology for the Moody’s/Team8 Cyber Risk Assessment Venture, coauthor of Measuring and Managing Information Risk, 2016 inductee into the Cybersecurity Canon, FAIR Institute Fellow, IAPP Fellow of Information Privacy and ISACA’s 2018 John W. Lainhart IV Common Body of Knowledge Award recipient.