In the early 2000s, Sub-Saharan Africa’s telecommunications infrastructure was critically underdeveloped and in need of modernization. Fewer than 16 out of every 100 people had access to a fixed telephone line, placing the region among the least connected in the world. The strategy deployed was not to run the same copper lines that had powered telephony in the European Union and the United States. Instead, Sub-Saharan African nations leveraged the preceding decades’ enormous improvements in wireless technology. By skipping the legacy build-out that more developed countries had undertaken, they were able to quickly and affordably bring first-rate telecommunications services to the masses.
This phenomenon, known as technological leapfrogging, can be observed in many recent examples. Consider how many organizations today identify as cloud-native. Few organizations want to build a data center and then decommission it, only to start again in the cloud. Sure, this seems obvious in retrospect, but in the moment, it is anything but. As current trends move in one direction, it takes substantial effort to recognize that a phase shift in technology is happening and that organizations can ride this wave to success.
A similar dynamic can be seen in the definitions of the 5 cybersecurity maturity levels (initial, managed, defined, quantitatively managed, optimizing). What it takes to achieve each maturity level is a shifting, somewhat subjective matter. However, one principle that is widely recognized is that Level 4 is “quantitatively managed”: this level requires people, processes, and technologies to be measured using quantitative metrics. But is it really reasonable for an organization to wait until it reaches Level 4 to begin thinking quantitatively about its information systems?
This structural bias may surface when an organization evaluates its risk assessment processes. Decision-makers often begin by saying that their organization is too immature to collect quantitative metrics, so instead, they will use a less sophisticated approach and directly rate risk as high, medium, or low. Many organizations mistakenly believe that they need to accumulate a substantial amount of data before they can build a cyber risk quantification (CRQ) program. As a result, they do not believe they can measure risk quantitatively until they have first built out a soon-to-be-obsolete qualitative rating system: the risk equivalent of running copper phone lines.
If you start with no knowledge about a subject and add one data point, you have significantly improved what you know about that subject. Building a security and risk program is no different. Organizations should begin measuring from day one and add dimensions to the measurements as they mature. Furthermore, organizations should embrace the benefits of modernity and avoid the assumption that they must first implement a low-fidelity qualitative risk program before transitioning to a quantitative one. Leapfrogging directly to quantitative risk measurement will extract the most benefit for an organization.
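Starting to measure quantitatively from day one can be simpler than it sounds. As an illustration only (the article does not prescribe a specific method), the sketch below estimates annualized loss exposure from just two expert-supplied inputs: an assumed event frequency and an assumed 90% interval for per-event loss. All names and numbers are hypothetical placeholders, not benchmarks.

```python
import math
import random

def poisson_sample(rng, lam):
    """Draw a Poisson-distributed event count using Knuth's method."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

def simulate_annual_loss(freq_per_year, loss_p05, loss_p95, trials=20_000, seed=7):
    """Monte Carlo estimate of annual loss exposure.

    freq_per_year -- assumed average loss events per year (Poisson)
    loss_p05, loss_p95 -- assumed 90% interval for per-event loss (lognormal)
    Returns (mean annual loss, 95th-percentile annual loss).
    """
    rng = random.Random(seed)
    # Fit lognormal parameters so the 5th/95th percentiles match the interval.
    mu = (math.log(loss_p05) + math.log(loss_p95)) / 2
    sigma = (math.log(loss_p95) - math.log(loss_p05)) / (2 * 1.645)
    annual = []
    for _ in range(trials):
        events = poisson_sample(rng, freq_per_year)
        annual.append(sum(rng.lognormvariate(mu, sigma) for _ in range(events)))
    annual.sort()
    mean_loss = sum(annual) / trials
    p95_loss = annual[int(0.95 * trials)]
    return mean_loss, p95_loss

# Illustrative inputs only: roughly 2 incidents per year,
# with per-event losses believed to fall between 10,000 and 500,000 (90% CI).
mean_loss, p95_loss = simulate_annual_loss(2.0, 10_000, 500_000)
```

Even this rough sketch yields a loss-exposure distribution that can be refined as real incident data accumulates, rather than waiting for a mature data set before measuring at all.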
The true irony lies in a measurement system that claims to assess organizational maturity on a scale of 1 to 5 (often with decimal points) yet advises against focusing on quantitative measurement for the first 3 levels. Organizations can leapfrog to quantitative measures of loss exposure using CRQ, accelerating their programs at every maturity level.
Jack Freund
Is the chief risk officer for Kovrr, coauthor of the award-winning book on cyber risk, Measuring and Managing Information Risk, 2016 inductee into the Cybersecurity Canon, ISSA Distinguished Fellow, IAPP Fellow of Information Privacy, ISC2 Global Achievement Awardee, and ISACA’s John W. Lainhart IV Common Body of Knowledge Award recipient.