Learning From Other People’s Mistakes

Author: Jack Freund, Ph.D., CISA, CISM, CRISC, CGEIT, CDPSE, Chief Risk Officer, Kovrr
Date Published: 11 January 2023

Tips of the Trade

For decades, parents have implored their children to not do what their friends are doing. It is the progenitor of the aphorism, “If all your friends jumped off a bridge, would you?” This same philosophy is also the source of collective wisdom worldwide and a key component of risk management. A friend and I recently joked about power and energy safety standards, saying that behind each line in those documents is a story of somebody making a mistake.

One prerequisite to this consolidation of wisdom is information sharing. Knowledge of what works and what does not is needed to implement controls that help prevent the same events from happening twice. This can be accomplished in several ways.

Using organizations such as ISACA® to stay connected to peers at other enterprises helps professionals converse about relevant topics. But information sharing goes beyond merely discussing what you are working on and how you are solving control problems. There is also a need to discuss what went wrong, which means sharing information about what failed and why. This is hard for several reasons, not the least of which is that it is embarrassing to admit to failure. There can also be legal consequences of admitting that something went wrong and that, as a result, services, people’s data, or even their lives were endangered. However, the next level of maturity in the cybersecurity field will never be reached if public statements about an incident belie its underlying cause, belittle the work cyberexperts did to prevent it, and overestimate the ability to prevent such incidents from happening at all. In short, not all cyberincidents can be attributed to sophisticated nation-state hackers leveraging advanced persistent threats (APTs), phrases such as “we are taking it seriously” notwithstanding.

I am not the first, nor will I be the last, to call for mandatory event disclosure; however, I would also require better classification of what went wrong. It would be ideal for such mandatory disclosure requirements to leverage the Vocabulary for Event Recording and Incident Sharing (VERIS) standard and to identify the controls that failed using a framework such as MITRE ATT&CK. So, what can be done until reporting requirements are updated to meet those expectations? We cyberprofessionals need to continue sharing information informally among ourselves, such as through information sharing and analysis centers (ISACs), but we also need to make better use of near-miss analysis.
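As a rough illustration of what such structured sharing could look like, the sketch below models a single event record with fields loosely inspired by VERIS’s actor/action/asset/attribute breakdown plus a MITRE ATT&CK technique reference. The field names, values, and near-miss flag are illustrative assumptions, not the official VERIS schema; a real submission would follow the published VERIS JSON schema.

```python
# Illustrative sketch only: field names loosely follow the VERIS A4 model
# (actor, action, asset, attribute) and add a MITRE ATT&CK technique ID.
# These are NOT the official VERIS schema keys.
import json
from dataclasses import dataclass, asdict

@dataclass
class EventRecord:
    summary: str            # short narrative of what happened
    actor: str              # who caused it (external, internal, partner)
    action: str             # what they did (hacking, malware, social, error, misuse)
    asset: str              # what was affected (server, database, workstation)
    attribute: str          # how it was affected (confidentiality, integrity, availability)
    attack_technique: str   # MITRE ATT&CK technique ID, e.g., "T1566" (Phishing)
    near_miss: bool         # True if the event was stopped before real impact
    failed_controls: list   # controls that did not perform as expected

record = EventRecord(
    summary="Phishing email bypassed the gateway; user reported it before entering credentials",
    actor="external",
    action="social",
    asset="user workstation",
    attribute="confidentiality",
    attack_technique="T1566",
    near_miss=True,
    failed_controls=["email filtering"],
)

# Serializing to JSON makes the record easy to share with an ISAC or peer group.
print(json.dumps(asdict(record), indent=2))
```

The value of a structure like this is not the particular fields but the consistency: records classified the same way across organizations can be aggregated and compared, which is what turns individual disclosures into collective wisdom.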

Because they are, by definition, not incidents, near misses rarely get the attention they deserve. Yet there is valuable information in understanding an event that, fortunately, did not cause the damage it could have. This can be considered the best of both worlds: First, the organization effectively receives a free penetration test of its systems, and second, the event was stopped before anything truly bad happened. Analyzing and classifying such events is helpful; it can improve discipline in organizations and help prevent similar incidents from happening in the future.

It is important to point out that near-miss analysis differs significantly from regular internal audit or risk assessment findings. A near miss may sometimes overlap with a known finding; however, a near miss requires that a threat agent apply some force against systems (either externally or internally) and find them lacking. When this happens, the threat agent has effectively found something new that has gone unnoticed by any of the three lines of defense. In such cases, the near miss illuminates where weaknesses exist, and it should result in a documented finding once it is better understood which control weaknesses enabled the attack to occur.

There are strong arguments for sharing near-miss information. First, because there was little to no impact, the barriers to sharing, such as regulatory disclosure, reputational damage, and embarrassment, are lower. Second, an organization is likely to experience more near misses than actual events, which can yield a treasure trove of valuable information. Some commercial operational risk data-sharing services for financial services enterprises have a specific flag for near misses; that is how valuable they are.

Jack Freund, Ph.D., CISA, CISM, CRISC, CGEIT, CDPSE, NACD.DC, is a vice president and head of cyberrisk methodology for BitSight, coauthor of Measuring and Managing Information Risk, 2016 inductee into the Cybersecurity Canon, ISSA Distinguished Fellow, FAIR Institute Fellow, IAPP Fellow of Information Privacy, (ISC)2 2020 Global Achievement Awardee and the recipient of the ISACA® 2018 John W. Lainhart IV Common Body of Knowledge Award.