Misinformation—A Rapidly Growing Threat to Enterprises

Author: Jon Brandt, Director, Professional Practices and Innovation, ISACA
Date Published: 10 February 2020

As if the constant defensive posture of protecting digital assets were not enough, we now must question the authenticity of information. Deepfake reporting is on the rise and, just this past month, a popular US television series included a deepfake in its storyline.1 Deepfakes are fabricated multimedia recordings, most often manifested as video.2 A deepfake can take the form of a celebrity’s face, for example, superimposed on the body of someone else, with results that often appear extremely realistic. Deepfakes are no longer limited to celebrities, however. They present concerns for government officials, organizations of all sizes and tomorrow’s leaders sitting in classrooms. But while deepfakes receive much of the reporting, they are simply one manifestation of the broader issue of misinformation.

Merriam-Webster defines “misinformation” as “incorrect or misleading information.”3 A subset of misinformation is disinformation, which Merriam-Webster defines as “false information deliberately and often covertly spread (as by the planting of rumors) in order to influence public opinion or obscure the truth.”4 As I prepared to write this, I found neither definition to be perfect, but most writings differentiate the 2 based on intent. That is, disinformation is deliberate, whereas misinformation can result from errors or omissions.

As a former cryptologist and member of the US intelligence community, I am aware that disinformation is not new; it predates my military service years. During my career, deceptive messaging tactics were employed through information operations, psychological operations and information warfare. As with most things, technology has expanded and accelerated the size and scope of misinformation. Disinformation reporting has historically been tied to governments, and some of the earliest published accounts of disinformation came from Soviet-bloc insiders.5, 6

Misinformation and disinformation should be real concerns for information security professionals. Arguably, the line of intent drawn between the 2 terms becomes blurred, in part due to erroneous journalistic reporting and the abundance of news pieces that are opinions but are not positioned as such. People are fallible, but due diligence on the part of media outlets of all sizes has diminished over time, as demonstrated by high-profile story corrections and retractions. Whether these errors are intentional is not for me to say, but in a world that largely lacks accountability, it is difficult to categorize repetitive errors as nondeliberate.

As I further my academic studies, I find myself focusing less on the immediacy of problems and more on what can be done to prevent, or at least minimize, the spread of misleading information in the future. To counter misinformation (including disinformation), information security professionals must identify contributing factors and conditions:

  • Although technology has been a game changer in many ways, it can also be a crutch. Owners of predominantly cash businesses struggle to find employees who can manually make correct change on a transaction. Similarly, organizations that lose power often must shut down (at least temporarily) because of their reliance on technology.
  • As a parent of school-age children, I routinely encounter this overreliance. Physical textbooks have transitioned to digital versions, which yields obvious cost savings. Unfortunately, lesson plans are increasingly just fixed numbers of worksheets that largely ignore the bulk of the text, which often means the learning material provides reduced context, comprehension and analytical reasoning. Then there is the issue of relying on cloud software when Internet access is not available in all homes.
  • Technology also influences how people conduct academic research. In recent years, early learners have been introduced to “Google,” “Siri” and “Alexa”—all of which are influenced by search engine optimization, search engine manipulation and artificial intelligence (AI) algorithms. Thus, educators must provide young children and teenagers with adequate instruction and oversight to recognize the quality of information and reduce reliance on nonacademic information sources, thereby minimizing ethical issues in research.7 Similar trends exist in higher education, where 1 study revealed that “for more than 80% of higher education students, the general-purpose Google search engine is the most important, relied-on and frequently-used source of academic information.”8
  • Finally, social media may have been created for benign reasons, but today it is a breeding ground for misinformation. To combat this, fact checkers have emerged to confirm or refute claims. Unfortunately, fact checking is inherently flawed due to confirmation bias.9

The preceding conditions are sizable challenges with no easy solutions, given our reliance on technology. However, amid all the uncertainty is this fact: The accuracy and quality of information can make or break enterprises.

Security professionals often focus training efforts on social engineering—particularly phishing. And although phishing remains an important security topic, security departments can add value to their enterprises by educating employees and executives on topics such as misinformation. Business professionals routinely harness the power of technology but often lack an understanding of how widespread and powerful misinformation is. How are enterprises protecting their brands? How confident are they in the sources of information feeding their business cases? What would an enterprise do if a senior executive became the victim of a deepfake, such as a fabricated video of that executive disparaging competitors or announcing a product release, acquisition or merger? While some of these scenarios may seem far-fetched, they are made possible by an abundance of software and tutorials on the web.

Jonathan Brandt, CISM, CCISO, CFR, CISSP, CSA+, PMP
Is a senior information security practice manager in ISACA’s Knowledge and Research department. In this role, he contributes thought leadership by generating ideas and deliverables relevant to ISACA’s constituents. He serves ISACA® departments as a subject matter expert on information security projects and leads author management teams whenever external resources are necessary. Brandt is a highly accomplished US Navy veteran with more than 25 years of experience spanning multidisciplinary security, cyberoperations and technical workforce development. Prior to joining ISACA, Brandt was a project manager for classified critical infrastructure projects across the globe.

Endnotes

1 Wolf Films, CBS Television Studios, Universal Television, FBI, Season 2, Episode 12, “Hard Decision”
2 Porup, J. M.; “How and Why Deepfake Videos Work—And What Is at Risk,” CSO, 10 April 2019
3 Merriam-Webster, “Misinformation”
4 Merriam-Webster, “Disinformation”
5 Bittman, L.; R. Godson; The KGB and Soviet Disinformation: An Insider’s View, Pergamon Press, Oxford, UK, 1985
6 Pacepa, I. M.; R. J. Rychlak; Disinformation: Former Spy Chief Reveals Secret Strategies for Undermining Freedom, Attacking Religion, and Promoting Terrorism, WND Books, USA, 2013
7 Teachnology, “How Has Google Changed Education”
8 Salehi, S.; J. T. Du; H. Ashman; “Use of Web Search Engines and Personalisation in Information Searching for Educational Purposes,” Information Research: An International Electronic Journal, vol. 23, no. 2, June 2018
9 Marietta, M.; “How Fact-Checking Is Flawed: The Perils of Uneven Standards,” Psychology Today, 7 June 2019