Artificial intelligence (AI) is rapidly changing how organizational leaders think about information risk and security. The current media attention surrounding AI, and the early integration of AI concepts and capabilities into security vendors' products and services, have created great interest in both its current-state capabilities and its future potential, for adversary and defender alike. The adversary community has embraced concepts such as generative AI to assist in understanding attacks and vulnerabilities, rapidly developing malware and attack code, identifying potential targets, and exploiting weaknesses. Defenders are also looking to AI to help them rapidly enhance their capabilities by automating the detection, assessment, and response to threats and attack activities; automating security testing; and assisting in event and incident investigation and response activities.
The information risk and security community must constantly adapt to the emerging technologies, capabilities, and techniques of its adversaries, and to the tools and capabilities available to enhance its own defenses and effectiveness. For any emerging technology in the early stages of mass recognition and deployment, there is often a broad and exciting range of possibilities to explore before a select few use cases come to be considered normal, expected, and beneficial. There are five ways current and emerging AI capabilities are likely to change how organizations and professionals approach information risk and security:
- Security event and incident response investigations and actions will become more efficient and effective. Cyberincident response investigations and subsequent actions are often a race against time. This is especially true if an organization is under active attack or has become the victim of a successful attack. Information risk and security professionals spend a significant amount of time collecting, processing, and analyzing data as part of their investigation and response activities. AI has the potential to automate many of these data collection, processing, and analysis activities and to improve the signal-to-noise ratio (SNR) of the data analysts must review. While AI is not likely to ever completely replace human analysis, it will provide information risk and security professionals with more accurate, actionable data with which to conduct analysis and coordinate response activities.
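As a simplified illustration of the kind of noise reduction described above, the following Python sketch scores incoming security events against weighted indicators and surfaces only the highest-priority ones for human review. The event fields, indicator weights, and threshold are hypothetical assumptions, not any vendor's actual model.

```python
# Minimal sketch: automated triage that improves the signal-to-noise
# ratio by scoring raw security events and surfacing only high-priority
# ones. Fields, weights, and the threshold are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class SecurityEvent:
    source: str
    event_type: str
    indicators: set = field(default_factory=set)

# Hypothetical weights an AI-assisted pipeline might learn from past incidents.
INDICATOR_WEIGHTS = {
    "known_bad_ip": 0.9,
    "impossible_travel": 0.8,
    "privilege_escalation": 0.7,
    "off_hours_login": 0.3,
    "failed_login": 0.1,
}

def triage_score(event: SecurityEvent) -> float:
    """Combine indicator weights into a 0..1 priority score."""
    score = 0.0
    for indicator in event.indicators:
        # Accumulate remaining headroom so the score never exceeds 1.0.
        score += (1.0 - score) * INDICATOR_WEIGHTS.get(indicator, 0.0)
    return score

def prioritize(events, threshold=0.5):
    """Return only events worth an analyst's attention, highest first."""
    scored = [(triage_score(e), e) for e in events]
    high = [pair for pair in scored if pair[0] >= threshold]
    return sorted(high, key=lambda pair: pair[0], reverse=True)

if __name__ == "__main__":
    events = [
        SecurityEvent("vpn", "login", {"off_hours_login", "impossible_travel"}),
        SecurityEvent("web", "auth", {"failed_login"}),  # filtered out as noise
    ]
    for score, event in prioritize(events):
        print(f"{score:.2f} {event.source} {event.event_type}")
```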
- Threat and vulnerability analysis will become more automated and prevalent. Risk and security professionals have long recognized that threat and vulnerability analysis is highly useful in learning how an adversary is likely to operate and attack, how likely an attack is to occur, and what the material impact to an organization or operating environment might be if a successful attack occurs. All of these factors are essential data points and input values for security risk assessment activities. Currently, threat and vulnerability analysis is often a human-intensive and time-consuming process. As a result, analyses are typically conducted only on a point-in-time basis for high-visibility business processes and their supporting information infrastructure and associated data assets. The use of AI to support analysis will likely drive more automation and efficiency, which should result in greater ongoing use.
Risk and security vendors and service providers that support analysis activities will deliver tools and solutions that incorporate large language models (LLMs). These LLMs will be continuously updated with intelligence related to attacker tools, methods, and tactics; known technical vulnerabilities that exist and/or are being exploited; details of how vulnerabilities are identified; and possible corrective actions and/or compensating controls. But to be effective, information risk and security professionals within organizations must enrich these models and tools with data and insights specific to their organizations' business process activities, risk appetites, and expected user activities and behaviors. Only then will such tools be useful to, and embraced by, business process owners and leaders. The combination of both data sets will allow AI-supported, ongoing, automated threat and vulnerability analysis to be applied across all material business processes.
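One way to picture the combination of vendor-supplied intelligence and organization-specific enrichment is a simple risk-weighting step, sketched below. The data, field names, and weighting scheme are illustrative assumptions only, not any vendor's actual model.

```python
# Sketch: combining vendor threat intelligence with organization-specific
# context to continuously prioritize vulnerabilities. All fields and
# weightings here are hypothetical illustrations.

# Vendor-side data, e.g., derived from a continuously updated model.
vendor_intel = {
    "CVE-2024-0001": {"exploited_in_wild": True, "severity": 9.8},
    "CVE-2024-0002": {"exploited_in_wild": False, "severity": 6.5},
}

# Organization-side enrichment: which assets are affected and how
# critical each supported business process is (0..1).
org_context = {
    "payroll-server": {"cves": ["CVE-2024-0002"], "criticality": 0.9},
    "test-lab-host": {"cves": ["CVE-2024-0001"], "criticality": 0.2},
}

def risk_rank(vendor, org):
    """Rank (asset, CVE) pairs by severity x business criticality,
    escalating findings with reported active exploitation."""
    ranked = []
    for asset, ctx in org.items():
        for cve in ctx["cves"]:
            intel = vendor.get(cve)
            if intel is None:
                continue
            score = (intel["severity"] / 10.0) * ctx["criticality"]
            if intel["exploited_in_wild"]:
                score = min(1.0, score * 1.5)  # boost active exploitation
            ranked.append((round(score, 2), asset, cve))
    return sorted(ranked, reverse=True)

for score, asset, cve in risk_rank(vendor_intel, org_context):
    print(score, asset, cve)
```

The point of the sketch is the combination itself: neither the vendor feed nor the organizational context alone produces the ranking; only the joined data set does.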
- Application security will be more effective. Many security events and incidents are the result of insecure application code, applications that are misconfigured, or applications that have been manipulated by adversaries and used as part of their attack activities. The volume of security-related software patches and updates produced by application vendors on an ongoing basis is clear evidence that current approaches to application security must be enhanced to be effective. AI is likely to accelerate these enhancements through the integration of application-security-focused LLMs into application development, security testing, and protective tools such as static application security testing (SAST), dynamic application security testing (DAST), software composition analysis (SCA), web application firewalls (WAFs), application programming interface (API) security gateways, and quality assurance and penetration testing. These LLMs can help ensure that application source code and running applications are tested against, and are resilient to, variations and permutations of known and expected attacker methods and tactics in a highly efficient, risk-based testing environment.
The current generation of AI-based application development assistant tools (e.g., Microsoft GitHub Copilot, GitLab Code Suggestions) is likely to produce more secure code if the LLMs underlying these tools are continually trained to incorporate security considerations, dependencies, coding styles and context, and business activities specific to the organizations they support. Many of these tools have been found to occasionally produce vulnerable code and applications because the LLMs behind them were not trained on high-quality, secure code, or because they did not account for the applicable organization's unique business requirements and instead leveraged only public domain data inputs and models. As these tools are exposed to more application development activities and source code, and are trained on organization-specific code and applications, they are likely to continuously improve and provide enhanced secure code development support.
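A concrete illustration of the kind of insecure suggestion described above is string-built SQL versus a parameterized query. The sketch below uses Python's standard sqlite3 module; the table, column, and input values are hypothetical.

```python
# Sketch: the classic SQL injection pattern an assistant trained on
# low-quality code might suggest, and the secure alternative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # attacker-controlled value

# Insecure: string concatenation lets the input rewrite the query.
query = "SELECT role FROM users WHERE name = '" + user_input + "'"
print(conn.execute(query).fetchall())  # returns rows it should not

# Secure: a parameterized query treats the input strictly as data.
query = "SELECT role FROM users WHERE name = ?"
print(conn.execute(query, (user_input,)).fetchall())  # returns []
```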
AI-supported tools can improve the efficacy of application security and quality assurance testing by learning and applying new testing techniques suited to the structure, context, and expected behaviors of the application code being analyzed, as that code is being developed. AI-based capabilities can also continuously develop configuration policies and rules for WAFs and API security gateways, using knowledge gained by learning the application code combined with constant enrichment from security intelligence sources related to attacker methods and tactics.
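A heavily simplified sketch of the rule-derivation idea follows: learn which fields and value patterns an API endpoint legitimately receives, then reject requests that deviate. The sample traffic, field names, and inferred patterns are assumptions for illustration; a production gateway would learn far richer models.

```python
# Sketch: deriving a positive-security (allowlist) rule for an API
# endpoint from observed legitimate traffic. Data is hypothetical.
import re

observed_requests = [
    {"account_id": "12345", "action": "view"},
    {"account_id": "67890", "action": "update"},
]

def derive_rules(samples):
    """Infer a per-field regex rule from known-good request samples."""
    rules = {}
    fields = set().union(*(s.keys() for s in samples))
    for f in fields:
        values = [s[f] for s in samples if f in s]
        if all(v.isdigit() for v in values):
            rules[f] = re.compile(r"^\d+$")
        else:
            rules[f] = re.compile(r"^[a-z_]+$")  # simplistic fallback
    return rules

def allowed(request, rules):
    """Reject unknown fields or values that break the learned pattern."""
    return all(f in rules and rules[f].fullmatch(v) for f, v in request.items())

rules = derive_rules(observed_requests)
print(allowed({"account_id": "12345", "action": "view"}, rules))     # True
print(allowed({"account_id": "1 OR 1=1", "action": "view"}, rules))  # False
print(allowed({"account_id": "12345", "action": "drop;"}, rules))    # False
```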
- Security tools will become more effective at identifying and defending against known and emerging attack techniques and capabilities. Security vendors have always attempted to incorporate learning and knowledge of known and possible attacker methods, tactics, and capabilities into their tools. With the addition of AI capabilities and security-focused LLMs trained on known attack methods and practices, these tools are likely to become more effective at defending against known and emerging attack methods and techniques. Historically, security vendors have had to wait for an attack method, tactic, tool, or vulnerability to be identified before they could update their tools. This is known as the patient zero effect: A security vendor must race to update its detection and response capabilities as soon as it identifies and validates a new attack, threat vector, or vulnerability. With the introduction of AI, security vendors may become more accurate at predictively modeling emerging attacker methods and practices, and they can proactively create methods to identify and protect against threats to their services and tools.
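To illustrate the difference between the patient-zero, signature-driven posture and a more anticipatory one, the sketch below contrasts exact signature matching with a simple statistical baseline that can flag a never-before-seen deviation. The signature list, baseline data, and threshold are illustrative assumptions, and real predictive models are far more sophisticated than a z-score.

```python
# Sketch: signature matching vs. a learned behavioral baseline.
# All values here are hypothetical illustrations.
import statistics

# Known-bad signatures can only catch what has already been identified.
SIGNATURES = {"evil.exe", "dropper.dll"}

def signature_match(filename: str) -> bool:
    return filename in SIGNATURES

# Behavioral baseline: flags activity far outside normal volume,
# even when no signature exists yet (no "patient zero" required).
baseline_mb_per_hour = [10, 12, 9, 11, 10, 13, 11]  # typical egress volume

def anomalous(observed_mb: float, history, z_threshold: float = 3.0) -> bool:
    """Flag observations far from the learned baseline (simple z-score)."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(observed_mb - mean) / stdev > z_threshold

print(signature_match("unknown_tool.exe"))   # False: no signature yet
print(anomalous(250, baseline_mb_per_hour))  # True: anomalous exfiltration
```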
- Defenders may become too reliant on AI and forget the critical value of the human information risk and security professional. There are obvious benefits to the introduction of AI capabilities within organizations' information risk and security activities, but concerning behaviors are also likely to emerge. AI, in its current and near-future projected state of maturity, cannot and should not replace the intuition and analysis capabilities of human information risk and security professionals. Many security leaders are hopeful that AI will help fill the gap created by the current shortage of available risk and security professionals. The fact remains that human risk and security professionals have the advantage of knowing and understanding the concept of “maybe,” while computers, even with AI, still make decisions based on binary (i.e., “yes” or “no”) logic.
Current AI security tools and capabilities are only as effective as the LLMs on which they are based and will continue to make assumptions and decisions grounded in those models. Experienced information risk and security professionals have a keen understanding that they must always follow the evidence presented to them while allowing for a “maybe” factor that accounts for nonobvious considerations or the possibility that an adversary has introduced misinformation to misdirect their activities. Human intuition is powerful and essential when it comes to information risk and security activities, and it will not be replicable by AI in the near term.
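One way tools can at least approximate the “maybe” described above is to stop forcing binary verdicts: have the model emit a confidence score and route mid-band results to a human analyst. The thresholds and action names in this sketch are illustrative assumptions.

```python
# Sketch: replacing a binary verdict with a "maybe" band that is routed
# to a human analyst. Thresholds and actions are assumptions.

def verdict(confidence: float) -> str:
    """Map a model confidence score (0..1) to an action."""
    if confidence >= 0.9:
        return "auto-block"        # high confidence it is malicious: act
    if confidence <= 0.1:
        return "auto-allow"        # high confidence it is benign
    return "escalate-to-human"     # the "maybe" band: a person decides

for score in (0.95, 0.5, 0.05):
    print(f"{score:.2f} -> {verdict(score)}")
```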
The only constant in information risk and security is change. Adversaries consistently and effectively adapt to new security techniques and change their approaches to counteract them. Capable adversaries also actively monitor and learn how AI-based security tools and capabilities operate and react to stimuli. They can use these insights to identify how to circumvent such tools and, in some cases, use these reactions as part of their attack techniques to increase their effectiveness and their ability to hide activities until they want them to be known. It is for these reasons that current AI-enabled risk and security tools and capabilities should be viewed as complementary to human information risk and security professionals, not as a replacement for them.
A big step forward
The use of AI in information risk and security programs and activities is likely to provide a significant step toward enhancing the information risk and security postures of organizations that effectively leverage it. AI tools can provide valuable assistance, insights, and automation for foundational and repeatable risk- and security-related activities. If viewed as complementary to the information risk and security professional, rather than as a replacement, AI has the potential to significantly enhance the maturity of many organizations' information risk and security programs. Alternatively, if organizations view AI as a replacement for the human information risk and security professional, they are likely to create a false sense of security for themselves and may become less mature, and more vulnerable, over time.
John P. Pironti, CISA, CRISC, CISM, CGEIT, CDPSE, CISSP, ISSAP, ISSMP
is the president of IP Architects LLC.