The adoption of modern cloud-native technologies has unleashed a new wave of productivity and enabled new use cases across industries pursuing digital transformation, including applications of artificial intelligence (AI). For many organizations, data science and AI capabilities have become leading drivers of cloud migration. With the recent innovation in AI, generative AI has emerged as a promising way to boost cybersecurity across enterprises. According to a recent GlobalData report, generative AI revenue will grow 80% annually and reach US$33 billion by 2027.1
Although this transformation creates many opportunities for organizations, it also introduces severe risk and has coincided with exponential growth in cybercrime. Cybercriminals are continuously experimenting with AI technologies and developing new algorithms to design never-before-seen malware to break into enterprises. As organizations prioritize digitization, the focus on cloud security is growing rapidly, but many organizations still treat data and application security as an afterthought, creating a void that bad actors can easily exploit. In addition, a lack of skilled cybersecurity resources restricts organizations from staying a step ahead of these bad actors.
Organizations can use generative AI as a powerful tool to outsmart these bad actors, proactively identify new threats and take corrective action. As the frequency and severity of cybersecurity threats continue to rise, organizations' digital immune system (DIS) must be strengthened and made resilient enough to detect and counter them. However, this immunity comes at a cost.
Although generative AI provides enormous benefits, there are some significant challenges with it. Generative AI requires significant training and cultural change to ensure confidential data is not compromised. In addition, the security data model training infrastructure requires a significant investment and strong business support.
Benefits
The increased adoption of digital technologies requires chief information security officers to augment and enhance cyberrisk management. Some distinct benefits of leveraging generative AI for cybersecurity include:
- Early threat detection—Large language models (LLMs) trained on data from previous security breaches can detect future violations much faster than manual monitoring. Anomaly detection, behavioral analysis and pattern recognition aligned with enterprise security policies can proactively identify threats before damage is caused. In addition, the enterprise defense system can generate new counter-responses from generative AI security models as attackers change their tactics. For example, to better protect its patient data and applications and comply with regulations, United Family Healthcare deployed an AI-enabled security operations platform that increased visibility and sped up its time to detect, contain and respond to ransomware attacks.2
- Zero-trust policy in a hybrid world—Organizations' networks have expanded substantially, creating security vulnerabilities as people work and access critical assets from multiple locations. To apply a zero-trust policy to safeguard data, organizations must first classify data into four categories (highly sensitive, regulated, nonpublic and public). Generative AI can help assess vulnerabilities and implement a zero-trust policy in a hybrid working environment, delivering a better user experience based on a scorecard-based risk assessment. For example, Cloudflare extended its zero-trust security controls platform, Cloudflare One, to generative AI services, enabling enterprises to safely and securely use the latest generative AI tools without putting intellectual property and customer data at risk.3
- Automated incident response—Although many incident response tools and solutions are available, cybercrime is rising and achieving an incident-free state is difficult. Threats are continuously changing, and cybercriminals are getting smarter with each passing day. Generative AI can be used in endpoint detection and response tools and vulnerability scanners to enhance security analytics, correlation and the detection of phishing and fraud campaigns. It can also automate defense responses, enabling organizations to respond swiftly and effectively to cyberattacks and enhancing the capabilities of security orchestration, automation and response (SOAR) solutions. Integrating AI-based security tools with an enterprisewide cybersecurity operation will automatically detect and mitigate threats. For example, Atos launched a new security operations center (SOC) service based on IBM QRadar and Watson AI, resulting in AI-backed threat detection and log analysis.4
- Preventive development—AI solutions with a deep-learning model in natural language processing can be used to take a preventative approach to cybersecurity by prioritizing security earlier in the development process and highlighting potential security breaches to developers when the application code is being developed. Deep learning AI models can also suggest potential fixes for security flaws to developers. For example, GitLab recently announced a new AI-driven security feature that uses LLMs to explain potential vulnerabilities to the developers.5
- Labor shortage—There is a growing shortage of cybersecurity professionals, which has left many organizations without adequate time for assessment and training and slow to patch critical systems.6 Burnout among workers is also alarming, potentially widening the talent gap by causing people to leave their organizations. To address this, generative AI can be used to train less-skilled security practitioners and expedite decision making by analyzing future threats more quickly and accurately. AI technologies can also provide a global response to cyberthreats in multiple languages and time zones, delivering around-the-clock vigilance unaffected by labor shortages. Sophos is one example of an organization doing this.7
- Enterprise response—Organizations often secure their network, data and applications using disparate toolsets, meaning they lack a balanced and comprehensive enterprise response to cyberthreats. Intelligent automation enabled by AI provides a scalable way to address this issue by distilling complex activities into easy-to-comprehend response and recovery tasks. For example, BigPanda (an AIOps platform) recently announced a generative AI-based incident analysis capability providing detailed incident assessments (including incident impact, likely root causes and natural language incident titles and summaries).8
- Transformation of DevSecOps with AI—DevSecOps is the practice of bringing development, operations and security teams together so that workload security and policy compliance are no longer an afterthought. Unfortunately, development and operations teams often believe these security policies slow them down. To address this, generative AI can be used to recommend fixes for security flaws in code and make SecOps easy to adopt without delaying development cycles. Easy-to-comprehend instructions tailored to different development and operations personas can help deliver highly secure and scalable applications without affecting software development cycles. For example, a leading financial enterprise integrated generative AI into its DevSecOps pipeline to automate code review and identify security vulnerabilities, coding errors and nonadherence to industry standards.9
- A level playing field against attackers—Generative AI and other cutting-edge technologies provide an opportunity to level the playing field with bad actors, who have been using AI tools to breach enterprise data and demand ransom for some time. These attacks are becoming more sophisticated as the technology has advanced; therefore, enterprises that are not using similar or more advanced technologies are at great risk of being subject to sophisticated attacks. For example, NotPetya was one of the most destructive malware attacks ever deployed, costing organizations billions of dollars. It spread quickly and efficiently using an AI-powered algorithm that allowed it to infect computers without detection.10
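The early threat detection idea above rests on flagging behavior that deviates from an established baseline. A minimal sketch of that principle, using a simple statistical check rather than an LLM (the metric, data values and threshold are illustrative assumptions, not a vendor's method):

```python
from statistics import mean, stdev

def find_anomalies(baseline, observed, threshold=3.0):
    """Flag observations more than `threshold` standard deviations
    from the baseline mean (a stand-in for behavioral analysis)."""
    mu, sigma = mean(baseline), stdev(baseline)
    return [x for x in observed if abs(x - mu) > threshold * sigma]

# Hypothetical hourly failed-login counts for one account.
baseline = [2, 3, 1, 2, 4, 3, 2, 3, 2, 1]
observed = [2, 3, 250, 1]  # the spike suggests a brute-force attempt

print(find_anomalies(baseline, observed))  # [250]
```

A production system would model many signals at once and learn the baseline continuously, but the core loop is the same: establish normal, then alert on statistically significant deviations.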
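The scorecard-based risk assessment mentioned in the zero-trust bullet can also be sketched. The four data categories come from the text; the weights, signals and cutoff below are illustrative assumptions, not an actual product's scoring model:

```python
# Hypothetical scorecard for zero-trust access decisions: data
# sensitivity (the article's four categories) combined with device
# and location signals. Weights and cutoff are assumptions.
SENSITIVITY = {"public": 0, "nonpublic": 1, "regulated": 2, "highly_sensitive": 3}

def access_risk_score(category, managed_device, known_location):
    """Higher score = riskier request."""
    score = SENSITIVITY[category] * 2
    if not managed_device:
        score += 2  # unmanaged endpoint adds risk
    if not known_location:
        score += 1  # unfamiliar location adds risk
    return score

def access_decision(category, managed_device, known_location, cutoff=4):
    score = access_risk_score(category, managed_device, known_location)
    return "allow" if score < cutoff else "step-up-auth"

print(access_decision("public", True, True))              # low-risk request
print(access_decision("highly_sensitive", False, False))  # high-risk request
```

The design point is that access is never granted on network location alone: every request is re-scored against data sensitivity and current context, which is what makes the policy workable for a hybrid workforce.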
Limitations
Although generative AI provides many benefits, it can also lead to potential risk if solutions are not implemented correctly within the guardrails of enterprise policies and ethical boundaries. The first concern is the exposure of confidential information in the public domain. Second, an ill-trained AI model can do as much damage as having none at all, because AI models are only as good as the data that powers them. Third, because there is a cost associated with building and training models, a successful generative AI model for cybersecurity requires an efficient infrastructure and large datasets. Without sufficient data and the necessary resources, AI systems can produce incorrect monitoring results and false positives, which can have serious consequences for organizations. A high number of false positives leads the cybersecurity team to routinely ignore security warnings, causing alert fatigue and a strong possibility of missing real cyberrisk. For example, security warnings were ignored by the Target security team, resulting in one of the largest data breach incidents in history.11 Last but not least, continuous AI training and enablement of cybersecurity professionals is necessary to make these models beneficial to organizations. A lack of budget for cyberrisk management, the cultural shift needed to apply generative AI technologies to cybersecurity, and the need for leadership commitment and continuous motivation from team members all make enablement hard to achieve.
Conclusion
Generative AI is a powerful tool for automating responses and for predicting and countering threats. However, its risk can outweigh its benefits if it is not implemented correctly.
Endnotes
1 GlobalData, “Global Generative AI Revenue to Grow at 80% CAGR Over 2022-27, Forecasts GlobalData,” 9 August 2023
2 IBM, “Protecting Patient Data as an Act of Care”
3 Cloudflare, “Cloudflare Equips Organizations With the Zero Trust Security They Need to Safely Use Generative AI,” 15 May 2023
4 IBM, “Safer, Simpler, Service-Based—Security Made Better,” October 2022
5 Lardinois, F.; “GitLab’s New Security Feature Uses AI to Explain Vulnerabilities to Developers,” TechCrunch, 24 April 2023
6 Kagubare, I.; “Tackling the Labor Shortage in Cybersecurity,” The Hill, 26 July 2023
7 Gallagher, S.; “SophosAI at DEF CON: Orchestrating Large-Scale Scams Using Text, Audio and Image Generative AI,” Sophos News, 7 August 2023
8 Sibille, B.; “Unleash the True Power of AIOps With BigPanda New Generative AI,” BigPanda, 11 July 2023
9 Ghosh, B.; “Generative AI in DevSecOps,” Medium, 6 August 2023
10 Bezverkhyi, A.; “Petya.A/NotPetya Is an AI-Powered Cyber Weapon, TTPs Lead to Sandworm APT Group,” SOC Prime, 2 July 2017
11 Chiacu, D.; “Target Missed Many Warning Signs Leading to Breach,” Reuters, 25 March 2014
Arun Mamgai
Has more than 18 years of experience in cloud-native security, open-source secure supply chain, AI/machine learning, cloud modernization, digital transformation, data management and digital marketing while working with Fortune 1000 customers across industries. He has published many articles highlighting the use of technology to build modern cloud solutions securely. He has been invited to speak at leading schools on topics such as digital transformation and application-level attacks in connected vehicle protocols. He has also mentored multiple start-ups and actively engages with a nonprofit institution that enables middle school girls to become future technology leaders.