Artificial intelligence (AI) is poised to disrupt industries and transform products and services in unprecedented ways. Generative AI learns patterns from vast data sets and uses them to produce new content, such as text, images and video. Its applications are wide-ranging, and the opportunities for creating value are numerous. Yet the rapid rise of AI has left many concerned about its cybersecurity, privacy, risk and ethical implications. Lawmakers in the European Union are moving quickly to address these concerns by drafting the AI Act, a regulation poised to become the global standard for AI. Europe is the first continent to attempt to pass an AI regulation of this scale.
Controlling the rapid growth of AI systems such as ChatGPT is no simple feat, but it is crucial to protect users’ rights and safety without hindering the technology’s potential benefits. This article will take a look at AI regulation around the world before exploring the AI Act.
The Global Stage
In the US, President Joe Biden and Vice President Kamala Harris met with the heads of companies developing their own AI, such as Google and Microsoft, to discuss the technology’s potential and risks. The administration will invest US$140 million to establish seven new AI research institutes, and the White House is expected to issue guidance to federal agencies on the use of AI tools in the near future. The Federal Trade Commission is keeping a watchful eye on companies creating or investing in AI in the interest of protecting consumers.
Italy was the first Western country to ban ChatGPT, following a suspected breach of Europe’s privacy rules, including the General Data Protection Regulation (GDPR). The Italian data protection authority said that ChatGPT users’ conversations and payment information were compromised during a data breach on 20 March 2023, and that there was no legal basis for collecting and storing personal information to train AI algorithms. The ban has since been lifted after ChatGPT implemented enough changes to meet the Italian authorities’ expectations.
France is expected to have significant influence on European regulation, and its National Commission on Informatics and Liberty (CNIL) is seeking to lead the way on national enforcement of the AI Act. On 16 May 2023, the French data protection authority announced a four-step action plan to understand AI technology, guide its development, create an AI ecosystem and control AI systems. The plan also emphasizes the core competencies of auditing and controlling digital systems.
As more and more jurisdictions study the impact of AI on their societies and economies, it is becoming evident that regulating AI is an important step toward enjoying the benefits of these technologies responsibly.
The AI Act
The AI Act has been approved by the Internal Market Committee and the Civil Liberties Committee, two key committees composed of members of the European Parliament. Many expect this legislation to be another case of “the Brussels effect,” in which European regulations become the de facto standard and influence the rest of the world. Global tech companies must follow comparable rules, such as those established by the GDPR, in order to do business with European enterprises, and it is often simpler to apply those regulations internationally than to risk mistakes from attempting to adhere to multiple standards.
“An issue is that of productivity: unless all countries adopt similar regulations, countries that restrict AI could find themselves at a massive competitive disadvantage to those that do not,” says Raef Meeuwisse, author of Artificial Intelligence for Beginners and director at Cyber Simplicity Ltd. “This regulation would have been great about 10 years ago, but it might struggle to keep pace with current AI developments. Nonetheless, it is still further forward than most regions—and I predict that there will be many countries and regions racing to regulate AI over the next 12 months.”
The updated draft of the AI Act would require organizations to disclose the training data used for their AI systems. This could have a domino effect if similar requirements reach the US, where many copyright lawsuits are underway against AI image generators for training on artists’ work without their consent.
Additionally, the AI Act would seek to create a free, publicly accessible database of high-risk AI systems to track their deployment throughout Europe. This would enable the public to better understand how AI functions and how it affects them directly, including how their data is collected. Building an ecosystem of trust around AI in Europe aligns with ISACA’s belief that digital trust is a prerequisite for citizens’ increasingly digital lives. The database would be a step toward transparency, aiming to address the persistent problem of AI not being understood until harm has already been done.
“The new EU law is definitely well-intentioned and contains some laudable components, especially aiming to prohibit the use of AI to perform subliminal techniques to manipulate people. However, there are several significant challenges to the reality of trying to regulate AI,” says Meeuwisse. “One problem is that AI’s complexity defies simplification—for example, current AI can run millions of years of human analysis each second, and the insides are so large and inscrutable that they defy analysis. Attempts at XAI or explainable AI are well-intentioned but questionable as to how accurate, honest or transparent they can really be.”
See more in-depth analysis of the AI Act from Hafiz Sheikh Adnan Ahmed in this week’s issue of the @ISACA newsletter.
What’s Next for Regulating AI?
The current timeline requires the AI Act to be approved before spring 2024, but professionals remain concerned about what will happen in the meantime. With ChatGPT having grown exponentially in just a few months, experts worry about what other AI issues will emerge between now and the act’s implementation.
ISACA believes that an approach to digital trust that combines auditing, cybersecurity and privacy is essential in the context of AI. Continuously auditing AI outputs would help provide the transparency needed to understand how these systems function, and a holistic risk management framework covering people, processes and technology, tailored to the specific AI application, is key to effective efforts.
This also points to the transformation of the digital trust professions, both in managing AI and in using AI to take those professions to the next level. For example, auditing complete data sets in real time through AI rather than sampling, or using AI algorithms to identify patterns invisible to the human eye, will transform the audit profession. The same applies to cybersecurity and privacy, domains that can benefit from AI-driven advanced threat hunting and behavior analytics, which in turn must be supervised so that they do not breach policies and regulations. Behind all of this is an increasing need for upskilled professionals who combine state-of-the-art skills in their domain with knowledge of AI.
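As a rough, hypothetical sketch of what full-population, AI-assisted auditing could look like compared with sampling, the Python snippet below scores every record in a transaction file with an unsupervised anomaly detector. The file name, column names and contamination rate are illustrative assumptions, not part of the AI Act or any specific audit framework.

```python
# Illustrative sketch only: full-population anomaly detection instead of sampling.
# The file name, columns ("amount", "approval_delay_days", "vendor_risk_score")
# and contamination rate are hypothetical assumptions for demonstration.
import pandas as pd
from sklearn.ensemble import IsolationForest

# Load the complete transaction population rather than drawing a sample.
transactions = pd.read_csv("transactions.csv")
features = transactions[["amount", "approval_delay_days", "vendor_risk_score"]]

# An unsupervised model scores every record, surfacing patterns a manual,
# sample-based review could easily miss.
model = IsolationForest(contamination=0.01, random_state=42)
transactions["anomaly"] = model.fit_predict(features)  # -1 marks a flagged outlier

flagged = transactions[transactions["anomaly"] == -1]
print(f"{len(flagged)} of {len(transactions)} transactions flagged for human review")
```

In practice, any such model would itself need to be supervised and documented so that its use does not breach the policies and regulations it is meant to help enforce.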
With the increased risk landscape comes increased accountability for digital trust professionals, and certifications become even more important for ensuring that the work is done by people who continuously evolve their knowledge and skills. Certifications require not only passing exam scores but also proof of adequate experience and ongoing continuing education to maintain the credential.
The EU has positioned itself to lead the rest of the world on safe and effective AI regulation. The pressure is on to develop thorough regulations quickly, in step with the swiftly evolving AI systems that have taken the world by storm. Just as important, implementing those regulations effectively requires addressing the industry’s skills gap, which will be crucial in the years to come.