Auditing Guidelines for Artificial Intelligence

Author: Hafiz Sheikh Adnan Ahmed, CGEIT, CDPSE, CISO
Date Published: 21 December 2020

Over the past few years, there has been a tremendous shift toward emerging technologies such as blockchain, robotics and artificial intelligence (AI). Global organizations and governments have come to terms with the impact—and opportunity—of advanced technology. Governments around the world consider AI to be a nation-defining capability. A report from HolonIQ shows that countries are looking to their education systems to build strong generational AI capability while ensuring equity, privacy, transparency, accountability, and positive economic and social impact.

Significant opportunities lie ahead for AI software market penetration, despite short-term economic turbulence resulting from the COVID-19 pandemic. As AI grows in importance and popularity, the role of internal auditors must evolve in lockstep to address a variety of new challenges, many of which have yet to be fully anticipated.

There are 2 primary aspects auditors should consider when auditing AI applications:

  1. Compliance—Assess risk related to the rights and freedoms of data subjects.
  2. Technology—Assess risk related to machine learning, data science and cybersecurity.

A starting point for auditing an organization’s AI is defining the scope and objectives of the audit and considering the risk the AI initiative poses to the organization. These areas of risk should be compiled in a document such as a risk and control matrix (RCM), which lists each risk and related controls. COBIT® 2019 provides an effective framework for considering the risk of any initiative or process within an organization.
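As an illustration, an RCM entry can be represented as a simple record pairing each risk with its mitigating controls. This is only a sketch: the field names and example risks below are illustrative and not prescribed by COBIT 2019 or any RCM standard.

```python
from dataclasses import dataclass, field

@dataclass
class RCMEntry:
    """One row of a risk and control matrix (illustrative schema)."""
    risk_id: str
    risk_description: str
    likelihood: str                                  # e.g., "low" / "medium" / "high"
    impact: str
    controls: list = field(default_factory=list)     # related controls for this risk

# Hypothetical entries based on the AI strategy risk areas discussed in this article
rcm = [
    RCMEntry("R1", "IT plans not aligned with business needs",
             "medium", "high",
             ["Periodic strategy review board",
              "Annual IT/business alignment assessment"]),
    RCMEntry("R2", "Ineffective governance structures for the AI function",
             "medium", "high",
             ["Defined accountability (RACI) for AI processes",
              "Board-level AI oversight committee"]),
]

# A simple check an auditor might script over an RCM export:
# flag high-impact risks that have no related controls documented.
uncontrolled = [e.risk_id for e in rcm if e.impact == "high" and not e.controls]
```

In practice the RCM usually lives in a spreadsheet or GRC tool; the point of the sketch is that each risk maps to one or more controls, and gaps in that mapping are themselves audit findings.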

There are several examples of risk related to AI strategy:

  • Lack of alignment between IT plans and business needs
  • IT plans that are inconsistent with the organization’s expectations or requirements
  • Improper translation of IT strategic plans into IT tactical plans
  • Ineffective governance structures that fail to ensure accountability and responsibility for IT processes related to the AI function

From a compliance perspective, auditors need to understand the underlying data privacy and data protection principles and the impact of AI applications and initiatives on the rights and freedoms of data subjects and natural persons.

The UK’s Information Commissioner’s Office (ICO) has drafted the following guidelines that serve as a baseline for auditors auditing AI applications, which take into consideration data protection principles according to the EU General Data Protection Regulation (GDPR):

  • Accountability and governance in AI, including Data Protection Impact Assessments (DPIAs)—Completing a DPIA is legally required if organizations use AI systems that process personal data. DPIAs offer an opportunity to consider how and why organizations are using AI systems to process personal data and what the potential risk could be. Additionally, depending on how they are designed and deployed, AI systems will inevitably involve trade-offs between privacy and other competing rights and interests. It is the auditor’s duty to understand what these trade-offs may be and how organizations can manage them.
  • Fair, lawful and transparent processing—AI systems process personal data in various stages and for a variety of purposes. If organizations fail to distinguish each distinct processing operation and identify an appropriate lawful basis for it, they risk failing to comply with the data protection principle of lawfulness. Auditors must verify that each purpose is identified and that an appropriate lawful basis is documented for it.
  • Data minimization and security—Auditors need to ensure that personal data is processed in a manner that guarantees appropriate levels of security against unauthorized or unlawful processing, accidental loss, destruction or damage. They also need to verify that all movements and storage of personal data from 1 location to another are recorded and documented. This helps to monitor the effectiveness of security risk controls.
  • The exercising of individual rights in AI systems, including rights related to automated decision-making—Under data protection laws and regulations such as GDPR, individuals have rights relating to their personal data. Within the scope of AI, these rights apply wherever personal data is used at any point in the development and deployment life cycle of an AI system. Auditors need to ensure that the individual rights to be informed, of access, rectification, erasure, restriction of processing, data portability and objection (the rights referred to in Articles 13-21 of the GDPR) are considered when developing and deploying AI.
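The rights review described above can be sketched as a simple audit checklist. The right names and the gap-report helper below are illustrative; an actual assessment would be evidence-based, with each supported right backed by documented procedures.

```python
# Illustrative checklist of GDPR individual rights (Articles 13-21)
GDPR_RIGHTS = [
    "information",     # Arts. 13-14: right to be informed
    "access",          # Art. 15
    "rectification",   # Art. 16
    "erasure",         # Art. 17
    "restriction",     # Art. 18: restriction of processing
    "portability",     # Art. 20: data portability
    "objection",       # Art. 21: right to object
]

def rights_gap_report(supported: set) -> list:
    """Return the rights an AI system does not yet support.

    `supported` is the set of rights the system demonstrably handles;
    in a real audit this would be backed by documented evidence, not
    self-declaration.
    """
    return [r for r in GDPR_RIGHTS if r not in supported]

# Hypothetical system that handles access and erasure requests only
gaps = rights_gap_report({"access", "erasure"})
```

Each item left in `gaps` would become an audit finding: a point in the AI system’s development or deployment life cycle where a data subject right cannot yet be exercised.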

AI is a reality that promises to transform more than just the way enterprises do business; it will touch every corner of society. AI will have a far-reaching impact on the audit profession as well, given auditors’ need to provide AI assurance. Auditors should ask themselves whether organizations and audit teams are ready for the tough questions surrounding AI and how it should be audited. With little guidance and few frameworks available for auditing AI, auditors need to focus on the controls and governance structures in place and determine whether they are operating effectively.

Hafiz Sheikh Adnan Ahmed, CGEIT, CDPSE, COBIT 5 Assessor, ISO 20000 LA/LI, ISO 22301 LA/LI, ISO 27001 LA/LI, is a governance, risk and compliance (GRC), information security and IT strategy professional with more than 15 years of industry experience. He serves as a board member of the ISACA® United Arab Emirates (UAE) Chapter and volunteers at the global level of ISACA as a Topic Leader for the Engage online communities, member of the IT Advisory Group and the Chapter Compliance Task Force, ISACA® Journal article reviewer and SheLeadsTech Ambassador. He previously served as a chapter award reviewer and on the Certified in the Governance of Enterprise IT® (CGEIT®) Quality Assurance Team. He can be reached via email at adnan.gcu@gmail.com and LinkedIn (http://ae.linkedin.com/in/adnanahmed16).