Achieving Digital Trust in an AI Economy

Author: Caren Shiozaki, CGEIT, CEDS, CDPSE, LPEC
Date Published: 5 July 2023

Businesses operate over long waves of innovation cycles. In each cycle, companies must reckon with disruptions brought about by the technologies of the time.

Digital disruption is nothing new, so why does this time feel different?

The pace of technological change has been accelerating, driving shorter innovation cycles. The biggest force of the current wave of disruption is artificial intelligence (AI).

AI is more disruptive than other technologies due to its ability to mimic human cognitive functions, such as learning, reasoning and problem-solving. AI algorithms can adapt to different situations and learn from diverse data sets. They can analyze complex data patterns more efficiently than humans, and make predictions or recommendations based on that analysis. These capabilities have revolutionized business decision-making.

The AI Economy

For these reasons, AI is widely viewed as an engine of productivity and economic growth. The “AI Economy” refers to the economic activity generated by the development and use of AI technologies. According to Forbes, global consulting firm McKinsey expects AI to add around US$13 trillion to global economic activity by 2030. PricewaterhouseCoopers takes a more aggressive view: US$15.7 trillion by the same year.

Competition and FOMO (fear of missing out) are driving the rapid adoption of AI by many companies – often without comprehensive due diligence, which is made more difficult by the dynamic nature of the AI landscape.

Gartner identifies four elements of digital disruption:

Business: the market, business models, development, pricing, delivery
Technology: design, innovation, usage
Industry: processes, standards, methods, customers
Society: culture, habits, movement

Consider some of the ways in which AI affects each of the four elements of disruption:

Business

  • AI can analyze vast quantities of data, allowing organizations to make better, faster data-driven decisions. AI-driven data analytics tools and decision support systems can provide actionable insights from customer data, market trends and other sources to fine-tune strategies.
  • Supply chain operations can be optimized by predicting demand, improving inventory management, streamlining logistics and reducing costs.
  • In the area of risk management and fraud detection, AI analysis tools ingest large volumes of data to identify patterns of suspicious activities or anomalies. Algorithms can learn from past incidents and adapt to new threats as they emerge.
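To make the fraud-detection idea above concrete, here is a minimal sketch of one of the simplest anomaly-detection techniques: flagging transactions whose amounts deviate sharply from the norm using a z-score. The function name, sample data and threshold are illustrative assumptions; production fraud-detection systems rely on far richer, continuously retrained models.

```python
import statistics

def flag_anomalies(amounts, threshold=3.0):
    """Return indices of transactions whose z-score exceeds the threshold.

    A transaction is flagged when its amount deviates from the mean by
    more than `threshold` standard deviations. (Illustrative sketch only.)
    """
    mean = statistics.mean(amounts)
    stdev = statistics.stdev(amounts)
    if stdev == 0:
        return []  # all amounts are identical; nothing stands out
    return [i for i, amount in enumerate(amounts)
            if abs(amount - mean) / stdev > threshold]

# Twenty routine charges and one large outlier: only the outlier is flagged.
transactions = [10.0] * 20 + [1000.0]
print(flag_anomalies(transactions))  # → [20]
```

Real systems replace the static threshold with models that, as the article notes, learn from past incidents and adapt to new threats as they emerge.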

Technology

AI is fueling other technology trends.

  • AI has created momentum behind autonomous systems including drones and self-driving cars.
  • Natural language processing (NLP) underpins models such as ChatGPT, which can generate coherent, contextually relevant text. NLP enables applications such as chatbots and virtual assistants.
  • The convergence of AI and edge computing allows AI models to run directly on edge devices. Edge AI is already being used in industrial IoT and smart home solutions.

Industry

  • AI analytics is driving exponential growth in the amount of data that businesses must process.
  • Customer behavior, purchasing history and preferences can be analyzed to personalize customer experiences at scale. This can be done on a multi-dimensional basis (e.g., crossing industries).
  • AI-based tools have been able to boost EBITDA (earnings before interest, taxes, depreciation and amortization) by 2 to 5 percentage points when B2B and B2C companies use them to improve aspects of pricing.

Society

  • AI is reshaping the workplace. Some job roles are being transformed, and new job opportunities are expected to evolve from the AI economy. However, some level of job displacement is inevitable.
  • New AI tools make it cheap and easy to produce convincing fake video, audio and text. The resulting misinformation and infodemics harm both individual well-being and society at large.

Companies looking to capitalize on the AI Economy must cultivate the trust of their stakeholders (consumers, employees, investors, community members). In a 2022 McKinsey study, 72 percent of respondents from around the globe said that knowing a company’s AI policies is important before making a purchase. Establishing digital trust must be an imperative.

Pursuing Digital Trust

The six elements of digital trust are shown in figure 1.

Figure 1
Based on ISACA’s 2023 State of Digital Trust report

As with any digital transformation initiative, fundamental governance, risk and compliance (GRC) methods apply. Your AI business strategy should be clearly stated and disseminated throughout the company, and the associated digital risks should be quantified, with mitigation approaches identified and actively managed.

Organizational culture and “tone at the top” are always critical to successful governance. Given the singular nature of AI, they take on added urgency. Employees at all levels of the organization must accept personal accountability for ensuring the responsible use of AI – it cannot be limited to one functional area (the responsibility usually falls to IT). Even within IT, some functions may not consider responsible AI to be “in the job description.”

To reinforce the interrelation between the AI strategy and the company’s GRC program, establish a codified AI Ethics Policy. This policy would include:

  • The ethical principles that guide the use of AI within the organization. For example: transparency, privacy, human rights, social benefit
  • A commitment to complying with relevant laws, regulations and industry standards pertaining to AI
  • An emphasis on the importance of fairness in AI systems, and how bias and discrimination against individuals or groups, based on characteristics such as gender, age and socioeconomic background, would be mitigated
  • Promotion of transparency in AI systems and algorithms
  • Establishment of guidelines for the responsible handling of data, including user data privacy and obtaining informed consent for data handling
  • Recognition of the importance of maintaining security and safety of AI systems and data
  • The need for human oversight and accountability, and how that will be achieved
  • Commitment to training employees and providing resources to support their responsibility in upholding ethical AI standards
  • The approach to engage other stakeholders and solicit feedback to address the responsible use of AI by the organization
  • Description of the framework for how the organization will approach ongoing evaluation, monitoring and improvement of AI systems and practices

While AI can bring about many benefits, it also poses challenges related to ethics, privacy and social impact. Companies must commit to the responsible use of AI and place a priority on digital trust to navigate these issues.

Editor’s note: Learn more about AI through ISACA’s Artificial Intelligence Fundamentals Certificate.