When Artificial Intelligence Fails

AI technology and models are used across industries to analyze, predict, generate, and represent information, decisions, and outcomes that impact operations and business strategy. A range of departments, functions, and roles are beginning to rely on AI as a critical foundation for business processes that support operations, long-term strategic planning, and day-to-day tactical decisions.

AI technology spans a spectrum from predictive analytics, machine learning, deep learning, and natural language processing through robotic process automation to the new era of generative AI. Across these varied approaches, there are three core components (sketched in code after this list):

  • Input Component: Delivers assumptions and data to a model.
  • Processing Component: Analyzes inputs to generate predictions, decisions, and content. Within AI systems, there is often a continuous learning component/engine, which sets it apart from conventional data processing systems.
  • Reporting/Output Component: Translates the processing into useful business information.
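
To make the three components concrete, here is a minimal, hypothetical sketch of an AI pipeline separated into input, processing, and reporting stages. All names, fields, and the stand-in calculation are illustrative assumptions, not a reference implementation.

```python
from dataclasses import dataclass

# --- Input component: assumptions and data delivered to the model ---
@dataclass
class ModelInput:
    features: dict      # raw data, e.g. {"revenue": 1_200_000, "churn_rate": 0.04}
    assumptions: dict   # e.g. {"horizon_months": 12}

def validate_input(inp: ModelInput) -> ModelInput:
    # Reject obviously bad data before it ever reaches the model.
    if any(v is None for v in inp.features.values()):
        raise ValueError("Missing feature value in model input")
    return inp

# --- Processing component: turns validated inputs into a prediction ---
def process(inp: ModelInput) -> float:
    # Stand-in for a trained model; a real system would call model.predict().
    return inp.features["revenue"] * (1 - inp.features["churn_rate"])

# --- Reporting/output component: translates results into business terms ---
def report(prediction: float) -> str:
    return f"Projected retained revenue: ${prediction:,.0f}"

pipeline_input = validate_input(ModelInput(
    features={"revenue": 1_200_000, "churn_rate": 0.04},
    assumptions={"horizon_months": 12},
))
print(report(process(pipeline_input)))
```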

While AI models are commonly understood in terms of these three components, in reality each component contains multiple parts. Multiple elements within each of input, processing, and reporting connect to one another, each carrying its own array of assumptions, data, and analytics. Adding to this complexity are the human and process elements intertwined throughout the business use of AI, weaving together the manual processing and technology integration needed to use and interpret AI. As the global environment changes, AI models themselves must change and adapt to accurately represent the world in which they exist.

Models are used to represent scenarios and produce outcomes through inputs of values, relationships, events, situations, expressions, and characteristics; these make up the 'input component' of a model. The real world is a web of interactions, relationships, and variables of such complexity and intricacy that models cannot fully represent it. Inputs are a simplified abstraction of the real world used to process and report quantitative estimates of outcomes. The challenge is that wrong assumptions, bias, and bad (or incomplete) information are compounded by the complexity of real-world variables. Models can fail in validity and reliability through their inability to process variables that sit outside their input scope. Validity speaks to accuracy, whereas reliability speaks to repeatability: something can be very reliable but not at all accurate. Complex models risk losing both as the focus shifts from analyzing the impact of key critical variables to the fragile interactions and relationships among many variables. They will reliably provide an outcome, but one that is increasingly inaccurate or invalid.
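
The distinction between validity and reliability is easy to demonstrate with a toy example. The sketch below is purely illustrative (the 'models', values, and seeds are invented for this example): a model that always returns the same answer is perfectly repeatable yet badly wrong, while a noisy model can be roughly accurate on average but unrepeatable.

```python
import random
import statistics

TRUE_VALUE = 100.0  # the quantity the model is supposed to estimate

def unreliable_but_roughly_valid(seed: int) -> float:
    # Scatters around the truth: accurate on average, not repeatable.
    random.seed(seed)
    return TRUE_VALUE + random.uniform(-20, 20)

def reliable_but_invalid(_seed: int) -> float:
    # Always returns the same number: perfectly repeatable, far from the truth.
    return 42.0

for model in (unreliable_but_roughly_valid, reliable_but_invalid):
    outputs = [model(s) for s in range(100)]
    bias = abs(statistics.mean(outputs) - TRUE_VALUE)  # low bias ~ valid
    spread = statistics.stdev(outputs)                 # low spread ~ reliable
    print(f"{model.__name__}: bias={bias:.1f}, spread={spread:.1f}")
```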

When AI Fails

Organizations are in the early stages of becoming highly dependent upon AI to support critical business processes and decisions, and it is already crucial to many businesses. Its expanding use reflects how much it can improve business decisions. Still, AI brings risk when internal errors or misuse result in poor decisions.

Unfortunately, as much value as AI provides, it also exposes organizations to significant risk. Ironically, the AI tools often used to model and predict risk can themselves be a significant risk exposure if not governed appropriately. Risk within AI models comes from the potential for adverse consequences of decisions based on incorrect or misused AI, leading to poor business and strategic decision-making, financial loss, legal and regulatory issues, and damage to an organization's brand. For example, employees who register for and use tools like ChatGPT for business purposes risk disclosing restricted information to "public AI." The most dangerous thing (a moral hazard) for an organization is to develop complete trust in whatever AI produces and delivers.

AI should inform decisions and raise points for consideration rather than be relied upon solely to make decisions, especially those that are business-critical. Inappropriately used and controlled, AI brings many risks to organizations, including:

  • Dynamic & Changing Environment: AI models are not static. New AI models and uses are constantly being added, old ones retired, and current AI technology and models revised. Compounding this, constant change in risk, regulations, and business keeps the environment AI is supposed to represent in a state of flux. Organizations face significant risk when the environment changes yet AI and its associated data inputs fail to evolve with it. AI models that were accurate last year may not be accurate this year, and necessary changes may come even sooner and more frequently than that. Organizations need to monitor and manage AI models throughout the year (a simple drift-check sketch follows this list).
  • Lack of Governance & Control: The pervasive use of AI has introduced what could be called Shadow AI, a component of Shadow IT in which a line of business bypasses IT and uses technology that has not been approved. This increases risk through inappropriate and unauthorized use that exposes the organization. Cybersecurity has been a growing concern for years, especially as organizations become more interconnected and internally diverse than ever before. Creating further risk, cybercrime has become a major threat as the capabilities of cybercriminal groups and their technology catch up to, and in some cases surpass, those of organizations. If these groups exploit Shadow AI to target organizations, very few would be ready to respond, given the highly dynamic nature of the AI landscape and how little organizations understand about its use and implementation.
  • More Than the AI Model-Processing Component: The use of AI is more than the AI model-processing component. It is an aggregation of components spanning a variety of input, processing, and reporting functions that integrate and work together, including the overall AI modeling and use project. Organizations risk fixating on the AI model-processing component alone while the many supplementary components undergo rapid, ungoverned change; bad input data means bad decisions from AI. Ignoring two of the three major components is the equivalent of cooking a dish without the proper ingredients or even a good recipe to follow. The quality of results from AI and the AI model itself depends upon the quality of the input data and assumptions: errors in inputs and assumptions lead to inaccurate processing and outputs. This is compounded further when the organization lacks oversight in other areas and/or is compartmentalized into silos that do not interact and communicate with each other.
  • Errors in Input, Processing & Reporting: Without proper development and validation, AI may contain errors that produce inaccurate outputs. Errors can occur throughout the AI lifecycle, from design through implementation and use, and can appear in any or all of the input, processing, and reporting components. If training data is not annotated correctly, the outcome will consistently be wrong. Errors may originate in the development of the AI model's inputs, processing, and reporting, or be introduced through changes and modifications to model components over time. Errors may also arise when AI fails to keep pace with shifting business use and a changing business environment.
  • Undiscovered Model Flaws: An AI model may initially appear to generate highly predictive output despite serious flaws in its design or training. In this case, the AI solution may gain credibility and acceptance throughout the organization until some combination of data eventually exposes the flaw. This can take a long time to come to light because of shortcomings in the organization's operations, strategy, and/or structure. Without proper monitoring, management, reporting, and accountability for processes and operations, a flaw in an AI model can easily slip through the cracks, at least for longer than it should. When departments are siloed and information does not flow easily, or at all, problems can go unseen for long periods. False positives are part of any predictive system but can be extremely convincing with AI, leading to greater long-term reliance on a flawed model.
  • Misuse of Models: A significant risk arises when AI is used incorrectly. An accurate AI model will produce accurate results yet still lead the business into error if used for purposes it was never designed for. This can occur when the model is incorrectly designed to meet the organization's needs or when it is later tasked with something outside its design. Organizations need to ensure that models are both accurate and appropriately used; applying existing AI to new areas without validating it in that context carries real risk.
  • Misrepresentation of Reality: By their very nature, AI models are representations of reality, not reality itself. They are simplifications, and in the process of simplification they may introduce assumptions and errors through bias, misunderstanding, ignorance, or lack of perception. It is easy for reality to be oversimplified in the design of an AI model, particularly if the organization has an oversimplified perception of its own reality. This risk is a particularly hot topic in generative AI, which may draw on inaccurate data, but it applies across AI.
  • Limitations in the Model: AI models approximate the real world with a finite set of inputs and variables, in contrast to the infinite circumstances and variables of the real world. Risk is introduced when AI is built on, or used with, inaccurate, misunderstood, missing, or misapplied assumptions. The diverse nature of business complicates things further: the number of different types of organizations is innumerable, two organizations of the same 'type' can be vastly different, and even the roles and tasks an organization assigns to a single AI model will be varied and diverse. It is entirely possible to ask too much of your AI model.
  • Pervasiveness of Models: Organizations bear significant risk because AI can be used at any level without accountability and oversight. Anyone can acquire and/or access AI that may or may not serve the organization properly. Organizations struggle to identify AI in use not only within traditional business but also across third-party relationships; third-party technology and software have been a particular cybersecurity vulnerability. The problem grows as existing AI models are modified and adapted to new purposes: the original AI model developer in the organization often does not know how others are adapting and using the model.
  • Big Data and Interconnectedness: The explosion of inputs and variables from massive amounts of data has made AI use complex across the input, processing, and reporting components. Organizations already manage mountains of data; while AI will make that mountain easier to manage and process, the correctness and completeness of what is fed in will largely determine how accurately, effectively, and efficiently the AI model performs. The interconnectedness of disparate information sets makes AI models more complex and challenging to control, made worse when departments operate in silos that hardly interact. The result is a lack of standardization, inconsistent use of data, data integrity issues across the systems that feed models, and data integrity issues within AI itself.
  • Inconsistent Development and Validation: AI models are being acquired/developed, revised, and modified without any defined development and validation process. The role of audit in providing independent assurance that AI has integrity, is used appropriately, and is fit for purpose remains inconsistent and needs to be addressed. Accuracy and efficiency in reporting is a greater concern than ever: across the globe, organizations are either preparing to report or are already required by new regulations to report on ESG, and it will not be long before they are subject to reporting regulations on their use of AI as well.
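
As noted in the first bullet above, models drift out of step with a changing environment unless someone is watching. Below is a minimal sketch of one common monitoring approach: comparing a live input distribution against the training-time distribution using a population stability index (PSI). The data, bin count, and thresholds here are illustrative assumptions, not standards.

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population stability index between baseline data and live data."""
    # Bin edges come from the baseline (training-time) distribution.
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf
    e_frac = np.histogram(expected, edges)[0] / len(expected)
    a_frac = np.histogram(actual, edges)[0] / len(actual)
    # A small epsilon avoids log(0) for empty bins.
    e_frac, a_frac = e_frac + 1e-6, a_frac + 1e-6
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(0)
training_feature = rng.normal(50, 10, 10_000)  # what the model was built on
live_feature = rng.normal(58, 12, 10_000)      # what it sees in production today

score = psi(training_feature, live_feature)
# Commonly cited rule of thumb: < 0.1 stable, 0.1-0.25 investigate, > 0.25 drifted.
print(f"PSI = {score:.3f} -> {'review/retrain' if score > 0.25 else 'keep monitoring'}")
```
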
How AI has Developed for Business

The use of AI technology in business has been a staple for years, but with recent advancements, particularly the emergence of generative AI in the last couple of years, the potential for AI applications in business has expanded greatly while its role has become less clearly defined.

Generative AI burst onto the scene in 2023, becoming commonplace in people's lives as a topic of conversation or through experimental, recreational, or practical use of the technology. In business, very few organizations have not at least discussed the possibility of implementing generative AI somewhere, while others have adopted a generative AI platform, developed the technology in-house, or are in the process of doing so. 2024 is the year that implementation of AI, particularly generative AI, will broaden and accelerate, and for those labeled 'early adopters', it is the year they should start to see value created by their use of AI.

The result of the evolution of AI technology is an incredibly dynamic and unpredictable environment for businesses. This inherently brings risk to organizations along with dozens of question marks as to where to start and how to effectively and safely navigate the journey of implementing new AI technology.

One of the big questions organizations must face when looking to incorporate AI into their business strategy and structure is how to use it. The capabilities of AI are vast and impressive in their own right, but how can businesses harness them to drive value?

There are a number of AI functions effecting change and growth for businesses now. Some of the notable ones include Machine Learning (ML), Named Entity Recognition (NER), Natural Language Processing (NLP), and Deep Learning (DL). But how are businesses putting these into practice? Here are some of the most common use cases for AI (an NER sketch follows this list):

  • AI-enabled innovations, products, and services: Virtual assistants are the most commonly found example, though businesses are finding other ways to innovate and create new offerings through AI.
  • Automating routine cognitive work: AI has been used for years to automate manual tasks like data entry, but businesses are now using generative AI technology to handle cognitive tasks like summarizing reports, drafting communications, etc.
  • Using AI for leveling up workers: When tasks cannot be automated, experts say AI can still help workers by offering advice and guidance throughout their work that can 'level up' their performance. Grammarly and similar applications that help improve a worker's writing are common examples. Generative AI brings even more capability, enabling workers with little to no experience to perform tasks such as writing software code, designing a logo, or crafting a marketing strategy.
  • Using AI as a creative force: The capability of generative AI to compose new material is a potentially powerful tool. However, it has been the center of some debate as to whether AI-generated material is derivative in the legal and/or artistic sense. Regardless of this, it is being implemented by organizations to generate a wide range of works.
  • Accessing and organizing knowledge via AI: Generative AI technology shows particular potential here for organizations and their employees. It not only allows workers to search through mountains of information to find relevant elements, but can also organize and summarize that material. This function requires oversight, however: AI models draw from limited data pools, which can lead to incorrect assumptions.
  • AI for optimization: This use case is transcendent across industries and functions. AI-based business applications can use algorithms and modeling to transform data into actionable insights that aid organizations with optimizing a range of functions and business processes.
  • Higher productivity and more efficient operations: Organizations are inserting AI technology into many business processes that rely on human labor, partially or fully performing the work, often faster and more accurately than a human could.
  • More effective learning and training through AI: AI can allow organizations to implement more effective and efficient training programs. The technology can even customize the training program to each person based on their learning preferences and level of knowledge and experience.
  • AI as a coach and monitor: AI-powered systems are capable of monitoring and analyzing actions in near real time and then providing feedback on the spot.
  • Decision support: Organizations have started to implement AI-powered Decision Support Systems (DSS) to sort and analyze data and then use that to offer suggestions and inform human decisions.
  • AI-enabled quality control and quality assurance: Manufacturers have been utilizing a form of AI called machine vision for decades and are now improving it with quality control software that has Deep Learning capabilities, increasing the speed and accuracy of quality control functions while reducing costs.
  • AI for personalized customer services, experiences, and support: This is perhaps the most widespread use case for AI technology. Many organizational websites have an AI chatbot now, either for help and customer support or to aid users in navigating the site.
  • Safer operations: Businesses with workers who operate outdoors are feeding data collected from weather-reading technology into AI systems to identify problematic behavior, dangerous conditions, or business opportunities, and then make suggestions on how to respond properly and safely. Other industries use AI-enabled software applications to monitor safety conditions on the job.
  • AI for functional area improvements: Functions such as:
      o Customer Service
      o Marketing
      o Supply Chain
      o Human Resources (HR)
      o Cybersecurity
      o Information Technology (IT)
      o The C-suite and board
  • AI for industry-specific needs: Some industries using AI to meet their specific needs are:
      o Healthcare
      o Financial Services
      o Industrial Maintenance
      o Transportation
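
To ground one of the functions named above, the sketch below shows Named Entity Recognition (NER) with the open-source spaCy library, pulling organizations, money, places, and dates out of business text. It assumes spaCy and its small English model are installed; the sample sentence and company names are invented.

```python
# Setup (assumed): pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

text = (
    "Acme Corp signed a $2.4 million supply agreement with "
    "Globex GmbH in Berlin on 12 March 2024."
)

doc = nlp(text)
for ent in doc.ents:
    # ent.label_ is the predicted entity type, e.g. ORG, MONEY, GPE, DATE.
    print(f"{ent.text:>20} -> {ent.label_}")
```

Exactly which entities are recognized depends on the model, which is why, as noted above, these functions still need human oversight.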

Organizations across industries and across the globe are implementing AI technology for various functions and use cases, each unique to its region, industry, individual organization, and even the individual departments and functions of the business. But how are organizations adopting new AI technology, and what is the rate of adoption?

McKinsey & Company, a global strategy and management consulting firm, has conducted a number of surveys that shed light on how companies are adopting and implementing AI in the last few years with recent advancements in the technology.

Their data indicates that adoption of generative AI and of AI technology in general has risen over the past year, with generative AI adoption nearly doubling (to 65%) and other AI technology climbing from 50%, where it had hovered for the previous six years, to 72%. Additionally, McKinsey's research indicates a large majority of organizations believe that AI will have a significant impact within their respective industries in the coming years.

McKinsey also found, however, that employees are generally ahead of their employers when it comes to utilizing genAI and show an enthusiasm for the technology that organizations should harness. In a survey published in August of this year, 91% of employees said they use a publicly available genAI application regularly for work, while only 13% said that their respective organizations have begun using genAI in multiple use cases. This small group is what McKinsey labels 'early adopters'.

On average, these early adopters, or high performers, are using genAI in three business functions. While McKinsey's research found that roughly half of organizations are using off-the-shelf AI solutions with little to no customization, these high performers are far more likely to use either a highly customized solution or one built in-house. High performers also reported that 10% of their organizations' earnings before interest and taxes (EBIT) can be attributed to their usage of AI.

Early adopters are not only more likely to use genAI for functions such as risk, legal, and compliance, but also more likely to be aware of, and working toward mitigating, genAI-related risks. High performers are likelier to follow a set of AI-specific, risk-related best practices.

It is this awareness of and response to risk that has been key to their success in implementing AI technology, including the value they have derived from its use. That approach may result from their being farther along in their use of AI than others, and therefore having encountered more risks. McKinsey's data shows that 70% of early adopters have already experienced problems with data (e.g., defining processes for data governance, developing the ability to quickly integrate data into AI models, insufficient training data).

As complex and dynamic as the AI landscape is, it is only natural to expect regulation of AI technology. And while in the long run that may simplify things, it adds more complexity for organizations looking to implement AI now.

While the European Union (EU) with its comprehensive EU AI Act and China with its series of AI legislation (China is also working on a comprehensive piece of AI regulation) have been the leaders in rolling out AI regulations, it is far from a localized trend. Across the world, various jurisdictions have already begun regulating the AI sector or are working toward that end.

Some, like Canada, appear to be following closer in the footsteps of the EU and China with more comprehensive regulation. Others, such as the United States and India, have taken a more middle-of-the-road approach, dedicating task forces to monitoring and exploring the usage and ramifications of AI technology, along with some more specific AI guidelines and regulations. A large contingent, however, including the United Kingdom, Japan, and Australia, has not yet passed AI regulations, opting instead to apply existing policies and regulations in the context of AI.

Regardless of where your organization operates, AI regulation should be on the radar now, whether the organization is just starting down the road of AI implementation in an already regulated jurisdiction or is an early adopter in one currently unregulated. Operating within emerging regulatory norms and preparing reports on AI usage now will ease compliance when the rules arrive.

AI is rapidly growing in variety, complexity, and use within organizations. It is quickly moving from a tactical tool to a strategic pillar that provides the infrastructure and backbone for strategy and decisions at all levels of the organization. Both the evolution of AI and the time it is left ungoverned invite loss and potential disaster. At the end of the day, implementing genAI will take a certain level of experimentation, but it should be experimentation with a goal in mind: any genAI usage and strategy should align with the organization's business strategies, and the goal should be to create measurable value through the technology.

Unfortunately, many organizations lack governance and architecture for AI risk management. Organizations need a structured approach to AI governance, risk management, and compliance that addresses the AI governance lifecycle and architecture, managing AI and mitigating the risks it introduces while capitalizing on its significant value when properly used.
