blackstroke

08/05/2023

A board-level understanding of the opportunities and risks of AI is essential: companies – and their boards – are subject to specific legal duties, which means that you, as a board member, need to understand what AI is being used, how it is used, and what risks it carries.

While no one expects you to become a technical expert, here’s what you need to know about AI:

Inductive Learning: Artificial Intelligence (AI), and more specifically its largest subfield, Machine Learning (ML, which we will focus on here), is all about inductive learning. A machine-based system takes in data, pursues a goal, performs a ‘best-guess’ prediction/action, and receives some form of feedback, gradually improving its predictions/actions over many such cycles.

By contrast, traditional computing uses deductive logic; that is, it follows ‘instructions’ (sometimes billions of them). Real-world systems that apply AI are typically a combination of both approaches, a dynamic loosely analogous to the coordination of the right and left brain hemispheres.
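To make the learning cycle concrete, here is a minimal, purely illustrative sketch in Python: the system makes a guess, receives an error signal as feedback, and adjusts. The ‘true value’ and the learning rate are invented for the example; no real system is this simple.

```python
# Minimal sketch of the inductive-learning cycle: predict, receive
# feedback, adjust. All numbers are illustrative, not a real system.

def feedback(prediction: float) -> float:
    """Stand-in for the real world; the (unknown) target here is 42."""
    return 42.0 - prediction  # the error signal the learner receives

guess = 0.0          # the system's initial 'best guess'
learning_rate = 0.1  # how strongly each round of feedback is weighted

for cycle in range(100):
    error = feedback(guess)         # act, then observe feedback
    guess += learning_rate * error  # improve the next prediction

print(f"Learned estimate after 100 cycles: {guess:.2f}")  # approaches 42
```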

Data: Since data are the lifeblood of AI, your data strategy regarding the classic 5 V’s (Volume, Variety, Velocity, Veracity, and Value) needs to be a central concern when adopting AI. How much data you require depends on the problem and the AI algorithm. A simple rule of thumb is that the more data you have, the less subject-matter know-how is required for the problem at hand.

This has led to some frustration among functional experts and engineers, watching ‘ignorant’ data scientists successfully optimize their work. The converse is also true: The less data you have, the more you need to understand the underlying dynamics of the problem. In the extreme case, where you understand the dynamics perfectly, you might not need any external data at all.

Go and chess, where the program knows the rules, are two such examples. Indeed, by purely internal means, namely, via simulation (self-play), AlphaZero—the AI program designed to master such games—generated all the data needed to excel.

Interpretation of Results: AI uses extremely rich algorithms. Applying them, one can ‘discover a pattern anywhere’, so the performance measures of classical statistics (such as confidence intervals on observed data) do not apply. Instead, the only valid performance measure of an AI algorithm is the quality of its predictions of future states or, in more pragmatic terms, of predictions on fresh data that the system has not seen before.

The essence of learning is to ‘effectively generalize to such new situations’. As a board member, you will need to assess the results. By far the most powerful tool we can offer for this is visualization, accompanied by standard prediction performance measures (such as accuracy).

Learning to demand and interpret results in this way is mandatory.
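As a concrete illustration of this discipline, the sketch below (using scikit-learn and its built-in iris dataset purely as an example toolkit) holds back a portion of the data during training and reports accuracy only on that unseen portion. The dataset and parameters are illustrative.

```python
# Sketch: judge a model only on data it has never seen.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)

# Hold back 30% of the data; the model never sees it during training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Report performance on the fresh data only.
print(f"Accuracy on unseen data: "
      f"{accuracy_score(y_test, model.predict(X_test)):.2f}")
```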

Speed, Scale and Learning Architectures: Another critical property of AI is its unprecedented speed. Electronic signals travel more than a million times faster than brain signals. Via our modern digital infrastructure, machine-based intelligence also attains new scales.

Since AI improves with the volume of data, you should always centralize learning in your organization, even while you continue to act locally. A useful metaphor in this regard is autonomous vehicles: They drive autonomously, but individual cars don’t learn. Instead, they transmit their data to a central point, where the algorithm learns from all the data and is tested; the new version is then downloaded (i.e., copied and distributed) to all vehicles at specified intervals.
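The following hypothetical sketch mirrors that fleet metaphor: data are pooled centrally, a new model version is trained once on everything, and a copy is distributed back to each vehicle. All names are placeholders, and retrain() stands in for a real training job.

```python
# Hypothetical sketch of 'learn centrally, act locally'. The retrain()
# step stands in for a real training job; names are illustrative.

fleet_logs = {
    "vehicle_1": [("sensor_frame_a", "brake")],
    "vehicle_2": [("sensor_frame_b", "steer_left")],
}

def retrain(pooled_data):
    """Placeholder for training and testing on all collected data."""
    return {"version": 2, "trained_on_samples": len(pooled_data)}

# 1. Pool all data at a central point.
pooled = [sample for logs in fleet_logs.values() for sample in logs]

# 2. Learn once, centrally, from everything the fleet has seen.
new_model = retrain(pooled)

# 3. Distribute (copy) the new version back to every vehicle.
deployed = {vehicle: new_model for vehicle in fleet_logs}
print(deployed)
```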

Deep Learning and Deep Reinforcement Learning: It is inherent to inductive learning that there cannot be a universal ‘best’ algorithm: anything could be a pattern. And indeed, we use a wide variety of algorithms in many business applications. However, for specific problems, some algorithms currently work better than others. Most famously, neural networks unexpectedly led to drastic improvements in computer vision and natural language processing (NLP) around 2012 and, rebranded as ‘Deep Learning’, re-emerged as the mainstream AI technology. Vision and language are not only the longest-standing hard problems in AI but are also of enormous practical importance: Vision allows machines to interact with the physical world, and language allows them to interact with humans.

Building on those advances, Deep Reinforcement Learning (DRL), an even more advanced technique that replaces labeled data with sparser reward feedback, has led to dramatic breakthroughs in games (with superhuman performance across all classical games) as well as in robotics. DRL is just beginning to enter the business world, primarily via controlling machines. However, while Deep Learning approaches can be very powerful when data are plentiful, their inner workings are often not easily explained (this is the ‘black box’ problem).
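For a feel of how learning from sparse rewards works, here is a toy tabular example (classic Q-learning; real DRL replaces the table with a neural network). The task and hyperparameters are invented for illustration: an agent must learn to walk right along five cells to reach a reward in the last one.

```python
# Toy example of learning from rewards rather than labels (tabular
# Q-learning). Task: learn to walk right to the reward in the last cell.
import random

n_states, actions = 5, [-1, +1]          # cells 0..4; move left or right
q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.1    # illustrative hyperparameters

for episode in range(200):
    state = 0
    while state < n_states - 1:
        # Explore occasionally; otherwise take the best-known action.
        if random.random() < epsilon:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda act: q[(state, act)])
        nxt = min(max(state + a, 0), n_states - 1)
        reward = 1.0 if nxt == n_states - 1 else 0.0   # sparse feedback
        best_next = max(q[(nxt, act)] for act in actions)
        q[(state, a)] += alpha * (reward + gamma * best_next - q[(state, a)])
        state = nxt

print("Learned move per cell:",
      [max(actions, key=lambda act: q[(s, act)]) for s in range(n_states - 1)])
```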

Scaling AI: Since the programmer specifies only the ‘learning rule’, AI algorithms at their core are comparatively simple and the basic concepts quite accessible. The flip side, however, is that such algorithms require training data to become useful.

This fact has critical consequences for leveraging AI: Because you cannot isolate the software from the data, modularization is difficult and adequate tooling has not yet been developed. As a result, successful pilots are often deceptively easy, but scaling and maintaining AI becomes fiendishly hard. One must keep track of the ‘data versions’ the algorithm was trained on and continuously manage the so-called ML Pipeline as a developer-to-user workflow.
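One piece of that bookkeeping can be made concrete: recording exactly which data version a model was trained on. The sketch below fingerprints the training data with a hash and stores it in a registry entry; all names and numbers are invented for illustration.

```python
# Hypothetical sketch: record the exact 'data version' a model was
# trained on so it can be tracked through the ML pipeline.
import hashlib
import json

def data_version(rows):
    """Fingerprint the training data so training runs are traceable."""
    payload = json.dumps(rows, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()[:12]

training_rows = [{"feature": 1.0, "label": 0}, {"feature": 2.5, "label": 1}]

registry_entry = {
    "model_name": "churn_predictor",          # illustrative name
    "model_version": "1.4.0",                 # illustrative version
    "data_version": data_version(training_rows),
    "accuracy_on_unseen_data": 0.91,          # illustrative metric
}
print(registry_entry)
```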

Mastering AI at Scale remains one of the toughest business challenges today.

Make-or-Buy Decisions: The intertwining of data and algorithms also means that AI is not plug-and-play. This, combined with the data-dependent, statistical nature of the predictions, makes the contractual management of tech suppliers being considered for ‘building’ and possibly ‘maintaining’ an AI application particularly tricky (especially as the data they require often belong to you). ‘Make-or-Buy’ is no longer a clear-cut decision; it is replaced by a continuum of partnership structures.

Bias, Risk, and Ethics: First the good news: AI is incredibly powerful at identifying outliers and detecting risks, as well as acting on them quickly. This is why AI has become indispensable in detecting fraud and money laundering, assessing compliance, and more. On the downside, AI introduces its own risks: For instance, cybersecurity becomes even more business-critical once actions have been automated. Also, the data AI was trained on determine its predictions and actions. If the training data are biased towards men, the algorithm will be as well. While everyone understands that this requires some care, some wonder why it has become such a big issue.

One reason is AI’s unprecedented speed and scale, which dramatically amplify mistakes. Another is the fact that everything is measured and can be changed by re-engineering (contrary to most human actions), thus inviting controls. Moreover, many dilemmas—for example, the famous ‘trolley problem,’ which presents the ethical nightmare of having to choose between pulling a lever that redirects a trolley so that it kills one person and letting the trolley stay its course so that it kills five people—have never been faced by humans, so we lack ethical guidelines.

In the past, rules such as ‘equal treatment’ have not required hyper-precise definitions, but with AI one is forced to choose among methods for achieving this goal: groups differ, and not everyone can be treated exactly the same, so is ‘on average’ enough? However, once you have learned to ‘interpret the results’ of AI (see above), you will be able to assess the business risks and ethical impact of many trade-offs. This will not solve all dilemmas – in particular, all leading language models trained on internet data reveal how biased our communication still is today – but you will be able to make informed judgment calls within the new business environment and trigger the appropriate countermeasures.
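To make the ‘on average’ question tangible, the sketch below computes one common reading of it: comparing a model’s positive-decision rate across groups (often called demographic parity). The decisions are invented; whether the resulting gap is acceptable is exactly the judgment call described above.

```python
# Sketch of one 'on average' reading of equal treatment: compare the
# positive-decision rate across groups (demographic parity). Data invented.

decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

rates = {}
for group in {d["group"] for d in decisions}:
    subset = [d for d in decisions if d["group"] == group]
    rates[group] = sum(d["approved"] for d in subset) / len(subset)

gap = max(rates.values()) - min(rates.values())
print(f"Approval rates per group: {rates}; gap: {gap:.2f}")
# The metric only makes the trade-off visible; deciding what gap is
# acceptable remains a human judgment call.
```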

AI and Humans: Paradoxically, artificial intelligence seems to be more clearly defined than human intelligence. Note, however, that nowhere have we attempted to define the ‘level of intelligence’ of AI—a question often at the center of discussions of human intelligence—but only AI’s ‘performance on specific problems’. This is deliberate. After all, ‘submarines don’t swim’—that is, machines solve problems differently from humans. We no longer measure cars against horses, for example; we accept that the two do different things. The same attitude is appropriate regarding AI.

Nevertheless, one can ask how current AI differs from human intelligence. We already mentioned speed and scale. Another difference is that AI is trained on narrow fields (i.e., ‘Artificial Narrow Intelligence’); after all, we want AI to perform specific tasks. Within the narrow confines of those fields, AI can then exhibit ‘strategic thinking and creativity’, as famously demonstrated in chess and Go. One flaw of most current AI is that it remains quite data- and energy-hungry, although in this respect we are seeing continuous progress. Also, what AI lacks entirely is ‘common sense’—taking actions based on obvious insights from other areas of real life. An AI trained purely on financial data, for example, will not react to a fire in the next room. Whether we will ever see a broader ‘Artificial General Intelligence’ may be a stimulating topic for fireside chats, but it currently has no board relevance.