The Unexplainability Myth

by Paul Signorelli, Chief Solution Architect

One of the criticisms of machine learning algorithms – particularly deep learning algorithms – is that they are black boxes. In other words, the reason they reached a particular solution cannot be explained. This leads to a host of problems – from compliance, to risk/reward determination, to issues of discrimination – and helps feed a general distrust of AI.

But is it really true that we can never explain “why” a machine learning algorithm reached the conclusion it did? Unexplainability may not be a complete myth (despite what the title of this blog suggests), but it is possible to understand why algorithms reach their conclusions. In fact, the data scientist who trained and tuned the model, discovering which inputs gave the best outputs, can readily explain why the model reaches the result it does. That knowledge just needs to be exposed.

To understand this better, let us start with the key reason people say ML algorithms are black boxes. From a mathematical perspective, I believe it stems from the fact that these algorithms generally rely upon stochastic processes; in other words, the process for arriving at a solution involves randomness (random starting weights, random ordering of the training data) rather than following a single fixed sequence of steps.

This means that for an ML algorithm – for example, a deep neural net (DNN) – each time it is trained it will take a different path to the solution. The individual optimization steps will differ with each run.

Imagine a mountain with an infinite number of paths to the top. If you noticed a hiker at any given point on the journey, you would not know the path they took to that point. DNNs operate the same way. But at the end of the day, given the same data and hyperparameters, they will always converge to nearly the same answer. In fact, if they did not, they would be too unreliable to use.
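
To make that concrete, here is a minimal sketch (using scikit-learn on synthetic data, not any particular production model) of the idea: two training runs of the same small neural network differ only in their random seed – that is, in their starting weights and the order in which they see the data – yet they land on nearly the same test accuracy.

```python
# Minimal sketch: same data, same hyperparameters, different random seeds.
# The optimization paths differ, but the final answers are nearly identical.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic data standing in for real business data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for seed in (1, 2):
    # Only the stochastic elements (weight initialization, mini-batch
    # shuffling) change with the seed.
    model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500,
                          random_state=seed)
    model.fit(X_train, y_train)
    print(f"seed {seed}: test accuracy = {model.score(X_test, y_test):.3f}")
```

The two hikers take different routes up the mountain, but they arrive at essentially the same summit.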

So, if we cannot know how the hiker got up the mountain, how is it that we can explain what drives an ML algorithm to its solution? It goes back to the work of the data scientist configuring the model input. A big part of that process is testing candidate input features to determine their impact on the model's output. If a particular feature brings the model's predictions closer to the ground truth, a causal relationship may exist, and the data scientist will work to test and verify it.
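
One common way to expose that work – a hedged illustration, not r4's actual method – is permutation importance: shuffle one feature at a time and see how much the model's score suffers. A feature that genuinely drives the output hurts the score badly when shuffled; an irrelevant one barely matters. The feature names below are invented for illustration.

```python
# Sketch of permutation importance on synthetic data with hypothetical
# feature names (promo_spend, median_income); illustrative only.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1500
promo_spend = rng.uniform(0, 100, n)      # hypothetical strong driver
median_income = rng.uniform(30, 120, n)   # hypothetical weaker driver
noise_feature = rng.normal(size=n)        # deliberately uninformative
y = 2.0 * promo_spend + 0.5 * median_income + rng.normal(scale=5, size=n)

X = np.column_stack([promo_spend, median_income, noise_feature])
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

for name, score in zip(["promo_spend", "median_income", "noise_feature"],
                       result.importances_mean):
    print(f"{name:>15}: importance = {score:.3f}")
```

Numbers like these are part of the knowledge the data scientist accumulates while tuning the model, and they are the kind of knowledge that can be exposed to answer the “why” question.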

Imagine if we gave the hiker clues about the best path up the mountain. Which paths are too steep or too rocky to navigate? Where are the best shortcuts? Think of these hints as information fed into a model, designed to help it better understand and map the real-world relationships that exist in the data. This provides meaning to the model. In the business AI domain, these hints or features could be drawn from data such as market demographics, a business's promotional activities, the socioeconomic conditions surrounding its operations, and, especially in the age of COVID-19, health factors throughout the market.
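
As a hypothetical sketch of how such hints become model input, the snippet below joins invented demographic, promotional, and health columns onto a small sales history; every table and column name here is made up for illustration.

```python
# Hypothetical feature assembly: all table and column names are invented.
import pandas as pd

sales = pd.DataFrame({
    "region": ["north", "south", "north", "south"],
    "week": [1, 1, 2, 2],
    "units_sold": [120, 95, 140, 90],
})
demographics = pd.DataFrame({
    "region": ["north", "south"],
    "median_income": [72_000, 58_000],   # market demographics
    "covid_case_rate": [0.8, 2.3],       # health factors in the market
})
promotions = pd.DataFrame({
    "region": ["north", "south"],
    "week": [2, 2],
    "promo_discount": [0.15, 0.10],      # promotional activity
})

# Join the hints onto the sales history to form the model's input features.
features = (sales
            .merge(demographics, on="region", how="left")
            .merge(promotions, on=["region", "week"], how="left")
            .fillna({"promo_discount": 0.0}))
print(features)
```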

At the end of this process, you are left with knowledge of the data elements that drive the model's solution, and that knowledge carries real meaning.

At r4, we have always tried to focus on understanding these types of relationships, in large part because AI business applications demand it. Businesses cannot operate the way self-driving cars do, where recognized patterns can be turned directly into action. Businesses must instead rely heavily on converting statistical analysis into an understanding of their customers within the context of their business operations. They need to convert AI output into human action in the form of marketing campaigns, sales pitches, product development, brand development, and the like. These functions require appealing to people on a human level, and therefore they require output that humans can interpret and act on.
