One of the most important unresolved challenges raised by the current development of Artificial Intelligence in the productive sector is the true systemic integration of data into decision-making processes. In fact, it is still very common that in a large number of organizations, despite a certain level of data maturity, and even when the available data are analyzed in a routine and almost automated way and the results of those analyses are reported automatically and periodically, a good part of the decisions continue to be made without fully taking these results into account. Data-driven organizations require a change of culture that affects many aspects of the organization and all the people who make it up, but, beyond that, they require an adoption of Artificial Intelligence that facilitates this transformation.
And one of the most critical aspects of adopting AI at an organizational level is having explainable models. If we intend to use the results of an AI to guide the organization, and for decisions to be based on these results, it will not be enough to guarantee that the AI predicts well and is almost never wrong. Even with a model that works well in 99% of cases, we will not be able to integrate it properly into our decision-making processes unless the model is explainable. That is, the decision-maker must be able to understand (without being an engineer, or knowing anything about AI or statistics) the reason behind the prediction provided by the AI model, and must be able to support the decision with an argument that is understandable and convincing enough for the other humans who have to validate the decision or take it on.
Since its earliest origins, in the 1950s, AI has included a family of methods known as connectionist or subsymbolic methods, which are based on computational metaphors of collective intelligence systems found in nature. We call them bio-inspired algorithms. Almost all of them rest on the principle of combining many small pieces that each do something simple and easy and that, when put together, can solve very large problems. Among them, artificial neural networks stand out: inspired by the functioning of the brain and the transmission of impulses between neurons, they can approximate highly complex functions with great precision, but their results are not easy to justify.

Artificial neural networks, like their successors, deep networks (from deep learning) and generative AI, are known as black box systems, because they do not easily explain how they construct their predictions, and this, of course, strongly compromises the use of this type of model to support strategic or tactical decisions. Note that a black box model does not mean that we do not know what the machine does, but that we cannot explain it easily. In the case of artificial neural networks, for example, it is perfectly possible to write down mathematically the formula that generates the network's outputs from the input data. This equation is known and can be determined. The problem is that it is so complicated that having it written down is of little use, because it still does not allow us to understand why the prediction is what it is. Look at the example of a ridiculously small artificial neural network, with no hidden layers, just two neurons with a logistic activation function and only three input variables, and you will understand how bad this equation must look for networks with hundreds of neurons per layer and many intermediate layers.
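As a hedged illustration (the exact architecture of the example referred to above may differ), this is roughly what the prediction formula looks like for a toy network with a logistic activation over three input variables, and how quickly it grows once even a tiny hidden layer is added:

```latex
% Assumed toy network: three inputs x_1, x_2, x_3, weights w_i, bias b,
% logistic (sigmoid) activation \sigma. The prediction \hat{y} is already non-linear:
\[
\hat{y} \;=\; \sigma\!\bigl(b + w_1 x_1 + w_2 x_2 + w_3 x_3\bigr)
        \;=\; \frac{1}{1 + e^{-(b + w_1 x_1 + w_2 x_2 + w_3 x_3)}}
\]
% Adding a single hidden layer with just two logistic neurons already nests the expression:
\[
\hat{y} \;=\; \sigma\!\Bigl(c + v_1\,\sigma\bigl(b_1 + \textstyle\sum_{i=1}^{3} w_{1i}\,x_i\bigr)
                              + v_2\,\sigma\bigl(b_2 + \textstyle\sum_{i=1}^{3} w_{2i}\,x_i\bigr)\Bigr)
\]
% With hundreds of neurons per layer and many intermediate layers, the closed-form
% expression still exists, but it is practically unreadable: that is what "black box"
% means in this context.
```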

Explainable AI is the branch of AI that works to complement these models (which can capture very high complexity with great precision) with a subsequent argumentative layer that generates explanations for the results and facilitates this transition to data-driven organizations. Some authors use numerical indicators to determine the variables that contribute most to a prediction, such as Shapley values, and others design sensitivity experiments on already trained artificial neural networks that allow us to visualize the impact each variable has on the network's response. The point is that we need to find a way to interpret the why of these predictions in order to close the loop.
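A minimal sketch of the Shapley-value idea, assuming the shap library and a synthetic dataset (the model, features and parameters below are purely illustrative and do not come from the original text):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
import shap  # pip install shap

# Train a small "black box" neural network on synthetic data.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=1000,
                      random_state=0).fit(X, y)

# KernelExplainer treats the model as a black box: it estimates, for each
# instance, how much each feature pushed the predicted probability above or
# below the average prediction over a background sample.
background = X[:50]
explainer = shap.KernelExplainer(lambda data: model.predict_proba(data)[:, 1],
                                 background)
shap_values = explainer.shap_values(X[:5], nsamples=200)

# shap_values[i, j] is the estimated contribution of feature j to instance i.
print(np.round(shap_values, 3))
```

The appeal of this kind of model-agnostic attribution is precisely that it works on top of an already trained network, producing per-variable contributions that a decision-maker can read as an argument, rather than requiring them to inspect the network itself.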
If we want a prediction to have consequences in the real world, for someone to make one (or several) decisions and modify reality, that someone needs to understand why the machine proposes that prediction, and to have arguments to decide whether or not to trust that result. No one takes risks in uncertain situations, and we must also keep in mind that all predictive AI models have quality metrics that are always below 100%, which means that they are wrong in some cases.
From another perspective, some unsupervised techniques also do not generate results that can be delivered directly. This is the case of clustering, which is powerful at detecting patterns in data, but then requires a whole post-processing stage to interpret what these clusters mean, which is not trivial at all and requires a lot of effort, as the sketch below illustrates.
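As a hedged illustration of that post-processing effort (the Iris dataset and scikit-learn are used here purely as stand-ins; none of these choices come from the original text): the clustering itself is essentially one line, while making sense of the anonymous cluster labels requires profiling them against the overall data and, ultimately, domain knowledge.

```python
import pandas as pd
from sklearn.datasets import load_iris
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Step 1: cluster the data -- the "easy", automated part.
data = load_iris(as_frame=True)
X = data.data
X_scaled = StandardScaler().fit_transform(X)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_scaled)

# Step 2: post-process -- the clusters arrive as anonymous integer labels,
# so we profile each one against the global average to start interpreting them.
profile = X.assign(cluster=labels).groupby("cluster").mean()
deviation = (profile - X.mean()) / X.std()  # how each cluster differs, in std units

print(profile.round(2))    # average feature values per cluster
print(deviation.round(2))  # which features make each cluster distinctive

# Turning these numeric profiles into named, business-meaningful segments
# still requires domain knowledge and manual review.
```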
