
The explainability problem in the real economy

05 April 2022
Dr Olga Shvarova, Chief Innovation Officer
Black-box AI models invite questions about whether we can rely on this technology outside research environments, and whether their predictions can be trusted enough to be used in the real economy. A new field of AI model interpretability has emerged over the last few years to address these concerns and to test the usability of AI for business performance predictions.
 
AI models can be used for business analytics if they leverage information from the enterprise (data about customers, transactions, historical records, etc.) as inputs or features. Features are used by the model to determine the output, and model explainability requires understanding which features influence a prediction. Generally, two approaches are used: local explainability, which describes how the model arrived at a single prediction (e.g. a single customer’s churn score), and global explainability, which describes which features are most useful for making predictions across the whole dataset.
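
To make the distinction concrete, here is a minimal sketch of the two views on a hypothetical churn model, assuming scikit-learn. The feature names and synthetic data are illustrative only, and the single-customer “local” effect shown here is a crude what-if comparison, not a formal attribution method.

```python
# Minimal sketch of local vs. global explanations for a hypothetical churn model.
# Assumes scikit-learn; the feature names and data are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
feature_names = ["transaction_frequency", "product_count", "tenure_months"]
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * rng.normal(size=500) < 0).astype(int)  # churn driven mainly by feature 0

model = RandomForestClassifier(random_state=0).fit(X, y)

# Global explainability: which features matter across all predictions?
global_imp = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(feature_names, global_imp.importances_mean):
    print(f"global importance of {name}: {score:.3f}")

# Local explainability (crude illustration): how does one customer's prediction
# change if a single feature is replaced by its average value?
customer = X[[0]]
base_score = model.predict_proba(customer)[0, 1]
for i, name in enumerate(feature_names):
    perturbed = customer.copy()
    perturbed[0, i] = X[:, i].mean()
    delta = base_score - model.predict_proba(perturbed)[0, 1]
    print(f"local effect of {name} on this customer's churn score: {delta:+.3f}")
```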
 
Example features that influence a customer churn prediction are the frequency of transactions and the number of product types bought. In a coffee shop, frequency of transactions is a good predictor of customer churn but the number of products is not, while for a bank, the number of products (mortgage, loan, bank account, etc.) is a good predictor of churn but the frequency of transactions is not. A model for a coffee shop and a model for a bank can be trained on data sets with the same features, but each will need to learn to rely on different features to make accurate predictions.
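
A small sketch of this point, under the assumption of two synthetic data sets: the same model class, trained on a hypothetical coffee shop and a hypothetical bank, ends up relying on different features.

```python
# Sketch: the same model class trained on two hypothetical businesses learns to
# rely on different features. Data is synthetic and purely illustrative.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
n = 1000
freq = rng.poisson(5, n)          # transaction frequency
products = rng.integers(1, 5, n)  # number of product types held

# Coffee shop: churn is driven by how often the customer visits.
churn_coffee = (freq < 3).astype(int)
# Bank: churn is driven by how many products the customer holds.
churn_bank = (products < 2).astype(int)

X = np.column_stack([freq, products])
for label, y in [("coffee shop", churn_coffee), ("bank", churn_bank)]:
    model = GradientBoostingClassifier(random_state=0).fit(X, y)
    imp = model.feature_importances_
    print(f"{label}: frequency={imp[0]:.2f}, products={imp[1]:.2f}")
```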
 
A main goal of model explainability is to understand the impact of including a feature in a model. The straightforward way to compute this is to measure each feature’s contribution to the model’s performance by comparing the performance of the full model with its performance when that feature is removed. Doing this for every feature requires many retraining cycles, which is too time-consuming and too expensive, so Microsoft engineers, who created one of the first explainability frameworks for business processes, use Shapley values to estimate each feature’s contribution, including interactions, in a single training cycle.
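
For intuition, the expensive baseline described above (retraining the model without each feature in turn) could look something like the following sketch; it is illustrative only and not the Microsoft implementation.

```python
# Sketch of leave-one-feature-out importance: retrain the model once per feature
# and compare performance with and without it. This is the expensive baseline
# that Shapley-based methods are designed to avoid; details here are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(600, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=600) > 0).astype(int)

def score(features):
    model = RandomForestClassifier(random_state=0)
    return cross_val_score(model, X[:, features], y, cv=3).mean()

full_score = score(list(range(X.shape[1])))
for i in range(X.shape[1]):
    without_i = [j for j in range(X.shape[1]) if j != i]
    drop = full_score - score(without_i)              # performance lost without feature i
    print(f"feature {i}: accuracy drop {drop:+.3f}")  # one full retrain per feature
```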
 
Shapley values are a technique from game theory, where the goal is to fairly attribute the gains and costs of a coalition to the individual actors working in it. In machine learning, the “actors” are features, and the Shapley value algorithm can estimate each feature’s contribution even when it interacts with other features.
 
More detail about Shapley analysis can be found here: GitHub – slundberg/shap: A game theoretic approach to explain the output of any machine learning model.
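
As a minimal usage sketch of the shap library linked above (not the article’s own code), assuming a tree-based model and hypothetical churn features:

```python
# Minimal sketch of Shapley-value explanations with the shap library linked above.
# Assumes `pip install shap`; the model, data and feature names are hypothetical.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)
X = pd.DataFrame({
    "transaction_frequency": rng.poisson(5, 500),
    "product_count": rng.integers(1, 5, 500),
    "tenure_months": rng.integers(1, 60, 500),
})
# Hypothetical churn score, mostly driven by transaction frequency.
y = 1.0 / (1.0 + X["transaction_frequency"]) + 0.05 * rng.normal(size=500)

model = RandomForestRegressor(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)       # Shapley-value contributions for tree models
shap_values = explainer.shap_values(X)      # one contribution per feature per row

# Local explanation: each feature's contribution to one customer's churn score.
print(dict(zip(X.columns, shap_values[0])))
# Global explanation: mean absolute contribution of each feature.
print(dict(zip(X.columns, np.abs(shap_values).mean(axis=0))))
```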
 
Another technique used by Microsoft for other types of models, e.g. deep learning neural networks, is the method of integrated gradients. Most deep learning models are implemented using neural networks, which learn by fine-tuning the weights of the connections between the neurons in the network. Integrated gradients use the gradients of the model’s output with respect to its inputs to explain how different inputs influence the result. This provides enough explainability of the model output to calculate the aggregated impact of features for each customer or group of customers.
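
For intuition only, the following sketch applies the integrated-gradients formula to a tiny hand-written model (a single logistic unit) rather than a real deep network; in practice a library such as Captum would be used, and the weights and inputs here are made up.

```python
# Sketch of integrated gradients on a tiny hand-written model (one logistic unit),
# to show the idea only; real deep-learning models would use an attribution library.
# Weights and inputs are made up for illustration.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.array([1.5, -0.8, 0.3])     # hypothetical learned weights
b = -0.2

def model(x):
    return sigmoid(x @ w + b)

def grad(x):
    p = model(x)
    return p * (1.0 - p) * w       # analytic gradient of the output w.r.t. the inputs

def integrated_gradients(x, baseline, steps=50):
    # Average the gradient along the straight path from baseline to x,
    # then scale by (x - baseline), per the integrated-gradients formula.
    alphas = np.linspace(0.0, 1.0, steps + 1)[1:]
    path_grads = np.array([grad(baseline + a * (x - baseline)) for a in alphas])
    return (x - baseline) * path_grads.mean(axis=0)

x = np.array([2.0, 1.0, -1.0])     # one customer's (made-up) features
baseline = np.zeros(3)             # all-zero reference input
attributions = integrated_gradients(x, baseline)
print("attributions:", attributions)
# Completeness check: attributions should roughly sum to F(x) - F(baseline).
print("sum:", attributions.sum(), "vs", model(x) - model(baseline))
```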
 
For business managers who use AI models to optimise their processes and predict their business’s future trajectory, this approach also helps identify the trends and patterns in their customer base that deserve attention when designing a development strategy. As we can see, beyond being an interesting theoretical problem, explainability tools may become a vital part of running a successful business, and may replace much of the human effort spent on business analysis in the near future.
 
 