Zinia – Transparent decisioning, made easy
Without technology, our natural intelligence relies on intuition, which fails when handling large amounts of complex and rich data. Artificial intelligence (AI) solutions allow us to handle data quantifiably and to extract information from it. Intuitive interfaces that display and explain such information can help us change and improve the way we understand data and make decisions based on it. Explainability opens up the potential for a symbiotic relationship with technology, allowing us to stay true to our goals and objectives whilst leveraging the most advanced and automated solutions from AI.
Data science and artificial intelligence are becoming ever more prevalent, as technological advances lead more businesses to turn to AI to complete complex tasks. These range from analysing large amounts of data to make predictions, to automating existing processes for increased efficiency. The greater the complexity of the problem, the greater the need for explainable AI to help the user understand how and why an algorithm reached a specific result.
AI models often appear as black boxes that can hardly be understood, even by data scientists. This plug-and-play approach is, sadly, how AI techniques are most often used and abused, with no deep understanding of their inner workings, their power or their limitations. Explainable AI allows such models to be leveraged, and further serves to overcome this lack of intuitive understanding by communicating how your data is being analysed and how specific results are produced.
AI can additionally help to make decisions, and ideally optimal ones. It is also important to provide transparent algorithms so that key decision-makers can trust the software and the insights it provides. This may include explaining which techniques are being employed to generate a set of results, giving the user the necessary information on the purpose behind the findings as well as any method limitations that must be factored in. Eventually, this approach will enable performance certificates for AI algorithms, not dissimilar to energy efficiency assessments of buildings and appliances, or food hygiene certificates, and will hence be important for the governance of AI technology. As such, one of the goals of explainable AI (XAI) is to provide a meaning behind results that can lead to actionable instructions for decision-makers.
XAI-enabled transparency serves as one step towards communicating AI to non-technical users, with the other being an explanation of what the generated results show and how they have been reached. Explainable AI hence empowers non-specialist users to become the key decision-makers, democratising this technology for all businesses. The significance of conveying why a certain decision has been made extends beyond the user to the client, particularly when the result is adverse or perceived as harmful or unfair.
The efficacy of an explanation is determined by how well it communicates a sense of understanding of an outcome to humans. For an AI platform, understanding can be achieved by providing the underlying explanatory factors behind why a result or a decision has been reached. This uncovers the chain of events between data input and AI output, so that customers can retrace the data analysis process that led to an outcome of interest. This information empowers the user to debug any outcomes that are misaligned with their goals, so that model performance is improved. Thus, with explainable AI, reciprocal learning can be implemented: customers are able to learn from and understand their insights in order to further maximise their objectives.
Explainability is thus important in general, and specifically for Zinia and its users. With automated credit decisioning, for example, explaining the decisions put forward by the machine is essential from both compliance and ethical perspectives. Customer decision letters provide an explanation of why an individual has received a certain credit rate.
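To make this concrete, the sketch below shows one way reason codes for a decision letter could be derived from the per-feature contributions of a simple scoring model. It is illustrative only, not Zinia's actual implementation, and the feature names and data are synthetic.

```python
# Minimal sketch (assumed, not Zinia's implementation): turning per-feature
# contributions of a linear credit model into reason codes for a decision letter.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                       # synthetic applicant data
y = (X @ np.array([1.5, -1.0, -0.8]) + rng.normal(size=500) > 0).astype(int)
feature_names = ["income", "missed_payments", "credit_utilisation"]  # hypothetical

model = LogisticRegression().fit(X, y)

def reason_codes(applicant, top_k=2):
    """Rank features by how strongly they pushed this applicant's score down."""
    contributions = model.coef_[0] * (applicant - X.mean(axis=0))
    order = np.argsort(contributions)               # most negative contribution first
    return [feature_names[i] for i in order[:top_k]]

print(reason_codes(X[0]))
```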
Many AI platforms today do not align their data analysis strategies with the user's goals. By contrast, Zinia employs human-centred AI, which empowers users to drive the creation of the analysis models in a clear and efficient way. The resulting feedback and suggested actions are therefore aligned with the desired business KPIs, so users can make data-driven decisions that lead to more successful outcomes.
Zinia achieves general explainability and the "reciprocal learning" mentioned above using an interface that is modular and expandable, dividing the AI architecture into a number of distinct but integrated functions, each of which provides data explanation, prediction or decision making.
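The snippet below is a minimal, assumed sketch of what such a modular pipeline could look like, with each module enriching a shared analysis context; it is not Zinia's actual code, and the module names and logic are invented for illustration.

```python
# Minimal sketch (assumed design, not Zinia's code) of a modular decisioning
# pipeline: each module reads a shared context and adds either an explanation,
# a prediction, or a decision to it.
from abc import ABC, abstractmethod

class Module(ABC):
    @abstractmethod
    def run(self, context: dict) -> dict:
        """Read from and enrich the shared analysis context."""

class DataExplanation(Module):
    def run(self, context):
        context["explanation"] = f"{len(context['data'])} records analysed"
        return context

class Prediction(Module):
    def run(self, context):
        context["prediction"] = sum(context["data"]) / len(context["data"])
        return context

class Decision(Module):
    def run(self, context):
        context["decision"] = "approve" if context["prediction"] > 0.5 else "refer"
        return context

pipeline = [DataExplanation(), Prediction(), Decision()]
context = {"data": [0.2, 0.9, 0.7]}
for module in pipeline:
    context = module.run(context)
print(context)
```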
For instance, causal explanations are used to provide reasons behind observations in a comprehensible manner, and Zinia applies this concept through causal relationship networks that describe the correlations between the user's key data features. A combination of textual and visual results ensures that the different variables are recognised and the directions of influence are apparent. Consequently, the user is able to directly discover useful insights and gain confidence in the results, as there is no need for more abstract or empirical analysis that may introduce bias.
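As an illustration of the idea (not of Zinia's production method), the sketch below builds a small signed relationship network from pairwise correlations, with the sign of each link giving the direction of influence; the feature names and threshold are invented for the example.

```python
# Minimal sketch (illustrative only): a signed relationship network from pairwise
# correlations, keeping links above a strength threshold; the sign is the direction.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "income": rng.normal(50, 10, 200),
    "spend": rng.normal(30, 5, 200),
})
df["savings"] = df["income"] - df["spend"] + rng.normal(0, 2, 200)

corr = df.corr()
edges = []
for i, a in enumerate(corr.columns):
    for b in corr.columns[i + 1:]:
        strength = corr.loc[a, b]
        if abs(strength) > 0.3:                     # keep only meaningful links
            sign = "positive" if strength > 0 else "negative"
            edges.append((a, b, sign, round(strength, 2)))

for edge in edges:
    print(edge)
```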
Zinia also builds trust between decision-makers and our technology by engaging them in the data analysis process, which is only achievable with our new human-centred AI. Our platform automatically identifies key features from raw datasets and empowers non-specialist users to create a model that optimises a business objective. Moreover, the result summaries clearly display insights from your data, as well as the accuracy that a specific model has achieved on a dataset. Feature importance and PCA variance graphs display the underlying explanatory factors behind the presented observations. With a high level of visibility at both the input and output ends of the data analysis process, the user receives a comprehensive view of how a decision has been reached. In our case study, Zinia's explanations can help clients better understand their lending agreement by revealing the wider business outcomes chain that led to this result. Furthermore, our feature importance graphs can inform actionable instructions, as they highlight the features that clients should optimise to advance their interests.
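The sketch below illustrates these two explanatory views on synthetic data, using standard scikit-learn tools as stand-ins for Zinia's own models: feature importances from a fitted classifier, and the variance explained by each principal component.

```python
# Minimal sketch (assumed, not Zinia's pipeline): feature importances and
# PCA explained variance on a synthetic classification dataset.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=300, n_features=5, random_state=0)

forest = RandomForestClassifier(random_state=0).fit(X, y)
print("feature importances:", forest.feature_importances_.round(3))

pca = PCA().fit(X)
print("explained variance ratio:", pca.explained_variance_ratio_.round(3))
```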
The data explanation modules are dedicated to revealing trends and insights in the provided data. This follows our principle of keeping humans "in the loop": users are better equipped to use the extensive prediction and decision-making models to maximise their business KPIs. The clustering results highlight data points with similar characteristics by grouping them into regions or clusters, thereby identifying which data features are closely correlated and which are disassociated (far apart) from each other. Our causal relationship results also display feature correlation, but only two variables are considered at a time. Instead of a spatial measurement, the correlation is determined through the magnitude and direction of influence. When applied to all features in the dataset, a network of positive and negative links is established, together with the directions in which they apply.
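For illustration only, the following sketch groups synthetic records into two clusters and reports the distance between the cluster centres, the kind of spatial measure described above; the data and the number of clusters are assumptions, not Zinia outputs.

```python
# Minimal sketch (illustrative only) of the clustering view: grouping records
# with similar characteristics and reporting how far apart the groups sit.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances

rng = np.random.default_rng(2)
data = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
print("cluster sizes:", np.bincount(kmeans.labels_))
print("centre-to-centre distances:\n",
      pairwise_distances(kmeans.cluster_centers_).round(2))
```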
Handling bias is another emerging theme that will require data scientists' focus. This factor cuts across data, models, decisions and outcomes. For example, explainability is not just about why a decision was made, but also, once the decision has been made and its outcome registered, about how we explain the entire business outcomes chain. The collected dataset could itself be biased, so the AI built on it will need to ensure that it does not carry over (and amplify) these biases. Zinia provides early features that allow bias detection and mitigation.
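As a toy illustration of what such a check might involve (not a description of Zinia's bias features), the sketch below computes a demographic parity difference: the gap in approval rates between two groups defined by a synthetic sensitive attribute.

```python
# Minimal sketch (illustrative only): a simple bias check via the demographic
# parity difference between two groups of a synthetic sensitive attribute.
import numpy as np

rng = np.random.default_rng(3)
group = rng.integers(0, 2, 1000)                    # sensitive attribute (0 or 1)
approved = rng.random(1000) < np.where(group == 1, 0.55, 0.45)

rate_0 = approved[group == 0].mean()
rate_1 = approved[group == 1].mean()
print(f"approval rate group 0: {rate_0:.2f}, group 1: {rate_1:.2f}")
print(f"demographic parity difference: {abs(rate_1 - rate_0):.2f}")
```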
Author: Professor A. Abate
Date: 16/03/2023