Investigating Avenues For Increasing The Explainability Of AI Systems
Publisher: Saudi Digital Library
Abstract
AI is increasingly being relied upon to provide solutions and predictions in many areas of human endeavor.
Examples include healthcare (predicting disease), engineering problems, insurance and driverless cars.
One important issue that has arisen is the lack of transparency of these AI systems: how do
these systems actually produce a result? Is it possible to interrogate them to convey to end users the
‘thinking’ behind a decision or prediction? Not having an understanding of why an AI system makes a
particular decision is unsettling and creates doubt about the use of AI. To address this, methods and
approaches are being developed to attempt to make AI systems more transparent, and this endeavor is
called explainable AI, or XAI. XAI looks at the output of an AI system and tries to provide an explanation
for the output which a human can understand and therefore make informed decisions about whether to
trust the AI system.
One of the sub-fields of XAI is explainable machine learning, or XML, which can be used to explain
machine learning models such as decision trees and neural networks. In this thesis, a decision tree
algorithm is coded to analyse telecom churn data to predict customer preferences, and an ‘off-the-shelf’
XAI technique is then used to explain the output of the decision tree. Additionally, another ML
approach is developed to analyse the same data, and its output is interpreted by the same XAI method.
We then discuss the performance of the XAI program with respect to the two ML methods to see how
effective it is in explaining their outputs. Moreover, we present an example of a system dialogue that
shows the interaction between the explainer and the explainee.
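The pipeline described above — fit an interpretable model on churn data, then turn its decision into an explanation a human can act on — can be sketched in miniature. The following is a toy, stdlib-only illustration, not the thesis implementation: the feature name `support_calls`, the synthetic records, and the one-level "stump" in place of a full decision tree are all assumptions made for brevity.

```python
# Toy sketch: fit a one-level decision tree (a "stump") on synthetic
# churn-style records, then render its decision as a plain-language
# explanation. Feature names and data are illustrative assumptions.

def best_stump(rows, labels, feature):
    """Find the threshold on `feature` that best separates churners."""
    best = (0.0, None)  # (accuracy, threshold)
    for row in rows:
        t = row[feature]
        preds = [r[feature] >= t for r in rows]
        acc = sum(p == y for p, y in zip(preds, labels)) / len(labels)
        if acc > best[0]:
            best = (acc, t)
    return best

def explain(feature, threshold, row):
    """Produce a human-readable reason for the stump's prediction."""
    if row[feature] >= threshold:
        return (f"Predicted churn because {feature} = {row[feature]} "
                f"is at or above the learned threshold {threshold}.")
    return (f"Predicted no churn because {feature} = {row[feature]} "
            f"is below the learned threshold {threshold}.")

# Synthetic data: customers with many support calls tend to churn.
rows = [{"support_calls": c} for c in [0, 1, 1, 4, 5, 6]]
labels = [False, False, False, True, True, True]

acc, threshold = best_stump(rows, labels, "support_calls")
print(explain("support_calls", threshold, {"support_calls": 5}))
```

A real run of the thesis pipeline would replace the stump with a trained decision tree and the `explain` function with an off-the-shelf XAI technique, but the shape of the output — a decision plus a reason a non-expert can check — is the same.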
To summarize the major tasks carried out during the course of this project: I have learned fundamental
concepts in XAI and XML. A wide range of topics regarding XAI algorithms was researched and
investigated, and I decided to focus on InterpretML. I wrote some code to statistically analyse a churn
data set and then used InterpretML to gain further insights. I considered and discussed the usefulness
and limitations of InterpretML. I improved the interface between the user and InterpretML by creating
a natural language dialogue system that shows the conversation between the explainer and explainee.
Finally, I read about and planned for different possible question types, such as contrastive questions.
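The explainer/explainee conversation described above, including the contrastive ("why not") case, can be sketched as a minimal dialogue loop. This is a hypothetical illustration, not the thesis system: the single-rule model, the feature name `support_calls`, and the question-matching patterns are assumptions made to keep the example self-contained.

```python
# Hypothetical sketch of an explainer/explainee dialogue over one
# prediction rule. The rule and question patterns are illustrative
# assumptions; the thesis dialogue system is richer than this.

RULE = {"feature": "support_calls", "threshold": 4, "positive": "churn"}

def explainer(question, customer):
    """Answer 'why' and contrastive 'why not' questions about a prediction."""
    value = customer[RULE["feature"]]
    q = question.lower()
    if q.startswith("why not"):
        # Contrastive question: what would need to change for the
        # opposite outcome to be predicted?
        if value >= RULE["threshold"]:
            return (f"For the model to predict no {RULE['positive']}, "
                    f"{RULE['feature']} would need to drop below "
                    f"{RULE['threshold']}.")
        return (f"For the model to predict {RULE['positive']}, "
                f"{RULE['feature']} would need to reach {RULE['threshold']}.")
    if q.startswith("why"):
        side = "at or above" if value >= RULE["threshold"] else "below"
        return (f"Because {RULE['feature']} = {value} is {side} "
                f"the threshold {RULE['threshold']}.")
    return "I can answer 'why ...' and 'why not ...' questions."

customer = {"support_calls": 5}
for question in ["Why was this customer flagged?",
                 "Why not the opposite prediction?"]:
    print("Explainee:", question)
    print("Explainer:", explainer(question, customer))
```

Note that the contrastive branch is checked before the plain "why" branch, since every "why not ..." question also begins with "why"; ordering the patterns this way is what lets one loop serve both question types.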