Explanation of Machine Learning/AI

Abstract

Explainable artificial intelligence (XAI) is a rapidly growing field that aims to develop techniques for making artificial intelligence (AI) and machine learning (ML) models comprehensible to humans. Although XAI techniques are maturing, one of the most significant challenges is the lack of uniform standards for interpreting ML models and for measuring the quality of those interpretations. In addition, little is known about the mechanisms behind ML decisions, which fuels public distrust of AI systems. This report discusses interpreting ML models by applying explainable methods and investigating their feature importance. It illustrates the issue with an experiment, implemented in Python, that explores three supervised ML methods (logistic and linear regression, decision trees, and support vector machines (SVM/SVR)) and builds six models to obtain understandable predictions on regression and classification datasets. For interpretation, it applies the LIME, Eli5, and SHAP methods to open the black box of the ML models by extracting their feature importance, and it compares the three to determine which performs best. Finally, it simplifies the machine's predictions by providing a simple visualization of the final explanations, illustrating the factors that affect each prediction.
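The workflow the abstract describes, training a supervised model and then extracting a ranked list of feature importances, can be sketched as follows. This is an illustrative example only, not the report's actual code: it fits a logistic regression on scikit-learn's built-in breast-cancer dataset and ranks features by standardized coefficient magnitude, a simple stand-in for the LIME/Eli5/SHAP attribution step.

```python
# Illustrative sketch (assumed setup, not the report's experiment): fit a
# logistic regression classifier and rank features by the absolute value of
# their standardized coefficients as a basic measure of feature importance.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X, y = data.data, data.target

# Standardize features so coefficient magnitudes are comparable.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X, y)

coefs = model.named_steps["logisticregression"].coef_[0]
ranking = sorted(zip(data.feature_names, np.abs(coefs)),
                 key=lambda t: t[1], reverse=True)

# Print the five most influential features.
for name, weight in ranking[:5]:
    print(f"{name}: {weight:.3f}")
```

Dedicated tools such as SHAP or LIME go further by producing per-prediction (local) attributions for arbitrary black-box models, whereas coefficient magnitudes only give a single global ranking for linear models.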

Copyright owned by the Saudi Digital Library (SDL) © 2025