Contributors: Pedrycz, Witold; Alateeq, Majed Mohammad
Dates: 2023-10-04; 2023-10-04; 2023-10-02
Publisher: IEEE
URI: https://hdl.handle.net/20.500.14154/69316

Abstract:
With the rapid development of machine learning models and increasingly complex data structures, it has become difficult to establish the reliability of model predictions despite substantial progress in their approximation capabilities. The lack of interpretability remains a key barrier to fully leveraging the success of intelligent systems, since interpretability provides end users with the analytical insight needed for efficient decision making. The purpose of interpretability and transparency is to reveal the interconnections within intelligent models, justifying the decision-making process, eliminating vagueness, and capturing uncertainty in the data space. Any advancement in interpretability therefore has a positive impact on overall model performance.

This work relies on logic-oriented fuzzy neural networks to represent knowledge transparently with the aid of information granules. In synergy with fuzzy logic, neural networks deliver a broad array of learning abilities that can be further augmented with fuzzy analytical methods to discover hidden data patterns and improve interpretability. The high modularity of the constructed networks, which leads to multifunctionality and robustness, is inherited from the logic nature of AND/OR neurons. These logic-oriented neurons play a pivotal role in the developed models: they realize a logic approximation of experimental data and reflect the general decomposition of Boolean functions in two-valued logic. Information granularity is a key component in building the abstract concepts that humans use for knowledge acquisition and reasoning. Information granules serve as a vehicle to interpret and represent a knowledge domain, offering an efficient way to describe complex, nonlinear systems. Fuzzy sets, as a form of information granules, adequately handle imprecise and vague knowledge and are therefore central to building transparent and interpretable models that humans can readily comprehend.

The overall model efficiency, expressed in terms of accuracy and interpretability in the design and validation of AND/OR networks, constitutes the focal point of this research, along with effective quantification of the extracted knowledge, especially in high-dimensional input–output spaces. The primary objective of this dissertation is to analyze and design a cohesive interpretable framework that maintains high approximation capabilities, with logic-oriented fuzzy AND/OR networks serving as its backbone. A structural analysis of the networks shows that their learning efficiency is limited by standard gradient-based algorithms, whereas gradient-based alternatives with adaptive learning mechanisms offer better convergence. We demonstrate that the rate of convergence can be improved significantly by integrating randomized learning techniques that generate random weight values for the connections. Furthermore, we propose an innovative interpretable method to describe and quantify data using concepts: reference information granules positioned in the output space induce fuzzy sets localized in the input space.
The description is realized by running conditional fuzzy clustering followed by a calibration process completed through logic networks. The synergy between conditional clustering and logic networks yields a highly cohesive linguistic dependency between objects and their attributes. Regarding interpretability, a thorough discussion of the interpretation aspects of concept analysis and conceptual clustering is presented as a means of uncertainty quantification and rigorous explainability. A further enhancement of the interpretation framework is proposed in the form of a novel method of conditional clustering: we develop a mathematical model that considers multiple conditions positioned in the output space to induce information granules in the input space simultaneously, making such models more reflective of reality. The experimental studies involve both synthetic data and machine learning datasets from publicly available repositories.

Extent: 142
Language: en
Subjects: Interpretability; Machine Learning; Fuzzy Logic; Fuzzy Neurons; Fuzzy Clustering; Fuzzy C-Means
Title: Logic-Oriented Fuzzy Neural Networks: Optimization and Applications of Interpretable Models of Machine Learning
Type: Thesis
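To make the logic-oriented AND/OR neurons and the randomized generation of connection (weight) values mentioned in the abstract more concrete, the following is a minimal illustrative sketch, not the dissertation's code. It assumes the product t-norm and the probabilistic-sum s-norm; the functions and_neuron and or_neuron, the toy input, the target value, and the random-search loop are hypothetical choices introduced here purely for illustration.

```python
# A minimal sketch (not the dissertation's code) of logic-oriented AND/OR fuzzy
# neurons: an AND neuron aggregates inputs with a t-norm after combining each
# input with its connection weight via an s-norm, and an OR neuron is its dual.
# Product / probabilistic-sum are assumed here as the t-/s-norm pair.
import numpy as np

def t_norm(a, b):
    """Product t-norm."""
    return a * b

def s_norm(a, b):
    """Probabilistic-sum s-norm (dual of the product t-norm)."""
    return a + b - a * b

def and_neuron(x, w):
    """y = T_i (x_i s w_i): each input is 'or-ed' with its weight, then 'and-ed'."""
    return np.prod(s_norm(x, w))

def or_neuron(x, w):
    """y = S_i (x_i t w_i): each input is 'and-ed' with its weight, then 'or-ed'."""
    out = 0.0
    for c in t_norm(x, w):
        out = s_norm(out, c)
    return out

# Randomized learning idea: instead of pure gradient descent, draw many candidate
# connection vectors in [0, 1] at random and keep the best one (judged here by
# squared error against a hypothetical target output; purely illustrative).
rng = np.random.default_rng(0)
x = np.array([0.8, 0.2, 0.6])          # membership degrees of three inputs
target = 0.7                            # hypothetical desired output
candidates = rng.random((500, 3))       # 500 random connection vectors
errors = [(and_neuron(x, w) - target) ** 2 for w in candidates]
best_w = candidates[int(np.argmin(errors))]
print(best_w, and_neuron(x, best_w))
```

In the dissertation such neurons are composed into multilayer AND/OR networks; the random search above only conveys the general idea of sampling random connection values, not the specific randomized learning scheme studied there.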
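Likewise, the sketch below illustrates conditional fuzzy c-means in the spirit of the conditional clustering described in the abstract, under the assumption that a context (a fuzzy set defined over the output variable) supplies membership values f_k that constrain the partition matrix, so clusters are induced in the input space by a condition positioned in the output space. The function name conditional_fcm, the parameter settings, and the synthetic "high output" context are assumptions made for illustration; the multi-condition extension developed in the dissertation is not reproduced here.

```python
# A minimal, illustrative sketch of conditional fuzzy c-means (not the
# dissertation's implementation): the context membership f_k caps the total
# membership of datum k so that sum_i u_ik = f_k, letting a condition in the
# output space induce information granules (clusters) in the input space.
import numpy as np

def conditional_fcm(X, f, c=2, m=2.0, iters=50, eps=1e-9, seed=0):
    """X: (N, d) input data; f: (N,) context membership values in [0, 1]."""
    rng = np.random.default_rng(seed)
    N, d = X.shape
    V = X[rng.choice(N, size=c, replace=False)]          # initial prototypes
    for _ in range(iters):
        # squared distances to prototypes, shape (c, N), floored to avoid /0
        D = np.maximum(((X[None, :, :] - V[:, None, :]) ** 2).sum(-1), eps)
        # conditional partition matrix: u_ik = f_k / sum_j (D_ik / D_jk)^(1/(m-1))
        ratio = (D[:, None, :] / D[None, :, :]) ** (1.0 / (m - 1.0))
        U = f[None, :] / ratio.sum(axis=1)
        # prototype update in the standard FCM form
        Um = U ** m
        V = (Um @ X) / Um.sum(axis=1, keepdims=True)
    return U, V

# Toy usage with a hypothetical context: a fuzzy set "high output" whose
# membership values f are derived from a synthetic output variable y.
rng = np.random.default_rng(1)
X = rng.random((100, 2))
y = X.sum(axis=1) / 2.0
f = np.clip((y - 0.3) / 0.5, 0.0, 1.0)   # piecewise-linear "high output" context
U, V = conditional_fcm(X, f, c=3)
print(V)                                  # prototypes induced in the input space
print(U.sum(axis=0)[:5], f[:5])           # columns of U sum to the context values
```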