Information Integrity: From a Lens of Explainable AI With Cultural and Social Behaviors

dc.contributor.advisor: Thai, My T
dc.contributor.author: Alharbi, Raed
dc.date.accessioned: 2023-07-05T06:39:36Z
dc.date.available: 2023-07-05T06:39:36Z
dc.date.issued: 2023-08-11
dc.description.abstract: The rapid development of artificial intelligence (AI), such as machine learning (ML) and deep neural networks (DNNs), has changed the way information is processed and used. However, along with these advancements, challenges to information integrity have emerged. The widespread dissemination of misinformation through digital platforms, coupled with the lack of transparency in black-box ML models, has raised concerns about the reliability and trustworthiness of information among expert users (ML developers) and non-expert users (end-users). Unfortunately, employing eXplainable Artificial Intelligence (XAI) approaches in real-world applications to improve the trustworthiness of DNN models remains difficult and far from straightforward. Motivated by these observations, this thesis concentrates on two directions.
• Misinformation Mitigation. In the first direction, we leverage XAI techniques to mitigate misinformation through three main approaches: evaluating the trustworthiness of fake news detection models from a user perspective, studying the influence of social and cultural behavior on misinformation propagation, and analyzing the diffusion of descriptive norms in social media networks to promote positive norms and combat misinformation.
• Developing Advanced ML Models. In the second direction, we turn our attention to developing ML models from two aspects. The first aspect exploits XAI behaviors to provide a new method that simultaneously preserves the performance and explainability of student models, which in their primitive form offer little transparency. In the second aspect, we develop the Temporal graph Fake News Detection Framework (T-FND), which effectively captures the heterogeneous and repetitive characteristics of fake news behavior.
dc.format.extent: 108
dc.identifier.uri: https://hdl.handle.net/20.500.14154/68497
dc.language.iso: en_US
dc.subject: explainable AI
dc.subject: ML
dc.subject: Social Analysis
dc.subject: DNNs
dc.title: Information Integrity: From a Lens of Explainable AI With Cultural and Social Behaviors
dc.type: Thesis
sdl.degree.department: Department of Computer and Information Science and Engineering
sdl.degree.discipline: explainable AI
sdl.degree.grantor: University of Florida
sdl.degree.name: Doctor of Philosophy in Computer Science
Copyright owned by the Saudi Digital Library (SDL) © 2024