Information Integrity: Through the Lens of Explainable AI With Cultural and Social Behaviors
Abstract
The rapid development of Artificial Intelligence (AI), particularly machine learning (ML) and deep neural networks (DNNs), has changed the way information is processed and used. Along with these advancements, however, challenges to information integrity have emerged. The widespread dissemination of misinformation through digital platforms, coupled with the lack of transparency in black-box ML models, has raised concerns about the reliability and trustworthiness of information for both expert users (ML developers) and non-expert users (end-users). Unfortunately, applying eXplainable Artificial Intelligence (XAI) approaches to real-world applications to improve the trustworthiness of DNN models remains far from straightforward. Motivated by these observations, this thesis concentrates on two directions.
• Misinformation Mitigation.
In the first direction, we leverage XAI techniques to mitigate misinformation through three main approaches: evaluating the trustworthiness of fake news detection models from a user perspective, studying the influence of social and cultural behavior on misinformation propagation, and analyzing the diffusion of descriptive norms in social media networks to promote positive norms and combat misinformation.
• Developing Advanced ML Models.
In the second direction, we turn our attention to developing ML models from two aspects. The first aspect exploits XAI behaviors to provide a new method that simultaneously preserves the performance and explainability of student models, which in their primitive form offer little transparency. In the second aspect, we develop the Temporal graph Fake News Detection Framework (T-FND), which effectively captures the heterogeneous and repetitive characteristics of fake news behavior.
Keywords
Explainable AI, ML, Social Analysis, DNNs