Explainable AI Approach for Detecting Generative AI Imagery

dc.contributor.advisor: Barns, Chloe
dc.contributor.author: Alghamdi, Sara
dc.date.accessioned: 2025-01-26T06:20:31Z
dc.date.issued: 2024-09-29
dc.description.abstract: The rapid advancement of Artificial Intelligence (AI) and machine learning, particularly deep learning models such as Convolutional Neural Networks (CNNs), has revolutionized image classification across diverse fields, including healthcare, autonomous vehicles, and digital forensics. However, the proliferation of AI-generated images, commonly referred to as deepfakes, has introduced significant ethical, societal, and security challenges. Deepfakes leverage AI to create highly realistic yet synthetic media, complicating the ability to differentiate between authentic and manipulated content. This has heightened the need for robust tools capable of accurately detecting and classifying such media to combat the risks of misinformation, fraud, and erosion of public trust. Traditional models, while effective in classification, often lack transparency in their decision-making processes, limiting stakeholder trust. To address this limitation, this study explores the integration of Explainable AI (XAI) techniques, such as SHAP (SHapley Additive exPlanations), with CNNs to enhance interpretability and trust in model predictions. By employing CNNs for high-accuracy classification and XAI methods for feature-level explanations, the research aims to contribute to digital forensics and content moderation, offering both technical reliability and transparency. This study highlights the critical need for trustworthy AI systems in the fight against manipulated media, providing a framework that balances efficacy, transparency, and ethical considerations.
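The SHAP attributions mentioned in the abstract are grounded in Shapley values from cooperative game theory: each feature's contribution is its average marginal effect on the model output over all feature subsets. As an illustrative sketch (not taken from the dissertation), the exact computation can be done by brute force for a toy model with a handful of features; `toy_model` and its weights are hypothetical stand-ins for a real CNN, and real SHAP libraries use approximations because this enumeration is exponential in the number of features.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley attributions for model f at input x,
    relative to a baseline (reference) input."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                # Shapley weight for a coalition of size |S|
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                # Input with coalition S plus feature i present; rest at baseline
                z_with = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                # Input with coalition S only
                z_without = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += w * (f(z_with) - f(z_without))
    return phi

# Hypothetical toy "detector": a linear score over four image features
def toy_model(z):
    weights = [0.5, -0.2, 0.8, 0.1]
    return sum(w * v for w, v in zip(weights, z))

x = [1.0, 2.0, 3.0, 4.0]
baseline = [0.0, 0.0, 0.0, 0.0]
phi = shapley_values(toy_model, x, baseline)
# Efficiency property: attributions sum to f(x) - f(baseline)
```

For a linear model the attributions reduce to weight times feature deviation from baseline, which makes the efficiency property easy to check by hand; the same additive-explanation guarantee is what SHAP provides for CNN predictions at much larger scale.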
dc.format.extent: 43
dc.identifier.uri: https://hdl.handle.net/20.500.14154/74734
dc.language.iso: en
dc.publisher: Aston University
dc.subject: Convolutional Neural Networks (CNNs)
dc.subject: Explainable AI (XAI)
dc.subject: SHAP (SHapley Additive exPlanations)
dc.subject: Deepfakes
dc.subject: AI-generated Images
dc.subject: Image Classification
dc.subject: Digital Forensics
dc.subject: Media Integrity
dc.subject: Content Moderation
dc.subject: Misinformation
dc.subject: Model Interpretability
dc.subject: Transparency
dc.subject: Ethical AI
dc.subject: Machine Learning
dc.subject: Manipulated Media Detection
dc.subject: Trustworthy AI Systems
dc.title: Explainable AI Approach for Detecting Generative AI Imagery
dc.type: Thesis
sdl.degree.department: Engineering & Physical Sciences, Artificial Intelligence
sdl.degree.discipline: Artificial Intelligence
sdl.degree.grantor: Aston University
sdl.degree.name: MSc Artificial Intelligence

Files

Original bundle

Name: SACM-Dissertation.pdf
Size: 968.1 KB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 1.61 KB
Format: Item-specific license agreed to upon submission
Copyright owned by the Saudi Digital Library (SDL) © 2025