DETECTING MANIPULATED AND ADVERSARIAL IMAGES: A COMPREHENSIVE STUDY OF REAL-WORLD APPLICATIONS
Abstract
Advances in communication technology have been accompanied by a rapid rise of disinformation in many forms; manipulated images are one of the primary examples of disinformation that can reach many users. Such activity can severely impact public behavior, attitudes, and beliefs, or sway viewers' perception in malicious or benign directions. Additionally, adversarial attacks targeting deep learning models pose a severe risk to computer vision applications. This dissertation explores ways of detecting and resisting manipulated images and adversarial attack images. The first contribution evaluates perceptual hashing (pHash) algorithms for detecting image manipulation on social media platforms such as Facebook and Twitter. The study demonstrates the differences in image processing between the two platforms and proposes a new approach to finding the optimal detection threshold for each algorithm. The second contribution develops a new pHash-based authentication method to detect fake imagery on social media networks, using a self-supervised learning framework with a contrastive loss. In addition, a fake image sample generator is developed to cover three major image manipulation operations (copy-move, splicing, and removal). The proposed authentication technique outperforms state-of-the-art pHash methods. The third contribution addresses the challenge of adversarial attacks on deep learning models. A new adversarial-aware deep learning system is proposed that uses a classical machine learning model as a secondary verification system to complement the primary deep learning model in image classification. The proposed approach outperforms current state-of-the-art adversarial defense systems. Finally, the fourth contribution fuses big data from extra-military resources to support military decision-making. The study proposes a workflow; reviews data availability, security, privacy, and integrity challenges; and suggests solutions.
A demonstration of the proposed image authentication is introduced to prevent wrong decisions and increase integrity. Overall, the dissertation provides practical solutions for detecting manipulated and adversarial attack images and integrates the proposed solutions into a military decision-making workflow.
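The pHash-based detection described above can be illustrated with a minimal sketch: hash the reference image and the candidate image, then flag the candidate as manipulated when the Hamming distance between the hashes exceeds a per-platform threshold. The average-hash variant, the 8x8 input size, and the threshold value of 10 below are illustrative assumptions, not the dissertation's exact algorithms or tuned thresholds.

```python
def average_hash(pixels):
    """Compute a 64-bit average hash from an 8x8 grayscale image.

    `pixels` is an 8x8 list of lists of intensities. Real pipelines
    first resize and grayscale the image; the aHash variant here is
    an assumption (the dissertation evaluates several pHash algorithms).
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    # Each bit records whether a pixel is above the mean intensity.
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Count the bit positions where the two hashes differ."""
    return sum(a != b for a, b in zip(h1, h2))

def is_manipulated(original, candidate, threshold=10):
    """Flag the candidate if its hash drifts past the threshold.

    The threshold would be tuned per platform, since Facebook and
    Twitter re-encode uploads differently; 10 is a placeholder.
    """
    return hamming(average_hash(original), average_hash(candidate)) > threshold
```

Because platform re-encoding (compression, resizing) perturbs pixels only slightly, an authentic re-upload yields a small Hamming distance, while content-level edits such as splicing or copy-move push the distance past the threshold.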
Description
The main goal of this research is to combat misinformation and manipulated images by investigating state-of-the-art image authentication models and filling their gaps with a new approach compatible with real-world applications. This work has broad relevance in cybersecurity: we evaluate our research on the Twitter (X) and Facebook platforms and integrate our developments into a military decision-making system to increase integrity and performance. In addition, a new adversarial attack detection method is introduced to increase the robustness of machine learning models.
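The adversarial-aware design mentioned above, pairing a primary deep model with a classical secondary verifier, can be sketched as a simple disagreement check. The function names, the stand-in predictors, and the disagreement rule below are illustrative assumptions rather than the dissertation's exact architecture.

```python
def adversarial_aware_classify(deep_predict, classical_predict, image):
    """Adversarial-aware classification via secondary verification.

    `deep_predict` stands in for the primary deep learning model and
    `classical_predict` for a classical ML verifier (e.g., an SVM on
    handcrafted features); both are hypothetical callables taking an
    image and returning a label. Adversarial perturbations crafted
    against the deep model often fail to transfer to a structurally
    different classical model, so disagreement between the two is
    treated as a signal of a possible attack.
    """
    deep_label = deep_predict(image)
    classical_label = classical_predict(image)
    suspicious = deep_label != classical_label
    return {
        "label": deep_label,          # primary model's decision
        "verified": not suspicious,   # classical model concurs
        "possible_attack": suspicious,
    }
```

In a decision-support setting, inputs flagged as possible attacks would be withheld from automated downstream use and routed for further inspection instead.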
Keywords
Image Authentication, Adversarial Attacks, Decision-Making, Fake News, Machine Learning, Cybersecurity, Computer Security
Citation
IEEE