Automated Surface Anomaly Detection System for Quality Inspection of Small Parts Using Computer Vision and Convolutional Neural Networks
Abstract
Acceptance sampling is an essential procedure in the statistical quality control domain, used to inspect incoming lots before shipping or producing vital products, including medicine, electronic components, automobile parts, and processed food. Prior research has shown that visual surface inspection of incoming lots is typically done by experienced inspectors, who find between 70% and 80% of the defective parts while spending an average of 54 seconds on each part. This slow, error-prone inspection tangibly decreases product quality and impairs economic benefits. Considerable effort has been aimed toward surface anomaly detection using deep learning for real-time detection. However, two main issues limit the performance of deep learning in this context: cost and the partial-vision (wasted-time) issue. This research aims to resolve these problems and establish a novel framework to detect anomalies in real time using computer vision and convolutional neural networks (CNNs). The research framework consists of two phases, and its primary focus is on improving detection speed and accuracy for surface anomalies during acceptance sampling activities.

The first phase focuses on developing a compact and computationally efficient anomaly detection model for edge computing. I propose a novel CNN architecture named InspectNet, designed for deployment on edge devices. The architecture maximizes detection accuracy while minimizing inference time by coalescing two state-of-the-art CNN architectures. InspectNet yielded an accuracy of 98.5% and a detection speed of 0.12 seconds, which is competitive with state-of-the-art models. The second phase of the dissertation focuses on deploying InspectNet on a novel portable multi-camera inspection system named Qbox, and on its potential applications. The hardware design of Qbox consists of a transparent box containing six cameras with microprocessors to perform computations, and each camera has an LED light that can be adjusted according to specifications. The system software, named AI Detector, was developed in Swift 2.0. The software works by taking each frame of the camera's live feed and passing it through InspectNet for defect detection, drawing a bounding box around each defective area. The Qbox prototype was configured and tested by conducting experiments to find the system's optimal settings and evaluate its performance. The results showed that Qbox achieved 96.5% accuracy with a detection speed of 0.171 seconds. These results constituted a 21% to 37.8% improvement in detection accuracy and a 99.6% improvement in detection time over traditional visual inspection methods. Finally, Qbox's generalizability, capabilities, and potential were demonstrated through a case study in which InspectNet was efficiently retrained via transfer learning to inspect plastic elbow adapters, which were then successfully inspected for surface anomalies using the Qbox system.
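The per-frame pipeline described above (live-feed frame in, defect bounding boxes out) can be sketched as follows. This is an illustrative minimal sketch in Python, not the actual AI Detector implementation (which was written in Swift); the names InspectNet, inspect_frame, and the 0.5 confidence cutoff are assumptions for illustration and do not appear in the abstract.

```python
from dataclasses import dataclass

@dataclass
class BoundingBox:
    """A rectangle drawn around a detected defective area."""
    x: int
    y: int
    width: int
    height: int
    confidence: float

# Assumed confidence cutoff for keeping a detection; the real threshold
# is not stated in the abstract.
CONFIDENCE_THRESHOLD = 0.5

def inspect_frame(frame, model):
    """Pass one camera frame through the detection model (e.g. InspectNet)
    and return bounding boxes for detections above the threshold."""
    # The model is assumed to return (x, y, width, height, confidence)
    # tuples, one per candidate defect region in the frame.
    detections = model(frame)
    return [BoundingBox(*d) for d in detections
            if d[4] >= CONFIDENCE_THRESHOLD]
```

In the deployed system this function would be called once per frame of each camera's live feed, with the returned boxes overlaid on the display.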