HUMAN ACTIVITY RECOGNITION: INTEGRATING SENSOR FUSION AND ARTIFICIAL INTELLIGENCE

dc.contributor.advisor: Ilyas, Mohammad
dc.contributor.author: Alanazi, Munid
dc.date.accessioned: 2024-11-05T20:38:06Z
dc.date.issued: 2024-12
dc.description.abstract: Human Activity Recognition (HAR) plays a crucial role in various applications, including healthcare, fitness tracking, security, and smart environments, by enabling the automatic classification of human actions based on sensor and visual data. This dissertation presents a comprehensive exploration of HAR utilizing machine learning, sensor-based data, and fusion approaches. HAR involves classifying human activities over time by analyzing data from sensors such as accelerometers and gyroscopes. Recent advancements in computational technology and sensor availability have driven significant progress in this field, enabling the integration of these sensors into smartphones and other devices. The first study outlines the foundational aspects of HAR and reviews existing literature, highlighting the importance of machine learning applications in healthcare, athletics, and personal use. The second study addresses the challenges of handling large-scale, variable, and noisy sensor data in HAR systems. The research applies machine learning algorithms to the KU-HAR dataset, revealing that the LightGBM classifier outperforms others in key performance metrics such as accuracy, precision, recall, and F1 score. This study underscores the continued relevance of optimizing machine learning techniques for improved HAR systems, and highlights the potential for future research to explore more advanced fusion techniques that fully leverage different data modalities for HAR. The third study focuses on overcoming common challenges in HAR research, such as varying smartphone models and sensor configurations, by employing data fusion techniques. Experiments were conducted on the KU-HAR and UCI HAR datasets using popular machine learning classifiers, including Decision Trees, Random Forest, Gradient Boosting, and XGBoost.
XGBoost achieved the highest accuracy of 96.83%, demonstrating its effectiveness in classifying fundamental human activities, with decision-level fusion methods further improving results. The fourth study delves into the implementation of multimodal fusion techniques for HAR by combining wearable sensor data with visual data. The research investigates the performance of the late fusion method in integrating sensor and visual modalities.
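The classifier comparison described in the abstract can be sketched in miniature. This is not the dissertation's actual pipeline: the data below is synthetic stand-in for windowed accelerometer/gyroscope features (the KU-HAR and UCI HAR datasets are not loaded here), and scikit-learn's tree ensembles stand in for the full set of models evaluated.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# Synthetic stand-in for per-window statistics (e.g., mean/variance of
# six inertial channels); six hypothetical activity classes.
n, d = 600, 12
X = rng.normal(size=(n, d))
y = rng.integers(0, 6, size=n)
X[np.arange(n), y] += 2.0  # shift one feature per class so classes are separable

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for clf in (DecisionTreeClassifier(random_state=0),
            RandomForestClassifier(random_state=0),
            GradientBoostingClassifier(random_state=0)):
    acc = accuracy_score(y_te, clf.fit(X_tr, y_tr).predict(X_te))
    print(type(clf).__name__, round(acc, 3))
```

On real HAR data, the same loop would additionally report precision, recall, and F1 score, the metrics on which the dissertation compares classifiers.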
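The decision-level (late) fusion mentioned above can likewise be illustrated with a minimal sketch: train one classifier per modality, then average their predicted class probabilities. The two "modalities" below are synthetic placeholders for inertial and visual features, not the dissertation's actual data or models.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 400
y = rng.integers(0, 3, size=n)  # three hypothetical activity classes
# Synthetic "sensor" and "visual" feature matrices, weakly class-informative.
X_sensor = rng.normal(size=(n, 8)); X_sensor[np.arange(n), y] += 1.5
X_visual = rng.normal(size=(n, 5)); X_visual[np.arange(n), y] += 1.5

tr, te = slice(0, 300), slice(300, n)
clf_sensor = RandomForestClassifier(random_state=0).fit(X_sensor[tr], y[tr])
clf_visual = LogisticRegression(max_iter=1000).fit(X_visual[tr], y[tr])

# Late fusion: average the per-class probabilities of the two models,
# then take the argmax as the fused decision.
proba = (clf_sensor.predict_proba(X_sensor[te])
         + clf_visual.predict_proba(X_visual[te])) / 2
fused_pred = proba.argmax(axis=1)
fused_acc = (fused_pred == y[te]).mean()
print("fused accuracy:", round(fused_acc, 3))
```

Averaging probabilities is only one decision-level rule; weighted averaging or majority voting over hard labels are common alternatives.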
dc.format.extent: 107
dc.identifier.uri: https://hdl.handle.net/20.500.14154/73477
dc.language.iso: en_US
dc.publisher: Florida Atlantic University
dc.subject: Human Activity Recognition (HAR)
dc.subject: Inertial Sensor
dc.subject: Accelerometer
dc.subject: Gyroscope
dc.title: HUMAN ACTIVITY RECOGNITION: INTEGRATING SENSOR FUSION AND ARTIFICIAL INTELLIGENCE
dc.type: Thesis
sdl.degree.department: Department of Electrical Engineering and Computer Science
sdl.degree.discipline: Computer Science - Data Science and Analytics (DSA) concentration
sdl.degree.grantor: Florida Atlantic University
sdl.degree.name: PhD CS, Data Science and Analytics (DSA) concentration

Files

Original bundle

Name: SACM-Dissertation.pdf
Size: 1.96 MB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 1.61 KB
Format: Item-specific license agreed to upon submission

Copyright owned by the Saudi Digital Library (SDL) © 2025