ADAPTIVE SELF-LEARNING AND MULTI-STAGE MODELING FOR EFFICIENT MEDICAL AND DENTAL IMAGE SEGMENTATION
dc.contributor.advisor | Yugyung, Lee | |
dc.contributor.author | Alqarni, Saeed | |
dc.date.accessioned | 2025-04-16T13:34:46Z | |
dc.date.issued | 2025 | |
dc.description | PhD Dissertation in Computer Science and AI | |
dc.description.abstract | Medical imaging has revolutionized healthcare by enabling non-invasive visualization of anatomical structures and pathologies, significantly improving diagnostic accuracy, treatment planning, and patient monitoring. Modalities such as computed tomography (CT), magnetic resonance imaging (MRI), and ultrasound provide critical insights into the human body, yet precise medical image segmentation remains a challenging task. This difficulty arises from factors such as image variability, noise, artifacts, and the limited availability of annotated data necessary to train robust segmentation models. Overcoming these hurdles is essential to unlock the full potential of medical imaging in diverse clinical applications. This dissertation presents a novel framework for efficient and accurate medical image segmentation, incorporating multi-stage transfer learning, uncertainty-driven data selection, and weakly supervised learning. By combining human-guided refinement with adaptive data selection, this research addresses fundamental barriers such as data scarcity, computational resource limitations, and the high cost of annotation. The framework is structured around three key objectives: 1. Adaptive Uncertainty Sampling with SAM (AUSAM), which introduces a flexible, real-time data selection and segmentation approach, reducing reliance on large annotated datasets through dynamic thresholds and DBSCAN clustering. 2. AUSAM-SL (Active Self-Learning with SAM), which integrates entropy-based active learning with iterative self-labeling, using SAM for initial training while refining the selection criteria and enhancing model predictions. 3. AUSAM-3D (3D Modeling for Domain-Aware Segmentation and Aggregation), which builds upon AUSAM by incorporating a spatial and volumetric dimension, improving segmentation accuracy for organs and tumors, and enabling more clinically relevant outcomes. Preliminary results on medical and dental imaging datasets (MRI, CT, X-ray) validate the effectiveness of the proposed framework in improving segmentation accuracy while maintaining computational efficiency. The research offers scalable solutions suitable for resource-constrained environments by integrating human feedback with semi-supervised and weakly supervised learning techniques. This work advances the field of medical and dental image segmentation and provides practical methods for leveraging multi-stage learning in real-world applications where data and computational resources are limited. | |
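The abstract describes adaptive data selection driven by dynamic entropy thresholds and DBSCAN clustering. The following is a minimal illustrative sketch of that general idea, not the dissertation's actual implementation: function names, the quantile-based threshold, the DBSCAN parameters, and the per-cluster selection rule are all assumptions made here for clarity.

import numpy as np
from sklearn.cluster import DBSCAN

def entropy_map(prob):
    """Pixel-wise predictive entropy for a softmax probability map of shape (C, H, W)."""
    eps = 1e-8
    return -np.sum(prob * np.log(prob + eps), axis=0)

def select_uncertain_samples(prob_maps, embeddings, threshold_quantile=0.8):
    """Score each image by mean entropy, keep those above a dynamic (data-driven)
    threshold, then cluster their embeddings with DBSCAN and take the most
    uncertain member of each cluster so the selected set stays diverse."""
    scores = np.array([entropy_map(p).mean() for p in prob_maps])
    threshold = np.quantile(scores, threshold_quantile)  # dynamic threshold, hypothetical choice
    candidates = np.where(scores >= threshold)[0]
    if len(candidates) == 0:
        return []
    labels = DBSCAN(eps=0.5, min_samples=2).fit_predict(embeddings[candidates])
    selected = []
    for lbl in set(labels):
        members = candidates[labels == lbl]
        if lbl == -1:
            selected.extend(int(m) for m in members)  # keep DBSCAN outliers individually
        else:
            selected.append(int(members[np.argmax(scores[members])]))
    return sorted(selected)

In an active self-learning loop of the kind AUSAM-SL describes, the indices returned by such a routine would be sent for annotation (or pseudo-labeled), the model retrained, and the threshold recomputed on the next iteration.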
dc.format.extent | 190 | |
dc.identifier.uri | https://hdl.handle.net/20.500.14154/75216 | |
dc.language.iso | en_US | |
dc.publisher | University of Missouri - Kansas City | |
dc.subject | Deep Learning | |
dc.subject | Computer Vision | |
dc.subject | Medical Imaging | |
dc.subject | Medical Image Segmentation | |
dc.subject | Weakly Supervised Learning | |
dc.subject | Semi-Supervised Learning | |
dc.subject | Few-Shot Learning | |
dc.subject | 3D Modeling | |
dc.subject | Active Self-Learning | |
dc.title | ADAPTIVE SELF-LEARNING AND MULTI-STAGE MODELING FOR EFFICIENT MEDICAL AND DENTAL IMAGE SEGMENTATION | |
dc.type | Thesis | |
sdl.degree.department | School of Science and Engineering | |
sdl.degree.discipline | Computer Science | |
sdl.degree.grantor | University of Missouri - Kansas City | |
sdl.degree.name | Doctor of Philosophy in Computer Science |