SACM - United States of America
Permanent URI for this collection: https://drepo.sdl.edu.sa/handle/20.500.14154/9668
Search Results
2 results
Item (Restricted)
Bond Strength of Denture Teeth to Conventional, Milled and 3D-Printed Denture Bases (Saudi Digital Library, 2025)
Authors: Abumansour, Malik; Ontiveros, Joe C; Gonzalez, Maria D; Belles, Donald M; Kiat-Amnuay, Sudarat; amnua, Chun-Yen

OBJECTIVE: This in vitro study assessed the shear bond strength and failure modes between denture teeth and denture bases fabricated using conventional, two-part milled, monolithic milled, or 3D-printed complete denture techniques.

MATERIALS AND METHODS: A total of 40 denture base substrates (30 mm diameter × 10 mm height) were processed according to the following treatment groups (n = 10 per group): (1) conventional complete denture processed using heat-polymerized resin (Lucitone 199, Dentsply Sirona) as the control; (2) 3D-printed denture resins (Lucitone Digital IPN 3D Premium, shade A1, and Lucitone Digital Print 3D Denture Resin, Original); (3) two-part milled dentures from pre-polymerized polymethylmethacrylate pucks (Lucitone Digital Fit Denture Base Disc, Dentsply Sirona; Multilayer PMMA Discs, Dentsply Sirona); and (4) monolithic milled dentures (AvaDent, Extreme cross-linked PMMA). For all groups, prefabricated denture teeth or bases were embedded in autopolymerizing acrylic resin, except for Group 4, in which the base and teeth were fabricated as a single unit. After embedding, specimens were polished with 400-grit sandpaper to achieve uniform surface exposure. The base and teeth were bonded according to each manufacturer's protocol, and all specimens were tested within 24 hours of bonding. Shear bond strength was measured using a universal testing machine at a crosshead speed of 1 mm/min, and the mode of failure was observed and recorded. Data were analyzed using one-way ANOVA followed by the Tukey post hoc test at a significance level of α = 0.05.

RESULTS: The monolithic milled group demonstrated the highest bond strength among all groups, with statistically significant differences compared to the others (p < 0.01) and 100% cohesive failure. The 3D-printed group exhibited a significantly higher mean bond strength than both the conventional and two-part milled groups (p = 0.001), with 70% mixed failures. There was no significant difference between the conventional and two-part milled groups (p = 0.98), which showed 90% and 100% adhesive failure, respectively.

CONCLUSIONS: Among all groups, the monolithic milled dentures (AvaDent) demonstrated the highest bond strength at the base-to-tooth interface. The 3D-printed specimens showed greater base-to-tooth bond strength than both the conventionally processed and the two-part milled dentures, which did not differ significantly from each other.
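As an illustration of the statistical analysis named above (one-way ANOVA followed by a Tukey HSD post hoc test at α = 0.05), here is a minimal Python sketch; the group values are hypothetical stand-ins, since the study's raw bond-strength data are not reproduced in this abstract.

    import numpy as np
    from scipy.stats import f_oneway
    from statsmodels.stats.multicomp import pairwise_tukeyhsd

    # Hypothetical shear-bond-strength values in MPa (n = 10 per group);
    # the study's actual measurements are not reproduced here.
    rng = np.random.default_rng(0)
    groups = {
        "conventional":    rng.normal(8, 1.5, 10),
        "3d_printed":      rng.normal(12, 1.5, 10),
        "two_part_milled": rng.normal(8, 1.5, 10),
        "monolithic":      rng.normal(20, 1.5, 10),
    }

    # One-way ANOVA across the four fabrication techniques.
    print(f_oneway(*groups.values()))

    # Tukey HSD post hoc comparisons at alpha = 0.05.
    values = np.concatenate(list(groups.values()))
    labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
    print(pairwise_tukeyhsd(values, labels, alpha=0.05))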
Item (Restricted)
Deep Learning-Based Digital Human Modeling and Applications (Saudi Digital Library, 2023-12-14)
Authors: Ali, Ayman; Wang, Pu

Recent advances in deep learning have driven remarkable progress across numerous computer vision tasks. In particular, recovering three-dimensional (3D) human models from monocular images has attracted growing interest in recent years, owing to the many practical applications that require 3D human models, including gaming, human-computer interaction, virtual systems, and digital twins.

This dissertation designs and develops a suite of deep learning models aimed at fast, high-fidelity digitalization of human subjects, and at enabling a range of downstream applications built on digital 3D human models. Estimating a 3D human mesh from a monocular image calls for heavy deep learning models for feature extraction, at the cost of high computational demands. As an alternative, researchers have explored a skeleton-based modality, a lightweight abstraction of human pose, to reduce this computational load; however, it discards important visual cues, particularly shape information, which cannot be fully recovered from the 3D skeleton alone. A hybrid methodology that integrates 3D human mesh and skeletal information therefore offers a promising middle ground. Over the past decade, estimation of two-dimensional (2D) joint coordinates from monocular images has matured substantially, and Convolutional Neural Networks (CNNs) have proven highly effective at extracting rich visual features from images. Building on this progress, we investigate a hybrid architecture that combines a CNN with a lightweight graph-transformer module to lift 2D joint poses to a full 3D representation and to recover the visual cues essential for precise estimation of pose and shape parameters; a minimal illustrative sketch follows below.

While state-of-the-art (SOTA) results in 3D Human Pose Estimation (HPE) are important, they do not guarantee the accuracy and plausibility required for biomechanical analysis. Our two-stage deep learning model efficiently estimates 3D human poses and associated kinematic attributes from monocular videos, with a primary focus on mobile device deployment. Its central contribution is delivering not only accurate 3D pose estimates but also biomechanically plausible ones, a prerequisite for sound biomechanical analyses, thereby advancing applications such as motion tracking, gesture recognition, and ergonomic assessment. This work contributes to a better understanding of human movement and its interaction with the environment, with implications for a wide range of biomechanics-related studies and applications.
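The following PyTorch sketch illustrates the hybrid 2D-to-3D lifting idea described above, under loose assumptions: a toy CNN stands in for a pretrained image backbone, a standard transformer encoder over per-joint tokens stands in for the lightweight graph transformer, and all module names, layer sizes, and the feature-fusion scheme are hypothetical rather than the dissertation's actual architecture.

    import torch
    import torch.nn as nn

    class LiftingSketch(nn.Module):
        """Toy hybrid lifter: fuse a global CNN image feature with per-joint
        2D coordinates, refine the joint tokens with a small transformer
        encoder, and regress per-joint 3D positions."""
        def __init__(self, dim=128):
            super().__init__()
            # Tiny CNN standing in for a pretrained backbone (assumption).
            self.cnn = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.img_proj = nn.Linear(64, dim)
            self.joint_embed = nn.Linear(2, dim)  # lift (x, y) to a token
            layer = nn.TransformerEncoderLayer(
                d_model=dim, nhead=4, dim_feedforward=256, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=2)
            self.head = nn.Linear(dim, 3)         # per-joint (x, y, z)

        def forward(self, image, joints_2d):
            # image: (B, 3, H, W); joints_2d: (B, J, 2), normalized coords
            ctx = self.img_proj(self.cnn(image).flatten(1))      # (B, dim)
            tokens = self.joint_embed(joints_2d) + ctx[:, None]  # add context
            return self.head(self.encoder(tokens))               # (B, J, 3)

    model = LiftingSketch()
    out = model(torch.randn(2, 3, 128, 128), torch.rand(2, 17, 2))
    print(out.shape)  # torch.Size([2, 17, 3])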
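And as a small example of the kind of kinematic attribute such a pipeline can feed into biomechanical analysis, the sketch below computes a joint angle from 3D joint positions; the joint names and coordinates are hypothetical, and the dissertation's actual kinematic outputs may be defined differently.

    import numpy as np

    def joint_angle_deg(a, b, c):
        """Angle at joint b formed by 3D points a-b-c (e.g. hip-knee-ankle
        for knee flexion), in degrees."""
        u, v = a - b, c - b
        cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

    # Hypothetical single-frame joint positions in meters.
    hip   = np.array([0.00, 0.90, 0.00])
    knee  = np.array([0.00, 0.50, 0.05])
    ankle = np.array([0.00, 0.10, 0.00])
    print(f"knee angle: {joint_angle_deg(hip, knee, ankle):.1f} deg")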
In the realm of human movement analysis, a prominent downstream task is recognizing human actions from skeletal data, known as Skeleton-based Human Action Recognition (HAR). This domain has attracted substantial attention in the computer vision community owing to its computational efficiency, the representational power of its features, and its robustness to variations in illumination. In this context, our research demonstrates that, by representing 3D pose sequences as RGB images, conventional CNN architectures such as ResNet-50, combined with astute training strategies and diverse augmentation techniques, can attain SOTA accuracy and surpass the widely adopted graph neural network models; one plausible version of this encoding is sketched below.

Radar-based sensing, rooted in the transmission and reception of radio waves, offers a non-intrusive and versatile means of monitoring human movements, gestures, and vital signs. Despite this potential, the lack of comprehensive radar datasets has hindered the broader adoption of deep learning in radar-based human sensing. Synthetic data offers a crucial advantage here: synthetic datasets provide an expansive, practically limitless resource that exposes models to diverse scenarios beyond the limits of real-world data, helping them adapt and generalize. As part of this research, we introduce a novel computational framework called "virtual radar," built on 3D pose-driven, physics-informed principles. It generates high-fidelity synthetic radar data by combining 3D human models with the Physical Optics (PO) approximation for radar cross-section modeling. Virtual radar opens a path toward foundational models for the nuanced understanding of human behavior through privacy-preserving, radar-based methodologies.
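Returning to the skeleton-based HAR result above, the sketch below shows one plausible way to encode a 3D pose sequence as an RGB image for a standard CNN; the frame/joint axis layout and min-max normalization are assumptions, not necessarily the dissertation's exact recipe.

    import numpy as np

    def pose_sequence_to_image(seq):
        """Map a (T, J, 3) pose sequence to a (T, J, 3) uint8 'image':
        rows = frames, columns = joints, channels = (x, y, z), each
        channel min-max normalized to 0-255."""
        lo = seq.min(axis=(0, 1), keepdims=True)
        hi = seq.max(axis=(0, 1), keepdims=True)
        img = (seq - lo) / np.maximum(hi - lo, 1e-8) * 255.0
        return img.astype(np.uint8)

    clip = np.random.rand(64, 17, 3)    # hypothetical 64-frame, 17-joint clip
    img = pose_sequence_to_image(clip)  # resize (e.g. to 224x224) before ResNet-50
    print(img.shape, img.dtype)         # (64, 17, 3) uint8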
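For the virtual-radar idea, the sketch below gives only the crudest physics-informed flavor: a coherent sum of point-scatterer returns placed at 3D joints, with an assumed 60 GHz wavelength. The dissertation's actual framework integrates the Physical Optics approximation over a full human mesh, which this toy model does not attempt.

    import numpy as np

    def slow_time_return(centers, radar_pos, wavelength=0.005):
        """Coherent sum of point-scatterer returns from body joints.
        centers: (T, J, 3) joint positions over T frames (meters);
        returns a (T,) complex slow-time signal."""
        k = 2.0 * np.pi / wavelength
        r = np.linalg.norm(centers - radar_pos, axis=-1)   # (T, J) ranges
        # Two-way phase 2kR and free-space two-way amplitude falloff 1/R^2.
        return np.sum(np.exp(-1j * 2.0 * k * r) / r**2, axis=-1)

    # Hypothetical moving-body joints about 3 m in front of the radar.
    frames = np.random.rand(128, 17, 3) + np.array([0.0, 0.0, 3.0])
    sig = slow_time_return(frames, radar_pos=np.zeros(3))
    print(sig.shape)  # (128,); a windowed FFT yields a micro-Doppler signature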