SACM - United States of America

Permanent URI for this collection: https://drepo.sdl.edu.sa/handle/20.500.14154/9668

Search Results

Now showing 1 - 10 of 1756
  • Item (Restricted)
    Clinical Evaluation of Mandibular Flexure during Mouth Opening with Digital 3-dimensional Analysis
    (Tufts University School of Dental Medicine, 2024) Tashkandi, Oula; Yo-Wei, Chen
    ABSTRACT Background: Median mandibular flexure (MMF) is a flexure or deformation of the mandible that may affect prosthodontic treatments, particularly full-arch rehabilitation. During this phenomenon the mandible flexes inward and downward at maximum mouth opening; flexure also occurs during other functional movements such as mouth closing and lateral movements. Objective: This study aimed to evaluate the 3-dimensional distortion of median mandibular flexure between maximum mouth opening (MxMO) and minimum mouth opening (MnMO) using a digital 3-dimensional analysis. Methods: Twenty subjects with full dentition in the mandibular arch were recruited. Two intraoral scans were obtained with an optical scanner (Trios 4, 3Shape, Denmark): the first at more than 35 mm of mouth opening, the second at minimal mouth opening (less than 10 mm). The two scans were exported as Standard Tessellation Language (STL) files and imported into 3-dimensional inspection software (Geomagic Control X, 3D Systems, SC, USA). Superimpositions with a best-fit algorithm were performed following initial alignment. Linear measurements were made between the tips of the canines, the buccal cusp tips of the premolars, and the mesio-buccal cusp tips of the first and second molars. A 3D comparison of the two scans was computed over the tooth surfaces. Results: The mean RMS deviation increased progressively from canine to molar: means (± standard deviations) were 0.037 (±0.014) mm for RMS 1 (canine), 0.053 (±0.019) mm for RMS 2 (premolar), and 0.083 (±0.029) mm for RMS 3 (molar). Significant differences were confirmed by both repeated-measures ANOVA and the Wilcoxon signed-rank test (p < 0.001).
    The maximum linear distance from the MxMO scan (Linear 1) had a mean (± standard deviation) of 48.29 mm (±2.28 mm), while the minimum linear distance from the MnMO scan (Linear 2) had a mean of 48.36 mm (±2.27 mm). A Shapiro-Wilk test indicated normality for the RMS values but non-normality for the linear measurements. Conclusion: Mandibular flexure showed the highest deviation in the molar area and the least in the canine area. The significant difference in linear measurements between maximum and minimum mouth opening indicates that mandibular dimensions vary during movement, with implications for procedures such as prosthetics and implants. Median mandibular flexure is a noticeable phenomenon: deviation in the mandible is greatest in the molar region and decreases progressively toward the anterior region, and the inter-molar distance changes significantly between minimum and maximum mouth opening. This confirms the importance of taking this deviation into consideration when diagnosing and treatment planning full-arch mandibular cases.
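The RMS deviations reported above come from the 3D-comparison step performed in Geomagic Control X. As a minimal illustration of the metric itself (not of the software's algorithm, which is proprietary), the root-mean-square deviation between two already-superimposed scans can be sketched as:

```python
import numpy as np

def rms_deviation(scan_a, scan_b):
    """RMS of per-point deviations between two aligned scans.

    scan_a, scan_b: (N, 3) arrays of corresponding surface points,
    assumed to be already superimposed (e.g., by a best-fit alignment).
    """
    d = np.linalg.norm(np.asarray(scan_a) - np.asarray(scan_b), axis=1)
    return float(np.sqrt(np.mean(d ** 2)))
```

A uniform 0.05 mm offset between the surfaces would yield an RMS deviation of 0.05 mm, on the same scale as the canine-region values reported in the study.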
  • Item (Restricted)
    Amplitude-Modulated Characterization and Calibration of Phased Arrays Using Non-Coherent Detection
    (North Carolina State University, 2025) Almahmoud, Saleh; Floyd, Brian
    Phased arrays have emerged as critical components in today's communication, radar, and imaging systems, particularly in response to the pursuit of higher frequencies, specifically the millimeter-wave (mmWave) band with its shorter wavelengths. These arrays play an important role in directing beams toward desired directions, thereby helping transmit signals over longer distances. Using electronic circuitry, arrays synthesize the desired beam pattern by manipulating the amplitude and phase of each element's radiation. However, inherent variations in the electronic characteristics of each array element can alter the radiation response, leading to undesirable beam patterns. To mitigate these non-idealities, it is critical to calibrate the array during the manufacturing process. Additionally, because factors such as aging of the array and temperature variations can further affect its performance, in-situ calibration methods are necessary. Currently, most testing and calibration methods are conducted on an element-by-element basis (serially) using a vector network analyzer (VNA), which can be both expensive and time-consuming. Parallel testing is mostly done using coherent detection methods, which introduce complexity. A promising solution is Code-Modulated Embedded Test (CoMET), which has demonstrated effectiveness as a non-coherent, parallel testing and calibration method, achieving high levels of accuracy and speed. CoMET employs a power detector to test and calibrate all elements within the array in parallel. This dissertation concentrates on the investigation and utilization of Amplitude-Modulated CoMET (AM-CoMET) to address specific challenges encountered in conventional CoMET. The primary contributions of this study are as follows.
    First, the research establishes a theoretical derivation and framework for AM-CoMET techniques and validates the effectiveness of AM-CoMET through simulations and measurements using commercial phased arrays. AM-CoMET demonstrated 0.1 dB RMS gain error and 0.85° RMS phase error for board-level tests; for free-space testing, the RMS gain error is 0.4 dB and the RMS phase error is 1°. Second, the research investigates the impact of inherent non-idealities within the AM-CoMET codes on its performance. The study also yields a novel correction technique that significantly improves gain and phase estimation by mitigating the impact of these inherent non-idealities. Multiple factors affect the effectiveness of the correction technique; however, board-level measurements show gain estimation improving to a near-ideal case, and the correction technique reduced gain error by about 40% in a free-space environment. Third, this work provides a generalized mathematical framework that captures both modulation schemes, amplitude modulation (AM) and phase modulation (PM). This framework enables the analysis of code unbalancing and other non-idealities and directly correlates them with the extracted amplitude and phase responses. Methods for correcting and de-embedding errors caused by non-ideal code modulation are also presented, without the need for complex equation solving. Lastly, the research utilizes CoMET to characterize the 1-dB compression point (P1dB) through estimation of the third-order harmonic gain. An alternative approach that exploits CoMET to estimate the P1dB directly is also introduced, and a comparison between the two methods highlights the advantages and drawbacks of each. CoMET demonstrated an accuracy of around 0.7 dB in characterizing the P1dB compared to a VNA.
    These results confirm the ability of CoMET to characterize linearity metrics for phased arrays, further strengthening its potential as a substitute for VNAs.
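The core idea behind code-modulated embedded test can be sketched in a heavily simplified form: dither each element's amplitude with an orthogonal ±1 code, observe only the scalar power-detector output, and correlate that power against each code to recover a per-element response term. Everything below (element count, code length, modulation depth, the choice of Hadamard rows) is an illustrative assumption, not the dissertation's AM-CoMET formulation, and it recovers only the real-valued projection Re(A₀*·aₖ) to first order in the modulation depth.

```python
import numpy as np

# Unknown complex element responses (what calibration wants to estimate).
a = np.array([1.0 + 0.1j, 0.9 - 0.2j, 1.1 + 0.05j, 0.95 + 0.15j])

# Zero-mean orthogonal codes (rows 1..3 of a 4x4 Hadamard matrix);
# element 0 is left unmodulated as a common reference.
H = np.array([[1,  1,  1,  1],
              [1, -1,  1, -1],
              [1,  1, -1, -1],
              [1, -1, -1,  1]])
codes = H[1:]                      # shape (3, 4): one code per dithered element
delta = 0.01                       # small amplitude-modulation depth (assumed)

# Per-chip amplitude weights: w[k, t] = 1 + delta * c_k[t] for k >= 1.
w = np.ones((4, H.shape[1]), dtype=float)
w[1:] += delta * codes

# A non-coherent power detector sees only |sum of element fields|^2.
field = (w * a[:, None]).sum(axis=0)
power = np.abs(field) ** 2

# Correlating power with code k isolates 2*delta*Re(conj(A0) * a_k)
# plus O(delta^2) cross terms, so dividing by 2*delta recovers it.
A0 = a.sum()
est = (power @ codes.T) / codes.shape[1] / (2 * delta)
```

With a small modulation depth, `est` matches `Re(conj(A0) * a[1:])` to within the second-order code cross-terms, which is the sense in which the detector-plus-correlation chain replaces per-element coherent measurement.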
  • Item (Restricted)
    Energy Efficient Sensing for Unsupervised Event Detection in Real-Time
    (Saudi Digital Library, 2019) Bukhari, Abdulrahman; Hyoseung Kim
    General-purpose sensing offers flexible usage and supports deployment of a wide range of Internet of Things (IoT) applications. To achieve a general-purpose sensing system suitable for IoT applications, several design aspects, such as performance, efficiency, and usability, must be taken into consideration. This thesis focuses on implementing an energy-efficient general-purpose sensing system based on unsupervised learning techniques for event labeling and classification. The system clusters raw data collected from a variety of events, such as microwave use, kettle boiling, and faucet running, for classification. During the training phase, the system computes sensor polling periods based on the rate of change in classes; these periods are then fed into a dynamic scheduler implemented on the sensor board to reduce energy consumption. The system was deployed in a one-bedroom apartment for raw data collection and system evaluation. The results show that the mean accuracy of event classification is 83%, and sensor data polling is reduced on average by 95%, which translates to 90% energy savings compared to the fixed polling period in the state-of-the-art approach.
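The adaptive-polling idea above — poll fast when classes are changing, slowly when nothing is happening — can be sketched as a simple rate-to-period mapping. The bounds and the inverse-rate rule here are illustrative assumptions; the thesis's scheduler and its actual parameters are not specified in the abstract.

```python
def next_polling_period(change_rate, p_min=0.5, p_max=60.0):
    """Map the observed rate of class changes (changes/second) to a sensor
    polling period in seconds. Fast-changing contexts are polled often,
    quiet ones rarely. p_min and p_max are assumed bounds, not values
    from the thesis.
    """
    if change_rate <= 0:
        return p_max                       # nothing happening: slowest polling
    return max(p_min, min(p_max, 1.0 / change_rate))
```

For example, a class changing ten times per second pins polling at the fast bound, while a completely quiet class is polled only at the slow bound — which is where the reported 95% reduction in polling comes from in principle.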
  • Item (Restricted)
    COMPREHENSIVE APPROACHES TO THE CAPTURE, IDENTIFICATION, AND QUANTITATIVE ANALYSIS OF ANTIMICROBIAL PEPTIDES AND OTHER PEPTIDES OF INTEREST FROM BIOLOGICAL SOURCES
    (Saudi Digital Library, 2024) Altalhi, Amaal; Bishop, Barney
    The emergence of antibiotic-resistant pathogens necessitates the development of novel antimicrobial agents. This dissertation focuses on the synthesis and characterization of magnetic iron (III) oxide particles (IOPs) incorporating amphipathic cross-linked polymers, specifically N-methacryloyl-6-aminohexanoic acid (MA6AHA) and N-isopropylacrylamide/methacrylic acid (NIPMAm/MAA). These IOPs are evaluated for their stability and effectiveness in capturing antimicrobial peptides (AMPs) from biological sources, using American alligator plasma as a model system. The captured peptides were analyzed using tandem mass spectrometry (LC-MS/MS), employing electron-transfer/higher-energy collision dissociation (EThcD) fragmentation techniques, and data analysis was conducted with PEAKS Xpro software. Results demonstrated that the functionalized IOPs efficiently captured a variety of peptides, including potential AMPs, highlighting their potential utility in discovering novel bioactive peptides. This study contributes to the broader research effort in combating antibiotic resistance and exploring the reptilian host-peptidome.
  • Item (Restricted)
    Enhancing Decision Support Systems in Smart Systems by Using Advanced AI Tools and Blockchain
    (Towson University, 2025) Alzahrani, Mohammed; Liao, Weixian
    Decision Support Systems (DSSs) are computer-based systems used to enhance decision-making in various fields, including transportation and education. These systems take into consideration all alternatives involved in decision-making, ensuring that decisions are made accurately and effectively. However, decision-making can be challenging due to the involvement of multiple criteria and large amounts of data, which must be carefully considered. These factors can negatively impact decisions, affecting business or company goals. This dissertation introduces advanced Artificial Intelligence (AI) tools and blockchain to address decision-making issues through two case studies: one in smart transportation and the other in smart education. It demonstrates the effectiveness of Multi-Task Learning (MTL), sentiment analysis, and blockchain in enhancing decision-making in smart education and transportation systems. The first case study examines how MTL enhances smart systems, particularly in the context of smart transportation, across various categories. Numerous related tasks in smart transportation require efficient handling. MTL trains multiple associated tasks together, such as traffic flow and traffic speed prediction, sharing their features within one model, which makes the model more robust, efficient, and scalable because it considers the patterns and representations shared among the tasks. Additionally, MTL allows new tasks to be trained from a pre-trained model instead of from scratch. This leads to more accurate predictions and decision-making. The second case study illustrates how sentiment analysis and blockchain with smart contracts can enhance the accuracy of decision-making in smart systems, particularly in improving the scholarship approval process. Because the scholarship approval process involves multiple factors, using smart contracts with blockchain can enhance decision-making by automating the process.
    Blockchain, based on the IBFT 2.0 consensus, is used to manage and record student transcripts, supplementary documents, scholarship decisions, and tuition and salary distribution in a transparent, immutable, and auditable way. Additionally, the DistilRoBERTa transformer model is used for sentiment analysis to address the limitations of unstructured data (e.g., students' supplementary documents) and determine whether the tone is positive, negative, or neutral. This enhances prediction accuracy, and consequently decision-making in the scholarship process, by assigning a confidence score and label to each student document.
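Hard parameter sharing, the MTL layout described above, can be sketched minimally: one shared representation feeds a separate head per related task, such as traffic flow and traffic speed. This is an illustrative forward pass only; the layer sizes and the untrained random weights are assumptions, not the dissertation's architecture or training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hard parameter sharing: one shared layer feeds one head per task
# (e.g., traffic flow and traffic speed). Sizes are illustrative.
W_shared = rng.normal(size=(8, 16))   # shared by both tasks
W_flow = rng.normal(size=(16, 1))     # task-specific head: flow
W_speed = rng.normal(size=(16, 1))    # task-specific head: speed

def forward(x):
    h = np.tanh(x @ W_shared)         # features learned jointly
    return h @ W_flow, h @ W_speed    # per-task outputs

flow_pred, speed_pred = forward(rng.normal(size=(4, 8)))
```

Because gradients from both heads would update `W_shared`, the shared features must serve both tasks, which is the mechanism behind the robustness and transfer benefits the abstract describes.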
  • Item (Restricted)
    A Unified Deep Learning Framework for Simultaneous Segmentation and Multiclass Classification of Chronic Wounds
    (Wayne State University, 2025) Alhababi, Mustafa; Malik, Hafiz; Auner, Gregory
    Chronic wounds affect millions of people worldwide, posing significant challenges for healthcare systems and a heavy economic burden globally. The segmentation and classification (S&C) of chronic wounds are critical for wound care management and diagnosis, aiding clinicians in selecting appropriate treatments. Existing approaches have used either traditional machine learning or deep learning for S&C. However, most focus on binary classification, with few addressing multi-class classification, and they often show degraded performance for pressure and diabetic wounds. Wound segmentation has been largely limited to foot ulcer images, and there is no unified diagnostic tool for both S&C tasks. To address these gaps, we developed a unified approach that performs S&C simultaneously. For segmentation, we proposed Attention-Dense-UNet (Att-d-UNet), and for classification, we introduced a feature concatenation-based method. Our framework segments wound images using Att-d-UNet, then classifies them into one of the wound types using our proposed method. We evaluated our models on publicly available wound classification datasets (AZH and Medetec) and segmentation datasets (FUSeg and AZH). To test the unified approach, we extended the wound classification datasets by generating segmentation masks for Medetec and AZH images. The proposed unified approach achieved 90% accuracy and an 86.55% dice score on the Medetec dataset, and 81% accuracy and an 86.53% dice score on the AZH dataset. These results demonstrate the effectiveness of our separate models and our unified approach for wound S&C. Wound segmentation also aids in measuring wound area, which in turn assists in analyzing wound healing progress. In this work, we further present a deep learning-based segmentation approach, Dual-UNet, for precisely segmenting diabetic foot ulcer images. Foot ulcer images are passed to Dual-UNet to generate a mask image of the segmented wound area.
    In our proposed approach, two UNets are used, each comprising an encoder, an atrous spatial pyramid pooling (ASPP) block, a decoder with skip connections, and an output block, to improve segmentation results. We assessed the framework through experiments on two benchmark datasets: the foot ulcer segmentation (FUSeg) challenge dataset and the AZH segmentation dataset. We also evaluated Dual-UNet in cross-dataset validation to demonstrate the generalization capability of the proposed approach. Both quantitative and visual results demonstrate the effectiveness of the proposed framework for segmenting chronic diabetic wounds. We also propose a novel lightweight fused-DenseNet method capable of reliably classifying multiple types of chronic wounds. Our method comprises a fully trained and a partially trained DenseNet model, fused to form an effective multiclass wound classification approach. We introduce the GeLU activation function to tackle the dying-ReLU problem and to improve performance, learning, and training efficiency. Further, we add dense and dropout layers along with L2 regularization to counter model overfitting. We assessed the performance of our lightweight model on the standard Medetec and AZH datasets, as well as their augmented versions. We employed multiple augmentation techniques to increase the number and diversity of samples in these datasets to tackle overfitting and class imbalance. Experimental evaluation on AZH, Medetec, and augmented versions of both datasets confirms the efficacy of our proposed method for multiclass wound classification.
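The dice scores reported above use the standard Dice coefficient for binary masks, 2|A∩B| / (|A| + |B|). A minimal implementation of the metric (the models themselves are not reproduced here) looks like this:

```python
import numpy as np

def dice_score(pred_mask, true_mask, eps=1e-7):
    """Dice coefficient between two binary segmentation masks:
    2 * |A intersect B| / (|A| + |B|), with eps guarding empty masks."""
    pred = np.asarray(pred_mask, dtype=bool)
    true = np.asarray(true_mask, dtype=bool)
    inter = np.logical_and(pred, true).sum()
    return (2.0 * inter + eps) / (pred.sum() + true.sum() + eps)
```

A prediction covering two pixels where the ground truth covers one of them scores 2·1/(2+1) = 2/3, while a perfect match scores 1.0.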
  • Item (Restricted)
    HOW DO RISK MANAGEMENT PRACTICES MEDIATE THE RELATIONSHIP BETWEEN CYBERSECURITY STRATEGY IMPLEMENTATION AND ORGANIZATIONAL PERFORMANCE?
    (Marymount University, 2025) Aldawsari, Najla; Mbaziira, Alex
    In the era of rapid digital transformation and increasing interconnectivity, healthcare organizations face an alarming rise in sophisticated cyber threats. Despite considerable global investment in cybersecurity, healthcare institutions continue to experience costly ransomware attacks, exposing persistent vulnerabilities in cyber risk governance. This study empirically examines how risk management practices mediate the relationship between cybersecurity strategy implementation and organizational performance. Grounded in General Deterrence Theory, the research utilizes a quantitative methodology to analyze data collected from 269 senior cybersecurity professionals in Saudi Arabia. Findings reveal that risk management practices significantly enhance the effectiveness of cybersecurity strategies. Organizations with fully integrated risk management frameworks reported higher perceived effectiveness and better alignment with business outcomes. Mediation analysis confirmed that integration, not the frequency of risk assessments, plays a critical role in translating cybersecurity initiatives into improved organizational performance. Furthermore, respondents overwhelmingly affirmed the financial and strategic benefits of cybersecurity investments, particularly through mechanisms such as multi-factor authentication, continuous employee training, and cultivating a cybersecurity-aware culture. Widely used frameworks like the NIST Cybersecurity Framework and HIPAA were associated with stronger organizational resilience. This research fills a critical gap in the existing literature by providing empirical insights into how strategic risk management influences the impact of cybersecurity on performance. The findings underscore the importance of embedding cybersecurity into broader risk governance structures and offer practical guidance to healthcare organizations seeking to strengthen their cybersecurity posture.
  • Item (Restricted)
    Nonlinear Kalman Filtering for Systems under the Influence of State-Dependent Noises
    (The Pennsylvania State University, 2025) Alsaggaf, Abdulrahman; Ebeigbe, Donald
    Kalman Filtering (KF) theory stands as a cornerstone in the field of dynamic state estimation, but it continues to encounter persistent challenges, particularly with respect to nonlinear systems and state-dependent noise. While established variants such as the Extended Kalman Filter (EKF) and Unscented Kalman Filter (UKF) have achieved considerable prominence for their utility in complex estimation problems, their foundational assumption of zero-mean, Gaussian noise is often at odds with some physical systems and their observations. Indeed, practical engineering applications — ranging from autonomous vehicles navigating uncertain environments to financial models tracking volatile markets, and robotic sensors operating under fluctuating conditions — reveal the prevalence of noise that is not only non-Gaussian and biased, but also intricately linked to the system state. Such conditions can significantly undermine estimation accuracy and may even precipitate filter divergence. Accordingly, there is a pressing need for filtering methodologies that are both more resilient and more attuned to the nuances of real-world systems that are influenced by state-dependent noises. This dissertation seeks to address these gaps through several key contributions. First, it introduces a novel nonlinear Kalman Filtering approach that explicitly accommodates non-zero-mean and state-dependent noise within both process and measurement models. Second, it introduces a structured framework for noise modeling, seamlessly integrating these characteristics into a revised prediction-correction paradigm. Third, the methodology is extended to encompass systems lacking direct measurement-to-state correspondences and is shown to be compatible with arbitrary nonlinear transformations, thereby broadening its practical scope. Fourth, rigorous theoretical guarantees are established, demonstrating that the proposed filter achieves unbiasedness and minimum variance under well-defined conditions. 
Fifth, the principle underlying state-dependent noise Kalman filtering is extended to improve performance when stronger nonlinearities exist. The proposed Kalman filtering schemes preserve the well-known recursive structure of Kalman filters while maintaining computational tractability. A comprehensive suite of empirical evaluations attests to the efficacy of the proposed approach. Across a spectrum of test scenarios, the proposed filters demonstrate the ability to give reliable state estimates by reducing estimation errors and improving robustness. These empirical findings not only reinforce the theoretical developments presented herein but also illustrate the filter's capacity to adapt to nonlinear systems characterized by intricate, state-dependent noise. Furthermore, this work draws attention to enduring limitations in current Kalman Filtering methodologies, including the need for more comprehensive convergence analyses and the development of robust strategies for handling systems influenced by state-dependent noise. Opportunities for future research emerge in several promising directions, including the design of adaptive filters leveraging machine learning for dynamic noise model adaptation, and the incorporation of supplementary sensing modalities to enhance error detection and mitigation. The formulation of a unified theoretical framework capable of accommodating a wide array of noise structures would represent a significant advancement for real-time state estimation. Addressing these open challenges promises not only to advance the field of nonlinear filtering theory but also to broaden its applicability to areas such as autonomous systems, sensor networks, economics, and healthcare. 
This dissertation strengthens the foundational theory of Kalman filtering and creates a path forward for sustained scholarly innovation in the modeling and estimation of systems that do not typically satisfy the assumptions required for the implementation of traditional Kalman filtering.
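The revised prediction-correction paradigm described above keeps the familiar Kalman recursion while letting the noise statistics depend on the state. As a generic illustration (not the dissertation's filter, whose derivation, bias terms, and guarantees are not given in the abstract), a scalar EKF step with state-dependent noise variances Q(x) and R(x) can be written as:

```python
def sdkf_step(x, P, z, f, h, F, Hj, q_of_x, r_of_x):
    """One predict-correct step of a scalar EKF whose process and
    measurement noise variances depend on the state. Illustrative
    sketch only; not the dissertation's filter.

    f, h           : state-transition and measurement functions
    F, Hj          : their derivatives (scalar Jacobians)
    q_of_x, r_of_x : state-dependent noise variances Q(x), R(x)
    """
    # Predict, evaluating the state-dependent process noise at the estimate.
    x_pred = f(x)
    P_pred = F(x) ** 2 * P + q_of_x(x)
    # Correct, with measurement noise evaluated at the predicted state.
    H = Hj(x_pred)
    S = H ** 2 * P_pred + r_of_x(x_pred)
    K = P_pred * H / S
    x_new = x_pred + K * (z - h(x_pred))
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new
```

The recursive structure is unchanged from the standard EKF; only the points at which Q and R are evaluated differ, which is what preserves the computational tractability noted in the abstract.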
  • Item (Restricted)
    Kinematic Synthesis and Analysis for Soft Robots with Compliant Mechanisms Using Fuzzy Logic and Neural Networks
    (Saudi Digital Library, 2025) Alhindi, Ahmed; Meng-Sang, Chew
    This dissertation presents a novel framework for the kinematic synthesis and analysis of Compliant Mechanisms (CMs) that leverages fuzzy logic and neural networks to address inherent uncertainties in their design and behavior. Traditional deterministic and probabilistic methods often fail to capture the full spectrum of CM performance or are computationally prohibitive. The core contribution is the development of a Fuzzy-Kinematic Synthesis Framework that reformulates mechanism design using fuzzy arithmetic. The classic Freudenstein's equation is transformed into a parametric fuzzy form, treating input and output angles as Triangular Fuzzy Numbers (TFNs) to enable region-based synthesis. Solving these equations yields fuzzy link lengths—defined regions encompassing all viable mechanism configurations. This framework quantifies performance uncertainty through a "Function Spread" metric, derived from the fuzzy output. Building on this, an Envelope-Driven Control Methodology is developed. Forward and Inverse Kinematic Envelopes define the achievable workspace and input-output relationships. Adaptive Neuro-Fuzzy Inference Systems (ANFIS) are applied to serve as efficient surrogates for kinematic problems, drastically reducing computational cost. A Mamdani-type Fuzzy Inference System enables generative design within the performance boundaries, creating adaptable systems from a single mechanism. The methodology demonstrates closed-loop control by synthesizing inputs to trace arbitrary paths within the positional envelope. The framework is validated through detailed case studies, including a compliant surgical grasper. Results show efficient handling of kinematic complexity, design optimization via performance envelopes, and robust prediction for mechanisms with complex sensitivity profiles.
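The region-based synthesis idea above rests on propagating triangular fuzzy numbers (TFNs) through kinematic relations and summarizing the result with a spread metric. The sketch below is an illustration of that machinery only: the TFN alpha-cut, extension-principle propagation by dense sampling, and a "function spread" taken here as the width of the output's support. The example relation `g` is a hypothetical four-bar input-output curve, not one of the dissertation's mechanisms.

```python
import numpy as np
from collections import namedtuple

# Triangular fuzzy number: support [left, right], peak membership at mode.
TFN = namedtuple("TFN", "left mode right")

def alpha_cut(t, alpha):
    """Interval [lo, hi] of a TFN at membership level alpha in [0, 1]."""
    return (t.left + alpha * (t.mode - t.left),
            t.right - alpha * (t.right - t.mode))

def fuzzy_output(func, t, alpha, samples=200):
    """Propagate a TFN through an arbitrary function at one alpha level
    (extension principle, approximated by dense sampling of the cut)."""
    lo, hi = alpha_cut(t, alpha)
    ys = func(np.linspace(lo, hi, samples))
    return ys.min(), ys.max()

def function_spread(func, t, samples=200):
    """Width of the output support (alpha = 0 cut): one scalar summarizing
    how input uncertainty maps to output uncertainty."""
    lo, hi = fuzzy_output(func, t, 0.0, samples)
    return hi - lo

# Hypothetical four-bar input-output relation psi = g(phi), for illustration.
g = lambda phi: np.arccos(np.clip(0.8 * np.cos(phi) - 0.1, -1.0, 1.0))
phi = TFN(np.radians(40), np.radians(45), np.radians(50))
spread = function_spread(g, phi)
```

Applying the same propagation to a fuzzified Freudenstein equation, with input and output angles as TFNs, is what yields the fuzzy link lengths and the performance envelopes described in the abstract.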
  • Item (Restricted)
    Capital Structure, ETF Flows, and Performance
    (Saudi Digital Library, 2025) Alkabbaa, Nayef; Kabir, Mohammad
    The aim of this dissertation is to study the impact of distinct financial variables, such as ETF flows and capital structure, on the performance of ETFs and firms. We investigate the heterogeneous effects of decomposed ETF flows (demand-driven, arbitrage-driven, and unexpected) on abnormal returns across six ETF categories. Using a sample of 424 U.S. equity ETFs from 2000 to 2023, we run panel regressions, quantile models, and two-stage least squares (2SLS) estimations. The findings are in line with return-chasing behavior and crowding dynamics: demand flows are significantly associated with underperformance in index and Smart Beta ETFs, while active ETFs show less consistent effects. These effects persist across lag structures and are most pronounced in higher-performing quantiles. Under high-volatility conditions, arbitrage flows improve alpha in most ETF classes. Unexpected flows generally lack predictive power, underscoring their idiosyncratic nature, although their impact is more pronounced in sector active ETFs. Our findings challenge the one-size-fits-all approach to ETF flow analysis, suggesting the importance of ETF classification when evaluating flow-performance relationships. We also study the relationship between capital structure and firm performance in the U.S. information technology (IT) sector during 2010-2022. Using panel data from 32 publicly listed IT firms (401 firm-year observations), we apply a comprehensive econometric framework including pooled OLS, fixed and random effects, quantile regression, 2SLS, and Pooled Mean Group ARDL. Return on equity and market capitalization are the two main performance measures, while capital structure is captured through total liabilities to total assets (debt ratio), cost of debt, and cost of capital. Results show a consistently negative impact of the cost of debt on both accounting- and market-based performance.
    At high leverage levels, the debt ratio has a nonlinear and distribution-sensitive effect and is positive in long-run models. Threshold and quantile regressions confirm heterogeneity in capital structure outcomes across firm types. Robustness checks validate the findings. Both papers highlight the distinct links between financial variables such as ETF flows and capital structure and variation in performance at both the fund and firm levels.
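The "unexpected" component of a flow decomposition is typically the residual of a predictive model fitted to the flow series. The dissertation's three-way split (demand-driven, arbitrage-driven, unexpected) is richer than this; the sketch below only illustrates the expected-vs-unexpected idea under an assumed AR(1) expectation model, fit by ordinary least squares.

```python
import numpy as np

def split_unexpected(flows):
    """Illustrative decomposition: fit flows_t = b0 + b1 * flows_{t-1} by
    least squares and treat the residual as the 'unexpected' component.
    Assumed AR(1) model, not the dissertation's decomposition.
    """
    flows = np.asarray(flows, dtype=float)
    x, y = flows[:-1], flows[1:]
    X = np.column_stack([np.ones_like(x), x])   # intercept + lagged flow
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    expected = X @ beta
    return expected, y - expected
```

A series that follows its expectation model exactly produces a zero unexpected component, consistent with the abstract's point that unexpected flows capture the idiosyncratic part of flow behavior.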

Copyright owned by the Saudi Digital Library (SDL) © 2025