Saudi Cultural Missions Theses & Dissertations
Permanent URI for this community: https://drepo.sdl.edu.sa/handle/20.500.14154/10
Search Results
Item Restricted: The Role of Artificial Intelligence in Project Management
University of Technology Sydney, 2024-11-11
Muryif Alshehri, Mohammed; Abdo, Peter

The increasing complexity of global projects has elevated the challenges in project management, necessitating the adoption of innovative solutions. This study investigates the transformative potential of Artificial Intelligence (AI) in project management, emphasizing its role in enhancing decision-making, risk management, and operational efficiency. Employing a systematic literature review methodology, the research synthesizes findings from 13 high-index journal articles to evaluate AI techniques, including machine learning, decision trees, and advanced predictive analytics. The study identifies AI’s ability to improve resource allocation, forecasting accuracy, and stakeholder engagement while mitigating risks and optimizing sustainability. Findings highlight integration challenges such as data quality, system compatibility, and resistance to change, which hinder the widespread adoption of AI tools. Despite these obstacles, AI demonstrates considerable benefits, including automation of routine tasks, enhanced cost estimation, and improved project timelines. Notably, AI-driven tools have achieved a 20% reduction in project completion times and a 15% decrease in costs due to proactive risk mitigation. This research provides actionable insights into the effective implementation of AI within the framework of traditional project management methodologies. It concludes that while AI presents significant opportunities to redefine project management practices, its successful adoption requires addressing technical and organizational challenges, along with fostering an adaptive cultural mindset.
This study lays the groundwork for future research aimed at leveraging AI to create sustainable, efficient, and resilient project management ecosystems.

Item Restricted: Quantifying and Profiling Echo Chambers on Social Media
Arizona State University, 2024
Alatawi, Faisal; Liu, Huan; Sen, Arunabha; Davulcu, Hasan; Shu, Kai

Echo chambers on social media have become a critical focus in the study of online behavior and public discourse. These environments, characterized by the ideological homogeneity of users and limited exposure to opposing viewpoints, contribute to polarization, the spread of misinformation, and the entrenchment of biases. While significant research has been devoted to proving the existence of echo chambers, less attention has been given to understanding their internal dynamics. This dissertation addresses this gap by developing novel methodologies for quantifying and profiling echo chambers, with the goal of providing deeper insights into how these communities function and how they can be measured. The first core contribution of this work is the introduction of the Echo Chamber Score (ECS), a new metric for measuring the degree of ideological segregation in social media interaction networks. The ECS captures both the cohesion within communities and the separation between them, offering a more nuanced approach to assessing polarization. By using a self-supervised Graph Auto-Encoder (EchoGAE), the ECS bypasses the need for explicit ideological labeling, instead embedding users based on their interactions and linguistic patterns. The second contribution is a Heterogeneous Information Network (HIN)-based framework for profiling echo chambers. This framework integrates social and linguistic features, allowing for a comprehensive analysis of the relationships between users, topics, and language within echo chambers.
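The exact ECS formula is defined in the dissertation; as a hedged illustration of the underlying idea only (the function name, embeddings, and community labels below are hypothetical, not the authors' definition), a segregation-style score can compare mean pairwise distances between and within communities in a user-embedding space such as the one EchoGAE produces:

```python
import numpy as np

def segregation_score(embeddings, labels):
    """Toy echo-chamber-style score: mean between-community distance
    divided by mean within-community distance. Higher means more
    segregated. Illustrative only, not the dissertation's ECS formula."""
    embeddings = np.asarray(embeddings, dtype=float)
    labels = np.asarray(labels)
    # All pairwise Euclidean distances between user embeddings
    diff = embeddings[:, None, :] - embeddings[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    same = labels[:, None] == labels[None, :]
    iu = np.triu_indices(len(labels), k=1)  # each unordered pair once
    within = dist[iu][same[iu]]
    between = dist[iu][~same[iu]]
    return between.mean() / within.mean()

# Two tight clusters far apart: strongly segregated, score well above 1
emb = [[0, 0], [0.1, 0], [5, 5], [5, 5.1]]
lab = [0, 0, 1, 1]
print(segregation_score(emb, lab))
```

Users embedded close to same-community members but far from other communities drive the score up; a well-mixed population keeps it near 1.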
By combining community detection, topic modeling, and language analysis, the profiling method reveals how discourse and group behavior reinforce ideological boundaries. Through the application of these methods to real-world social media datasets, this dissertation demonstrates their effectiveness in identifying polarized communities and profiling their internal discourse. The findings highlight how linguistic homophily and social identity theory shape echo chambers and contribute to polarization. Overall, this research advances the understanding of echo chambers by moving beyond detection to explore their structural and linguistic complexities, offering new tools for measuring and addressing polarization on social media platforms.

Item Restricted: LightRefineNet-SfmLearner: Semi-Supervised Visual Depth, Ego-Motion and Semantic Mapping
Newcastle University, 2024
Alshadadi, Abdullah Turki; Holder, Chris

The advancement of autonomous vehicles has garnered significant attention, particularly in the development of complex software stacks that enable navigation, decision-making, and planning. Among these, the Perception [1] component is critical, allowing vehicles to understand their surroundings and maintain localisation. Simultaneous Localisation and Mapping (SLAM) plays a key role by enabling vehicles to map unknown environments while tracking their positions. Historically, SLAM has relied on heuristic techniques, but with the advent of the "Perception Age" [2], research has shifted towards more robust, high-level environmental awareness driven by advancements in computer vision and deep learning. In this context, MLRefineNet [3] has demonstrated superior robustness and faster convergence in supervised learning tasks. However, despite its improvements, MLRefineNet struggled to fully converge within 200 epochs when integrated into SfmLearner. Nevertheless, clear improvements were observed with each epoch, indicating its potential for enhancing performance.
SfmLearner [4] is a state-of-the-art deep learning model for visual odometry, known for its competitive depth and pose estimation. However, it lacks high-level understanding of the environment, which is essential for comprehensive perception in autonomous systems. This paper addresses this limitation by introducing a multi-modal shared encoder-decoder architecture that integrates both semantic segmentation and depth estimation. The inclusion of high-level environmental understanding not only enhances scene interpretation—such as identifying roads, vehicles, and pedestrians—but also improves the depth estimation of SfmLearner. This multi-task learning approach strengthens the model’s overall robustness, marking a significant step forward in the development of autonomous vehicle perception systems.

Item Restricted: Balancing Innovation and Protection: Is AI Regulation the Future of Saudi FinTech?
King's College London, 2024-09
Alkhathlan, Alaa Saad; Keller, Anat

This study investigates the implications of artificial intelligence in the Saudi FinTech sector, focusing on the evolving regulatory landscape. While AI holds substantial promise for driving innovation, it also poses ethical and practical challenges such as data privacy, algorithmic transparency, and fairness. This study examines the current regulatory framework in Saudi Arabia, highlighting efforts like the AI Ethics Principles and the Personal Data Protection Law. Despite these measures, significant gaps remain due to the voluntary nature of the AI Ethics Principles and Generative AI Guidelines, resulting in inconsistent implementation. The primary aim of this study is to guide policymakers on regulating AI in the Saudi FinTech sector while preserving innovation. Key recommendations urge policymakers to develop regulations based on international best practices, addressing issues such as data privacy, algorithmic biases, and systemic risks.
Emphasising the need for continuous dialogue among regulators, FinTech companies, and international partners, the study also calls for enhancing human-machine collaboration, establishing regulatory sandboxes, creating an AI Oversight Committee, and supporting research to better understand AI's implications. By aligning with Saudi Vision 2030 goals, these recommendations aim to strengthen Saudi Arabia's AI regulatory framework, support sustainable growth in the FinTech sector, and build public trust in AI-driven financial services.

Item Restricted: AI in Telehealth for Cardiac Care: A Literature Review
University of Technology Sydney, 2024-03
Alzahrani, Amwaj; Li, Lifu

This literature review investigates the integration of artificial intelligence (AI) in telehealth, with a specific focus on its applications in cardiac care. The review explores how AI enhances remote patient monitoring, facilitates personalized treatment plans, and improves healthcare accessibility for patients with cardiac conditions. AI-driven tools, such as wearable devices and implantable medical devices, have demonstrated significant potential in tracking critical health parameters, enabling timely interventions, and fostering proactive patient care. Additionally, AI-powered chatbots and telehealth platforms provide patients with real-time support and guidance, enhancing engagement and adherence to treatment regimens. The findings reveal that AI contributes to improving healthcare outcomes by enabling early detection of cardiac events, tailoring treatment plans to individual patient needs, and expanding access to care for underserved populations. However, the integration of AI in telehealth is not without challenges. Ethical considerations, such as ensuring data privacy, managing biases in AI algorithms, and addressing regulatory complexities, emerge as critical areas requiring attention.
Furthermore, technological limitations, including the need for robust validation and patient acceptance of AI technologies, underscore the importance of bridging the gap between research and real-world implementation. This review also examines future trends, including the integration of blockchain technology with AI to enhance data security and privacy in telehealth systems. Advancements in machine learning and the Internet of Things (IoT) are paving the way for innovative solutions, such as secure remote monitoring and personalized rehabilitation programs. While AI holds transformative potential in revolutionizing telehealth services for cardiac patients, addressing these challenges is imperative to ensure equitable, effective, and patient-centered care. This review underscores the need for interdisciplinary collaboration and regulatory oversight to unlock the full potential of AI in telehealth and improve outcomes for cardiac patients globally.

Item Restricted: Utilizing Artificial Intelligence to Develop Machine Learning Techniques for Enhancing Academic Performance and Education Delivery
University of Technology Sydney, 2024
Allotaibi, Sultan; Alnajjar, Husam

Artificial Intelligence (AI), and in particular its sub-discipline of Machine Learning (ML), has impacted many industries thanks to its high-level data-handling capacities, and the education industry is no exception. This paper discusses the various AI technologies coupled with ML models that enhance learners' performance and the delivery of education systems. The research aims to address the growing need for individualized educational interventions driven by diverse student needs, high dropout rates, and fluctuating academic performance. AI and ML can analyze large data sets to recognize students who are academically at risk, gauge course completion and learning retention rates, and suggest interventions for students who may require them.
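As a small, hypothetical sketch of the kind of at-risk identification described above (the features, numbers, and model choice are illustrative assumptions, not the study's actual data or method), a logistic-regression classifier over basic student records might look like:

```python
import numpy as np

# Synthetic student records: [attendance_rate, average_grade]; label 1 = at risk.
# These values are invented for illustration only.
X = np.array([[0.95, 85], [0.90, 78], [0.40, 52], [0.35, 48],
              [0.85, 70], [0.30, 55], [0.92, 88], [0.45, 50]], dtype=float)
y = np.array([0, 0, 1, 1, 0, 1, 0, 1])

# Standardize features, then fit logistic regression by gradient descent
Xs = (X - X.mean(0)) / X.std(0)
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(Xs @ w + b)))   # predicted risk probabilities
    g = p - y                              # gradient of the log loss
    w -= 0.1 * Xs.T @ g / len(y)
    b -= 0.1 * g.mean()

# Flag students whose predicted risk exceeds 50%
risk = 1 / (1 + np.exp(-(Xs @ w + b)))
print((risk > 0.5).astype(int))
```

In practice such a model would be trained on institutional records, validated, and calibrated before informing any intervention decisions.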
The study is situated in a growing computer-enhanced learning environment characterized by e-learning, blended learning, and intelligent tutoring. These technologies offer innovative ways to streamline administrative procedures, deliver individualized tutoring, and sustain students' attention. Using predictive analytics and intelligent tutors, AI tools can bring real-time student data into the classroom so that educators can improve outcomes, reducing dropout rates while raising performance. This research not only illustrates the current promise of AI/ML in education but also addresses problems that arise around data privacy, ethics, and technological equity. To counter social imbalance in its use, the study seeks to build efficient, accountable AI models and architectures that are available to all students as a foundation for practical education. Student input also indicates that preparing school learning environments for further change requires increased use of AI/ML in learning processes.

Item Restricted: The role and use of Artificial Intelligence (AI tools) in audits of financial statements
Aston University, 2024-09
Alsaedi, Amal; George, Salijen

Integrating artificial intelligence (AI) in the auditing function holds significant potential to transform the industry. As firms and stakeholders increasingly recognise the value of, and demand for, audit quality, the accuracy, validity, and integrity of information generated by audit processes have become a vital consideration. Integrating AI into audit processes would be viewed as advancing audit techniques. However, the current limited adoption of this technology by audit firms raises concerns about their awareness of its transformative potential. This study aims to identify AI tools used in auditing and their impact on the audit process and quality. The study bridges the existing gap using a secondary exploratory method.
Qualitative data were collected from transparency reports by the Big Four audit firms (KPMG, Deloitte, EY, and PwC) and from the FRC's audit quality inspection reports for the four firms. For recency, only reports published between 2020 and 2023 were considered. A thematic analysis of the data reveals that adoption of AI and data analytics in auditing is still low, and that the Big Four firms are actively promoting increased adoption. The results demonstrate a notable disparity between potential and current applications, as shown by a clear gap between the publicised potential of AI and data analytics and their implementation within audit processes.

Item Restricted: Artificial Intelligence Systems: Exploring AI Systems’ Patentability in the United Kingdom and the European Patent Office
University of Liverpool, 2024
Alarawi, Khalid; Jacques, Sabine

The topic of Artificial Intelligence (AI) has become a common interest for the public and corporations on a global level. Through a legal analysis of case law in the United Kingdom and at the European Patent Office (EPO), this paper argues that AI systems are indeed patentable in both jurisdictions and that, despite their differences, the outcome of patenting AI is likely to be the same in each; nevertheless, further clarification is still needed regarding different AI systems.
Given that AI systems are treated as a hybrid between computer programs and mathematical methods, deeper exploration of their patentability across different types and different technical applications is needed.

Item Restricted: Leveraging Brain-Computer Interface Technology to Interpret Intentions and Enable Cognitive Human-Computer Interaction
University of Manchester, 2024
Alsaddique, Luay; Breitling, Rainer

In this paper, I present the development, integration, and evaluation of a Brain–Computer Interface (BCI) system which showcases the accessibility and usability of a BCI headset for interacting with external devices and services. The paper initially provides a detailed survey of the history of BCI technology and gives a comprehensive overview of BCI paradigms, the underpinning biology of the brain, current BCI technologies, recent advances in the field, the BCI headset market, and prospective applications of the technology. The research focuses on leveraging BCI headsets within a BCI platform to interface with external endpoints through the motor imagery BCI paradigm. I present the design, implementation, and evaluation of a fully functioning, efficient, and versatile BCI system which can trigger real-world commands in devices and digital services. The BCI system demonstrates its versatility through use cases such as controlling IoT devices, infrared (IR) based devices, and interacting with advanced language models. The system's performance was quantified across various conditions, achieving detection probabilities exceeding 95%, with latency as low as 1.4 seconds when hosted on a laptop and 2.1 seconds when hosted on a Raspberry Pi. The paper concludes with a detailed analysis of the limitations and potential improvements of the newly developed system, and its implications for possible applications.
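As a hedged sketch of how classified motor-imagery outputs might be mapped to real-world commands (the labels, command names, and confidence threshold below are hypothetical stand-ins, not the system's actual interface):

```python
# Hypothetical mapping from motor-imagery classes to device actions;
# the real system targets IoT devices, IR devices, and language models.
COMMANDS = {
    "left_hand":  lambda: "light_on",
    "right_hand": lambda: "light_off",
    "feet":       lambda: "tv_ir_power",
}

def dispatch(label, confidence, threshold=0.95):
    """Trigger a command only when the classifier is confident enough,
    mirroring the >95% detection probabilities reported above."""
    if confidence < threshold:
        return None  # reject uncertain detections instead of acting
    return COMMANDS[label]()

print(dispatch("left_hand", 0.97))  # -> light_on
print(dispatch("feet", 0.60))       # -> None (rejected)
```

Gating on classifier confidence is one simple way such a system can trade a little latency for far fewer false activations.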
It also includes a comparative evaluation of latency, power efficiency, and usability when hosting the BCI system on a laptop versus a Raspberry Pi.

Item Restricted: Towards Representative Pre-training Corpora for Arabic Natural Language Processing
Clarkson University, 2024-11-30
Alshahrani, Saied Falah A; Matthews, Jeanna

Natural Language Processing (NLP) encompasses various tasks, problems, and algorithms that analyze human-generated textual corpora or datasets to produce insights, suggestions, or recommendations. These corpora and datasets are crucial for any NLP task or system, as they convey social concepts, including views, culture, heritage, and perspectives of native speakers. However, a corpus or dataset in a particular language does not necessarily represent the culture of its native speakers. Some textual corpora or datasets are written organically by native speakers, while others may be written by non-native speakers, translated from other languages, or generated using advanced NLP technologies, such as Large Language Models (LLMs). Yet, in the era of Generative Artificial Intelligence (GenAI), it has become increasingly difficult to distinguish between human-generated texts and machine-translated or machine-generated texts, especially when these different types of texts are combined to create large corpora for pre-training NLP tasks, systems, and technologies. Therefore, there is an urgent need to study the degree to which pre-training corpora represent native speakers and reflect their values, beliefs, cultures, and perspectives, and to investigate the potentially negative implications of using unrepresentative corpora for NLP tasks, systems, and technologies.
One of the most widely utilized pre-training corpora for NLP is Wikipedia articles, especially for low-resource languages like Arabic, due to their large multilingual content collection and massive array of quantifiable metadata. In this dissertation, we study the representativeness of Arabic NLP pre-training corpora, focusing specifically on the three Arabic Wikipedia editions: Arabic Wikipedia, Egyptian Arabic Wikipedia, and Moroccan Arabic Wikipedia. Our primary goals are to 1) raise awareness of the potential negative implications of using unnatural, inorganic, and unrepresentative corpora—those generated or translated automatically without the input of native speakers, 2) find better ways to promote transparency and ensure that native speakers are involved through metrics, metadata, and online applications, and 3) strive to reduce the impact of automatically generated or translated contents by using machine learning algorithms to identify or detect them automatically.

To do this, firstly, we analyze the metadata of the three Arabic Wikipedia editions, focusing on differences using collected statistics such as total pages, articles, edits, registered and active users, administrators, and top editors. We document issues related to the automatic creation and translation of articles (content pages) from English to Arabic without review, revision, or supervision by native speakers. Secondly, we quantitatively study the performance implications of using unnatural, inorganic corpora that do not represent native speakers and are primarily generated using automation, such as bot-created articles or template-based translation. We intrinsically evaluate the performance of two main NLP tasks—Word Representation and Language Modeling—using the Word Analogy and Fill-Mask evaluation tasks on our two newly created datasets: the Arab States Analogy Dataset and the Masked Arab States Dataset.
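The Word Analogy evaluation mentioned above can be illustrated with a toy example (the hand-crafted vectors and vocabulary below are assumptions for demonstration; the dissertation evaluates real embeddings on its Arab States Analogy Dataset):

```python
import numpy as np

# Toy word vectors, hand-crafted so the analogy works out exactly;
# real evaluations use embeddings trained on the corpora under study.
vecs = {
    "Riyadh":       np.array([1.0, 0.0, 1.0]),
    "Saudi_Arabia": np.array([1.0, 0.0, 0.0]),
    "Cairo":        np.array([0.0, 1.0, 1.0]),
    "Egypt":        np.array([0.0, 1.0, 0.0]),
    "Amman":        np.array([0.5, 0.5, 1.0]),
    "Jordan":       np.array([0.5, 0.5, 0.0]),
}

def analogy(a, b, c, vocab):
    """Solve a : b :: c : ? by vector offset, excluding the query words."""
    target = vocab[b] - vocab[a] + vocab[c]
    def cos(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    candidates = {w: v for w, v in vocab.items() if w not in (a, b, c)}
    return max(candidates, key=lambda w: cos(candidates[w], target))

# "Riyadh is to Saudi_Arabia as Cairo is to ?" -> Egypt
print(analogy("Riyadh", "Saudi_Arabia", "Cairo", vecs))
```

An embedding trained on representative, native-written text should answer such capital-to-country analogies correctly; systematic failures are one signal of an unrepresentative corpus.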
Thirdly, we assess the quality of Wikipedia corpora at the edition level rather than the article level by quantifying bot activities and enhancing Wikipedia’s Depth metric. After analyzing the limitations of the existing Depth metric, we propose a bot-free version, called the DEPTH+ metric, that excludes bot-created articles and bot-made edits; we present its mathematical definition, highlight its features and limitations, and explain how this new metric more accurately reflects the depth of human collaboration within the Wikipedia project. Finally, we address the issue of template translation in the Egyptian Arabic Wikipedia by identifying template-translated articles and their characteristics. We explore the content of the three Arabic Wikipedia editions in terms of density, quality, and human contributions, and employ the resulting insights to build multivariate machine learning classifiers that leverage article metadata to automatically detect template-translated articles. Lastly, we deploy the best-performing classifier publicly as an online application and release the extracted, filtered, labeled, and preprocessed datasets so the research community can benefit from our datasets and the web-based detection system.
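Wikipedia's public Depth indicator, which the DEPTH+ work builds on, combines edit and page counts. The sketch below applies that public formula and then a hedged guess at a bot-free variant; the exact DEPTH+ definition is the one given in the dissertation, not this code, and the counts are invented toy numbers.

```python
def depth(edits, articles, total_pages):
    """Wikipedia's public 'depth' indicator:
    (edits/articles) * (non_articles/articles) * (1 - articles/total_pages)."""
    non_articles = total_pages - articles
    return (edits / articles) * (non_articles / articles) * (1 - articles / total_pages)

def depth_plus(edits, bot_edits, articles, bot_articles, total_pages):
    """Hedged sketch of a bot-free depth: subtract bot-made edits and
    bot-created articles before applying the same formula.
    Illustrative only; the dissertation defines the actual DEPTH+ metric."""
    return depth(edits - bot_edits,
                 articles - bot_articles,
                 total_pages - bot_articles)

# Toy edition statistics: compare the naive depth with the bot-free variant
print(depth(1_000_000, 100_000, 400_000))
print(depth_plus(1_000_000, 600_000, 100_000, 60_000, 400_000))
```

Because heavy bot activity can distort every term of the formula, the naive and bot-free values can diverge sharply, which is precisely why an edition-level, bot-aware metric is useful.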