Saudi Cultural Missions Theses & Dissertations
Permanent URI for this community: https://drepo.sdl.edu.sa/handle/20.500.14154/10
53 results
Search Results
Item Restricted
LightRefineNet-SfmLearner: Semi-Supervised Visual Depth, Ego-Motion and Semantic Mapping (Newcastle University, 2024) Alshadadi, Abdullah Turki; Holder, Chris

The advancement of autonomous vehicles has garnered significant attention, particularly the development of the complex software stacks that enable navigation, decision-making, and planning. Among these, the Perception [1] component is critical, allowing vehicles to understand their surroundings and maintain localisation. Simultaneous Localisation and Mapping (SLAM) plays a key role by enabling vehicles to map unknown environments while tracking their positions. Historically, SLAM has relied on heuristic techniques, but with the advent of the "Perception Age" [2], research has shifted towards more robust, high-level environmental awareness driven by advances in computer vision and deep learning. In this context, MLRefineNet [3] has demonstrated superior robustness and faster convergence in supervised learning tasks. However, despite these improvements, MLRefineNet struggled to fully converge within 200 epochs when integrated into SfmLearner; nevertheless, clear gains were observed with each epoch, indicating its potential for enhancing performance. SfmLearner [4] is a state-of-the-art deep learning model for visual odometry, known for its competitive depth and pose estimation, but it lacks the high-level understanding of the environment that is essential for comprehensive perception in autonomous systems. This paper addresses this limitation by introducing a multi-modal shared encoder-decoder architecture that integrates both semantic segmentation and depth estimation. The inclusion of high-level environmental understanding not only enhances scene interpretation, such as identifying roads, vehicles, and pedestrians, but also improves the depth estimation of SfmLearner.
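Training segmentation and depth from a shared encoder-decoder typically means optimising a weighted sum of a per-task loss. As a minimal illustrative sketch only (not the thesis's implementation; the loss weights `w_seg` and `w_depth` and the L1 depth term are assumptions), such a multi-task objective might look like:

```python
import numpy as np

def multitask_loss(seg_logits, seg_labels, depth_pred, depth_true,
                   w_seg=1.0, w_depth=0.5):
    """Toy multi-task objective: per-pixel cross-entropy for semantic
    segmentation plus an L1 term for depth, combined with fixed weights.
    Shapes: seg_logits (H, W, C), seg_labels (H, W) ints, depth_* (H, W)."""
    # Softmax over the class axis, numerically stabilised
    z = seg_logits - seg_logits.max(axis=-1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    h, w = seg_labels.shape
    # Mean cross-entropy of the true class at each pixel
    ce = -np.log(probs[np.arange(h)[:, None],
                       np.arange(w)[None, :],
                       seg_labels] + 1e-12).mean()
    # Mean absolute depth error
    l1 = np.abs(depth_pred - depth_true).mean()
    return w_seg * ce + w_depth * l1

rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 4, 3))      # 4x4 image, 3 semantic classes
labels = rng.integers(0, 3, size=(4, 4))
loss = multitask_loss(logits, labels, rng.random((4, 4)), rng.random((4, 4)))
print(float(loss))
```

In practice the relative weighting of the two tasks is itself a design choice (fixed scalars here; learned uncertainty weighting is a common alternative).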
This multi-task learning approach strengthens the model's overall robustness, marking a significant step forward in the development of autonomous vehicle perception systems.

Item Restricted
Balancing Innovation and Protection: Is AI Regulation the Future of Saudi FinTech? (King's College London, 2024-09) Alkhathlan, Alaa Saad; Keller, Anat

This study investigates the implications of artificial intelligence in the Saudi FinTech sector, focusing on the evolving regulatory landscape. While AI holds substantial promise for driving innovation, it also poses ethical and practical challenges such as data privacy, algorithmic transparency, and fairness. The study examines the current regulatory framework in Saudi Arabia, highlighting efforts such as the AI Ethics Principles and the Personal Data Protection Law. Despite these measures, significant gaps remain because of the voluntary nature of the AI Ethics Principles and the Generative AI Guidelines, resulting in inconsistent implementation. The primary aim of this study is to guide policymakers on regulating AI in the Saudi FinTech sector while preserving innovation. Key recommendations urge policymakers to develop regulations based on international best practices, addressing issues such as data privacy, algorithmic bias, and systemic risk. Emphasising the need for continuous dialogue among regulators, FinTech companies, and international partners, the study also calls for enhancing human-machine collaboration, establishing regulatory sandboxes, creating an AI Oversight Committee, and supporting research to better understand AI's implications.
By aligning with Saudi Vision 2030 goals, these recommendations aim to strengthen Saudi Arabia's AI regulatory framework, support sustainable growth in the FinTech sector, and build public trust in AI-driven financial services.

Item Restricted
AI in Telehealth for Cardiac Care: A Literature Review (University of Technology Sydney, 2024-03) Alzahrani, Amwaj; Li, Lifu

This literature review investigates the integration of artificial intelligence (AI) in telehealth, with a specific focus on its applications in cardiac care. The review explores how AI enhances remote patient monitoring, facilitates personalized treatment plans, and improves healthcare accessibility for patients with cardiac conditions. AI-driven tools, such as wearable devices and implantable medical devices, have demonstrated significant potential in tracking critical health parameters, enabling timely interventions, and fostering proactive patient care. Additionally, AI-powered chatbots and telehealth platforms provide patients with real-time support and guidance, enhancing engagement and adherence to treatment regimens. The findings reveal that AI contributes to improved healthcare outcomes by enabling early detection of cardiac events, tailoring treatment plans to individual patient needs, and expanding access to care for underserved populations. However, the integration of AI in telehealth is not without challenges. Ethical considerations, such as ensuring data privacy, managing biases in AI algorithms, and addressing regulatory complexities, emerge as critical areas requiring attention. Furthermore, technological limitations, including the need for robust validation and for patient acceptance of AI technologies, underscore the importance of bridging the gap between research and real-world implementation. This review also examines future trends, including the integration of blockchain technology with AI to enhance data security and privacy in telehealth systems.
Advancements in machine learning and the Internet of Things (IoT) are paving the way for innovative solutions, such as secure remote monitoring and personalized rehabilitation programs. While AI holds transformative potential for revolutionizing telehealth services for cardiac patients, addressing these challenges is imperative to ensure equitable, effective, and patient-centered care. This review underscores the need for interdisciplinary collaboration and regulatory oversight to unlock the full potential of AI in telehealth and improve outcomes for cardiac patients globally.

Item Restricted
Utilizing Artificial Intelligence to Develop Machine Learning Techniques for Enhancing Academic Performance and Education Delivery (University of Technology Sydney, 2024) Allotaibi, Sultan; Alnajjar, Husam

Artificial intelligence (AI), and particularly the related sub-discipline of machine learning (ML), has impacted many industries, and the education industry is no exception because of its high-level data-handling capacities. This paper discusses the various AI technologies, coupled with ML models, that enhance learners' performance and the delivery of education systems. The research aims to address the growing need for individualized educational interventions arising from diverse student needs, high dropout rates, and fluctuating academic performance. AI and ML can analyze large datasets to recognize students who are academically at risk, gauge course completion and learning retention rates, and suggest interventions for students who may require them. The study is situated in a growing Computer-Enhanced Learning (CEL) environment characterized by e-learning, blended learning, and intelligent tutoring. These technologies offer innovative ways to enhance administrative procedures, deliver individualized tutorials, and capture students' attention.
Using predictive analytics and intelligent tutors, AI tools can bring real-time student data into the classroom so that educators can reduce dropout rates while increasing performance. Not only does this research illustrate the current promise of AI/ML in the context of education, but it also addresses problems that arise around data privacy, ethics, and technological equity. To eliminate social imbalance in its use, the study seeks to build efficient and accountable AI models and architectures and to make these available to all students as a foundation of practical education. The study also indicates that, to prepare school learning environments for further change, it is necessary to increase the use of AI/ML in learning processes.

Item Restricted
The Role and Use of Artificial Intelligence (AI Tools) in Audits of Financial Statements (Aston University, 2024-09) Alsaedi, Amal; George, Salijen

Integrating artificial intelligence (AI) into the auditing function holds significant potential to transform the industry. As firms and stakeholders increasingly recognise the value of, and demand for, audit quality, the accuracy, validity, and integrity of information generated by audit processes have become a vital consideration. Integrating AI into audit processes can be viewed as an advance in audit techniques. However, the currently limited adoption of this technology by audit firms raises concerns about their awareness of its transformative potential. This study aims to identify the AI tools used in auditing and their impact on the audit process and audit quality. The study bridges the existing gap using a secondary exploratory method: qualitative data were collected from transparency reports published by the Big Four audit firms (KPMG, Deloitte, EY, and PwC) and from the FRC's audit quality inspection reports on those firms. For recency, only reports published between 2020 and 2023 were considered.
A thematic analysis of the data collected reveals that adoption of AI and data analytics in auditing is still low, and that the Big Four firms are actively promoting increased adoption. The results demonstrate a notable disparity between potential and current applications, as shown by a clear gap between the publicised potential of AI and data analytics and their implementation within audit processes.

Item Restricted
Artificial Intelligence Systems: Exploring AI Systems' Patentability in the United Kingdom and the European Patent Office (University of Liverpool, 2024) Alarawi, Khalid; Jacques, Sabine

The topic of artificial intelligence (AI) has become a common interest for the public and for corporations globally. Through a legal analysis of case law in the United Kingdom and at the European Patent Office (EPO), this paper argues that AI systems are indeed patentable in both jurisdictions and that, despite procedural differences, the outcome of patenting AI is likely to be the same in each; nevertheless, further clarification is still needed regarding different AI systems. Given that AI systems are treated as a hybrid of computer programs and mathematical methods, their patentability requires deeper exploration across different types and different technical applications.

Item Restricted
Leveraging Brain-Computer Interface Technology to Interpret Intentions and Enable Cognitive Human-Computer Interaction (University of Manchester, 2024) Alsaddique, Luay; Breitling, Rainer

In this paper, I present the development, integration, and evaluation of a Brain-Computer Interface (BCI) system which showcases the accessibility and usability of a BCI headset for interacting with external devices and services.
The paper first provides a detailed survey of the history of BCI technology and gives a comprehensive overview of BCI paradigms, the underpinning biology of the brain, current BCI technologies, recent advances in the field, the BCI headset market, and prospective applications of the technology. The research focuses on leveraging BCI headsets within a BCI platform to interface with external endpoints through the Motor Imagery BCI paradigm. I present the design, implementation, and evaluation of a fully functioning, efficient, and versatile BCI system which can trigger real-world commands in devices and digital services. The system demonstrates its versatility through use cases such as controlling IoT devices, controlling infrared (IR) based devices, and interacting with advanced language models. The system's performance was quantified across various conditions, achieving detection probabilities exceeding 95%, with latency as low as 1.4 seconds when hosted on a laptop and 2.1 seconds when hosted on a Raspberry Pi. The paper concludes with a detailed analysis of the limitations and potential improvements of the newly developed system and its implications for possible applications. It also includes a comparative evaluation of latency, power efficiency, and usability when hosting the BCI system on a laptop versus a Raspberry Pi.

Item Restricted
Towards Representative Pre-training Corpora for Arabic Natural Language Processing (Clarkson University, 2024-11-30) Alshahrani, Saied Falah A; Matthews, Jeanna

Natural Language Processing (NLP) encompasses various tasks, problems, and algorithms that analyze human-generated textual corpora or datasets to produce insights, suggestions, or recommendations. These corpora and datasets are crucial for any NLP task or system, as they convey social concepts, including the views, culture, heritage, and perspectives of native speakers.
However, a corpus or dataset in a particular language does not necessarily represent the culture of its native speakers. Some textual corpora or datasets are written organically by native speakers, while others are written by non-native speakers, translated from other languages, or generated using advanced NLP technologies such as Large Language Models (LLMs). In the era of Generative Artificial Intelligence (GenAI), it has become increasingly difficult to distinguish human-generated texts from machine-translated or machine-generated texts, especially when all of these different types of text are combined into large corpora or datasets for pre-training NLP tasks, systems, and technologies. There is therefore an urgent need to study the degree to which pre-training corpora or datasets represent native speakers and reflect their values, beliefs, cultures, and perspectives, and to investigate the potentially negative implications of using unrepresentative corpora or datasets in NLP tasks, systems, and technologies. One of the most widely utilized sources of pre-training corpora for NLP is Wikipedia, especially for low-resource languages like Arabic, due to its large multilingual content collection and massive array of quantifiable metadata. In this dissertation, we study the representativeness of Arabic NLP pre-training corpora, focusing specifically on the three Arabic Wikipedia editions: Arabic Wikipedia, Egyptian Arabic Wikipedia, and Moroccan Arabic Wikipedia.
Our primary goals are to 1) raise awareness of the potential negative implications of using unnatural, inorganic, and unrepresentative corpora, i.e., those generated or translated automatically without the input of native speakers; 2) find better ways to promote transparency and ensure that native speakers are involved, through metrics, metadata, and online applications; and 3) reduce the impact of automatically generated or translated content by using machine learning algorithms to detect it automatically. To do this, we first analyze the metadata of the three Arabic Wikipedia editions, focusing on differences in collected statistics such as total pages, articles, edits, registered and active users, administrators, and top editors. We document issues related to the automatic creation and translation of articles (content pages) from English to Arabic without review, revision, or supervision by native speakers. Second, we quantitatively study the performance implications of using unnatural, inorganic corpora that do not represent native speakers and are primarily generated through automation, such as bot-created articles or template-based translation. We intrinsically evaluate the performance of two main NLP tasks, word representation and language modeling, using the Word Analogy and Fill-Mask evaluation tasks on our two newly created datasets: the Arab States Analogy Dataset and the Masked Arab States Dataset. Third, we assess the quality of Wikipedia corpora at the edition level rather than the article level by quantifying bot activities and enhancing Wikipedia's Depth metric.
After analyzing the limitations of the existing Depth metric, we propose a bot-free version, the DEPTH+ metric, which excludes bot-created articles and bot-made edits; we present its mathematical definition, highlight its features and limitations, and explain how the new metric more accurately reflects the depth of human collaboration within the Wikipedia project. Finally, we address the issue of template translation in the Egyptian Arabic Wikipedia by identifying template-translated articles and their characteristics. We explore the content of the three Arabic Wikipedia editions in terms of density, quality, and human contributions, and employ the resulting insights to build multivariate machine learning classifiers that leverage article metadata to automatically detect template-translated articles. We deploy the best-performing classifier publicly as an online application and release the extracted, filtered, labeled, and preprocessed datasets so that the research community can benefit from our datasets and the web-based detection system.

Item Restricted
Investigating the Influence of Knowledge and Attitudes on AI Practices in English Language Teaching: A Mixed-Methods Study of New Zealand Secondary School ESOL Teachers (Victoria University of Wellington, 2024) Khalil, Daya; Siyanova-Chanturia, Anna

The rapid integration of Artificial Intelligence (AI) in education has transformed the teaching landscape, offering new opportunities but also posing challenges for teachers (Rahman et al., 2024; Karataş et al., 2024; Kartal & Yeşilyurt, 2024). Previous studies, such as those by Zhang et al. (2023) and Wang et al. (2023), have highlighted the potential benefits of AI for streamlining teaching practices and enhancing instructional efficiency. However, the effective use of AI depends on teachers' knowledge and attitudes, which shape how they adopt and implement AI tools (Chiu et al., 2024; Kim & Kwon, 2023).
Despite the increasing focus on AI in research, no empirical study to date has directly investigated how secondary school English for Speakers of Other Languages (ESOL) teachers' knowledge and attitudes influence their AI practices in New Zealand. This study aims to fill that gap by exploring these relationships. The mixed-methods design combined survey data from 35 secondary school ESOL teachers with semi-structured interviews with four participants. Quantitative results showed that 68.6% of teachers reported low use of AI in English language teaching (ELT), while 31.4% demonstrated moderate use. Knowledge levels varied, with 40% having low knowledge and only 17.1% possessing advanced knowledge. Attitudes were mixed, with 22.9% showing positive attitudes and 25.7% expressing negative attitudes. Regression analysis revealed that attitudes (β = 0.560) were a stronger predictor of practices than knowledge (β = 0.379). Qualitative themes highlighted cautious exploration, a perceived need for robust methods of verifying AI-generated content, and the influence of both confidence and familiarity on AI use. Teachers with positive attitudes were more inclined to integrate AI meaningfully, while those with limited knowledge or negative attitudes restricted their use to simpler applications. These results emphasize the need for professional development that strengthens both technical knowledge and critical perspectives, supporting responsible and effective AI integration in ELT.

Item Restricted
Integration of Artificial Intelligence in Supply Chain Management: A Case Study of Toyota Motor Corporation (University of Gloucestershire, 2024-08) AlQuwaie, Thamer Adeeb; Plummer, David; Rasheed, Muhammad Babar; Zhang, Shujun

This study examines AI deployment in Toyota Motor Corporation's supply chain management.
By analysing the literature and interviewing key workers at Toyota, the research illustrates how AI technologies enhance logistics, demand forecasting, inventory management, and procurement. AI-driven predictive analytics and automation improved decision-making accuracy, operational efficiency, and cost savings. The research identifies poor data quality, high initial costs, and staff resistance to change as significant impediments, and suggests continual training, robust data management rules, and gradual AI deployment to address them. It also emphasises the importance of human factors in AI integration, including open communication and worker engagement for smooth adjustment. The research found that senior management support and cross-departmental collaboration are needed to use AI technology successfully. Future research should include cross-sector and cross-regional comparisons, longitudinal studies to track impacts, and further work on social and ethical concerns. This analysis of Toyota's AI integration provides information and guidance for supply chain AI adopters.