SACM - Australia
Permanent URI for this collection: https://drepo.sdl.edu.sa/handle/20.500.14154/9648
6 results
Search Results
Item Restricted
Cloud computing efficiency: optimizing resource utilization, energy consumption, latency, availability, and reliability using intelligent algorithms (The University of Western Australia, 2024) Alelyani, Abdullah Hamed A; Datta, Amitava; Ghulam, Mubasher Hassan

Cloud computing offers significant potential for transforming service delivery with a cost-efficient, pay-as-you-go model, which has led to a dramatic increase in demand. Virtual machine (VM) and container technologies further optimize resource utilization in cloud environments, and both improve application reliability by distributing replicated tasks across different physical machines (PMs). However, several persistent issues in cloud computing remain, including energy consumption, resource management, network traffic costs, availability, latency, service level agreement (SLA) violations, and reliability. Addressing these issues is critical for ensuring quality of service (QoS). This thesis proposes approaches to address these issues and improve cloud performance.

Item Restricted
Utilizing Artificial Intelligence to Develop Machine Learning Techniques for Enhancing Academic Performance and Education Delivery (University of Technology Sydney, 2024) Allotaibi, Sultan; Alnajjar, Husam

Artificial Intelligence (AI), and particularly its sub-discipline of Machine Learning (ML), has impacted many industries, and the education industry is no exception because of its capacity to handle large volumes of data. This paper discusses the various AI technologies, coupled with ML models, that enhance learners' performance and the delivery of education systems. The research aims to address the growing need for individualized education interventions arising from diverse student needs, high dropout rates, and fluctuating academic performance.
AI and ML can analyze large data sets to recognize students who are academically at risk, gauge course completion and learning retention rates, and suggest interventions for students who may require them. The study is situated in a growing Computer-Enhanced Learning (CEL) environment characterized by e-learning, blended learning, and intelligent tutoring. These technologies present innovative concepts to enhance administrative procedures, deliver individualized tutorials, and capture students' attention. Using predictive analytics and intelligent tutors, AI tools can bring real-time student data into the classroom so that educators can improve outcomes by reducing dropout rates while increasing performance. This research not only illustrates the current promise of AI/ML in education but also addresses the problems that arise around data privacy, ethics, and technological equity. To eliminate social imbalance in its use, the study seeks to build efficient and accountable AI models and architectures and to make them available to all students as a foundation of practical education. The findings also indicate that preparing school learning environments for further change requires increasing the use of AI/ML in learning processes.

Item Restricted
Automatic Detection and Verification System for Arabic Rumor News on Twitter (University of Technology Sydney, 2026-04) Karali, Sami; Chin-Teng, Lin

Language models have been extensively studied and applied in various fields in recent years. However, the majority of language models are designed for, and perform significantly better in, English than in other languages such as Arabic. The differences between English and Arabic in grammar, writing, and word-forming structures pose significant challenges in applying English-based language models to Arabic content.
Therefore, there is a critical need to develop and refine models and methodologies that can effectively process Arabic content. This research aims to address the gaps in Arabic language models by developing innovative machine learning (ML) and natural language processing (NLP) methodologies, and applies the developed models to Arabic rumor detection on Twitter to test their effectiveness. The research is divided into three fundamental phases: 1) efficiently collecting and pre-processing a comprehensive dataset of Arabic news tweets; 2) refining ML models through an enhanced convolutional neural network (ECNN) equipped with N-gram feature maps for accurate rumor identification; and 3) augmenting decision-making precision in rumor verification via ensemble learning techniques. In the first phase, the research develops a methodology for collecting and pre-processing Arabic news tweets, aiming to establish a dataset optimized for rumor detection analysis. Leveraging a blend of automated and manual processes, the research navigates the intricacies of the Arabic language, enhancing the dataset's quality for ML applications. This foundational phase removes irrelevant data and normalizes text, setting a precedent for accuracy in subsequent detection tasks. The second phase develops the ECNN model, which incorporates N-gram feature maps for deeper linguistic analysis of tweets. This model, designed specifically for Arabic, marks a significant departure from traditional rumor detection models by harnessing spatial feature extraction alongside the contextual insights provided by N-gram analysis. Empirical results underscore the ECNN model's superior performance, demonstrating a marked improvement in detecting and classifying rumors with heightened accuracy and efficiency.
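The N-gram feature maps at the heart of the ECNN phase rest on extracting overlapping n-grams from tweet text and encoding their presence as features. A minimal sketch of that extraction step (the function names and tiny vocabulary are illustrative, not the thesis code):

```python
def extract_ngrams(tokens, n):
    """Return the list of overlapping word-level n-grams in order."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def ngram_feature_map(tokens, vocab, n=2):
    """Binary feature vector: which vocabulary n-grams occur in the tweet."""
    present = set(extract_ngrams(tokens, n))
    return [1 if g in present else 0 for g in vocab]

# Toy English tokens for readability; the thesis works on Arabic tweets.
tweet = ["this", "news", "is", "fake", "news"]
# vocab would normally be built from the training corpus.
vocab = [("fake", "news"), ("real", "news"), ("is", "fake")]
features = ngram_feature_map(tweet, vocab)   # [1, 0, 1]
```

In the full model these vectors would feed the convolutional layers rather than be used directly as classifier input.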
The culmination of the study explores the efficacy of ensemble learning methods in enhancing the robustness and accuracy of rumor detection systems. By combining the ECNN model with Long Short-Term Memory (LSTM), Bidirectional LSTM (BiLSTM), and Gated Recurrent Unit (GRU) networks within a stacked ensemble framework, the research pioneers a composite approach that significantly outstrips the capabilities of individual models. This results in a state-of-the-art rumor verification system with superior accuracy in identifying rumors, as demonstrated by empirical testing and analysis. The research contributes to bridging the gap between English-centric language models and Arabic language processing, demonstrating the importance of language-tailored approaches in ML and NLP. These contributions represent a substantial step forward for Arabic NLP and ML and offer practical solutions to the real-world challenge of rumor proliferation on social media platforms, ultimately fostering a more reliable digital environment for Arabic-speaking communities.

Item Restricted
iBFog: Intelligent Blockchain-based Methodology for Verifiable Fog Selection and Participation (University of Technology Sydney, 2024-05-28) Alshuaibi, Enaam Abdulmonem O; Hussain, Farookh Khadeer

Fog computing has emerged as an important game-changing technology for addressing the resource challenges of the Internet of Things (IoT). However, the rapid increase in computational resource requirements at the network edge means that small-to-medium enterprises providing fog services (FogSMEs) face challenges in scalability, resource limitations, and network reliability. As a result, FogSMEs are unable to meet modern data processing, security, and decision-making requirements.
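In a stacked ensemble of the kind the abstract describes, each base model emits a prediction and a meta-level rule combines them. A minimal sketch, assuming a weighted vote as a stand-in for a trained meta-learner (the probabilities and weights below are made up for illustration):

```python
def stack_predict(base_probs, weights):
    """Combine base-model rumor probabilities with a weighted vote.
    A trained meta-learner (e.g. logistic regression over base outputs)
    would normally replace this fixed rule."""
    score = sum(w * p for w, p in zip(weights, base_probs))
    return 1 if score / sum(weights) >= 0.5 else 0

# Hypothetical per-tweet rumor probabilities from four base models
# (ECNN, LSTM, BiLSTM, GRU in the thesis; values here are invented).
base_probs = [0.9, 0.7, 0.6, 0.4]
weights = [2.0, 1.0, 1.0, 1.0]   # e.g. weights from validation accuracy
label = stack_predict(base_probs, weights)   # 1 = rumor
```

The point of stacking is that the meta-level combiner can learn which base model to trust in which regime, which is why it tends to beat any single model.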
By exploring strategies that allow FogSMEs to maximize the benefits of the distributed nature of fog computing, this thesis discusses volunteer computing as an innovative and cost-effective solution for improving their infrastructure. By leveraging idle computational resources from a global network of volunteer users, FogSMEs can achieve scalable, real-time services without significant investment in physical infrastructure. The research identifies significant gaps in the existing literature, including the absence of intelligent platforms to manage volunteer resources, of dynamic selection mechanisms for volunteer nodes, and of incentives to increase volunteer recruitment. To bridge these gaps, this thesis proposes iBFog, an intelligent and reliable framework for selecting and verifying volunteer computing resources for fog scalability, which addresses three critical objectives: developing a trustworthy platform for managing volunteer nodes, designing an incentive mechanism to motivate participation, and implementing an intelligent selection mechanism for optimal node utilization. These objectives aim to overcome the challenges of fog scalability by ensuring efficient, secure, and reliable fog computing networks, especially for FogSMEs. The thesis contributes to the literature along three dimensions: a systematic literature review that identifies the need for an intelligent framework utilizing volunteer computing for fog scalability; the development of the iBFog framework, comprising a blockchain-based fog repository using Hyperledger Fabric and a game-based incentive module using Stackelberg game theory; and a ranking and selection module using three methods: a statistical method, a machine learning method, and a deep learning method. These components collectively address the identified research gaps, offering a comprehensive solution to the challenges of FogSME scalability.
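The statistical variant of a ranking-and-selection module of this kind can be as simple as a weighted score over normalized node metrics. A sketch under that assumption (the metric names, weights, and node records are hypothetical, not taken from iBFog):

```python
def rank_nodes(nodes, weights):
    """Rank volunteer fog nodes by a weighted score over normalized
    metrics in [0, 1]; higher CPU and uptime help, latency penalizes.
    A simple statistical stand-in for a selection module."""
    def score(n):
        return (weights["cpu"] * n["cpu"]
                + weights["uptime"] * n["uptime"]
                - weights["latency"] * n["latency"])
    return sorted(nodes, key=score, reverse=True)

nodes = [
    {"id": "volA", "cpu": 0.8, "uptime": 0.99, "latency": 0.2},
    {"id": "volB", "cpu": 0.6, "uptime": 0.90, "latency": 0.1},
    {"id": "volC", "cpu": 0.9, "uptime": 0.70, "latency": 0.5},
]
weights = {"cpu": 0.4, "uptime": 0.4, "latency": 0.2}
ranked = rank_nodes(nodes, weights)   # best candidate first
```

The ML and deep-learning methods the thesis names would replace the fixed scoring function with a model trained on historical node behaviour.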
By intelligently managing, incentivizing, and selecting volunteer computing resources, the iBFog framework advances the field of fog computing through a novel approach to enhancing its scalability. The framework not only addresses the immediate challenges of fog computing scalability but also lays the groundwork for future research and development in distributed computing environments.

Item Restricted
Using Artificial Intelligence to Improve the Diagnosis and Treatment of Cancer (The University of Melbourne, 2024-02-01) Aljarf, Raghad; Ascher, David

Cancer is a complex and heterogeneous disease driven by the accumulation of mutations at the genetic and epigenetic levels, making it particularly challenging to study and treat. Although whole-genome sequencing approaches have identified thousands of variations in cancer cells and their perturbations, fundamental gaps persist in understanding cancer causes and pathogenesis. Towards this, my PhD focused on developing computational approaches that leverage genomic and experimental data to provide fundamental insights into cancer biology, improve patient diagnosis, and guide therapeutic development. The increased mutational burden in most cancers can make it challenging to distinguish mutations essential for tumorigenesis (drivers) from background accumulation (passengers), impacting the success of targeted treatments. To overcome this, I used insights about mutations at the protein sequence and 3D structure level to understand the genotype-phenotype relationship in tumorigenesis. I examined proteins that participate in two DNA repair processes: primarily non-homologous end joining (NHEJ), along with eukaryotic homologous recombination (HR), in which missense mutations have been linked to many diverse cancers. The molecular consequences of these mutations on protein dynamics, stability, and binding affinities to other interacting partners were evaluated using in silico biophysical tools.
This highlighted that cancer-causing mutations were associated with structural destabilization and altered protein conformation and network topology, thus impacting cell signalling and function. Interestingly, my work on the NHEJ DNA repair machinery highlighted diverse driving forces for carcinogenesis among core components such as Ku70/80 and DNA-PKcs. Cancer-causing mutations in anchor proteins (Ku70/80) impacted crucial protein-protein interactions, while those in catalytic components (DNA-PKcs) were likely to occur in regions undergoing purifying selection. This insight led to a consensus predictor for identifying driver mutations in NHEJ. When assessing the functional consequences of mutations in the BRCA1 and BRCA2 genes of HR DNA repair at the protein sequence level, this methodology showed that cancer-causing mutations typically cluster in well-established structural domains. Using this insight, I developed a predictor for classifying pathogenic mutations in HR repair that is more accurate than existing approaches. The broad heterogeneity of cancers complicates potential treatment opportunities. I therefore next explored the properties of compounds potentially active against one or more types of cancer, including screens against 74 distinct cancer cell lines originating from 9 tumour types. Overall, the identified active molecules were enriched in benzene rings and aligned with Lipinski's rule of five, although this might reflect screening library biases. These insights enabled the development of a predictive platform for anticancer activity, thereby optimizing screening libraries with potentially active anticancer molecules. Similarly, I used compounds' structural and molecular properties to predict, early in the drug development process, compounds with increased teratogenicity, and to prioritize drug combinations to augment combinatorial screening libraries, potentially alleviating acquired drug resistance.
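The Lipinski's rule-of-five alignment mentioned above is a concrete, checkable property: a compound is conventionally flagged as drug-like if it has at most one violation of MW ≤ 500 Da, logP ≤ 5, H-bond donors ≤ 5, and H-bond acceptors ≤ 10. A quick checker (descriptor values would normally come from a cheminformatics toolkit such as RDKit; the example values are approximate):

```python
def passes_lipinski(mw, logp, h_donors, h_acceptors):
    """Lipinski's rule of five: pass if at most one of the four
    thresholds (MW <= 500, logP <= 5, donors <= 5, acceptors <= 10)
    is violated."""
    violations = sum([mw > 500, logp > 5, h_donors > 5, h_acceptors > 10])
    return violations <= 1

# Aspirin-like descriptor values (approximate)
ok = passes_lipinski(mw=180.2, logp=1.2, h_donors=1, h_acceptors=4)  # True
```

Screening-library filters of this shape are cheap to apply, which is why enrichment against them is a common sanity check on hit lists.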
The outcomes of this doctoral work highlight the potential benefits of computational approaches in unravelling the underlying mechanisms of carcinogenesis and guiding drug discovery towards more effective therapies. Ultimately, the predictions generated by these tools should improve our understanding of genotype-phenotype associations, enabling better patient diagnosis and treatment.

Item Restricted
Developing AI-Powered Support for Improving Software Quality (University of Wollongong, 2024-01-12) Alhefdhi, Abdulaziz Hasan M.; Dam, Hoa Khanh; Ghose, Aditya

The modern software development landscape is experiencing exponential growth in the number of software projects, applications, and code bases. As software grows substantially in both size and complexity, software engineers face significant challenges in developing and maintaining high-quality applications. Support in the form of automated techniques and tools is therefore much needed to accelerate development productivity and improve software quality. The rise of Artificial Intelligence (AI) has the potential to bring such support and significantly transform the practices of software development. This thesis explores the use of AI in developing automated support for improving three aspects of software quality: software documentation, technical debt, and software defects. We leverage a large amount of data from software projects and repositories to provide actionable insights and reliable support. Using cutting-edge machine/deep learning technologies, we develop a novel suite of automated techniques and models for pseudo-code documentation generation; technical debt identification, description, and repayment; and patch generation for software defects. We conducted several intensive empirical evaluations which show the high effectiveness of our approach.