SACM - Australia

Permanent URI for this collection: https://drepo.sdl.edu.sa/handle/20.500.14154/9648

Search Results

Now showing 1 - 7 of 7
  • ItemRestricted
    Utilizing Artificial Intelligence to Develop Machine Learning Techniques for Enhancing Academic Performance and Education Delivery
    (University of Technology Sydney, 2024) Allotaibi, Sultan; Alnajjar, Husam
    Artificial Intelligence (AI), and particularly its sub-discipline of Machine Learning (ML), has impacted many industries, and the education industry is no exception because of its large-scale data handling capacities. This paper discusses the various AI technologies, coupled with ML models, that enhance learners' performance and the delivery of education systems. The research aims to help solve current problems: the growing need for individualized educational interventions arising from diverse student needs, high dropout rates, and fluctuating academic performance. AI and ML can analyze large data sets to recognize students who are academically at risk, gauge course completion and learning retention rates, and suggest interventions for students who may require them. The study is situated in a growing Computer-Enhanced Learning environment characterized by e-learning, blended learning, and intelligent tutoring. These technologies present innovative ways to enhance administrative procedures, deliver individualized tutorials, and capture students' attention. Using predictive analytics and intelligent tutors, AI tools can bring real-time student data into the classroom so that educators can improve outcomes by reducing dropout rates while increasing performance. Not only does this research illustrate the current promise of AI/ML in the context of education, but it also examines the problems that arise around data privacy, ethics, and equitable access to technology. To eliminate social imbalance in its use, the study seeks to build efficient and accountable AI models and architectures and to make these available to all students as a foundation of practical education. The findings also indicate that to prepare school learning environments for further change, it is necessary to increase the use of AI/ML in learning processes.
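The at-risk prediction the abstract above describes can be sketched in miniature. This is a minimal, illustrative rule (feature names, weights, and threshold are assumptions for this sketch, not the thesis's actual model, which would be trained on real student data):

```python
# Hedged sketch: flag academically at-risk students from simple indicators.
# All features and weights below are illustrative assumptions.

def risk_score(attendance_rate, avg_grade, assignments_missed):
    """Combine simple indicators into a dropout-risk score in [0, 1]."""
    # Lower attendance and grades, and more missed work, raise the score.
    score = (
        0.4 * (1.0 - attendance_rate)      # attendance_rate in [0, 1]
        + 0.4 * (1.0 - avg_grade / 100.0)  # avg_grade in [0, 100]
        + 0.2 * min(assignments_missed / 10.0, 1.0)
    )
    return round(score, 3)

def flag_at_risk(students, threshold=0.5):
    """Return names of students whose risk score crosses the threshold."""
    return [name for name, feats in students.items()
            if risk_score(*feats) >= threshold]

students = {
    "A": (0.95, 88, 0),   # engaged student
    "B": (0.40, 55, 7),   # several warning signs
}
print(flag_at_risk(students))  # → ['B']
```

A trained classifier would learn such weights from historical records; the fixed weights here only illustrate the shape of the decision.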
  • ItemRestricted
    Automatic Detection and Verification System for Arabic Rumor News on Twitter
    (University of Technology Sydney, 2026-04) Karali, Sami; Chin-Teng, Lin
    Language models have been extensively studied and applied in various fields in recent years. However, the majority of language models are designed for, and perform significantly better in, English compared to other languages such as Arabic. The differences between English and Arabic in grammar, writing, and word-forming structures pose significant challenges in applying English-based language models to Arabic content. There is therefore a critical need to develop and refine models and methodologies that can effectively process Arabic content. This research aims to address the gaps in Arabic language models by developing innovative machine learning (ML) and natural language processing (NLP) methodologies. We apply the developed model to Arabic rumor detection on Twitter to test its effectiveness. To achieve this, the research is divided into three fundamental phases: 1) efficiently collecting and pre-processing a comprehensive dataset of Arabic news tweets; 2) refining ML models through an Enhanced Convolutional Neural Network (ECNN) equipped with N-gram feature maps for accurate rumor identification; 3) augmenting decision-making precision in rumor verification via ensemble learning techniques. In the first phase, the research develops a methodology for the collection and pre-processing of Arabic news tweets, aiming to establish a dataset optimized for rumor detection analysis. Leveraging a blend of automated and manual processes, the research navigates the intricacies of the Arabic language, enhancing the dataset's quality for ML applications. This foundational phase ensures irrelevant data is removed and text is normalized, setting a precedent for accuracy in subsequent detection tasks. The second phase develops the Enhanced Convolutional Neural Network (ECNN) model, which incorporates N-gram feature maps for a deeper linguistic analysis of tweets.
This ECNN model, designed specifically for the Arabic language, marks a significant departure from traditional rumor detection models by harnessing spatial feature extraction alongside the contextual insights provided by N-gram analysis. Empirical results underscore the ECNN model's superior performance, demonstrating a marked improvement in detecting and classifying rumors with heightened accuracy and efficiency. The final phase of the study explores the efficacy of ensemble learning methods in enhancing the robustness and accuracy of rumor detection systems. By combining the ECNN model with Long Short-Term Memory (LSTM), Bidirectional LSTM (BiLSTM), and Gated Recurrent Unit (GRU) networks within a stacked ensemble framework, the research pioneers a composite approach that significantly outstrips the capabilities of individual models. This innovation results in a state-of-the-art system for rumor verification that achieves superior accuracy in identifying rumors, as demonstrated by empirical testing and analysis. This research contributes to bridging the gap between English-centric language models and Arabic language processing, demonstrating the importance of tailored approaches for different languages in ML and NLP. These contributions represent a substantial step forward in Arabic NLP and ML and offer practical solutions for the real-world challenge of rumor proliferation on social media platforms, ultimately fostering a more reliable digital environment for Arabic-speaking communities.
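Two ingredients named in the abstract above, N-gram features and stacked combination of base-model scores, can be sketched in simplified form. Everything here (tokens, scores, weights) is an illustrative assumption, not the thesis's actual ECNN or ensemble:

```python
# Hedged sketch: word n-gram extraction (a stand-in for the ECNN's n-gram
# feature maps) and a stacked weighted combination of base-model scores.
from collections import Counter

def ngrams(tokens, n):
    """Extract contiguous word n-grams with their counts."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def stacked_score(base_scores, weights):
    """Meta-level combination: weighted average of base-model rumor scores."""
    return sum(w * s for w, s in zip(weights, base_scores)) / sum(weights)

tokens = "breaking news claims event never happened".split()
bigrams = ngrams(tokens, 2)   # 5 distinct bigrams for 6 tokens

# Illustrative scores for one tweet from ECNN-, LSTM-, and GRU-style models,
# with the stronger model weighted more heavily.
score = stacked_score([0.9, 0.7, 0.8], weights=[2, 1, 1])
is_rumor = score >= 0.5
print(len(bigrams), round(score, 3), is_rumor)
```

In the thesis's stacked framework the meta-learner is itself trained on base-model outputs; the fixed weights here only illustrate how the second level consumes first-level predictions.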
  • ItemRestricted
    iBFog: Intelligent Blockchain-based Methodology for Verifiable Fog Selection and Participation
    (University of Technology Sydney, 2024-05-28) Alshuaibi, Enaam Abdulmonem O; Hussain, Farookh Khadeer
    Fog computing has emerged as an important game-changing technology to address the resource challenges of the Internet of Things (IoT). However, the rapid increase in computational resource requirements at the edge of the network means that small-to-medium enterprises that provide fog services (FogSMEs) face challenges in scalability, resource limitations, and network reliability. As a result, FogSMEs are unable to meet modern data processing, security, and decision-making requirements. By exploring strategies that allow FogSMEs to maximize the benefits of the distributed nature of fog computing, this thesis discusses volunteer computing as an innovative and cost-effective way to improve their infrastructure. By leveraging idle computational resources from a global network of volunteer users, FogSMEs can achieve scalable, real-time services without significant investment in physical infrastructure. The research identifies significant gaps in the existing literature, including the absence of intelligent platforms to manage volunteer resources, of dynamic selection mechanisms for volunteer nodes, and of incentives to increase volunteer recruitment. To bridge these gaps, this thesis proposes an intelligent and reliable framework, named iBFog, for selecting and verifying volunteer computing resources for fog scalability, addressing three critical objectives: developing a trustworthy platform for managing volunteer nodes, designing an incentive mechanism to motivate participation, and implementing an intelligent selection mechanism for optimal node utilization. These objectives aim to overcome the challenges of fog scalability by ensuring efficient, secure, and reliable fog computing networks, especially for FogSMEs.
This thesis contributes to the literature along three dimensions by including a systematic literature review to identify the need for an intelligent framework utilizing volunteer computing for fog scalability, the development of the iBFog framework that comprises a blockchain-based fog repository using Hyperledger Fabric, a game-based incentive module using Stackelberg game theory, and a ranking and selection module using three methods: a statistical method, a machine learning method, and a deep learning method. These components collectively address the identified research gaps, offering a comprehensive solution to the challenges of FogSME scalability. By intelligently managing, incentivizing and selecting volunteer computing resources, the iBFog framework advances the field of fog computing using a novel approach to enhancing its scalability. This framework not only addresses the immediate challenges of fog computing scalability but also sets the groundwork for future research and development in distributed computing environments.
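The Stackelberg structure behind the iBFog incentive module, a leader announces a reward, followers best-respond with effort, can be sketched numerically. The quadratic utility forms and parameters below are illustrative assumptions for this sketch, not the thesis's actual game model:

```python
# Hedged sketch: Stackelberg incentive game between a FogSME (leader) that
# sets a per-unit reward and a volunteer (follower) that chooses effort.

def volunteer_best_response(reward, cost=1.0):
    """Follower: effort maximizing reward*e - cost*e**2, i.e. e* = r/(2c)."""
    return reward / (2 * cost)

def leader_utility(reward, value=10.0, cost=1.0):
    """Leader: value of the induced effort minus the reward paid for it."""
    effort = volunteer_best_response(reward, cost)
    return value * effort - reward * effort

def best_reward(value=10.0, cost=1.0, step=0.1):
    """Grid-search the leader's reward; analytically the optimum is value/2."""
    rewards = [i * step for i in range(int(value / step) + 1)]
    return max(rewards, key=lambda r: leader_utility(r, value, cost))

r_star = best_reward()
print(round(r_star, 1), round(leader_utility(r_star), 2))
```

The leader anticipates the follower's best response before committing, which is exactly the backward-induction logic a Stackelberg incentive mechanism encodes.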
  • ItemRestricted
    Using Artificial Intelligence to Improve the Diagnosis and Treatment of Cancer
    (The University of Melbourne, 2024-02-01) Aljarf, Raghad; Ascher, David
    Cancer is a complex and heterogeneous disease driven by the accumulation of mutations at the genetic and epigenetic levels, making it particularly challenging to study and treat. Despite whole-genome sequencing approaches identifying thousands of variations in cancer cells and their perturbations, fundamental gaps persist in understanding cancer causes and pathogenesis. Towards this, my PhD focused on developing computational approaches that leverage genomic and experimental data to provide fundamental insights into cancer biology, improve patient diagnosis, and guide therapeutic development. The increased mutational burden in most cancers can make it challenging to distinguish mutations essential for tumorigenesis (drivers) from those that are just background accumulation (passengers), impacting the success of targeted treatments. To overcome this, I focused on using insights about mutations at the protein sequence and 3D structure level to understand the genotype-phenotype relationship in tumorigenesis. I looked at proteins that participate in two DNA repair processes: primarily non-homologous end joining (NHEJ), along with eukaryotic homologous recombination (HR), where missense mutations have been linked to many diverse cancers. The molecular consequences of these mutations on protein dynamics, stability, and binding affinities to interacting partners were evaluated using in silico biophysical tools. This highlighted that cancer-causing mutations were associated with structural destabilization and altered protein conformation and network topology, thus impacting cell signalling and function. Interestingly, my work on the NHEJ DNA repair machinery highlighted diverse driving forces for carcinogenesis among core components such as Ku70/80 and DNA-PKcs. Cancer-causing mutations in anchor proteins (Ku70/80) impacted crucial protein-protein interactions, while those in catalytic components (DNA-PKcs) were likely to occur in regions undergoing purifying selection.
This insight led to a consensus predictor for identifying driver mutations in NHEJ. When assessing the functional consequences of mutations in the BRCA1 and BRCA2 genes of HR DNA repair at the protein sequence level, this methodology underlined that cancer-causing mutations typically cluster in well-established structural domains. Using this insight, I developed a predictor for classifying pathogenic mutations in HR repair that is more accurate than existing approaches. The broad heterogeneity of cancers complicates potential treatment opportunities. I therefore next explored the properties of compounds potentially active against one or more types of cancer, including screens against 74 distinct cancer cell lines originating from 9 tumour types. Overall, the identified active molecules were shown to be enriched in benzene rings, aligning with Lipinski's rule of five, although this might reflect screening library biases. These insights enabled the development of a predictive platform for anticancer activity, thereby optimizing screening libraries with potentially active anticancer molecules. Similarly, I used compounds' structural and molecular properties to predict compounds with increased teratogenicity early in the drug development process and to prioritize drug combinations to augment combinatorial screening libraries, potentially alleviating acquired drug resistance. The outcomes of this doctoral work highlight the potential benefits of using computational approaches to unravel the underlying mechanisms of carcinogenesis and to guide drug discovery for designing more effective therapies. Ultimately, the predictions generated by these tools should improve our understanding of the genotype-phenotype association, enabling better patient diagnosis and treatment.
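Lipinski's rule of five, which the abstract above cites when characterizing active molecules, is simple enough to state in code. The molecules and property values below are illustrative; real work would compute such properties from chemical structures (e.g. with a cheminformatics toolkit such as RDKit):

```python
# Hedged sketch: counting violations of Lipinski's rule of five, a common
# drug-likeness filter (MW <= 500, logP <= 5, H-bond donors <= 5,
# H-bond acceptors <= 10).

def lipinski_violations(mol_weight, logp, h_donors, h_acceptors):
    """Count how many of the four rule-of-five thresholds are exceeded."""
    return sum([
        mol_weight > 500,
        logp > 5,
        h_donors > 5,
        h_acceptors > 10,
    ])

def passes_rule_of_five(**props):
    """Commonly, a drug-like molecule has at most one violation."""
    return lipinski_violations(**props) <= 1

# Illustrative property values, not measured data.
aspirin_like = dict(mol_weight=180.2, logp=1.2, h_donors=1, h_acceptors=4)
large_peptide = dict(mol_weight=1200.0, logp=6.5, h_donors=8, h_acceptors=15)
print(passes_rule_of_five(**aspirin_like), passes_rule_of_five(**large_peptide))
# → True False
```

Filters like this are one way a screening library can be enriched for potentially active, orally bioavailable molecules before costlier predictions are run.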
  • ItemRestricted
    Developing AI-Powered Support for Improving Software Quality
    (University of Wollongong, 2024-01-12) Alhefdhi, Abdulaziz Hasan M.; Dam, Hoa Khanh; Ghose, Aditya
    The modern software development scene is experiencing exponential growth in the number of software projects, applications, and code-bases. As software increases substantially in both size and complexity, software engineers face significant challenges in developing and maintaining high-quality software applications. Therefore, support in the form of automated techniques and tools is much needed to accelerate development productivity and improve software quality. The rise of Artificial Intelligence (AI) has the potential to bring such support and significantly transform the practice of software development. This thesis explores the use of AI in developing automated support for improving three aspects of software quality: software documentation, technical debt, and software defects. We leverage a large amount of data from software projects and repositories to provide actionable insights and reliable support. Using cutting-edge machine/deep learning technologies, we develop a novel suite of automated techniques and models for pseudo-code documentation generation; technical debt identification, description, and repayment; and patch generation for software defects. We conducted several intensive empirical evaluations that show the high effectiveness of our approach.
  • ItemRestricted
    Intelligent Approaches for Robust Blockchain-based Identity Management
    (Saudi Digital Library, 2023-02-01) Alharbi, Mekhled; Hussain, Farookh
    Smart contracts, which are maintained on a blockchain, are self-executing protocols designed to monitor and confirm the fulfilment of a contract's terms. These protocols guarantee the trustworthiness of the contracts and exclude intermediaries from the transactions. Blockchain is a modern technology of rapidly expanding significance that is used in many applications, such as financial transactions, smart cities, and share trading. Currently, users' identities are stored and managed by service providers in centralized systems. Identity information management is usually undertaken by these providers, which raises concerns about user privacy and trustworthiness. Blockchain technology has the potential to enhance the identity management domain by eliminating the need for a trusted intermediary. Its advent has led to new identity management concepts that tackle trustworthiness and privacy challenges by granting users control over their information. Blockchain suits situations requiring both trust and transparency due to its inherent characteristics. There is therefore a critical need for intelligent approaches that manage user identity information in a reliable manner. We tackle this issue by providing a solution that combines identity management with blockchain-based smart contracts and artificial intelligence. We performed a systematic literature review to deepen our understanding of the issues and solutions in this area and to identify the drawbacks of existing identity management methods. In the existing literature, no solution has been proposed that manages user identities in a way that guarantees data privacy and trustworthiness through blockchain-based smart contracts and artificial intelligence techniques.
Blockchain-based smart contracts have the potential to play a significant part in identity management by improving transparency and privacy. In this thesis, we develop intelligent approaches to solve the aforementioned research issues. We integrate blockchain-based smart contracts with identity management to detect duplicate user identities while maintaining the privacy of those identities' data; to this end, multiple machine learning approaches are proposed to detect duplicate user identities on top of the blockchain. We also develop an early warning system that generates alerts for users whose identities are nearing expiration. Furthermore, we propose an algorithm to intelligently compute the trustworthiness score of a user's identity based on the identity documents provided by the user, which are stored safely, thereby boosting confidence in the users' trustworthiness scores. Finally, a software prototype is used to validate the performance of the methods proposed in this thesis.
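Two of the checks described in the abstract above, duplicate-identity detection and expiry early warning, can be sketched with standard-library tools. The similarity measure, thresholds, and records here are illustrative assumptions; the thesis itself applies machine learning models on top of a blockchain for these tasks:

```python
# Hedged sketch: flag likely-duplicate identity records by string similarity
# and warn about identities nearing expiration.
from datetime import date, timedelta
from difflib import SequenceMatcher

def similarity(a, b):
    """String similarity in [0, 1] between two flattened identity records."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def find_duplicates(records, threshold=0.85):
    """Return pairs of record ids whose fields are suspiciously similar."""
    ids = sorted(records)
    return [(i, j) for k, i in enumerate(ids) for j in ids[k + 1:]
            if similarity(records[i], records[j]) >= threshold]

def expiry_warnings(expiries, today, horizon_days=30):
    """Early warning: ids whose identity expires within the horizon."""
    return [i for i, d in sorted(expiries.items())
            if today <= d <= today + timedelta(days=horizon_days)]

records = {
    "u1": "Sami Ali|1990-01-01|Riyadh",
    "u2": "Sami Alli|1990-01-01|Riyadh",   # near-duplicate of u1
    "u3": "Nora Khan|1985-06-12|Jeddah",
}
today = date(2024, 1, 1)
expiries = {"u1": date(2024, 1, 20), "u2": date(2025, 3, 1),
            "u3": date(2024, 1, 25)}
print(find_duplicates(records), expiry_warnings(expiries, today))
```

An ML duplicate detector would replace the edit-distance ratio with learned features, but the pipeline shape, pairwise comparison followed by a threshold decision, is the same.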
  • ItemRestricted
    Deep Discourse Analysis for Early Prediction of Multi-Type Dementia
    (Saudi Digital Library, 2023-06-12) Alkenani, Ahmed Hassan A; Li, Yuefeng
    Ageing populations are a worldwide phenomenon. Although dementia is not an inevitable consequence of biological ageing, it is strongly associated with increasing age and is therefore anticipated to pose enormous future challenges to public health systems and aged care providers. While dementia affects its patients first and foremost, it is also negatively associated with caregivers' mental and physical health. Dementia is characterized by irreversible, gradual impairment of the nerve cells that control cognitive, behavioural, and language processes, causing speech and language deterioration even in preclinical stages. Early prediction can significantly alleviate dementia symptoms and could even curtail cognitive decline in some cases. However, the diagnostic procedure is currently challenging, as it is usually initiated with traditional clinic-based screening tests. Typically, such tests are manually interpreted and may entail further tests and physical examinations, and are thus considered time-consuming, expensive, and invasive. Therefore, many researchers have adopted speech and language analysis to facilitate and automate initial pre-screening. Although recent studies have proposed promising methods and models, there is still room for improvement, without which automated pre-screening remains impracticable. There is currently limited empirical literature on modelling the discourse ability (defined as spoken and written conversations and communications) of people in prodromal dementia stages and types. Specifically, few researchers have investigated the nature of lexical and syntactic structures in spontaneous discourse generated by patients with dementia under different conditions for automated diagnostic modelling. In addition, most previous work has focused on modelling and improving the diagnosis of Alzheimer's disease (AD), the most common dementia pathology, while neglecting other types of dementia.
Further, currently proposed models suffer from poor performance, a lack of generalizability, and low interpretability. Therefore, this research thesis explores lexical and syntactic presentations in the written and spoken narratives of people with different dementia syndromes to develop high-performing diagnostic models using fusions of different lexical and syntactic (i.e., lexicosyntactic) features as well as language models. In this thesis, multiple novel diagnostic frameworks are proposed and developed based on the "wisdom of crowds" theory, in which different mathematical and statistical methods are investigated and properly integrated to establish ensemble approaches for optimized overall performance and better inferences from the diagnostic models. Firstly, syntactic- and lexical-level components are explored and extracted from the only two disparate data sources available for this study: spoken and written narratives retrieved from the well-known DementiaBank dataset, and a blog-based corpus collected as part of this research, respectively. Due to their disparity, each data source was independently analysed and processed for exploratory data analysis and feature extraction. One of the most common problems in this context is how to ensure a proper feature space is generated for machine learning modelling. We solve this problem by proposing multiple innovative ensemble-based feature selection pipelines to reveal optimal lexicosyntactics. Secondly, we explore language vocabulary spaces (i.e., n-grams), given their proven ability to enhance modelling performance, with the overall aim of establishing two-level feature fusions that combine optimal lexicosyntactics and vocabulary spaces.
These fusions are then used with single and ensemble learning algorithms for individual diagnostic modelling of the dementia syndromes in question, including AD, Mild Cognitive Impairment (MCI), Possible AD (PoAD), Frontotemporal Dementia (FTD), Lewy Body Dementia (LBD), and Mixed Dementia (PwD). A comprehensive empirical study and series of experiments were conducted for each of the proposed approaches using these two real-world datasets to verify our frameworks. Evaluation was carried out using multiple classification metrics, returning results that not only show the effectiveness of the proposed frameworks but also outperform current state-of-the-art baselines. In summary, this research makes a substantial contribution to the task of effective dementia classification needed for the development of automated initial pre-screening of multiple dementia syndromes through language analysis. The lexicosyntactics presented and discussed across dementia syndromes may contribute substantially to our understanding of language processing in these pathologies. Given the current scarcity of related datasets, it is also hoped that the collected blog-based written corpus will facilitate future analytical and diagnostic studies. Furthermore, since this study deals with problems commonly faced in this research area and frequently discussed in the academic literature, its outcomes could potentially assist in the development of better classification models, not only for dementia but also for other linguistic pathologies.
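The ensemble-based feature selection the abstract above describes, in the "wisdom of crowds" spirit, can be sketched as rank aggregation by vote. The selectors and feature rankings below are illustrative stand-ins, not the thesis's actual pipelines:

```python
# Hedged sketch: keep a lexicosyntactic feature only if enough independent
# selectors rank it highly (vote-based ensemble feature selection).
from collections import Counter

def ensemble_select(rankings, top_k=3, min_votes=2):
    """Keep features appearing in the top_k of at least min_votes rankings."""
    votes = Counter(f for ranking in rankings for f in ranking[:top_k])
    return sorted(f for f, v in votes.items() if v >= min_votes)

# Illustrative rankings of lexicosyntactic features from three selectors
# (e.g. a statistical test, an ML importance score, and a wrapper method).
rankings = [
    ["pronoun_rate", "ttr", "mean_sent_len", "pos_noun_ratio"],
    ["ttr", "pronoun_rate", "pause_count", "mean_sent_len"],
    ["mean_sent_len", "pos_noun_ratio", "ttr", "pronoun_rate"],
]
print(ensemble_select(rankings))
# → ['mean_sent_len', 'pronoun_rate', 'ttr']
```

Requiring agreement across disparate selectors is what makes the aggregated feature set more stable than any single selector's top list.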

Copyright owned by the Saudi Digital Library (SDL) © 2024