Saudi Cultural Missions Theses & Dissertations

Permanent URI for this community: https://drepo.sdl.edu.sa/handle/20.500.14154/10

Search Results

Now showing 1 - 9 of 9
  • ItemRestricted
    INSIGHTS AND STRATEGIES FOR IMPROVING SOFTWARE DEVELOPER PRODUCTIVITY
    (University of North Texas, 2024-12) Alhumud, Waleed Ghazi; Bryce, Renee
    This dissertation explores several methods and insights for enhancing software developers’ productivity by saving time and effort, improving testing skills, and reducing testing costs. The first contribution provides software testing tools chosen according to selected criteria. The second contribution introduces team-based competitions for detecting faults in different programming languages. The last contribution applies a regression-testing optimization technique to enhance software developers’ productivity. The results indicate that using testing tools selected against these criteria saves time and effort by automating repetitive tasks and detecting faults early. In addition, team-based bug-catching competitions improve testing skills and help developers learn new programming languages: more than 93% of the participants agree that these competitions not only increase their testing skills but also assist them in learning new languages. Moreover, test suite prioritization can improve software developers’ productivity by reducing testing cost and time; under our proposed criteria, executing only half of the prioritized test suites achieves 100% code coverage. The contributions in this dissertation aid the ongoing effort to advance testing practices in software development and provide methods for practitioners and organizations to improve software quality by enhancing software developers’ productivity.
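    The prioritization result mentioned above can be illustrated with a classic greedy "additional coverage" strategy. The sketch below is a generic Python illustration, not the dissertation's proposed criteria; the test names and coverage sets are invented.

```python
# Greedy "additional coverage" test prioritization: repeatedly pick the test
# that covers the most not-yet-covered lines. Generic illustration only; the
# test names and coverage data are invented.

def prioritize_by_additional_coverage(coverage):
    """coverage: dict mapping test name -> set of covered code lines."""
    remaining = dict(coverage)
    covered = set()
    order = []
    while remaining:
        # Pick the test adding the most new lines; break ties by name for determinism.
        best = max(remaining, key=lambda t: (len(remaining[t] - covered), t))
        order.append(best)
        covered |= remaining.pop(best)
    return order

if __name__ == "__main__":
    suite = {
        "test_login": {1, 2, 3, 4},
        "test_logout": {3, 4},
        "test_checkout": {5, 6, 7},
        "test_search": {2, 8},
    }
    print(prioritize_by_additional_coverage(suite))
    # -> ['test_login', 'test_checkout', 'test_search', 'test_logout']
```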
  • ItemRestricted
    Dynamic Feature Location Framework for Software Project
    (University of Bahrain, 2024-08) Buzaid, Faisal; Albalooshi, Fawzi
    Dynamic Feature Location Techniques (DFLTs) aim to automate the process of identifying the source code responsible for executing specific features within software systems. Manual implementation of DFLTs is time-consuming and demanding for developers, which has led to the proposal of semi-automated approaches. One common approach involves generating execution traces by executing multiple scenarios for each software feature and then mapping the corresponding source code based on these traces. However, the execution traces are often large and contain data irrelevant to the software feature, requiring solutions that reduce their size and eliminate the irrelevant data. One such solution is to minimize the number of scenarios needed to exercise a software feature, but little work has been done in this area. To address this gap, a generic framework called Aggregation of Execution Traces to Formulate a Scenario (AETFS) is introduced in this work. AETFS leverages runtime software output and employs textual analysis techniques to extract relevant data from the execution trace for scenario creation. It explores textual analysis, including topic modeling, as a means to select accurate scenarios for DFLTs. The performance of AETFS is characterized in terms of execution trace granularity, enabling the identification of meaningful terms that can filter the execution trace using textual analysis techniques such as Latent Semantic Indexing (LSI). The evaluation encompasses eight subject systems with 600 features, making it more extensive than previous studies. The study identifies certain attributes of execution traces and text queries that impact AETFS’s performance. Two distinct groups emerge: one achieving superior Feature Location (FL) with AETFS and the other achieving better FL with a conventional baseline method. Combining AETFS with the baseline method significantly enhances performance, with the top results surpassing the baseline by 45% and the lowest surpassing AETFS by 12%. In conclusion, this work highlights the importance of rigorously characterizing the proposed DFLTs framework to identify optimal scenarios for exercising software features. It emphasizes the need to differentiate between scenarios and their characterizations to generate the necessary insights. The findings demonstrate the effectiveness of AETFS while providing valuable insights for further advancements in the field of DFLTs.
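    The textual filtering that AETFS applies to execution traces can be sketched with an LSI-style pipeline (TF-IDF followed by truncated SVD). This is an assumption-laden illustration only: the trace entries and feature query below are invented, and the thesis's actual granularity handling and scenario formulation are considerably more involved.

```python
# Ranking execution-trace entries against a feature query with an LSI-style
# pipeline (TF-IDF + truncated SVD). Illustrative sketch; data is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

trace_entries = [
    "ShoppingCart addItem product quantity",
    "ShoppingCart computeTotal applyDiscount",
    "Logger writeEntry timestamp",
    "PaymentGateway authorize creditCard amount",
    "SessionManager refreshToken user",
]
feature_query = ["add product to shopping cart"]

vectorizer = TfidfVectorizer(lowercase=True)
tfidf = vectorizer.fit_transform(trace_entries + feature_query)

# Project documents and query into a low-dimensional latent semantic space.
lsi = TruncatedSVD(n_components=2, random_state=0)
latent = lsi.fit_transform(tfidf)

# Rank trace entries by cosine similarity to the feature query in that space.
scores = cosine_similarity(latent[:-1], latent[-1:]).ravel()
for entry, score in sorted(zip(trace_entries, scores), key=lambda p: -p[1]):
    print(f"{score:+.2f}  {entry}")
```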
  • ItemRestricted
    Evaluating Text Summarization with Goal-Oriented Metrics: A Case Study using Large Language Models (LLMs) and Empowered GQM
    (University of Birmingham, 2024-09) Altamimi, Rana; Bahsoon, Rami
    This study evaluates the performance of Large Language Models (LLMs) in dialogue summarization tasks, focusing on Gemma and Flan-T5. Employing a mixed-methods approach, we utilized the SAMSum dataset and developed an enhanced Goal-Question-Metric (GQM) framework for comprehensive assessment. Our evaluation combined traditional quantitative metrics (ROUGE, BLEU) with qualitative assessments performed by GPT-4, addressing multiple dimensions of summary quality. Results revealed that Flan-T5 consistently outperformed Gemma across both quantitative and qualitative metrics. Flan-T5 excelled in lexical overlap measures (ROUGE-1: 53.03, BLEU: 13.91) and demonstrated superior performance in qualitative assessments, particularly in conciseness (81.84/100) and coherence (77.89/100). Gemma, while showing competence, lagged behind Flan-T5 in most metrics. This study highlights the effectiveness of Flan-T5 in dialogue summarization tasks and underscores the importance of a multi-faceted evaluation approach in assessing LLM performance. Our findings suggest that future developments in this field should focus on enhancing lexical fidelity and higher-level qualities such as coherence and conciseness. This study contributes to the growing body of research on LLM evaluation and offers insights for improving dialogue summarization techniques.
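    The quantitative half of the evaluation above (ROUGE and BLEU) can be reproduced in miniature with off-the-shelf libraries. The sketch below assumes the `rouge-score` and `nltk` packages; the reference and candidate summaries are invented placeholders, not items from the SAMSum dataset.

```python
# Scoring a model-generated dialogue summary against a reference with ROUGE and BLEU.
# Requires the `rouge-score` and `nltk` packages; texts below are invented examples.
from rouge_score import rouge_scorer
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = "Amanda baked cookies and will bring Jerry some tomorrow."
candidate = "Amanda made cookies and will bring some to Jerry tomorrow."

# ROUGE-1 and ROUGE-L F-measures (lexical overlap).
scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
rouge = scorer.score(reference, candidate)
print("ROUGE-1 F1:", round(rouge["rouge1"].fmeasure, 3))
print("ROUGE-L F1:", round(rouge["rougeL"].fmeasure, 3))

# Sentence-level BLEU with smoothing (short texts otherwise score near zero).
bleu = sentence_bleu(
    [reference.lower().split()],
    candidate.lower().split(),
    smoothing_function=SmoothingFunction().method1,
)
print("BLEU:", round(bleu, 3))
```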
  • ItemRestricted
    Model Driven Development of Mobile Health Applications
    (King's College London, 2024-09) Alwakeel, Lyan Abdulrahman; Zschaler, Steffen; Lano, Kevin
    The proliferation of mobile devices has created a demand for software applications that can be run on these devices. However, developing mobile applications that meet user requirements and can function on a variety of devices poses several challenges. This is especially true for mobile applications in critical domains like healthcare, where the stakes are high and quality is paramount. In response, software engineering has focused on improving the development process, methods, and tools to create high-quality mobile applications. One promising approach is Model-Driven Development (MDD), which generates low-level code from high-level models, enabling developers to “write once, run anywhere”. The MDD approach plays a substantial role in increasing software productivity and enhancing solution quality by automating the generation of implementations across various platforms, instead of relying on manual coding for each platform version. The selection of the right architecture design and back-end services is crucial for developing effective application solutions, and expertise in these choices can be encoded into an MDD process. While there have been some research efforts to apply MDD to mobile applications, the existing studies are limited and offer opportunities for further improvement. Currently, the published work primarily focuses on generating user interfaces, developing simple data-centric applications, or building applications with predefined functions. Therefore, this research introduces AppCraft, a framework based on MDD that is specifically designed for developing cross-platform mobile health (mHealth) applications. The AppCraft framework simplifies the generation of complex, intelligent, and high-quality self-management mHealth applications by leveraging MDD principles. Moreover, this research investigates the potential of AppCraft in integrating machine learning models within mHealth applications. By leveraging high-level models, AppCraft simplifies the integration process, providing significant benefits to both developers and machine learning engineers. This allows developers to accelerate the mobile application development process and enables ML engineers to test their models effectively. The research is based on design science research principles, which employ artefacts as proof-of-concept to validate research findings. The research commences by conducting a thorough review of existing MDD frameworks, mobile development approaches, and mobile architectures. This is followed by a systematic literature review focused on self-management mHealth applications. Drawing on the insights from this analysis, the AppCraft framework is then developed. The effectiveness of the AppCraft framework is assessed through a series of case studies in the healthcare domain, demonstrating substantial reductions in development time and effort. The evaluation validates the framework’s applicability, flexibility, and simplicity. Furthermore, the generated applications undergo comprehensive evaluation, affirming their efficiency, consistency, and usability.
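    The core MDD idea of deriving implementation code from a high-level model can be illustrated with a toy generator. This is not AppCraft; the model format, entity, and generated target below are invented for illustration.

```python
# A toy model-to-code generator: emit a Python dataclass from a declarative model.
# Illustrates the general MDD idea only; the model and target are invented.

model = {
    "entity": "GlucoseReading",
    "fields": [("value_mmol_per_l", "float"), ("measured_at", "str"), ("note", "str")],
}

def generate_dataclass(model):
    """Emit Python source for a dataclass described by the model."""
    lines = ["from dataclasses import dataclass", "", "@dataclass", f"class {model['entity']}:"]
    for name, type_name in model["fields"]:
        lines.append(f"    {name}: {type_name}")
    return "\n".join(lines) + "\n"

source = generate_dataclass(model)
print(source)
exec(source)  # the generated source is itself valid Python, defining GlucoseReading
reading = GlucoseReading(5.4, "2024-09-01T08:30", "fasting")  # class defined by the exec() above
print(reading)
```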
  • ItemRestricted
    EMPIRICAL EXPLORATION OF SOFTWARE TESTING
    (New Jersey Institute of Technology, 2024-04-18) Alblwi, Samia; Mili, Ali; Oria, Vincent
    Despite several advances in software engineering research and development, the quality of software products remains a considerable challenge. For all its theoretical limitations, software testing remains the main method used in practice to control, enhance, and certify software quality. This doctoral work comprises several empirical studies aimed at analyzing and assessing common software testing approaches, methods, and assumptions. In particular, the concept of mutant subsumption is generalized by taking into account the possibility for a base program and its mutants to diverge for some inputs, demonstrating the impact of this generalization on how subsumption is defined. The problem of mutant set minimization is revisited and recast as an optimization problem by specifying the condition under which the objective function is optimized. Empirical evidence shows that the mutation coverage of a test suite varies widely with the mutant generation operators used within the same tool, and even more widely across tools. The effectiveness of a test suite is defined by its ability to reveal program failures, and the extent to which traditional syntactic coverage metrics correlate with this measure of effectiveness is examined.
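    Mutation score and a dynamic mutant-subsumption relation can be computed from a kill matrix as sketched below. The subsumption rule used here is the commonly cited one (A subsumes B if A is killed and every test that kills A also kills B); the dissertation's generalized definition differs, and the kill data is invented.

```python
# Mutation score and dynamic subsumption from a kill matrix (invented data).

kill_matrix = {
    # mutant -> set of tests that kill it
    "m1": {"t1", "t2"},
    "m2": {"t1", "t2", "t3"},
    "m3": {"t3"},
    "m4": set(),  # a surviving (possibly equivalent) mutant
}

def mutation_score(kill_matrix):
    killed = sum(1 for kills in kill_matrix.values() if kills)
    return killed / len(kill_matrix)

def subsumes(a, b, kill_matrix):
    """True if mutant a is killed and every test that kills a also kills b."""
    return bool(kill_matrix[a]) and kill_matrix[a] <= kill_matrix[b]

print("mutation score:", mutation_score(kill_matrix))       # 0.75
print("m1 subsumes m2:", subsumes("m1", "m2", kill_matrix))  # True
print("m3 subsumes m1:", subsumes("m3", "m1", kill_matrix))  # False (t3 does not kill m1)
```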
  • ItemRestricted
    Developing AI-Powered Support for Improving Software Quality
    (University of Wollongong, 2024-01-12) Alhefdhi, Abdulaziz Hasan M.; Dam, Hoa Khanh; Ghose, Aditya
    Modern software development is experiencing exponential growth in the number of software projects, applications, and code bases. As software increases substantially in both size and complexity, software engineers face significant challenges in developing and maintaining high-quality software applications. Therefore, support in the form of automated techniques and tools is much needed to accelerate development productivity and improve software quality. The rise of Artificial Intelligence (AI) has the potential to bring such support and significantly transform the practices of software development. This thesis explores the use of AI in developing automated support for improving three aspects of software quality: software documentation, technical debt, and software defects. We leverage a large amount of data from software projects and repositories to provide actionable insights and reliable support. Using cutting-edge machine/deep learning technologies, we develop a novel suite of automated techniques and models for pseudo-code documentation generation; technical debt identification, description, and repayment; and patch generation for software defects. We conduct several intensive empirical evaluations, which show the high effectiveness of our approach.
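    One of the three targets above, technical debt identification, is often bootstrapped from self-admitted technical debt (SATD) comments. The sketch below is a deliberately simple keyword matcher, not the thesis's machine/deep learning models; the patterns and comments are invented.

```python
# Minimal keyword-based detector for self-admitted technical debt (SATD) in code
# comments. Illustrates the problem setup only; patterns and comments are invented.
import re

SATD_PATTERNS = [r"\btodo\b", r"\bfixme\b", r"\bhack\b", r"\bworkaround\b", r"\btemporary\b"]

def is_satd(comment: str) -> bool:
    text = comment.lower()
    return any(re.search(pattern, text) for pattern in SATD_PATTERNS)

comments = [
    "# TODO: replace this O(n^2) loop before the next release",
    "# Validates the user input against the schema",
    "# HACK: work around the driver bug until upstream fixes it",
]
for comment in comments:
    print(is_satd(comment), comment)
```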
  • ItemRestricted
    Development Techniques for Large Language Models for Low Resource Languages
    (University of Texas at Dallas, 2023-12) Alsarra, Sultan; Khan, Latifur
    Recent advancements in Natural Language Processing (NLP) driven by large language models have brought about transformative changes in various sectors reliant on extensive text-based research. This dissertation is the culmination of techniques designed for crafting domain-specific large language models tailored to low-resource languages, offering invaluable support to researchers engaged in large-scale text analysis. The primary focus of these models is to address the nuances of politics, conflicts, and violence in the Middle East and Latin America using domain-specific, pre-trained large language models in Arabic and Spanish. Throughout the development of these language models, we construct a multitude of downstream tasks, including named entity recognition, binary classification, multi-label classification, and question answering. Additionally, we lay out a roadmap for the creation of domain-specific large language models. Our core objective is to contribute by devising NLP strategies and methodologies that surmount the challenges posed by low-resource languages. This contribution extends to curating an extensive corpus of texts centered around regional politics and conflicts in Spanish and Arabic, thereby enriching research in the domain of NLP large language models for low-resource languages. We assess the performance of our models against the Bidirectional Encoder Representations from Transformers (BERT) model as a baseline. Our findings unequivocally establish that the utilization of domain-specific pre-trained language models markedly enhances the performance of NLP models in the realm of politics and conflict analysis. This is observed in both Arabic and Spanish, spanning diverse types of downstream tasks. Consequently, our work equips researchers in the realm of large language models for low-resource languages with potent tools. Simultaneously, it offers political and conflict analysts, including policymakers, scholars, and practitioners, novel approaches and instruments for deciphering the intricate dynamics of local politics and conflicts, directly in Arabic and Spanish.
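    Using a pretrained encoder for one of the downstream tasks mentioned above (binary classification) typically looks like the sketch below with the Hugging Face `transformers` library. The baseline checkpoint name is a stand-in (the dissertation's Arabic and Spanish domain-specific models are not named here), the example sentence is invented, and the classification head would still need fine-tuning before its predictions mean anything.

```python
# Loading a pretrained encoder with a classification head for a binary task.
# The checkpoint is a generic multilingual baseline used as a stand-in; a
# domain-specific Arabic or Spanish model would be loaded the same way. The
# head is randomly initialized and requires fine-tuning to be meaningful.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "bert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

text = "اندلعت احتجاجات في وسط المدينة أمس"  # invented example: "protests broke out downtown yesterday"
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print("predicted class:", logits.argmax(dim=-1).item())
```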
  • ItemRestricted
    Toward Better Understanding and Documentation of Rationale for Code Changes
    (Saudi Digital Library, 2023-07-28) Al Safwan, Khadijah; Servant, Francisco Javier
    Software development is driven by the development team’s decisions. Communicating the rationale behind these decisions is essential for the project's success. Although the software engineering community recognizes the need for and importance of rationale, there has been a lack of in-depth study of the rationale for code changes. To bridge this gap, this dissertation examines the rationale behind code changes in both depth and breadth. This work includes two studies and an experiment. The first study aims to understand software developers’ needs. It finds that software developers need to investigate code changes to understand their rationale when working on diverse tasks. The study also reveals that software developers decompose the rationale of code commits into 15 separate components that they may seek when searching for rationale. The second study surveys software developers’ experiences with rationale. It uncovers issues and challenges that software developers encounter while searching for and recording the rationale for code changes. The study highlights rationale components that are needed yet hard to find. Additionally, it discusses factors that lead software developers to give up their search for the rationale of code changes. Finally, the experiment predicts the documentation of rationale components in pull request templates. Multiple statistical models are built to predict whether the headers of rationale components will be left unfilled, and the trained models achieve high accuracy and recall. Overall, this work’s findings shed light on the need for rationale and offer deep insights for fulfilling this important information need.
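    The experiment's prediction task (will a rationale header be left unfilled?) can be framed as a standard binary classifier, as in the scikit-learn sketch below. The features, labels, and model choice here are invented toy stand-ins, not the dissertation's actual models.

```python
# Toy binary classifier predicting whether a rationale header in a pull-request
# template will be filled. Features and labels are invented illustrative data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, recall_score

# Toy features per pull request: [description length, files changed, author's prior PRs]
X = np.array([
    [250, 3, 40], [12, 1, 2], [480, 9, 75], [30, 2, 5],
    [600, 12, 90], [8, 1, 1], [320, 5, 33], [15, 2, 4],
])
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])  # 1 = rationale header filled, 0 = left empty

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
pred = clf.predict(X_test)
print("accuracy:", accuracy_score(y_test, pred))
print("recall:", recall_score(y_test, pred))
```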
  • ItemRestricted
    Hypothesis-Based Debuggers
    (2023-05-19) Alaboudi, Abdulaziz; LaToza, Thomas
    Debugging has traditionally been described as a process of formulating and testing hypotheses about defects, but there has been a lack of tools to assist developers in this crucial process. While prior work has mainly focused on fault localization, developing tools that automatically locate buggy code, the effectiveness of these tools compared to tools that help developers find and test hypotheses remains unclear. I conducted a randomized controlled experiment to address this question. The results showed that a tool that helped identify the correct hypothesis improved developers' success in fixing defects sixfold compared to a fault localization tool. This finding underscores the potential effectiveness of debuggers that assist in finding and testing hypotheses. Designing effective debuggers that support formulating and testing hypotheses requires a deeper understanding of debugging behavior. To fill this knowledge gap, I created a new dataset of debugging work from live-streamed programming videos and used it to study two behaviors grounded in the hypothesis model of debugging: the broader activities developers engage in during debugging, and the edit-run cycles developers perform while testing hypotheses. These studies revealed several interesting findings about debugging behavior, suggesting that debuggers should aim to reduce the effort developers spend switching between activities and rerunning the program to find and test hypotheses. This dissertation introduces the concept of hypothesis-based debuggers, a new type of debugger designed to streamline identifying and testing hypotheses. Unlike traditional debuggers, which require manual switching between activities to gather information about a defect, hypothesis-based debuggers automatically gather the necessary information about the defect and use it to identify relevant hypotheses. Instead of leaving developers to test many hypotheses ineffectively, hypothesis-based debuggers rank hypotheses by relevance and offer a step-by-step testing plan for each relevant hypothesis. To validate the effectiveness of hypothesis-based debuggers, I implemented and evaluated a debugger named Hypothesizer. Results showed that Hypothesizer significantly improves the success rate of fixing defects compared to traditional debuggers and Stack Overflow. Hypothesizer simplifies the process of observing defect behavior, assists developers in identifying the correct hypothesis, and helps them effectively verify or refute relevant hypotheses, even with limited knowledge of the codebase.
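    The ranking step of a hypothesis-based debugger can be illustrated with a toy symptom-matching scorer. This is not Hypothesizer; the hypotheses, symptoms, and Jaccard scoring below are invented for illustration.

```python
# Toy ranking of candidate defect hypotheses by how well their expected symptoms
# match the observed ones. Hypotheses, symptoms, and scoring are invented.

hypotheses = {
    "Event handler never registered": {"no console error", "ui does not update"},
    "State mutated without re-render": {"ui does not update", "state looks correct in logs"},
    "API call rejected by CORS": {"console error", "network request fails"},
}

observed = {"ui does not update", "state looks correct in logs"}

def rank_hypotheses(hypotheses, observed):
    """Score each hypothesis by Jaccard similarity between its symptoms and the observed ones."""
    def jaccard(a, b):
        return len(a & b) / len(a | b)
    return sorted(
        ((jaccard(symptoms, observed), name) for name, symptoms in hypotheses.items()),
        reverse=True,
    )

for score, name in rank_hypotheses(hypotheses, observed):
    print(f"{score:.2f}  {name}")
```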

Copyright owned by the Saudi Digital Library (SDL) © 2025