Saudi Cultural Missions Theses & Dissertations

Permanent URI for this community: https://drepo.sdl.edu.sa/handle/20.500.14154/10

Search Results

  • Item (Unknown access)
    Fine-Tuning Large Language Models: A Systematic Review of Methods, Challenges, and Domain-Specific Adaptations
    (Saudi Digital Library, 2025) Alharbi, Shaima; Hussain, Farookh
    Fine-tuning large language models (LLMs) has emerged as a crucial step for adapting these powerful models to specialized tasks and domains. In this paper, we present a systematic literature review of recent techniques for fine-tuning LLMs, the challenges encountered across different application domains, and the strategies developed to address domain-specific requirements. We identify four key requirements for effective fine-tuning: (1) Parameter-efficient and scalable methods that mitigate the resource cost of updating billion-parameter models, (2) High-quality, low-cost data usage techniques for curating or generating training data, (3) Domain adaptability and knowledge integration approaches (including retrieval augmentation and alignment with knowledge bases), and (4) Robust evaluation and interpretability practices to ensure fine-tuned models are accurate and trustworthy. We analyze six representative papers in diverse domains – healthcare (biomedical LLMs like Med-PaLM and BioGPT), recommender systems (e.g. the DEALRec data-efficient tuning framework), smart manufacturing (knowledge-graph-augmented RAG pipelines), socially-informed AI (instruction-tuned models like FLAN and LLaMA-based alignments), and education (comparing specialized small models to GPT-4 with retrieval). Through this analysis, we synthesize how each approach fulfills or falls short of the identified requirements. Our review highlights emerging trends such as parameter-efficient fine-tuning (PEFT), retrieval augmented generation (RAG), and multi-task instruction tuning as promising directions to specialize LLMs while controlling cost and maintaining performance. We discuss open challenges including the trade-off between efficiency and performance, data bias and scarcity, maintaining generalization across domains, and improving evaluation metrics and interpretability. Finally, we outline future research opportunities to further enhance the fine-tuning of LLMs for domain-specific applications.
    (Illustrative sketches of the PEFT and RAG techniques named in this abstract follow the listing below.)
  • Item (Restricted access)
    Fine-Tuning Large Language Models: A Systematic Review of Methods, Challenges, and Domain-Specific Adaptations
    (Saudi Digital Library, 2025) Altalhi, Sarah; Albaqami, Norah; Alharbi, Shaima; Hussain, Farookh
    Fine-tuning large language models (LLMs) has emerged as a crucial step for adapting these powerful models to specialized tasks and domains. In this paper, we present a systematic literature review of recent techniques for fine-tuning LLMs, the challenges encountered across different application domains, and the strategies developed to address domain-specific requirements. We identify four key requirements for effective fine-tuning: (1) Parameter-efficient and scalable methods that mitigate the resource cost of updating billion-parameter models, (2) High-quality, low-cost data usage techniques for curating or generating training data, (3) Domain adaptability and knowledge integration approaches (including retrieval augmentation and alignment with knowledge bases), and (4) Robust evaluation and interpretability practices to ensure fine-tuned models are accurate and trustworthy. We analyze six representative papers in diverse domains – healthcare (biomedical LLMs like Med-PaLM and BioGPT), recommender systems (e.g. the DEALRec data-efficient tuning framework), smart manufacturing (knowledge-graph-augmented RAG pipelines), socially-informed AI (instruction-tuned models like FLAN and LLaMA-based alignments), and education (comparing specialized small models to GPT-4 with retrieval). Through this analysis, we synthesize how each approach fulfills or falls short of the identified requirements. Our review highlights emerging trends such as parameter-efficient fine-tuning (PEFT), retrieval augmented generation (RAG), and multi-task instruction tuning as promising directions to specialize LLMs while controlling cost and maintaining performance. We discuss open challenges including the trade-off between efficiency and performance, data bias and scarcity, maintaining generalization across domains, and improving evaluation metrics and interpretability. Finally, we outline future research opportunities to further enhance the fine-tuning of LLMs for domain-specific applications.
  • Item (Restricted access)
    Navigating Consumer Resistance: Understanding Barriers to Technology Adoption in the Modern Market Through Systematic Literature Review
    (University of Essex, 2024-09-11) Alsaidan, Nawaf; Kuanr, Abhisek
    This study systematically explores and identifies factors contributing to consumer resistance to technology adoption in modern markets and proposes strategies to address them. The research focuses on three objectives: identifying primary barriers to adoption, analyzing psychological, sociocultural, and functional impacts, and evaluating marketing strategies to mitigate resistance. Using a systematic literature review of 21 studies (2010–2023), the study finds that consumer resistance arises from psychological barriers (e.g., tradition and skepticism), sociocultural influences (e.g., norms and social pressures), and functional issues (e.g., value, risk, complexity). It concludes that holistic approaches addressing all barrier types are essential for effective technology adoption.
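
The fine-tuning records above single out parameter-efficient fine-tuning (PEFT) as a key trend. As a purely illustrative sketch (not code from the theses), the following applies LoRA adapters with the Hugging Face peft library; the small base model (gpt2), the placeholder training file domain_corpus.txt, and all hyperparameters are assumptions chosen to keep the example minimal.

    # Minimal LoRA fine-tuning sketch (illustrative; the model, data file,
    # and hyperparameters are assumptions, not taken from the reviewed theses).
    from datasets import load_dataset
    from peft import LoraConfig, get_peft_model
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    model_name = "gpt2"  # small stand-in for a billion-parameter LLM
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    tokenizer.pad_token = tokenizer.eos_token  # GPT-2 defines no pad token
    model = AutoModelForCausalLM.from_pretrained(model_name)

    # LoRA freezes the base weights and trains small low-rank adapter
    # matrices inside the attention layers, so only a tiny fraction of
    # the parameters is ever updated.
    lora = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                      target_modules=["c_attn"], task_type="CAUSAL_LM")
    model = get_peft_model(model, lora)
    model.print_trainable_parameters()  # typically well under 1% trainable

    # "domain_corpus.txt" is a hypothetical one-example-per-line text file.
    data = load_dataset("text", data_files={"train": "domain_corpus.txt"})["train"]
    data = data.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
                    batched=True, remove_columns=["text"])

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="lora-out", num_train_epochs=1,
                               per_device_train_batch_size=4, learning_rate=2e-4),
        train_dataset=data,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()

Because only the adapter weights are trained, the same recipe scales to far larger models than full fine-tuning could handle on modest hardware, which is the cost argument the abstracts make for PEFT.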

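The same abstracts point to retrieval-augmented generation (RAG) for grounding answers in external knowledge. The sketch below shows only the retrieve-then-prompt pattern in plain Python; the toy document store, the query, and the bag-of-words cosine scoring are assumptions for brevity (a real pipeline would use dense embeddings, a vector index, and an actual LLM call on the final prompt).

    # Minimal RAG sketch: score documents against the query, keep the best,
    # and prepend them to the prompt so the generator answers from evidence.
    import math
    from collections import Counter

    docs = [
        "LoRA inserts low-rank adapter matrices into attention layers.",
        "Retrieval augmentation grounds model answers in external documents.",
        "Instruction tuning trains a model on (instruction, response) pairs.",
    ]

    def vec(text: str) -> Counter:
        return Counter(text.lower().split())

    def cosine(a: Counter, b: Counter) -> float:
        dot = sum(a[t] * b[t] for t in a)
        norm = (math.sqrt(sum(v * v for v in a.values()))
                * math.sqrt(sum(v * v for v in b.values())))
        return dot / norm if norm else 0.0

    def retrieve(query: str, k: int = 1) -> list[str]:
        q = vec(query)
        return sorted(docs, key=lambda d: cosine(q, vec(d)), reverse=True)[:k]

    query = "How does retrieval augmentation help an LLM?"
    context = "\n".join(retrieve(query))
    prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    print(prompt)  # this prompt would then be sent to the (fine-tuned) LLM
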
Copyright owned by the Saudi Digital Library (SDL) © 2025