Saudi Cultural Missions Theses & Dissertations

Permanent URI for this community: https://drepo.sdl.edu.sa/handle/20.500.14154/10

Browse

Search Results

Now showing 1 - 2 of 2
  • Item (Restricted)
    Developing a virus-like particle (VLP) Polio vaccine
    (Saudi Digital Library, 2025) Altalhi, Sarah; Mulley, Jay
    Poliovirus (PV) is a subtype of the enterovirus C species, widely known as the agent responsible for poliomyelitis, a destruction of neurons that can result in paralysis and death. Control of the infection is achieved by vaccination with either a live-attenuated oral poliovirus vaccine (OPV) or an inactivated poliovirus vaccine (IPV), with global coverage reaching nearly 100%. As cases of natural infection fall, the manufacture of the current vaccines, both of which rely on large-scale virus growth, presents a biosafety risk, and PV vaccines produced without live virus are therefore desirable to sustain global PV vaccination. Virus-like particles (VLPs), which are assembled from viral capsid proteins synthesized in recombinant expression systems and are incapable of infection, represent a promising infection-free vaccine for PV. This study produced the capsid proteins VP0, VP1, and VP3 in E. coli and baculovirus expression systems and assessed both their expression levels and their ability to assemble into virus-like particles. To avoid protein aggregation in the E. coli system, each protein was fused to a SUMO tag and purified individually for SUMO-tag removal and attempted VLP assembly. As a novel approach, E. coli strains were transformed with all three vectors simultaneously and processed in the same way. Despite these modifications, the SUMO-tagged capsid proteins remained largely insoluble, and the low levels present in the soluble fractions were not cleaved by the SUMO protease. In the baculovirus system, the P1 precursor protein, co-expressed with the 3C protease, gave rise to the authentic cleavage pattern, and modifications at both the N- and C-termini were investigated as a means to improve expression levels. A range of mutations aimed at optimizing the N-myristoylation reaction revealed several that were associated with increased levels of cleaved mature capsid proteins. Further, modification of the C-terminus, including short truncations of the VP1 sequence, was shown to improve expression levels. In a final study, mutations in VP4 previously reported to abrogate the entry reaction of the live virus were investigated in the VLP system. Individual mutations at VP4 residues 24, 28, and 29 were shown to significantly alter expression levels, and these changes were further analysed in the context of VP0 alone by fusing VP0 to Green Fluorescent Protein (GFP). The outcome of these studies showed that single-residue changes in VP4, in the context of VP0, can significantly affect protein expression levels and subcellular localization. Overall, these studies suggest that optimization of the P1 sequence can improve the levels of PV protein expressed. However, when tested, the antigenicity of the PV VLPs was detected predominantly in the H (heated) rather than the N (native) form, suggesting that none of the alterations tested produced the VLP conformation required for vaccine use.
  • Item (Restricted)
    Fine-Tuning Large Language Models: A Systematic Review of Methods, Challenges, and Domain-Specific Adaptations
    (Saudi Digital Library, 2025) Altalhi, Sarah; Albaqami, Norah; Alharbi, Shaima; Hussain, Farookh
    Fine-tuning large language models (LLMs) has emerged as a crucial step in adapting these powerful models to specialized tasks and domains. In this paper, we present a systematic literature review of recent techniques for fine-tuning LLMs, the challenges encountered across different application domains, and the strategies developed to address domain-specific requirements. We identify four key requirements for effective fine-tuning: (1) parameter-efficient and scalable methods that mitigate the resource cost of updating billion-parameter models; (2) high-quality, low-cost data-usage techniques for curating or generating training data; (3) domain adaptability and knowledge-integration approaches, including retrieval augmentation and alignment with knowledge bases; and (4) robust evaluation and interpretability practices to ensure fine-tuned models are accurate and trustworthy. We analyze six representative papers across diverse domains: healthcare (biomedical LLMs such as Med-PaLM and BioGPT), recommender systems (e.g., the DEALRec data-efficient tuning framework), smart manufacturing (knowledge-graph-augmented RAG pipelines), socially informed AI (instruction-tuned models such as FLAN and LLaMA-based alignments), and education (comparing specialized small models to GPT-4 with retrieval). Through this analysis, we synthesize how each approach fulfills or falls short of the identified requirements. Our review highlights emerging trends such as parameter-efficient fine-tuning (PEFT), retrieval-augmented generation (RAG), and multi-task instruction tuning as promising directions for specializing LLMs while controlling cost and maintaining performance. We discuss open challenges, including the trade-off between efficiency and performance, data bias and scarcity, maintaining generalization across domains, and improving evaluation metrics and interpretability. Finally, we outline future research opportunities to further enhance the fine-tuning of LLMs for domain-specific applications.
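    The cost argument behind parameter-efficient fine-tuning can be made concrete with a minimal LoRA-style sketch: instead of updating a full d_out × d_in weight matrix, only a low-rank pair of matrices B (d_out × r) and A (r × d_in) is trained, and their product is added to the frozen weight. The function names and dimensions below are illustrative assumptions for this sketch, not code from any of the reviewed papers.

```python
# Minimal LoRA-style PEFT sketch in pure Python (matrices as lists of rows).
# Illustrative only: names and dimensions are assumptions, not from the review.

def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_update(W, A, B, alpha=1.0):
    """Return W + alpha * (B @ A): frozen weight plus a trainable low-rank delta."""
    delta = matmul(B, A)
    return [[w + alpha * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, delta)]

def trainable_params(d_out, d_in, rank):
    """Trainable parameter counts: full fine-tuning vs. a rank-r LoRA adapter."""
    full = d_out * d_in            # every entry of W is updated
    lora = rank * (d_out + d_in)   # only B (d_out x r) and A (r x d_in)
    return full, lora

full, lora = trainable_params(4096, 4096, rank=8)
print(full, lora)  # 16777216 65536 -- the adapter trains ~0.4% of the parameters
```

For a single 4096 × 4096 weight, a rank-8 adapter trains 65,536 parameters instead of 16.8 million, which is the scalability property requirement (1) above asks for.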

Copyright owned by the Saudi Digital Library (SDL) © 2026