Browsing by Author "Alanazi, Almutasim Billa Abdullah Alanazi"
ENHANCING SAFETY AND EFFICIENCY FOR CONTROL SYSTEMS BASED ON REINFORCEMENT LEARNING (The University of Arizona, 2025)
Alanazi, Almutasim Billa Abdullah Alanazi; Tharp, Hal Stanley
Item Restricted

This dissertation explores applying Artificial Intelligence (AI) techniques, specifically Reinforcement Learning (RL), to develop new control strategies that enhance safety and efficiency across various control systems. RL, recognized for its ability to adapt in dynamic environments without precise knowledge of system dynamics, offers a promising alternative to traditional (classical) control methods such as Proportional-Integral-Derivative (PID) control. In this research, RL models are applied to four applications to assess RL's efficacy in terms of safety and efficiency: hydroponic systems, navigation systems, power management, and autonomous vehicles. All RL models are based on model-free RL methods, specifically Q-learning and Proximal Policy Optimization (PPO). These applications were selected because safety and efficiency are critical to each of them; the study therefore examines model performance and output objectives through that lens and notes the advantages introduced by the RL models.

For the first application, the RL model in the hydroponic system outperforms traditional methods in maintaining optimal pH and Electrical Conductivity (EC) levels, achieving accuracies of 96% and 99%, respectively, and demonstrating reduced settling times under disturbance conditions. In a high-disturbance study over 1,000 episodes, the RL model improved safety by up to 15% on average, measured by monitoring the system's success through the rewards returned from the environment, where a positive reward counts as a success when the controlled reference is within the desired operating zone. It also achieved 35-40% energy savings by enabling an idle mode while the controlled reference remained within the desired operating zone, highlighting its potential to enhance agricultural technologies in terms of safety and efficiency.

For the second application, the RL model enhances the navigation system's path planning by incorporating environmental factors when navigating through risk zones, such as poor weather conditions or sandstorms. The strategy involves updating the reward matrix based on safety and efficiency specifications to generate optimal routes that prioritize driver safety. Further improvements are achieved by minimizing energy consumption along these optimal routes, thereby enhancing overall efficiency.

For the third application, the RL model for power management in a controlled hybrid solar system limits grid reliance by up to 11.76% during peak hours, improving safety, enhancing energy stability, and offering a viable alternative to traditional methods such as Time-of-Use programs. A financial analysis suggests a 14-year payback period, with projected profits of $8,500 over 25 years, indicating both economic and environmental benefits.

In the fourth application, the RL model for autonomous vehicles employs continuous reward functions that improve lane-following and obstacle avoidance, thereby enhancing safety. Compared to existing studies using the AWS DeepRacer simulator, this model reduced training time by 50%, resulting in more efficient learning. The continuous reward mechanism is well suited to real-time scenarios, as it provides the agent with richer feedback at every step, leading to better performance and greater learning efficiency.
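To make the continuous-reward idea concrete, the following is a minimal sketch of a continuous lane-following reward written against the AWS DeepRacer reward-function interface (a Python function receiving an input dictionary with keys such as 'track_width', 'distance_from_center', and 'all_wheels_on_track'). The Gaussian shaping and its width constant are illustrative assumptions, not the dissertation's actual reward design.

    import math

    def reward_function(params):
        """Continuous lane-following reward (illustrative sketch).

        Uses the AWS DeepRacer input dictionary; the Gaussian shaping
        below is a hypothetical choice, not the dissertation's design.
        """
        track_width = params['track_width']
        distance_from_center = params['distance_from_center']
        all_wheels_on_track = params['all_wheels_on_track']

        # Near-zero reward when the car leaves the track.
        if not all_wheels_on_track:
            return 1e-3

        # Smooth reward that decays with distance from the center line,
        # so the agent receives a graded signal at every step instead of
        # a handful of discrete distance bands.
        sigma = 0.25 * track_width  # illustrative width of the reward peak
        reward = math.exp(-0.5 * (distance_from_center / sigma) ** 2)
        return float(reward)

Unlike the discrete distance bands used in many DeepRacer examples, a smooth curve like this returns informative, nonzero feedback at every position, which is the property the abstract credits for faster, more efficient learning.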
The findings highlight RL's ability to address complex control challenges, enhancing safety and efficiency across the examined cases, and underline its potential for broader adoption in control systems and real-world applications.
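As general background for the Q-learning-based controllers used across these applications, the following is a minimal tabular Q-learning sketch for holding a controlled variable (here, a discretized pH error) inside a desired operating zone, with an idle action that mirrors the abstract's energy-saving idle mode. Every state, action, transition, and constant is an illustrative assumption, not the dissertation's actual model.

    import numpy as np

    # Minimal tabular Q-learning sketch for band-keeping control
    # (e.g., holding pH inside a desired operating zone). States,
    # actions, and constants here are illustrative assumptions.
    N_STATES = 11          # discretized pH error bins (hypothetical)
    ACTIONS = [0, 1, 2]    # 0 = idle, 1 = dose acid, 2 = dose base
    ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

    Q = np.zeros((N_STATES, len(ACTIONS)))
    rng = np.random.default_rng(0)

    def reward(state, action):
        """+1 inside the desired zone, -1 outside; a small bonus for
        idling inside the zone mirrors the energy-saving idle mode."""
        in_zone = 4 <= state <= 6
        r = 1.0 if in_zone else -1.0
        if in_zone and action == 0:
            r += 0.2
        return r

    def step(state, action):
        """Toy transition: dosing shifts the pH bin, idling drifts it.
        A real model would come from the hydroponic system dynamics."""
        if action == 0:
            drift = int(rng.integers(-1, 2))  # idle: random drift
        elif action == 1:
            drift = -1                        # acid lowers the pH bin
        else:
            drift = 1                         # base raises the pH bin
        return int(np.clip(state + drift, 0, N_STATES - 1))

    state = int(rng.integers(N_STATES))
    for _ in range(5000):
        # Epsilon-greedy action selection.
        if rng.random() < EPSILON:
            a = int(rng.integers(len(ACTIONS)))
        else:
            a = int(Q[state].argmax())
        nxt = step(state, a)
        # Standard Q-learning update:
        # Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        Q[state, a] += ALPHA * (reward(state, a) + GAMMA * Q[nxt].max() - Q[state, a])
        state = nxt

After training, a greedy policy over Q tends to dose only when the state leaves the center bins and to idle inside them, which is the mechanism behind the energy savings described above; the specific numbers reported in the abstract come from the dissertation's own experiments, not from this toy sketch.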