A Deep Reinforcement Learning Approach for Fast Frequency Control in Elastic Power System

dc.contributor.advisorBarati, Masoud
dc.contributor.authorAlbeladi, Faisal
dc.date.accessioned2024-05-26T11:49:31Z
dc.date.available2024-05-26T11:49:31Z
dc.date.issued2024
dc.description.abstractAs power systems transition towards sustainable energy sources, the integration of renewables poses several challenges that necessitate innovative management and control strategies. This dissertation addresses the urgent challenges faced by modern power systems due to the high penetration of renewable energy sources, increased demand, and the integration of diverse market participants. These factors introduce significant unpredictability and complexity, leading to voltage and frequency stability issues and necessitating the construction of new power infrastructure, which imposes operational, financial, and environmental burdens. To overcome these obstacles, this dissertation explores the application of Deep Learning (DL) and Reinforcement Learning (RL) to develop real-time, adaptive control strategies that can cope with the non-linear and stochastic nature of today's power grids. Specifically, it presents a model-free frequency control scheme utilizing Deep Reinforcement Learning (DRL) that enhances primary and secondary frequency control mechanisms through the Deep Deterministic Policy Gradient (DDPG) method. This approach demonstrates significant potential in mitigating the adverse effects of grid stochasticity, thereby bolstering system stability. Additionally, the dissertation addresses the optimization of grid-interactive efficient buildings using the Soft Actor-Critic (SAC) algorithm, improving the performance of a cluster of energy storage systems for peak shaving, valley filling, and grid self-sustainability. The experimental results demonstrate that the central controller achieves high performance on both local and district-wide operational evaluation indices. Lastly, we tackle the low-inertia challenge in zero-carbon grids through the application of Graph Neural Networks (GNNs), particularly the Graph Attention Network (GAT), for effective inertia estimation.
This innovative method aids in the precise management of grid resources post-disturbance, highlighting the critical role of attention mechanisms in enhancing decision-making processes for system operators. The findings underscore the transformative impact of DL and RL in advancing power system control and management, especially amidst the complexities introduced by renewable energy integration and the transition to low-inertia grids. The proposed solutions not only pave the way for advanced real-time control strategies but also signify a leap towards sustainable and resilient power systems in the era of green energy.
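To make the model-free frequency control idea concrete, the sketch below is a deliberately minimal toy: a linear one-area frequency model (swing-equation style) and a single-parameter deterministic policy improved from reward feedback alone, using finite-difference gradient ascent as a crude stand-in for DDPG's actor-critic update. All constants, the model, and the learning rule are illustrative assumptions for exposition, not the dissertation's actual DDPG implementation or test system.

```python
import numpy as np

# Toy one-area frequency model (illustrative constants, not from the thesis):
#   2H * d(df)/dt = u - dPL - D * df
# where df is frequency deviation, u the control action, dPL a load step.
H, D, DT = 5.0, 1.0, 0.1

def rollout(gain, load_step=0.1, steps=100):
    """Simulate the linear model under the policy u = -gain * df.
    Returns the negative accumulated squared deviation (higher is better)."""
    df, reward = 0.0, 0.0
    for _ in range(steps):
        u = -gain * df                          # deterministic control policy
        ddf = (u - load_step - D * df) / (2 * H)
        df += DT * ddf                          # Euler integration step
        reward -= df ** 2                       # penalize frequency deviation
    return reward

# Model-free policy improvement: finite-difference gradient ascent on the
# policy parameter, a simplified analogue of DDPG's deterministic actor update.
gain, lr, eps = 0.0, 5.0, 1e-3
for _ in range(300):
    grad = (rollout(gain + eps) - rollout(gain - eps)) / (2 * eps)
    gain += lr * grad
```

After training, the learned positive gain damps the frequency deviation faster than no control, which is the qualitative behavior the DRL controller in the dissertation achieves with a far richer nonlinear model and neural-network policy.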
dc.format.extent157
dc.identifier.urihttps://hdl.handle.net/20.500.14154/72141
dc.language.isoen_US
dc.publisherUniversity of Pittsburgh
dc.subjectDeep Reinforcement Learning
dc.subjectFrequency control
dc.subjectGrid-supportive loads
dc.subjectGrid-interactive efficient building
dc.subjectInertia Estimation
dc.subjectLoad frequency control
dc.subjectPrimary frequency control
dc.titleA Deep Reinforcement Learning Approach for Fast Frequency Control in Elastic Power System
dc.typeThesis
sdl.degree.departmentElectrical and Computer Engineering
sdl.degree.disciplineElectrical Engineering
sdl.degree.grantorUniversity of Pittsburgh
sdl.degree.nameDoctor of Philosophy


Copyright owned by the Saudi Digital Library (SDL) © 2025