Deep Reinforcement Learning for Real-Time Energy Management in Community Microgrids

Date

2025

Publisher

Lancaster University

Abstract

The integration of renewable energy sources (RESs), energy storage systems (ESSs), and the electrification of transportation are driving a rapid transformation of modern power systems. These changes offer great potential to reduce carbon emissions, increase sustainability, and improve reliability, but they also present complex challenges. Traditional, centralized power systems were designed for one-way power flow from large power plants to consumers and are becoming increasingly inadequate in the face of intermittent renewable generation, distributed and variable loads, and heightened risks of severe weather disturbances. Intelligent, adaptive, and resilient energy management methods have therefore become a critical priority. In this thesis, I address the need for advanced, real-time control in modern power systems by using deep reinforcement learning (DRL) to optimize active and reactive power flows under uncertainty.

First, a model-free framework is developed for a single home energy management system (HEMS) that integrates photovoltaic (PV) panels, ESSs, electric vehicles (EVs), and multiple types of residential loads. In contrast to existing methods that consider active power flows alone, the proposed method also optimizes reactive power to improve the power factor and avoid possible financial penalties. The framework adapts to fluctuating renewable generation, uncertain EV charging profiles, and unpredictable load behavior by using DRL algorithms that learn directly from interactions with the environment, without explicit mathematical models. Tests on real-world data show electricity cost savings of over 30% and substantial power factor improvements.

To extend this concept from individual homes to larger communities, a community energy management system (CEMS) is then proposed. Multiple smart homes, each equipped with a HEMS, are interconnected through a point of common coupling to form a community microgrid (CMG). Each home acts as a local agent making autonomous decisions, and a multi-agent DRL (MADRL) architecture coordinates their actions in a decentralized yet cooperative manner. In addition, electricity price forecasting with a Long Short-Term Memory (LSTM) network is integrated to enable proactive scheduling of flexible loads. Simulation results show that this data-driven, cooperative control approach can reduce overall community electricity costs by up to 29.66% and keep community voltages more stable than conventional centralized and model-based methods. Moreover, the proposed MADRL strategy retains decision-making at the household level, which provides benefits in terms of privacy, scalability, and adaptability to varying grid conditions.

Finally, recognizing that larger-scale distribution networks also require advanced coordination, the thesis incorporates optimal power flow (OPF) constraints into the energy management system (EMS) for CMGs with high penetration of renewables, ESSs, and EVs. The OPF problem is reformulated as a Markov decision process (MDP) and solved with a dual-layer DRL structure. The first layer provides continuous control of active power using a twin delayed deep deterministic policy gradient (TD3) algorithm, with the objectives of minimizing cost, preventing load shedding, and making efficient use of distributed energy resources (DERs). The second layer uses a double deep Q-network (DDQN) to control reactive power through discrete actions and maintain voltage stability. This dual-layer approach addresses the challenges posed by high-dimensional, non-linear, and stochastic power systems.
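To make the dual-layer idea concrete, the sketch below shows, under assumed state and action dimensions, how a TD3-style continuous actor for active power can be paired with a DDQN-style value network that selects a discrete reactive-power step at each control interval. This is an illustrative outline only, not the thesis implementation; the network sizes, state contents, and action sets are placeholders.

# Minimal sketch (not the thesis implementation) of the dual-layer DRL idea:
# layer 1: a TD3-style deterministic actor gives continuous active-power set-points,
# layer 2: a DDQN-style Q-network picks a discrete reactive-power step.
# State dimension, action counts, and layer sizes below are illustrative assumptions.

import torch
import torch.nn as nn

STATE_DIM = 12     # e.g. bus voltages, net load, PV output, ESS state of charge (assumed)
N_P_ACTIONS = 3    # continuous active-power set-points, e.g. ESS, EV, flexible load (assumed)
N_Q_ACTIONS = 5    # discrete reactive-power steps, e.g. capacitor-bank positions (assumed)

class Actor(nn.Module):
    """Continuous policy for active power (TD3-style actor)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, N_P_ACTIONS), nn.Tanh())   # actions scaled to [-1, 1]

    def forward(self, s):
        return self.net(s)

class QNet(nn.Module):
    """Discrete action-value network for reactive power (DDQN-style)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, N_Q_ACTIONS))

    def forward(self, s):
        return self.net(s)

def dual_layer_action(state, actor, qnet):
    """One control step: continuous active-power set-points first, then a discrete reactive-power step."""
    with torch.no_grad():
        p_setpoints = actor(state)              # layer 1: active power
        q_step = qnet(state).argmax(dim=-1)     # layer 2: reactive power (greedy)
    return p_setpoints, q_step

if __name__ == "__main__":
    s = torch.randn(1, STATE_DIM)               # placeholder state
    p, q = dual_layer_action(s, Actor(), QNet())
    print("P set-points:", p.numpy(), "Q step index:", int(q.item()))

In a full TD3/DDQN training loop, both networks would be updated from a replay buffer with target networks; the sketch only shows how the two layers split the continuous and discrete parts of the action space.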
Experiments on a modified IEEE 15-bus system demonstrate up to 10.41% cost savings compared with operation without an EMS, together with fewer voltage violations and less load shedding. The dual-layer DRL framework is resilient to stochastic variations in renewable output and load demand, making it a practical candidate for real-time distribution network operation.

Overall, the research demonstrates that DRL-based solutions, whether applied to individual homes, local communities, or larger distribution networks, can successfully handle the uncertainty and variability of modern power systems. By integrating cutting-edge neural network architectures for price forecasting, multi-agent coordination, and dual-layer control, the proposed methods outperform traditional optimization and control approaches in terms of cost efficiency, voltage stability, and scalability. These techniques therefore offer great potential for enabling flexible, economically viable, and robust power grid operation. As the share of RESs, ESSs, and EVs continues to grow, the demand for such intelligent, adaptive, and decentralized energy management solutions will increase, supporting a more sustainable and resilient electricity infrastructure.
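As a complementary illustration of the price-forecasting component mentioned above, the following minimal sketch shows how an LSTM network can map a window of past electricity prices to a next-step price prediction for proactive scheduling of flexible loads. The window length, layer sizes, and data are assumptions for illustration, not the model used in the thesis.

# Minimal sketch (illustrative, not the thesis model) of LSTM-based electricity
# price forecasting: a 24-hour price history is mapped to a next-hour prediction.

import torch
import torch.nn as nn

class PriceLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):             # x: (batch, window, 1) past hourly prices
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # predict the next-hour price

if __name__ == "__main__":
    model = PriceLSTM()
    past_prices = torch.rand(8, 24, 1)   # 8 samples, 24-hour history (placeholder data)
    next_price = model(past_prices)
    print(next_price.shape)              # torch.Size([8, 1])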

Keywords

Deep reinforcement learning, Energy management system, Microgrid, Smart grid, Voltage regulation
