Resilience enhancement of post-disaster power distribution systems using Deep Reinforcement Learning
| dc.contributor.advisor | Zohdy, Mohamed | |
| dc.contributor.advisor | Kaur, Amanpreet | |
| dc.contributor.advisor | Alghamdi, Ali | |
| dc.contributor.advisor | Edwards, William | |
| dc.contributor.advisor | Al-Salman, Zeina | |
| dc.contributor.author | Alotaibi, Raed | |
| dc.date.accessioned | 2026-02-25T10:38:50Z | |
| dc.date.issued | 2026 | |
| dc.description.abstract | Weather-driven extreme events place growing stress on aging distribution infrastructure and increasingly threaten continuity of service for critical loads during prolonged outages. Microgrids can enhance resilience by transitioning to islanded operation and supplying prioritized loads from local distributed energy resources (DERs); however, post-disaster restoration remains challenging because operators must make coupled discrete–continuous decisions under tight resource and operating constraints. This dissertation addressed that challenge by developing a parameterized deep reinforcement learning controller, PDQN-CLR, that targets priority-weighted restoration while enforcing operational feasibility under scarcity. PDQN-CLR modeled restoration as a hybrid action in which a discrete operational category was selected and paired with a continuous parameter vector specifying the DER real and reactive power setpoints. The approach was evaluated in a closed-loop OpenDSS environment using the IEEE 123-node feeder configured as an islanded microgrid with five grid-forming battery energy storage systems and 17 prioritized critical loads over a 72-step (36-hour) horizon with a 30-minute decision interval. Snapshot power-flow evaluation was performed at each step. Uncertainty was represented through capacity-factor derating under sufficient and scarce regimes, and each episode randomized the fault scenario, the derating level, and the initial state of charge. The dissertation also introduced (i) an uncertainty-aware evaluation protocol based on capacity-factor derating; (ii) a three-tier priority-weighted reward that encodes the critical-load hierarchy; and (iii) a DER-aware load service rule that reduced voltage-only overstatement of restoration in islanded operation.
With sufficient resources, PDQN-CLR achieved a mean PCS of 0.94 versus 0.80 for a Greedy baseline, while maintaining a low constraint violation score (CVS) of approximately 0.009 in both the sufficient and scarce regimes. The baseline produced substantially larger violation magnitudes (CVS ≈ 0.388–0.667), indicating more severe and/or more frequent constraint exceedances. These results indicate that parameterized deep reinforcement learning can improve priority-weighted restoration when capacity is available and preserve feasibility as a primary outcome when scarcity limits achievable restoration. | |
| dc.format.extent | 127 | |
| dc.identifier.uri | https://hdl.handle.net/20.500.14154/78296 | |
| dc.language.iso | en_US | |
| dc.publisher | Saudi Digital Library | |
| dc.subject | Energy Management | |
| dc.subject | Reinforcement learning | |
| dc.subject | Critical load restoration | |
| dc.subject | Grid resilience | |
| dc.subject | Parameterized deep Q-network | |
| dc.title | Resilience enhancement of post-disaster power distribution systems using Deep Reinforcement Learning | |
| dc.type | Thesis | |
| sdl.degree.department | Electrical and Computer Engineering | |
| sdl.degree.discipline | Electrical and Computer Engineering | |
| sdl.degree.grantor | Oakland University | |
| sdl.degree.name | Doctor of Philosophy |
