Stability Constrained Reinforcement Learning for Real-Time Voltage Control (Papers Track)

Jie Feng (UCSD); Yuanyuan Shi (University of California San Diego); Guannan Qu (Carnegie Mellon University); Steven Low (California Institute of Technology); Animashree Anandkumar (Caltech); Adam Wierman (California Institute of Technology)

NeurIPS 2022 Poster
Topics: Power & Energy | Reinforcement Learning


This paper is a summary of a recently submitted work. Deep Reinforcement Learning (DRL) has been recognized as a promising tool for addressing the challenges of real-time control of power systems. However, its deployment in real-world power systems has been hindered by a lack of explicit stability and safety guarantees. In this paper, we propose a stability constrained reinforcement learning method for real-time voltage control in both single-phase and three-phase distribution grids. The key idea underlying our approach is an explicitly constructed Lyapunov function that certifies stability. We demonstrate the effectiveness of our approach on IEEE test feeders, where the proposed method delivers the best overall performance while always maintaining voltage stability. In contrast, standard RL methods often fail to achieve voltage stability.
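To make the Lyapunov idea concrete, here is a minimal, self-contained sketch (not the paper's actual method or models) of how a Lyapunov function can certify a voltage controller on a toy single-bus system. We assume a simplified scalar dynamic `v_{t+1} = v_t + b * u_t` with `b > 0`, a hypothetical monotone policy through the origin, and the squared voltage deviation as the Lyapunov candidate; the function and parameter names are illustrative only.

```python
import math

def monotone_policy(dev, alpha=0.5):
    """Hypothetical controller: monotone in the voltage deviation and
    zero at the origin, so the control always pushes toward v_ref."""
    return -alpha * math.tanh(dev)

def simulate(v0, v_ref=1.0, b=0.1, steps=50):
    """Toy scalar voltage dynamics v <- v + b * u (b > 0 assumed).
    Returns the final voltage and the Lyapunov value V = dev^2 per step."""
    v = v0
    lyapunov_values = []
    for _ in range(steps):
        dev = v - v_ref
        lyapunov_values.append(dev ** 2)
        v = v + b * monotone_policy(dev)
    return v, lyapunov_values

# With a small enough gain, V = dev^2 is non-increasing along the
# trajectory, which certifies convergence to v_ref in this toy model.
v_final, lyap = simulate(1.05)
```

This captures the spirit of the paper's approach: instead of hoping a learned policy is stable, the policy class is restricted so that a known Lyapunov function provably decreases, and learning then optimizes performance within that certified class.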
