Empowering Safe Reinforcement Learning in Power System Control with CommonPower (Tutorials Track)

Hannah Markgraf (Technical University of Munich); Michael Eichelbeck (Technical University of Munich); Matthias Althoff (Technical University of Munich)

Keywords: Reinforcement Learning; Buildings

Abstract

Reinforcement learning (RL) has become a valuable tool for addressing complex decision-making problems in power system control. However, the unique intricacies of this domain necessitate the development of specialized RL algorithms. While benchmark problems have proven effective in advancing algorithm development in various domains, existing suites do not enable a systematic study of two key challenges in power system control: ensuring adherence to physical constraints and evaluating the impact of forecast accuracy on controller performance. This tutorial introduces the capabilities of the CommonPower toolbox, designed to address these overlooked challenges. We guide participants in composing benchmark problems within CommonPower from predefined components and demonstrate the creation of new components. We showcase the training of a safe RL agent to solve a benchmark problem, comparing its performance against a built-in MPC baseline. Notably, CommonPower's symbolic modeling approach enables the automatic derivation of safety shields for vanilla RL algorithms. We explain the theory behind this feature in a concise introduction to the field of safe RL. Furthermore, we present CommonPower's interface for seamlessly integrating diverse forecasting strategies into the system. The tutorial emphasizes the significance of safeguarding vanilla RL algorithms and encourages researchers to systematically investigate the influence of forecast uncertainties in their experiments.
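
To illustrate the safety-shielding idea referenced in the abstract, the sketch below shows a generic shield that projects an RL agent's proposed action onto a set of constraint-satisfying actions. This is a minimal, self-contained example of the concept only; it does not use CommonPower's actual API, and all names and parameters (BatteryShield, soc, capacity_kwh, etc.) are hypothetical.

```python
# Minimal sketch of an action shield for safe RL (NOT the CommonPower API).
# Safety requirement assumed here: the battery state of charge (SoC) must
# remain within [soc_min, soc_max] after applying the action for one step.
import numpy as np


class BatteryShield:
    """Projects a proposed charging power onto the set of safe actions."""

    def __init__(self, soc_min=0.1, soc_max=0.9, capacity_kwh=10.0, dt_h=1.0):
        self.soc_min, self.soc_max = soc_min, soc_max
        self.capacity_kwh, self.dt_h = capacity_kwh, dt_h

    def __call__(self, soc: float, proposed_power_kw: float) -> float:
        # Largest charge/discharge power that keeps the SoC within bounds.
        p_upper = (self.soc_max - soc) * self.capacity_kwh / self.dt_h
        p_lower = (self.soc_min - soc) * self.capacity_kwh / self.dt_h
        # Project (here: clip) the agent's proposed action into the safe set.
        return float(np.clip(proposed_power_kw, p_lower, p_upper))


if __name__ == "__main__":
    shield = BatteryShield()
    # The agent proposes to charge at 8 kW, but the battery is almost full:
    # the shield reduces the action to 0.5 kW, the largest safe charging power.
    print(shield(soc=0.85, proposed_power_kw=8.0))
```

In CommonPower, such shields are derived automatically from the symbolic system model rather than written by hand; the sketch only conveys the underlying principle of intercepting and correcting unsafe actions before they reach the environment.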