A Set-Theoretic Approach to Safe Reinforcement Learning in Power Systems (Papers Track)

Daniel Tabas (University of Washington); Baosen Zhang (University of Washington)



Reducing the carbon footprint of the energy sector will be a vital part of the fight against climate change, and doing so will require the widespread adoption of renewable energy resources. Optimally integrating a large number of these resources requires new control techniques that can both compensate for the variability of renewables and satisfy hard engineering constraints. Reinforcement learning (RL) is a promising approach to data-driven control, but it is difficult to verify that the policies derived from data will be safe. In this paper, we combine RL with set-theoretic control to propose a computationally efficient approach to safe RL. We demonstrate the method on a simplified power system model and compare it with other RL techniques.
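The abstract's core idea, constraining an RL policy with set-theoretic control so that hard engineering limits are never violated, can be illustrated with a minimal safety-filter sketch. This is not the paper's method, only a toy: a hypothetical one-dimensional linear system (think of a frequency deviation) with an assumed robust invariance condition, where the RL policy's proposed action is projected onto the set of actions that keep the state safe under worst-case disturbance. All parameter values and the dynamics are illustrative assumptions.

```python
import numpy as np

# Illustrative 1-D dynamics x+ = a*x + b*u + w (assumed, not from the paper).
a, b = 0.9, 0.5    # state and input gains
w_max = 0.05       # worst-case disturbance bound: |w| <= w_max
x_max = 1.0        # safe set: |x| <= x_max

def safe_action_interval(x):
    """Actions u for which a*x + b*u + w stays in [-x_max, x_max] for any |w| <= w_max."""
    lo = (-x_max + w_max - a * x) / b
    hi = (x_max - w_max - a * x) / b
    return lo, hi

def safety_filter(x, u_rl):
    """Project the RL policy's proposed action onto the safe action set."""
    lo, hi = safe_action_interval(x)
    return float(np.clip(u_rl, lo, hi))

x = 0.8
u_rl = 2.0                            # aggressive action proposed by the RL policy
u = safety_filter(x, u_rl)            # filtered (safe) action
x_next_worst = a * x + b * u + w_max  # worst-case next state
assert abs(x_next_worst) <= x_max     # safety constraint holds despite the disturbance
```

In higher dimensions the same projection becomes a small quadratic program over a polytopic invariant set, which is what makes set-theoretic safety filters computationally attractive to pair with learned policies.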
