Curriculum Based Reinforcement Learning to Avert Cascading Failures in the Electric Grid (Papers Track)
Amarsagar Reddy Ramapuram Matavalam (Arizona State University); Kishan Guddanti (Pacific Northwest National Lab); Yang Weng (Arizona State University)
We present an approach to integrate domain knowledge of electric power grid operations into reinforcement learning (RL) frameworks so that RL agents can be trained effectively to prevent cascading failures. A curriculum-based approach with reward tuning is incorporated into the training procedure by modifying the environment using the network physics. Our procedure is tested with an actor-critic agent on the IEEE 14-bus test system using the RL environment developed by RTE, the French transmission system operator (TSO). We observed that naively training the RL agent without the curriculum failed to prevent cascading failures in most test scenarios, whereas the curriculum-based RL agents succeeded in most test scenarios, illustrating the importance of properly integrating domain knowledge of physical systems into real-world RL applications.
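The abstract does not spell out the curriculum schedule itself, so the following is only a minimal, generic sketch of how a curriculum with reward tuning might be staged: the environment starts unperturbed with mild penalties, and both the perturbation severity and the reward-shaping weight increase once the agent clears a success threshold. All names (`Stage`, `train_with_curriculum`, the specific severity/penalty values) are hypothetical illustrations, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Stage:
    name: str
    n_disconnected_lines: int  # hypothetical: severity of grid perturbation
    overflow_penalty: float    # hypothetical: reward-shaping weight for overflows

# Illustrative curriculum: easy -> hard, with reward tuning per stage.
CURRICULUM: List[Stage] = [
    Stage("easy",   0, 0.1),
    Stage("medium", 1, 0.5),
    Stage("hard",   2, 1.0),
]

def train_with_curriculum(
    train_stage: Callable[[Stage], None],   # runs RL updates in the stage's environment
    evaluate: Callable[[Stage], float],     # returns success rate on held-out scenarios
    curriculum: List[Stage] = CURRICULUM,
    threshold: float = 0.8,
    max_epochs: int = 50,
) -> int:
    """Advance to the next stage once the agent clears `threshold` on the current one.

    Returns the index of the stage reached by the end of training.
    """
    stage_idx = 0
    for _ in range(max_epochs):
        train_stage(curriculum[stage_idx])
        if (evaluate(curriculum[stage_idx]) >= threshold
                and stage_idx < len(curriculum) - 1):
            stage_idx += 1
    return stage_idx
```

In practice `train_stage` would wrap episode rollouts and actor-critic updates in the Grid2Op environment configured for that stage, and `evaluate` would replay held-out failure scenarios; both are left abstract here.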