A Scalable Network-Aware Multi-Agent Reinforcement Learning Framework for Distributed Converter-based Microgrid Voltage Control (Papers Track)
Han Xu (Tsinghua University); Guannan Qu (Carnegie Mellon University)
Renewable energy plays a crucial role in mitigating climate change. With the rising use of distributed energy resources (DERs), microgrids (MGs) have emerged as a solution to accommodate high DER penetration. However, controlling an MG's voltage during islanded operation is challenging due to the system's nonlinearity and stochasticity. Although multi-agent reinforcement learning (MARL) methods have been applied to distributed MG voltage control, they scale poorly and struggle to control MGs with many distributed generators (DGs) due to the well-known curse of dimensionality. To address this, we propose a scalable network-aware reinforcement learning framework that exploits the network structure to truncate the critic's Q-function and thereby achieve scalability. Our experiments show effective control of an MG with up to 84 DGs, surpassing the maximum of 40 agents reported in the existing literature. We also compare our framework with state-of-the-art MARL algorithms to demonstrate its superior scalability.
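The core scalability idea — restricting each agent's critic to a local neighborhood of the network rather than the full joint state-action space — can be illustrated with a minimal sketch. This is not the paper's implementation; the graph, the `k_hop_neighborhood` and `truncated_q_input` helpers, and the example values are all hypothetical, showing only the general truncated-critic pattern:

```python
from collections import deque

def k_hop_neighborhood(adj, node, k):
    """BFS to collect all nodes within k hops of `node` (inclusive)."""
    seen = {node}
    frontier = deque([(node, 0)])
    while frontier:
        u, d = frontier.popleft()
        if d == k:
            continue  # do not expand past the k-hop radius
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                frontier.append((v, d + 1))
    return sorted(seen)

def truncated_q_input(global_state, global_action, adj, agent, k):
    """Restrict a critic's input to the agent's k-hop neighborhood,
    so its size depends on local connectivity, not the number of agents."""
    nbrs = k_hop_neighborhood(adj, agent, k)
    return [global_state[i] for i in nbrs], [global_action[i] for i in nbrs]

# Hypothetical line network of 6 agents: 0-1-2-3-4-5
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
s = [0.95, 1.01, 0.99, 1.02, 0.98, 1.00]  # e.g. bus voltages (p.u.)
a = [0.1, -0.2, 0.0, 0.3, -0.1, 0.2]      # e.g. reactive-power setpoints
s_loc, a_loc = truncated_q_input(s, a, adj, agent=2, k=1)
# agent 2's 1-hop neighborhood is {1, 2, 3}
```

Because each truncated critic sees only a fixed-radius neighborhood, its input dimension stays constant as more DGs are added, which is the property that sidesteps the curse of dimensionality described above.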