Distributed Reinforcement Learning for DC Open Energy Systems (Papers Track)

Qiong Huang (Okinawa Institute of Science and Technology Graduate University); Kenji Doya (Okinawa Institute of Science and Technology)

Topics: Reinforcement Learning; Power & Energy

Abstract

The direct current open energy system (DCOES) enables the production, storage, and exchange of renewable energy within local communities, which is especially helpful in isolated villages and islands where a centralized power supply is unavailable or unstable. Because solar and wind energy production varies over time and space with the weather, and energy usage patterns differ across households, how to store and exchange energy is an important research issue. In this work, we explore the use of deep reinforcement learning (DRL) for adaptive control of energy storage in local batteries and energy sharing through DC grids. We extend the Autonomous Power Interchange System (APIS) emulator from SonyCSL to combine it with reinforcement learning algorithms in each house. We implemented deep Q-network (DQN) and prioritized DQN to dynamically set the parameters of the real-time energy exchange protocol of APIS and tested them using actual data collected from the DCOES in the faculty houses of Okinawa Institute of Science and Technology (OIST). The simulation results showed that RL agents outperformed the hand-tuned control strategy. Sharing average energy production, storage, and usage within the local community further improved efficiency. Implementing DRL methods for adaptive energy storage and exchange can help reduce carbon emissions and positively impact the climate.
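
The sketch below is not the authors' implementation; it is a minimal illustration of the general setup the abstract describes, in which a per-house DQN agent periodically selects a parameter of the energy-exchange protocol (here, a hypothetical battery-level threshold) from household state. The state features, action set, and network sizes are assumptions for illustration only, not the APIS interface.

```python
# Minimal per-house DQN sketch (illustrative assumptions, not the authors' code or the APIS API).
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim

STATE_DIM = 4                    # assumed features: battery level, PV output, demand, hour of day
ACTIONS = [0.2, 0.4, 0.6, 0.8]   # hypothetical battery-level thresholds for offering/requesting energy

class QNet(nn.Module):
    """Small MLP mapping household state to Q-values over the discrete threshold choices."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, len(ACTIONS)),
        )

    def forward(self, x):
        return self.net(x)

q_net, target_net = QNet(), QNet()
target_net.load_state_dict(q_net.state_dict())
optimizer = optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)    # uniform replay; the paper's prioritized DQN would weight samples instead
gamma, eps = 0.99, 0.1

def select_action(state):
    """Epsilon-greedy choice of an exchange threshold for the next control interval."""
    if random.random() < eps:
        return random.randrange(len(ACTIONS))
    with torch.no_grad():
        return int(q_net(torch.tensor(state, dtype=torch.float32)).argmax())

def train_step(batch_size=32):
    """One DQN update from uniformly sampled (state, action, reward, next_state) transitions."""
    if len(replay) < batch_size:
        return
    batch = random.sample(replay, batch_size)
    s, a, r, s2 = (torch.tensor(x, dtype=torch.float32) for x in zip(*batch))
    q = q_net(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r + gamma * target_net(s2).max(1).values
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In such a setup, the reward would typically reflect avoided losses and unmet demand over the interval; the prioritized DQN variant mentioned in the abstract replaces the uniform replay sampling above with priority-weighted sampling.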