Global-Local Policy Search and Its Application in Grid-Interactive Building Control (Papers Track)

Xiangyu Zhang (National Renewable Energy Laboratory); Yue Chen (National Renewable Energy Laboratory); Andrey Bernstein (National Renewable Energy Laboratory)

Buildings · Power & Energy · Reinforcement Learning

Abstract

As the buildings sector accounts for over 70% of total U.S. electricity consumption, it offers substantial untapped demand-side resources for tackling critical grid-side problems and improving the overall efficiency of the energy system. To help make buildings grid-interactive, this paper proposes a global-local policy search method to train a reinforcement learning (RL) based controller that optimizes building operation during both normal hours and demand response (DR) events. Experiments on a simulated five-zone commercial building demonstrate that adding a local fine-tuning stage to the evolution strategy policy training process further reduces control costs by 7.55% in unseen testing scenarios. A baseline comparison also indicates that the learned RL controller outperforms a pragmatic linear model predictive controller (MPC) while not requiring intensive online computation.
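To illustrate the two-stage "global-local" pattern the abstract describes (an evolution strategy stage for global policy search followed by a local fine-tuning stage), the sketch below shows one possible realization. It is not the paper's implementation: the toy quadratic reward standing in for a building-control rollout, the linear policy parameterization, and all hyperparameters are illustrative assumptions.

```python
# Minimal sketch of a global-local policy search pattern:
# Stage 1 uses an evolution strategy (ES) with large perturbations for
# global exploration; Stage 2 fine-tunes locally with smaller perturbations
# and step size. The "environment" here is a toy quadratic reward, NOT the
# five-zone building simulation used in the paper.

import numpy as np

rng = np.random.default_rng(0)
DIM = 8  # dimension of the (assumed linear) policy parameter vector


def episode_return(theta: np.ndarray) -> float:
    """Toy stand-in for a control rollout's total return (higher is better)."""
    target = np.linspace(-1.0, 1.0, DIM)  # assumed optimal parameters
    return -float(np.sum((theta - target) ** 2))


def es_step(theta: np.ndarray, sigma: float, lr: float, pop: int) -> np.ndarray:
    """One antithetic-sampling ES update (score-function gradient estimate)."""
    eps = rng.standard_normal((pop, DIM))
    reward_diffs = np.array(
        [episode_return(theta + sigma * e) - episode_return(theta - sigma * e)
         for e in eps]
    )
    grad_estimate = (eps.T @ reward_diffs) / (2 * pop * sigma)
    return theta + lr * grad_estimate


theta = rng.standard_normal(DIM)

# Stage 1: global search with large perturbations to explore broadly.
for _ in range(200):
    theta = es_step(theta, sigma=0.5, lr=0.05, pop=32)
print("return after global stage:", episode_return(theta))

# Stage 2: local fine-tuning around the policy found by the global stage.
for _ in range(200):
    theta = es_step(theta, sigma=0.05, lr=0.01, pop=32)
print("return after local fine-tuning:", episode_return(theta))
```

The design intent of the second stage mirrors the abstract's claim: once the global stage has located a promising region of policy space, smaller-perturbation updates refine the policy there, which is where the reported additional cost reduction comes from.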