Stripping off the implementation complexity of physics-based model predictive control for buildings via deep learning (Papers Track) Spotlight

Jan Drgona (Pacific Northwest National Laboratory); Lieve Helsen (KU Leuven); Draguna Vrabie (PNNL)

Topics: Buildings, Reinforcement Learning

Abstract

Over the past decade, model predictive control (MPC) has been considered the most promising solution for intelligent building operation. Despite extensive effort, the transfer of this technology into practice is hampered by the need to obtain an accurate controller model with minimal effort, the need for expert knowledge to set it up, and the need for increased computational power and dedicated software to run it. A promising direction that tackles the last two problems is approximate explicit MPC, where the optimal control policies are learned from MPC data via a suitable function approximator, e.g., a deep learning (DL) model. The main advantage of this approach stems from its simple evaluation at execution time, leading to a low computational footprint and easy deployment on embedded hardware platforms. We present the energy savings potential of physics-based (also called 'white-box') MPC applied to an office building in Belgium. Moreover, we demonstrate how deep learning approximators can be used to cut the implementation and maintenance costs of MPC deployment without compromising performance. We also critically assess the presented approach by pointing out the major challenges and remaining open research questions.
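The core idea of approximate explicit MPC described in the abstract can be illustrated with a minimal sketch: collect (state, action) pairs from an MPC controller, fit a neural-network policy to them, and then replace the online optimizer with a single cheap forward pass at execution time. The sketch below is an assumption-laden toy, not the paper's setup: a fixed linear feedback `u = -K x` stands in for the logged MPC solutions, and a small numpy MLP stands in for the deep learning approximator.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Synthetic "MPC dataset": states X and the controller's actions U. ---
# In practice these pairs would be logged from a physics-based MPC run;
# here a fixed linear feedback u = -K x is a hypothetical stand-in.
n_state, n_ctrl, n_samples = 4, 1, 2000
K = rng.normal(size=(n_ctrl, n_state))
X = rng.normal(size=(n_samples, n_state))
U = X @ -K.T

# --- Small MLP policy approximator trained by full-batch gradient descent. ---
h = 32
W1 = rng.normal(scale=0.3, size=(n_state, h)); b1 = np.zeros(h)
W2 = rng.normal(scale=0.3, size=(h, n_ctrl));  b2 = np.zeros(n_ctrl)
lr = 1e-2

for _ in range(2000):
    Z = np.tanh(X @ W1 + b1)      # hidden activations
    U_hat = Z @ W2 + b2           # predicted control actions
    err = U_hat - U
    # Backpropagate the mean-squared imitation error through both layers.
    dU = 2 * err / n_samples
    dW2 = Z.T @ dU
    db2 = dU.sum(axis=0)
    dZ = (dU @ W2.T) * (1 - Z ** 2)
    dW1 = X.T @ dZ
    db1 = dZ.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

def policy(x):
    """Execution-time controller: one forward pass, no online optimization."""
    return np.tanh(x @ W1 + b1) @ W2 + b2
```

The `policy` function is what would be deployed on an embedded platform: evaluating it requires only two matrix multiplications, in contrast to solving an optimization problem at every control step as online MPC does.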

Recorded Talk (direct link)
