Generating physically-consistent high-resolution climate data with hard-constrained neural networks
Paula Harder (Fraunhofer Institute ITWM, Mila Quebec AI Institute), Qidong Yang (Mila Quebec AI Institute, New York University), Venkatesh Ramesh (Mila Quebec AI Institute, University of Montreal), Alex Hernandez-Garcia (Mila Quebec AI Institute, University of Montreal), Prasanna Sattigeri (IBM Research), Campbell D. Watson (IBM Research), Daniela Szwarcman (IBM Research) and David Rolnick (Mila Quebec AI Institute, McGill University).
The availability of reliable, high-resolution climate and weather data is important both to inform long-term decisions on climate adaptation and mitigation and to guide rapid responses to extreme events. Forecasting models are limited by computational costs and therefore often predict quantities at a coarse spatial resolution. Statistical downscaling can provide an efficient method of upsampling such low-resolution data. In this field, deep learning has been applied successfully, often borrowing methods from the super-resolution domain in computer vision. Despite achieving visually compelling results, such models frequently violate conservation laws when predicting physical variables. To conserve important physical quantities, we develop methods that guarantee physical constraints are satisfied by a deep downscaling model while also improving its performance according to traditional metrics. We introduce two ways of constraining the network: a renormalization layer added to the end of the neural network and a successive approach that scales with increasing upsampling factors. We demonstrate the applicability of our methods across different popular architectures and upsampling factors using ERA5 reanalysis data.
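To make the constraint idea concrete, the sketch below shows one way a renormalization layer can be realized: each N×N patch of the network's high-resolution output is rescaled multiplicatively so that its mean exactly matches the corresponding low-resolution pixel, enforcing conservation by construction. This is a minimal NumPy illustration under assumed names and shapes (`renormalize`, `hr_pred`, `lr_ref`, `factor`), not the authors' exact implementation.

```python
import numpy as np

def renormalize(hr_pred, lr_ref, factor):
    """Multiplicative renormalization constraint layer (illustrative sketch).

    Rescales each (factor x factor) patch of the network output `hr_pred`
    so that its mean equals the corresponding low-resolution pixel in
    `lr_ref`, guaranteeing the conservation constraint exactly.
    Assumes strictly positive predictions so patch means are nonzero.
    """
    h, w = lr_ref.shape
    # View the high-res field as an (h, factor, w, factor) grid of patches.
    patches = hr_pred.reshape(h, factor, w, factor)
    patch_mean = patches.mean(axis=(1, 3), keepdims=True)
    # Scale every patch so its mean matches the low-res reference value.
    scaled = patches * (lr_ref.reshape(h, 1, w, 1) / patch_mean)
    return scaled.reshape(h * factor, w * factor)

# Example: 2x upsampling of a 2x2 low-resolution field.
lr = np.array([[1.0, 2.0], [3.0, 4.0]])
hr = np.random.rand(4, 4) + 0.5  # stand-in for a positive network output
out = renormalize(hr, lr, factor=2)
# Each 2x2 block of `out` now averages exactly to the matching lr pixel.
print(np.allclose(out.reshape(2, 2, 2, 2).mean(axis=(1, 3)), lr))
```

Because the rescaling is differentiable, such a layer can be appended to the end of a network and trained end to end, which is what allows the constraint to hold exactly at inference time rather than being encouraged only softly through the loss.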