Improving the predictions of ML-corrected climate models with novelty detection (Papers Track)

Clayton H Sanford (Columbia); Anna Kwa (Allen Institute for Artificial Intelligence); Oliver Watt-Meyer (Allen Institute for AI); Spencer Clark (Allen Institute for AI); Noah Brenowitz (Allen Institute for AI); Jeremy McGibbon (Allen Institute for AI); Christopher Bretherton (Allen Institute for AI)

NeurIPS 2022
Climate Science & Modeling Hybrid Physical Models Uncertainty Quantification & Robustness Unsupervised & Semi-Supervised Learning

Abstract

While previous work has shown that machine learning (ML) can improve the prediction accuracy of coarse-grid climate models, these ML-augmented methods are more vulnerable to irregular inputs than the traditional physics-based models they build on. Because ML-predicted corrections feed back into the climate model’s base physics, the ML-corrected model regularly produces out-of-sample states, which can cause model instability and frequent crashes. This work shows that adding semi-supervised novelty detection to identify out-of-sample inputs and disable the ML correction accordingly stabilizes simulations and sharply improves the quality of predictions. We design an augmented climate model with a one-class support vector machine (OCSVM) novelty detector that provides better temperature and precipitation forecasts in a year-long simulation than either a baseline (no-ML) or a standard ML-corrected run. By improving the accuracy of coarse-grid climate models, this work helps make accurate climate modeling accessible to researchers without massive computational resources.
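To make the gating idea concrete, the following is a minimal sketch (not the authors' code) of using scikit-learn's `OneClassSVM` to flag out-of-sample states and zero out the ML correction for them; all names (`train_states`, `ml_correction`, `corrected_tendency`) and the toy data are hypothetical stand-ins for the paper's climate-model quantities.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Pretend "training" climate-model states (one feature vector per grid column).
train_states = rng.normal(size=(500, 8))

# Fit a one-class SVM novelty detector on in-sample training states.
detector = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05)
detector.fit(train_states)

def ml_correction(states):
    # Stand-in for a learned correction to the model's physics tendencies.
    return 0.1 * states

def corrected_tendency(states, base_tendency):
    """Apply the ML correction only where the state is in-sample."""
    # predict() returns +1 for inliers and -1 for novelties (out-of-sample).
    is_inlier = detector.predict(states) == 1
    corr = ml_correction(states)
    corr[~is_inlier] = 0.0  # disable the correction on novel inputs
    return base_tendency + corr

# In-sample states receive the correction; far out-of-sample states
# fall back to the base physics tendency alone.
in_sample = rng.normal(size=(4, 8))
out_sample = 100.0 + rng.normal(size=(4, 8))
base = np.zeros((4, 8))
print(corrected_tendency(in_sample, base))
print(corrected_tendency(out_sample, base))
```

In this sketch the fallback is the untouched base-physics tendency, mirroring the paper's strategy of reverting to the no-ML model wherever the detector deems the input novel.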
