In-N-Out: Pre-Training and Self-Training using Auxiliary Information for Out-of-Distribution Robustness (Papers Track)

Robbie M Jones (Stanford University); Sang Michael Xie (Stanford University); Ananya Kumar (Stanford University); Fereshte Khani (Stanford University); Tengyu Ma (Stanford University); Percy Liang (Stanford University)

Paper PDF · Slides PDF · Recorded Talk · Cite
Meta- and Transfer Learning · Computer Vision & Remote Sensing · Unsupervised & Semi-Supervised Learning

Abstract

Many machine learning applications for tackling climate change involve large amounts of unlabeled data (such as satellite imagery) along with auxiliary information such as climate data. In this work, we show how to use auxiliary information in a semi-supervised setting to improve both in-distribution and out-of-distribution (OOD) accuracy (e.g., for countries in Africa where very little labeled data is available). We show that 1) on real-world datasets, the common practice of using auxiliary information as additional input features reduces in-distribution error but can hurt OOD error. Conversely, we find that 2) using auxiliary information as outputs of auxiliary tasks to pre-train a model reduces OOD error. 3) To get the best of both worlds, we introduce In-N-Out, which first trains a model with auxiliary inputs and uses it to pseudolabel all the in-distribution inputs, then pre-trains a second model on OOD data with auxiliary outputs and fine-tunes this model on the pseudolabels (self-training). We show, both theoretically and empirically on remote sensing datasets for land cover prediction and cropland prediction, that In-N-Out outperforms auxiliary inputs or auxiliary outputs alone on both in-distribution and OOD error.
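
As a rough illustration of the three-step pipeline described in the abstract, the sketch below outlines how the aux-inputs model, pseudolabeling, aux-outputs pre-training, and self-training steps fit together. The helper routines (train_model, pretrain, finetune), the predict method, and the data-split names are illustrative assumptions, not the authors' released code or exact training setup.

```python
# Illustrative sketch of the In-N-Out pipeline (assumptions, not the paper's code).
# Assumed data splits:
#   x_labeled, z_labeled, y_labeled : labeled in-distribution inputs, auxiliary info, labels
#   x_unlabeled_id, z_unlabeled_id  : unlabeled in-distribution inputs and auxiliary info
#   x_unlabeled_ood, z_unlabeled_ood: unlabeled OOD inputs and auxiliary info (e.g. climate data)
# train_model, pretrain, finetune are hypothetical training routines.

import numpy as np

def in_n_out(x_labeled, z_labeled, y_labeled,
             x_unlabeled_id, z_unlabeled_id,
             x_unlabeled_ood, z_unlabeled_ood,
             train_model, pretrain, finetune):
    # Step 1: aux-inputs model -- concatenate auxiliary information with the
    # inputs and train on the labeled in-distribution data.
    aux_in_model = train_model(
        inputs=np.concatenate([x_labeled, z_labeled], axis=-1),
        targets=y_labeled,
    )

    # Step 2: pseudolabel all unlabeled in-distribution inputs with the
    # aux-inputs model.
    pseudo_y_id = aux_in_model.predict(
        np.concatenate([x_unlabeled_id, z_unlabeled_id], axis=-1)
    )

    # Step 3: aux-outputs pre-training -- predict the auxiliary information
    # from the raw inputs, using unlabeled OOD data.
    aux_out_model = pretrain(inputs=x_unlabeled_ood, targets=z_unlabeled_ood)

    # Step 4: self-training -- fine-tune the pre-trained model on the labeled
    # data plus the pseudolabeled in-distribution data.
    final_model = finetune(
        aux_out_model,
        inputs=np.concatenate([x_labeled, x_unlabeled_id], axis=0),
        targets=np.concatenate([y_labeled, pseudo_y_id], axis=0),
    )
    return final_model
```

Passing the training routines in as arguments keeps the sketch agnostic to model architecture and optimizer; the point is only the ordering of the steps, in which auxiliary information enters once as extra input features and once as pre-training targets.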
