Stable Learning via Self-supervised Invariant Risk Minimization

Empirical Risk Minimization (ERM) based methods rely on the assumption that all data samples are generated i.i.d. However, this assumption does not hold in many real-world applications. Consequently, simply minimizing the training loss can drive a model to indiscriminately absorb all statistical correlations in the training dataset. This is why a well-trained model may perform unstably across different testing environments. Hence, learning a stable predictor that simultaneously performs well in all testing environments is important for machine learning tasks. In this work, we study this problem from the perspective of Invariant Risk Minimization. Specifically, we propose a novel Self-supervised Invariant Risk Minimization method based on the observation that the true causal relations between features remain consistent no matter how the environment changes. First, we propose a self-supervised invariant representation learning objective function, which aims to learn a stable representation that captures this consistent causal structure. Building on this, we further propose a stable predictor training algorithm, which improves the predictor's stability by exploiting the invariant representation learned with our objective. We conduct extensive experiments on both synthetic and real-world datasets to show that our proposal outperforms previous state-of-the-art stable learning methods. The code will be released later.
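For context, the Invariant Risk Minimization perspective that this work builds on augments the per-environment empirical risk with an invariance penalty. The sketch below shows the widely used IRMv1 form of that penalty (the squared gradient of the risk with respect to a dummy classifier scale, from Arjovsky et al., 2019); it is illustrative only, since the paper's actual self-supervised objective is not given in the abstract, and the function names here are our own.

```python
# Minimal sketch of the IRMv1 invariance penalty and objective.
# This illustrates the standard IRM formulation, not the paper's
# self-supervised variant.
import torch
import torch.nn.functional as F

def irm_penalty(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    # Scale the logits by a dummy multiplier fixed at 1.0; the squared
    # gradient of the risk with respect to this multiplier measures how far
    # the classifier is from being simultaneously optimal in this environment.
    scale = torch.tensor(1.0, requires_grad=True)
    loss = F.cross_entropy(logits * scale, labels)
    grad = torch.autograd.grad(loss, [scale], create_graph=True)[0]
    return grad.pow(2).sum()

def irm_objective(envs, model, lam: float = 1.0) -> torch.Tensor:
    # Average over environments of: empirical risk + weighted invariance
    # penalty. `envs` is a list of (inputs, labels) pairs, one per
    # training environment.
    total = torch.tensor(0.0)
    for x, y in envs:
        logits = model(x)
        total = total + F.cross_entropy(logits, y) + lam * irm_penalty(logits, y)
    return total / len(envs)
```

In the setting the abstract describes, the invariant representation would first be learned self-supervised, and a penalty of this kind would then encourage the downstream predictor to be simultaneously optimal across all training environments.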