Whitening for Self-Supervised Representation Learning
A new loss function for self-supervised representation learning, which is based on the whitening of the latent space features.
Main Characteristics
- A new self-supervised representation learning loss, Whitening Mean Squared Error (W-MSE). W-MSE whitens the batch features so that the samples lie in a spherical distribution, and it is an alternative to positive-negative instance-contrasting methods.
- The W-MSE loss does not need a large number of negatives, so more positives can be included in the current batch.
Research areas
Integrative AI
Last updated
02.08.2022 - 21:16
Detailed Description
We present a new loss function for self-supervised representation learning, based on the whitening of the latent-space features. The whitening operation has a "scattering" effect on the batch samples, avoiding degenerate solutions in which all the sample representations collapse to a single point. This solution does not require asymmetric networks and is conceptually simple. Moreover, since negatives are not needed, we can extract multiple positive pairs from the same image instance.
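The core idea can be sketched in a few lines: whiten the batch of latent features so they have zero mean and identity covariance, then apply a mean squared error between positive pairs. The sketch below is a minimal NumPy illustration, not the paper's implementation; the function names `whiten` and `w_mse`, the Cholesky-based whitening, and the single-positive-pair setup are simplifying assumptions (the full method uses sub-batching and can handle more positives per instance).

```python
import numpy as np

def whiten(z, eps=1e-6):
    """Whiten a batch of features (assumed shape: batch x dim).
    After whitening, the batch has zero mean and identity covariance,
    which 'scatters' the samples and prevents representation collapse."""
    z = z - z.mean(axis=0)                      # center the batch
    cov = (z.T @ z) / (len(z) - 1)              # dim x dim covariance
    # Cholesky factorization cov = L L^T; then z @ inv(L).T has identity covariance.
    L = np.linalg.cholesky(cov + eps * np.eye(z.shape[1]))
    return z @ np.linalg.inv(L).T

def w_mse(z1, z2):
    """Illustrative W-MSE over one positive pair per sample:
    whiten the combined batch, project onto the unit sphere,
    then take the mean squared error between positive pairs."""
    w = whiten(np.concatenate([z1, z2], axis=0))
    w = w / np.linalg.norm(w, axis=1, keepdims=True)   # unit-norm projection
    n = len(z1)
    return np.mean(np.sum((w[:n] - w[n:]) ** 2, axis=1))
```

Because the whitening already forces the batch to spread over the sphere, the loss only needs to pull positive pairs together; no negatives (and no asymmetric predictor network) are required to avoid the trivial constant solution.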