Domain Adaptation and Generalization
A standard assumption of learning-based models is that training and test data share the same input distribution. In practice, however, models trained on a given dataset often perform poorly when tested on data acquired in different settings. This problem is known as domain shift and is particularly relevant, e.g., for the visual models of agents acting in the real world, or when no labeled data is available for the target scenario. In the latter case, for instance, we could use synthetically generated data for the target task, but this would create a mismatch between training (synthetic) and test (real) images. Bridging the gap between these two input distributions is the goal of domain adaptation (DA) algorithms. In particular, DA aims to produce a model for a target domain (for which few or no labeled data are available) by exploiting labeled data from a different, source, domain. Various DA techniques have been developed to address the domain shift problem.
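To make the idea concrete, a minimal sketch of one classical unsupervised DA technique, CORrelation ALignment (CORAL), is given below. This example is illustrative and not drawn from the course material: it aligns the second-order statistics (covariance) of source features to those of the unlabeled target features, so that a classifier trained on the transformed source data is less affected by the distribution mismatch. The function names and the regularization value `eps` are choices made for this sketch.

```python
import numpy as np

def sym_matrix_power(C, p):
    # Fractional power of a symmetric positive-definite matrix
    # via eigendecomposition: C^p = V diag(w^p) V^T.
    w, V = np.linalg.eigh(C)
    return (V * np.maximum(w, 1e-12) ** p) @ V.T

def coral(source, target, eps=1e-3):
    """Align source features to the target distribution (CORAL).

    Rows are samples, columns are features. `eps` adds a small
    ridge to keep the covariances well-conditioned.
    """
    d = source.shape[1]
    Cs = np.cov(source, rowvar=False) + eps * np.eye(d)
    Ct = np.cov(target, rowvar=False) + eps * np.eye(d)
    # Whiten the source features, then re-color them with the
    # target covariance.
    return source @ sym_matrix_power(Cs, -0.5) @ sym_matrix_power(Ct, 0.5)
```

After this transformation, the covariance of the aligned source features approximately matches the target covariance; the source labels are unchanged, so any standard classifier can then be trained on the aligned features. Deep variants of this idea add an analogous covariance-matching loss to a neural network's training objective.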

This short course provides an introduction to the domain shift problem and to domain adaptation and generalization algorithms. First, the domain shift problem is introduced, together with application scenarios where it is especially severe. Second, an overview of the algorithms developed to tackle this issue is given, highlighting the latest research trends that address the DA problem within deep neural networks. Lastly, the domain generalization problem is addressed. This is a more challenging task because it assumes that no target data is available at all, so the training algorithm must be devised to generalize as much as possible, without any adaptation to the target, in order to correctly classify never-before-seen, out-of-distribution samples.