Memory-based Multi-Source Meta-Learning (M3L)
A framework to train a generalizable model for unseen domains.
- A novel Multi-Source Meta-Learning framework for multi-source Domain Generalization (DG), which simulates the train-test process of DG during training. This enables the model to learn domain-invariant representations and thus improves its generalization ability.
- A memory-based module that implements the identification loss in a non-parametric way, avoiding the unstable meta-optimization caused by a traditional parametric classifier.
- A meta batch normalization layer (MetaBN) that generates diverse meta-test features, which can be directly injected into our meta-learning framework to obtain further improvements.
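The memory-based identification loss above can be sketched as follows: instead of a learned classifier weight matrix, a memory bank stores one L2-normalized centroid per identity, classification logits are cosine similarities to those centroids, and the centroids are refreshed with a momentum update. This is a minimal NumPy illustration of the idea; the class name `MemoryBank` and the hyper-parameters (`temperature=0.05`, `momentum=0.2`) are assumptions for the sketch, not the repository's exact values.

```python
import numpy as np

class MemoryBank:
    """Illustrative non-parametric identification loss (assumed API).

    One memory slot per identity; logits are similarities to the
    stored centroids rather than outputs of a parametric classifier.
    """

    def __init__(self, num_ids, feat_dim, temperature=0.05, momentum=0.2):
        self.memory = np.zeros((num_ids, feat_dim))  # one centroid per identity
        self.temperature = temperature
        self.momentum = momentum

    @staticmethod
    def _normalize(x):
        return x / (np.linalg.norm(x, axis=-1, keepdims=True) + 1e-12)

    def loss(self, feats, labels):
        """Cross-entropy over similarities to the memory centroids."""
        feats = self._normalize(feats)
        logits = feats @ self._normalize(self.memory).T / self.temperature
        logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
        log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(labels)), labels].mean()

    def update(self, feats, labels):
        """Momentum update of each sample's identity centroid."""
        for f, y in zip(self._normalize(feats), labels):
            self.memory[y] = self.momentum * self.memory[y] + (1 - self.momentum) * f
```

Because the memory is updated outside the computation graph, the meta-gradient never flows through classifier weights, which is what makes this formulation stable under meta-optimization.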
We address multi-source domain generalization (DG) in person re-identification, which aims to learn a model that performs well on unseen domains given only several labeled source domains. To this end, we propose the Memory-based Multi-Source Meta-Learning (M3L) framework to train a generalizable model for unseen domains. Specifically, a meta-learning strategy is introduced to simulate the train-test process of domain generalization, encouraging more generalizable models. To overcome the unstable meta-optimization caused by a parametric classifier, we present a memory-based identification loss that is non-parametric and harmonizes with meta-learning. We also present a meta batch normalization layer (MetaBN) that diversifies meta-test features, further strengthening the benefit of meta-learning.
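One way to read the MetaBN idea is that meta-test features are re-normalized with batch statistics mixed from the meta-train domains, so each meta-test step sees a different feature distribution. The sketch below is a hedged NumPy approximation of that behavior; the function name `meta_bn`, its signature, and the uniform mixing coefficient are illustrative assumptions, not the repository's implementation.

```python
import numpy as np

def meta_bn(test_feats, train_domain_stats, rng, eps=1e-5):
    """Diversify meta-test features by mixing in meta-train BN statistics.

    test_feats: (N, C) meta-test features
    train_domain_stats: list of (mean, var) pairs, one per meta-train domain
    rng: numpy Generator used to sample a source domain and a mix ratio
    """
    # Statistics of the current meta-test batch.
    mu_t = test_feats.mean(axis=0)
    var_t = test_feats.var(axis=0)

    # Sample one meta-train domain's statistics and a mixing coefficient.
    mu_s, var_s = train_domain_stats[rng.integers(len(train_domain_stats))]
    lam = rng.uniform(0.0, 1.0)

    # Normalize with the mixed statistics instead of the pure batch stats.
    mu = lam * mu_t + (1 - lam) * mu_s
    var = lam * var_t + (1 - lam) * var_s
    return (test_feats - mu) / np.sqrt(var + eps)
```

Because the mixed statistics change from step to step, the meta-test loss is evaluated on a wider variety of feature distributions than a single held-out source domain would provide.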