Self-Paced Learning for Open-Set Domain Adaptation
Abstract
Domain adaptation tackles the challenge of generalizing knowledge acquired from a source domain to a target domain with a different data distribution. Traditional domain adaptation methods presume that the classes in the source and target domains are identical, which is not always the case in real-world scenarios. Open-set domain adaptation (OSDA) addresses this limitation by allowing previously unseen classes in the target domain. OSDA aims not only to recognize target samples belonging to the known classes shared by the source and target domains but also to identify unknown-class samples. Because traditional domain adaptation methods align the entire target domain with the source domain to minimize domain shift, they inevitably suffer from negative transfer in open-set scenarios. We propose a novel framework based on self-paced learning that precisely distinguishes known-class from unknown-class samples, referred to as SPL-OSDA (Self-Paced Learning for Open-Set Domain Adaptation). To utilize unlabeled target samples for self-paced learning, we generate pseudo labels and design a cross-domain mixup method tailored for OSDA scenarios. This strategy reduces pseudo-label noise and ensures that our model progressively learns the known-class features of the target domain, beginning with simpler examples and advancing to more complex ones. To improve the reliability of the model in open-set scenarios and meet the requirements of trustworthy AI, we employ multiple criteria to distinguish between known and unknown samples. Furthermore, unlike existing OSDA methods that require manual tuning of a threshold hyperparameter to separate known and unknown classes, our proposed method self-tunes a suitable threshold, eliminating the need for empirical tuning during testing. Compared with empirical threshold tuning, our model exhibits good robustness under different hyperparameters and experimental settings. Comprehensive experiments demonstrate that our method consistently achieves superior performance on different benchmarks compared with various state-of-the-art methods.
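To make the two ingredients highlighted in the abstract more concrete, the sketch below illustrates, in NumPy, one plausible form of confidence-based self-paced sample selection and of cross-domain mixup between source samples and pseudo-labeled target samples. It is a minimal sketch only: the function names, the `pace` and `alpha` parameters, and the one-hot label mixing are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def select_easy_targets(probs, pace):
    """Illustrative self-paced selection: keep the `pace` fraction of target
    samples whose pseudo-label confidence (max softmax probability over the
    known classes) is highest, so training starts from the easiest samples."""
    conf = probs.max(axis=1)
    k = max(1, int(pace * len(conf)))
    return np.argsort(-conf)[:k]  # indices of the most confident samples

def cross_domain_mixup(x_src, y_src, x_tgt, y_pseudo, alpha=0.2, rng=None):
    """Illustrative cross-domain mixup: convexly combine source samples with
    pseudo-labeled target samples, and mix their one-hot labels accordingly."""
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)                       # mixing coefficient
    n = min(len(x_src), len(x_tgt))
    idx_s = rng.permutation(len(x_src))[:n]
    idx_t = rng.permutation(len(x_tgt))[:n]
    x_mix = lam * x_src[idx_s] + (1 - lam) * x_tgt[idx_t]
    y_mix = lam * y_src[idx_s] + (1 - lam) * y_pseudo[idx_t]
    return x_mix, y_mix
```

In such a scheme, the `pace` fraction would typically be increased over training so that harder, lower-confidence target samples are gradually admitted; how the pace and mixup are actually scheduled is specified in the method section, not by this sketch.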