Liu Xinghong, Zhou Yi, Zhou Tao, Qin Jie. Self-Paced Learning for Open-Set Domain Adaptation[J]. Journal of Computer Research and Development, 2023, 60(8): 1711-1726. DOI: 10.7544/issn1000-1239.202330210

Self-Paced Learning for Open-Set Domain Adaptation

Funds: This work was supported by the National Natural Science Foundation of China (62106043), the Natural Science Foundation of Jiangsu Province (BK20210225), and the Technological Innovation Foundation for Overseas Graduates of Nanjing City (1109002305).
More Information
  • Author Bio:

    Liu Xinghong: born in 1996. Master candidate. Student member of CCF. His main research interests include computer vision and machine learning

    Zhou Yi: born in 1990. PhD, associate professor. Member of CCF. His main research interests include computer vision, machine learning, and medical image analysis

    Zhou Tao: born in 1986. PhD, professor, PhD supervisor. Member of CCF. His main research interests include medical image analysis, machine learning, and computer vision. (taozhou.ai@gmail.com)

    Qin Jie: born in 1989. PhD, professor. Member of CCF. His main research interests include computer vision and machine learning. (qinjiebuaa@gmail.com)

  • Received Date: March 30, 2023
  • Revised Date: June 01, 2023
  • Available Online: June 12, 2023
  • Domain adaptation tackles the challenge of generalizing knowledge acquired from a source domain to a target domain with a different data distribution. Traditional domain adaptation methods presume that the classes in the source and target domains are identical, which does not always hold in real-world scenarios. Open-set domain adaptation (OSDA) addresses this limitation by allowing previously unseen classes in the target domain. OSDA aims not only to recognize target samples belonging to the known classes shared by the source and target domains but also to detect unknown-class samples. Traditional domain adaptation methods align the entire target domain with the source domain to minimize domain shift, which inevitably leads to negative transfer in open-set scenarios. We propose a novel framework based on self-paced learning, referred to as SPL-OSDA (self-paced learning for open-set domain adaptation), to precisely distinguish known- and unknown-class samples. To exploit unlabeled target samples for self-paced learning, we generate pseudo labels and design a cross-domain mixup method tailored to OSDA scenarios. This strategy minimizes the noise introduced by pseudo labels and ensures that our model progressively learns the known-class features of the target domain, beginning with simpler examples and advancing to more complex ones. To improve the reliability of the model in open-set scenarios and meet the requirements of trustworthy AI, multiple criteria are utilized to distinguish between known and unknown samples. Furthermore, unlike existing OSDA methods that require manual tuning of a hyperparameter threshold to separate known and unknown classes, our proposed method self-tunes a suitable threshold, eliminating the need for empirical tuning during testing. Compared with empirical threshold tuning, our model exhibits good robustness under different hyperparameters and experimental settings. Comprehensive experiments illustrate that our method consistently achieves superior performance on different benchmarks compared with various state-of-the-art methods.
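The easy-to-hard curriculum described in the abstract, selecting the most confident pseudo-labeled target samples first and mixing them with source samples, can be sketched as follows. This is a minimal illustration with assumed function names, a linear pace schedule, and standard Beta-distributed mixup; the paper's actual pace function and mixup formulation may differ:

```python
import numpy as np

def select_easy_first(confidences, epoch, total_epochs, start_q=0.9, end_q=0.5):
    # Easy-to-hard curriculum: keep pseudo-labeled target samples whose
    # confidence exceeds a quantile threshold that is relaxed linearly as
    # training proceeds (a generic self-paced schedule, assumed here).
    t = epoch / max(total_epochs - 1, 1)
    thr = np.quantile(confidences, start_q + t * (end_q - start_q))
    return np.nonzero(confidences >= thr)[0]

def cross_domain_mixup(x_src, y_src, x_tgt, y_tgt_pseudo, alpha=1.0, rng=None):
    # Convex-combine source samples with pseudo-labeled target samples:
    # lam ~ Beta(alpha, alpha); inputs and one-hot labels are mixed alike.
    rng = rng or np.random.default_rng()
    lam = float(rng.beta(alpha, alpha))
    x_mix = lam * x_src + (1.0 - lam) * x_tgt
    y_mix = lam * y_src + (1.0 - lam) * y_tgt_pseudo
    return x_mix, y_mix, lam
```

As the quantile relaxes, harder (less confident) target samples gradually enter training, while mixup with labeled source data dampens the effect of noisy pseudo labels.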
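The abstract's self-tuned threshold for separating known from unknown classes can be illustrated with a simple one-dimensional two-means split over per-sample confidence scores. This is a generic sketch of data-driven threshold selection, not necessarily the paper's exact criterion:

```python
import numpy as np

def self_tuned_threshold(scores, iters=10):
    # Self-tune a known/unknown cut-off from the score distribution itself,
    # with no manually chosen hyperparameter threshold: a 1-D 2-means loop
    # that places the cut midway between the two cluster means.
    # Higher score = more likely a known-class sample (assumed convention).
    thr = scores.mean()
    for _ in range(iters):
        lo, hi = scores[scores < thr], scores[scores >= thr]
        if len(lo) == 0 or len(hi) == 0:
            break
        thr = 0.5 * (lo.mean() + hi.mean())
    return thr
```

Because the cut-off adapts to the observed score distribution, it avoids the empirical threshold tuning at test time that the abstract argues against.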

  • [1]
    He Kaiming, Zhang Xiangyu, Ren Shaoqing, et al. Deep residual learning for image recognition[C] //Proc of the 29th IEEE Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2016: 770−778
    [2]
    Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition[J]. arXiv preprint, arXiv: 1409.1556, 2014
    [3]
    Ren Shaoqing, He Kaiming, Girshick R B, et al. Faster R-CNN: Towards real-time object detection with region proposal networks[C] //Proc of the 29th Int Conf on Neural Information Processing Systems. Cambridge, MA: MIT, 2015: 91−99
    [4]
    He Kaiming, Gkioxari G, Dolla ́r P, et al. Mask R-CNN[C] //Proc of the 30th IEEE Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2017: 2980−2988
    [5]
    Quiñonero-Candela J, Sugiyama M, Schwaighofer A, et al. Dataset Shift in Machine Learning[M]. Cambridge, MA: MIT, 2008
    [6]
    Pan Sinno Jialin, Yang Qiang. A survey on transfer learning[J]. IEEE Transactions on Knowledge and Data Engineering, 2010, 22(10): 1345−1359 doi: 10.1109/TKDE.2009.191
    [7]
    Long Mingsheng, Cao Yue, Wang Jianmin, et al. Learning transferable features with deep adaptation networks[C] //Proc of the 32nd Int Conf on Machine Learning. New York: ACM, 2015: 97−105
    [8]
    Ganin Y, Ustinova E, Ajakan H, et al. Domain-adversarial training of neural networks[J]. Journal of Machine Learning Research, 2016, 17: (59): 1–35
    [9]
    Wang Jindong, Chen Yiqiang, Feng Wenjie, et al. Transfer learning with dynamic distribution adaptation[J]. ACM Transactions on Intelligent Systems and Technology, 2020, 11(1): 6: 1–6: 25
    [10]
    Yu Chaohui, Wang Jindong, Chen Yiqiang, et al. Transfer learning with dynamic adversarial adaptation network[C] //Proc of the 19th IEEE Conf on Int Conf on Data Mining. Piscataway, NJ: IEEE, 2019: 778−786
    [11]
    Fu Bo, Cao Zhangjie, Long Mingsheng, et al. Learning to detect open classes for universal domain adaptation[C] //Proc of the 16th European Conf on Computer Vision. Berlin: Springer, 2020: 567−583
    [12]
    Busto P P, Gall J. Open set domain adaptation[C] //Proc of the 30th IEEE Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2017: 754−763
    [13]
    Saito K, Yamamoto S, Ushiku Y, et al. Open set domain adaptation by backpropagation[C] //Proc of the 15th European Conf on Computer Vision. Berlin: Springer, 2018: 156−171
    [14]
    Ben-David S, Blitzer J, Crammer K, et al. Analysis of representations for domain adaptation[C] //Proc of the 21st Int Conf on Neural Information Processing Systems. Cambridge, MA: MIT, 2007: 137−144
    [15]
    Goodfellow J, Pouget-Abadie J, Mirza M, et al. Generative adversarial nets[C] //Proc of the 28th Int Conf on Neural Information Processing Systems. Cambridge, MA: MIT, 2014: 2672−2680
    [16]
    Wu Yuan, Inkpen D, El-Roby A. Dual mixup regularized learning for adversarial domain adaptation[C] //Proc of the 16th European Conf on Computer Vision. Berlin: Springer, 2020: 540−555
    [17]
    Xu Minghao, Zhang Jian, Ni Bingbing, et al. Adversarial domain adaptation with domain mixup[C] //Proc of the 34th AAAI Conf on Artificial Intelligence. Palo Alto, CA: AAAI, 2020: 6502−6509
    [18]
    Long Mingsheng, Cao Zhangjie, Wang Jianmin, et al. Conditional adversarial domain adaptation[C] //Proc of the 32nd Int Conf on Neural Information Processing Systems. Cambridge, MA: MIT, 2018: 1645−1655
    [19]
    贾颖霞,郎丛妍,冯松鹤. 基于类别相关的领域自适应交通图像语义分割方法[J]. 计算机研究与发展,2020,57(4):876−887 doi: 10.7544/issn1000-1239.2020.20190475

    Jia Yingxia, Lang Congyan, Feng Songhe. A semantic segmentation method of traffic scene based on categories-aware domain adaptation[J]. Journal of Computer Research and Development, 2020, 57(4): 876−887 (in Chinese) doi: 10.7544/issn1000-1239.2020.20190475
    [20]
    Saito K, Watanabe K, Ushiku Y, et al. Maximum classifier discrepancy for unsupervised domain adaptation[C] //Proc of the 31st IEEE Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2018: 3723−3732
    [21]
    Liu Hong, Cao Zhangjie, Long Mingsheng, et al. Separate to adapt: Open set domain adaptation via progressive separation[C] //Proc of the 32nd IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2019: 2927−2936
    [22]
    Shermin T, Lu Guojun, Teng S W, et al. Adversarial network with multiple classifiers for open set domain adaptation[J]. IEEE Transactions on Multimedia, 2021, 23: 2732−2744 doi: 10.1109/TMM.2020.3016126
    [23]
    Luo Yadan, Wang Zijian, Huang Zi, et al. Progressive graph learning for open-set domain adaptation[C] //Proc of the 37th Int Conf on Machine Learning. New York: ACM, 2020: 6468−6478
    [24]
    Pan Yingwei, Yao Ting, Li Yehao, et al. Exploring category-agnostic clusters for open-set domain adaptation[C] //Proc of the 33rd IEEE Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2020: 13864−13872
    [25]
    Zhou Yi, Bai Shaochen, Zhou Tao, et al. Delving into local features for open-set domain adaptation in fundus image analysis [C] //Proc of the 25th Int Conf on Medical Image Computing and Computer Assisted Intervention. Berlin: Springer, 2022: 682−692
    [26]
    Kumar M, Packer B, Koller D. Self-paced learning for latent variable models[C] //Proc of the 24th Int Conf on Neural Information Processing Systems. Cambridge, MA: MIT, 2010: 1189−1197
    [27]
    Ge Yixiao, Wang Haibo, Zhu Feng, et al. Self-supervising fine-grained region similarities for large-scale image localization[C] //Proc of the 15th European Conf on Computer Vision. Berlin: Springer, 2018: 369−386
    [28]
    Guo Sheng, Huang Weilin, Zhang Haozhi, et al. Curriculumnet: Weakly supervised learning from large-scale web images[C] //Proc of the 15th European Conf on Computer Vision. Berlin: Springer, 2018: 139−154
    [29]
    Jiang Lu, Zhou Zhengyuan, Leung T, et al. MentorNet: Learning data-driven curriculum for very deep neural networks on corrupted labels[C] //Proc of the 35th Int Conf on Machine Learning. New York: ACM, 2018: 2304−2313
    [30]
    Lin Liang, Wang Keze, Meng Deyu, et al. Active self-paced learning for cost-effective and progressive face identification[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018, 40(1): 7−19 doi: 10.1109/TPAMI.2017.2652459
    [31]
    Choi J, Jeong M, Kim T, et al. Pseudo-labeling curriculum for unsupervised domain adaptation[J]. arXiv preprint, arXiv: 1908.00262, 2019
    [32]
    Ge Yixiao, Zhu Feng, Chen Dapeng, et al. Self-paced contrastive learning with hybrid memory for domain adaptive object Re-ID[C] //Proc of the 34th Int Conf on Neural Information Processing Systems. Cambridge, MA: MIT, 2020: 11309–11321
    [33]
    Li Shuang, Gong Kaixiong, Xie Binhui, et al. Critical classes and samples discovering for partial domain adaptation[J]. IEEE Transactions on Cybernetics. DOI: 10.1109/TCYB.2022.3163432
    [34]
    Cao Zhangjie, You Kaichao, Long Mingsheng, et al. Learning to transfer examples for partial domain adaptation[C] //Proc of the 32nd IEEE Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2019: 2985−2994
    [35]
    Chen Lin, Chen Huaian, Wei Zhixiang, et al. Reusing the task-specific classifier as a discriminator: Discriminator-free adversarial domain adaptation[C] //Proc of the 35th IEEE Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2022: 7181−7190
    [36]
    Jang J, Na B, Shin D, et al. Unknown-aware domain adversarial learning for open-set domain adaptation[C] //Proc of the 36th Int Conf on Neural Information Processing Systems. Cambridge, MA: MIT, 2022: 16755−16767
    [37]
    Li Guangrui, Kang Guoliang, Zhu Yi, et al. Domain consensus clustering for universal domain adaptation[C] //Proc of the 34th IEEE Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2021: 9752−9761
    [38]
    Saito K, Saenko K. Ovanet: One-vs-all network for universal domain adaptation[C] //Proc of the 18th IEEE Int Conf on Computer Vision. Piscataway, NJ: IEEE, 2021: 8980−8989
    [39]
    Jiang Junguang, Shu Yang, Wang Jianmin, et al. Transferability in deep learning: A survey [J]. arXiv preprint, arXiv: 2201.05867, 2022
    [40]
    Jiang Junguang, Chen Baixu, Fu Bo, et al. Transfer learning library, [CP/OL]. Github, (2022-08-03) [2023-03-30].https://github.com/thuml/Transfer-Learning-Library
    [41]
    Saenko K, Kulis B, Fritz M, et al. Adapting visual category models to new domains[C] //Proc of the 11th European Conf on Computer Vision. Berlin: Springer, 2010: 213−226
    [42]
    Venkateswara H, Eusebio J, Chakraborty S. Deep hashing network for unsupervised domain adaptation[C] //Proc of the 30th IEEE Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2017: 5385–5394
    [43]
    Peng Xingchao, Usman B, Kaushik N, et al. VisDa: The visual domain adaptation challenge[J]. arXiv preprint, arXiv: 1710.06924, 2017
    [44]
    Bucci S, Loghmani M R, Tommasi T. On the effectiveness of image rotation for open set domain adaptation[C] //Proc of the 16th European Conf on Computer Vision. Berlin: Springer, 2020: 422−438
    [45]
    Deng Jia, Dong Wei, Socher R, et al. ImageNet: A large-scale hierarchical image database[C] //Proc of the 22nd IEEE Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2009: 248−255
    [46]
    Van der Maaten L, Hinton G. Visualizing data using t-SNE[J]. Journal of Machine Learning Research, 2008, 9(86): 2579−2605