    Zhuo Junbao, Su Chi, Wang Shuhui, Huang Qingming. Min-Entropy Transfer Adversarial Hashing[J]. Journal of Computer Research and Development, 2020, 57(4): 888-896. DOI: 10.7544/issn1000-1239.2020.20190476

    Min-Entropy Transfer Adversarial Hashing


  Abstract: Owing to its storage and retrieval efficiency, hashing is widely applied to large-scale image retrieval. Most existing deep hashing methods assume that the database in the target domain is identically distributed with the training set in the source domain. In practical applications, however, this assumption rarely holds: there is often considerable discrepancy between the source and target domains, i.e., cross-domain retrieval. To address this cross-domain image retrieval problem, some research works introduce domain adaptation techniques into image retrieval methods, with the goal of enhancing the generalization ability of the learned hashing function. However, the hash codes learned by existing cross-domain hashing methods still lack discriminative power and domain invariance. We propose a semantic preservation module and a min-entropy loss to tackle these two issues. The semantic preservation module is a classification sub-network that fully exploits the label information in the source domain; the semantic information encoded in the labels is passed to the hashing sub-network, encouraging the learned hash codes to carry more semantic information and thus stronger discriminative power. For unlabeled target-domain samples, the entropy of the classification responses characterizes the classifier's confidence: an ideal response should concentrate on a single class, i.e., approach a one-hot vector, which minimizes the entropy. We therefore add a min-entropy loss to our model. Minimizing the entropy of the classification responses of target samples aligns the source and target distributions in the classifier-response space, so the learned hash codes tend to be more domain-invariant. With the semantic preservation module and the min-entropy loss, we build an end-to-end deep neural network for cross-domain image retrieval. Extensive experiments on two datasets demonstrate the superiority of our model over existing state-of-the-art methods.
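  The min-entropy idea described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names and the toy logits are ours, and a real model would apply the loss to the classifier's outputs on unlabeled target-domain mini-batches during training.

  ```python
  import numpy as np

  def softmax(logits):
      # Numerically stable softmax over the last axis.
      z = logits - logits.max(axis=-1, keepdims=True)
      e = np.exp(z)
      return e / e.sum(axis=-1, keepdims=True)

  def min_entropy_loss(target_logits, eps=1e-12):
      """Mean entropy of classification responses on target-domain samples.

      Minimizing this pushes each response toward a one-hot vector,
      i.e. toward a confident single-class prediction.
      """
      p = softmax(target_logits)
      entropy = -(p * np.log(p + eps)).sum(axis=-1)  # per-sample entropy
      return entropy.mean()

  # A confident (near one-hot) response has low entropy;
  # a uniform response attains the maximum entropy log(K).
  confident = np.array([[8.0, 0.0, 0.0]])
  uniform = np.array([[1.0, 1.0, 1.0]])
  assert min_entropy_loss(confident) < min_entropy_loss(uniform)
  ```

  In training, this term would be added (with a weighting coefficient) to the hashing and classification objectives, so that gradient descent simultaneously sharpens the classifier's responses on target samples and, through the shared features, pulls the target distribution toward the source distribution in the classifier-response space.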

       
