    Liu Xingbo, Nie Xiushan, Yin Yilong. Mutual Linear Regression Based Supervised Discrete Cross-Modal Hashing[J]. Journal of Computer Research and Development, 2020, 57(8): 1707-1714. DOI: 10.7544/issn1000-1239.2020.20200122

    Mutual Linear Regression Based Supervised Discrete Cross-Modal Hashing


      Abstract: Cross-modal hashing maps heterogeneous multimodal data into compact binary codes that preserve semantic similarity, which makes cross-modal retrieval highly efficient. Existing cross-modal hashing methods usually use two different projections to describe the correlation between hash codes and class labels. To capture the relation between hash codes and semantic labels more effectively, we propose mutual linear regression based supervised discrete cross-modal hashing (SDCH). Only one stable projection is used in the proposed method to describe the linear regression relation between hash codes and the corresponding labels, which improves the precision and stability of cross-modal hashing. In addition, we learn modality-specific projections for out-of-sample extension by preserving semantic similarity and accounting for the feature distributions of the different modalities. Comparisons with several state-of-the-art methods on two benchmark datasets verify the superiority of SDCH under various cross-modal retrieval scenarios.
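      To illustrate the core idea described in the abstract, the sketch below shows one direction of a linear regression between binary codes and label vectors (a single shared projection W), plus modality-specific projections for out-of-sample extension. This is a minimal, hypothetical NumPy sketch with random toy data and ridge-regularized closed-form solutions; it is not the authors' joint optimization, and the initialization of the codes, the regularizer lam, and all variable names are assumptions for illustration only.

      ```python
      import numpy as np

      rng = np.random.default_rng(0)
      n, d_img, d_txt, c, r = 200, 64, 32, 10, 16  # samples, feature dims, classes, code length

      # Toy heterogeneous features and one-hot labels (illustrative data only)
      labels = rng.integers(0, c, size=n)
      Y = np.eye(c)[labels]                 # n x c label matrix
      X_img = rng.standard_normal((n, d_img))
      X_txt = rng.standard_normal((n, d_txt))

      # Binary codes; initialized randomly here, whereas the paper learns them jointly
      B = np.sign(rng.standard_normal((n, r)))

      # A single projection W linking codes and labels:
      # closed-form ridge solution of min ||Y - B W||^2 + lam ||W||^2
      lam = 1.0
      W = np.linalg.solve(B.T @ B + lam * np.eye(r), B.T @ Y)

      # Modality-specific projections for out-of-sample extension:
      # min ||B - X_m P_m||^2 + lam ||P_m||^2 for each modality m
      def fit_projection(X, B, lam=1.0):
          return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ B)

      P_img = fit_projection(X_img, B)
      P_txt = fit_projection(X_txt, B)

      # Hashing a new sample from either modality into the shared binary code space
      def encode(x, P):
          return np.sign(x @ P)

      code_img = encode(X_img[0], P_img)
      code_txt = encode(X_txt[0], P_txt)
      ```

      Because both modalities are projected into the same code space, retrieval across modalities reduces to Hamming-distance comparison of the resulting binary codes.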

       
