Abstract:
Cross-modal hashing maps heterogeneous multimodal data into compact binary codes while preserving similarity, enabling highly efficient cross-modal retrieval. Existing cross-modal hashing methods usually employ two different projections to describe the correlation between hash codes and class labels. To capture the relation between hash codes and semantic labels more effectively, we propose a mutual-linear-regression-based supervised discrete cross-modal hashing (SDCH) method. The proposed method uses only one stable projection to describe the linear regression relation between hash codes and their corresponding labels, which improves the precision and stability of cross-modal hashing. In addition, we learn modality-specific projections for out-of-sample extension by preserving similarity and accounting for the feature distributions of the different modalities. Comparisons with several state-of-the-art methods on two benchmark datasets verify the superiority of SDCH under various cross-modal retrieval scenarios.
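For intuition, a minimal sketch of the shared-projection idea is given below; the symbols $\mathbf{B}$ (binary hash codes), $\mathbf{Y}$ (semantic label matrix), $\mathbf{P}$ (the single shared projection), and the weight $\lambda$ are illustrative assumptions, not the paper's exact formulation:

\[
\min_{\mathbf{B},\,\mathbf{P}} \; \left\| \mathbf{Y} - \mathbf{P}\mathbf{B} \right\|_F^2 \;+\; \lambda \left\| \mathbf{B} - \mathbf{P}^{\top}\mathbf{Y} \right\|_F^2 \quad \text{s.t.} \; \mathbf{B} \in \{-1, +1\}^{r \times n},
\]

where the same $\mathbf{P}$ regresses hash codes onto labels and labels back onto hash codes (the "mutual" linear regression), in contrast to two-projection schemes that learn the two mappings independently.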