Hashing methods have recently attracted extensive attention in cross-modal retrieval. Most supervised hashing methods attempt to embed semantic information into hash codes by leveraging the original logical label matrix. However, they generally treat all labels equally and ignore the relative significance of different labels arising from the variety of data features. In this article, we argue that exploiting the relative importance of labels enhances the semantic information, and we propose a novel LAbel Distribution Guided Hashing (LADH) method for cross-modal retrieval. In particular, LADH first learns a feature-induced label distribution for each sample to weigh different labels, leveraging the multi-modal feature information to enrich the semantic label information. By jointly using the learned label distributions and multi-modal features, the latent representation and hash codes are obtained with multi-modal feature selection and enhanced semantic similarities embedded. An efficient optimization algorithm is designed for the proposed method, with time complexity linear in the number of training instances. Experimental results on several public benchmark datasets verify the effectiveness and efficiency of our method compared with state-of-the-art methods.
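To make the core idea concrete, the sketch below illustrates how a 0/1 logical label matrix can be converted into a per-sample label distribution using feature information. This is a hypothetical illustration only, assuming a random feature-to-label affinity matrix `W`; the abstract does not specify LADH's actual formulation, so all names and the softmax-style normalization here are assumptions, not the paper's method.

```python
import numpy as np

# Hypothetical sketch: weigh a sample's labels by feature-induced relevance
# instead of treating all labels equally. NOT the paper's exact formulation.
rng = np.random.default_rng(0)
n, d, c = 4, 8, 3                              # samples, feature dim, labels

X = rng.standard_normal((n, d))                # one modality's features
Y = rng.integers(0, 2, (n, c)).astype(float)   # logical (0/1) label matrix
Y[Y.sum(axis=1) == 0, 0] = 1.0                 # ensure >= 1 label per sample

W = rng.standard_normal((d, c))                # illustrative affinity matrix
scores = X @ W                                 # per-sample label relevance

# Keep only each sample's own labels, then normalize rows to distributions.
masked = np.where(Y > 0, scores, -np.inf)
exp = np.exp(masked - masked.max(axis=1, keepdims=True))
D = exp / exp.sum(axis=1, keepdims=True)       # feature-induced distribution

print(np.round(D, 3))                          # rows sum to 1
```

Each row of `D` now places more mass on labels the sample's features support, which is the intuition behind enriching the logical label matrix with relative importance.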
Published in: ACM Transactions on Knowledge Discovery from Data
Volume 19, Issue 1, pp. 1-23
DOI: 10.1145/3697353