Deep hashing has emerged as an efficient and robust solution for image retrieval through representation learning. However, convolutional neural network (CNN)-based hashing methods are constrained by their reliance on grid structures, which limits their capacity to model complex or unstructured data relationships. This article proposes a novel deep hashing model, Hashing by Autoencoder with Graph Convolutional Networks (HAGCN), that integrates transfer learning–based visual embeddings, obtained via an autoencoder, with graph convolutional networks (GCNs). The model dynamically constructs local subgraphs from the output of a transfer model, enabling it to learn both global and local structural relationships through the graph Laplacian. A GCN layer captures local topologies in unstructured data, improving both representation quality and learning efficiency through parameter sharing and transfer learning. Experiments on the evaluation datasets demonstrate that the proposed method outperforms existing CNN-based and GCN-based deep hashing approaches: averaged over the comparison models, HAGCN improves retrieval performance by about 7.4% over the weakest baseline and about 2.7% over the strongest. Furthermore, an analysis of various GCN filters within the proposed framework offers practical guidance on filter selection for deep hashing. Ultimately, GCN filters contribute to structural preservation and improved expressiveness, while the combination of dynamic graph construction and transfer learning enables the generation of compact, robust hash codes from high-dimensional image data.
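To make the pipeline described above concrete, the following is a minimal illustrative sketch (not the authors' implementation) of the overall idea: embeddings stand in for autoencoder/transfer-model features, a k-nearest-neighbor graph plays the role of the dynamically constructed local subgraph, a single renormalized GCN propagation step uses the symmetrically normalized adjacency, and the output is binarized into hash codes. All function names, the embedding dimension, the code length, and the choice of `tanh` activation are assumptions for illustration only.

```python
import numpy as np

def knn_adjacency(X, k=3):
    """Build a symmetric k-NN adjacency matrix from row-wise embeddings.

    Stands in for the paper's dynamic local-subgraph construction."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. dists
    A = np.zeros_like(d2)
    for i in range(len(X)):
        nn = np.argsort(d2[i])[1:k + 1]  # k nearest, skipping self (dist 0)
        A[i, nn] = 1.0
    return np.maximum(A, A.T)  # symmetrize

def gcn_layer(X, A, W):
    """One GCN propagation step: tanh(D^{-1/2} (A+I) D^{-1/2} X W).

    The normalized operator is derived from the graph Laplacian; this is
    the common renormalized filter, used here purely as an example."""
    A_hat = A + np.eye(len(A))            # add self-loops
    d_inv_sqrt = A_hat.sum(axis=1) ** -0.5
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.tanh(A_norm @ X @ W)

rng = np.random.default_rng(0)
emb = rng.standard_normal((8, 16))    # stand-in for autoencoder embeddings
W = rng.standard_normal((16, 12))     # projection to a 12-bit code space
A = knn_adjacency(emb, k=3)
codes = np.sign(gcn_layer(emb, A, W)) # binary hash codes in {-1, +1}
```

In a trained system `W` would be learned end-to-end and the binarization relaxed during training; the sketch only shows how graph construction, Laplacian-based propagation, and hashing fit together.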