Vector quantization (VQ) is a fundamental research problem in image synthesis that aims to represent an image as a sequence of discrete tokens. Existing studies address this problem by learning a discrete codebook from scratch, in a code-independent manner, to quantize continuous representations into discrete tokens. However, learning a codebook this way is highly challenging and may be a key cause of codebook collapse: without modeling the relationships between codes or exploiting good codebook priors, some code vectors are rarely optimized and eventually die off. In this paper, inspired by pretrained language models, we observe that such models have already learned a superior codebook from large-scale text corpora, yet this information is rarely exploited in VQ. To this end, we propose a novel codebook transfer framework with vision-to-language translation, called VQCT-VLT, which transfers a well-trained codebook from pretrained language models to VQ for robust codebook learning. Specifically, we first introduce a pretrained codebook from language models and part-of-speech knowledge as priors. Then, we construct a vision-related codebook from these priors to achieve codebook transfer. Finally, a novel codebook transfer network is designed to exploit the abundant semantic relationships between codes contained in the pretrained codebook for robust codebook learning. Although this version already achieves superior image synthesis performance, we find that the learned codebook is difficult to align with text semantics. We therefore introduce image captions as auxiliary supervision and design a vision-to-language translation module to achieve vision-language-aligned codebook learning. Experimental results on various tasks show that VQCT-VLT achieves superior performance over previous state-of-the-art VQ methods.
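To make the quantization step referred to in the abstract concrete, below is a minimal, illustrative sketch of standard nearest-neighbor vector quantization. It is not the proposed VQCT-VLT transfer mechanism; the function name, array shapes, and toy data are assumptions for illustration only.

```python
import numpy as np

def vector_quantize(z, codebook):
    """Assign each continuous feature vector in z to its nearest codebook entry.

    z:        (N, D) array of continuous encoder features.
    codebook: (K, D) array of code vectors.
    Returns the discrete token indices (N,) and the quantized vectors (N, D).
    """
    # Squared Euclidean distance between every feature and every code vector.
    dists = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)  # (N, K)
    tokens = dists.argmin(axis=1)   # discrete token sequence
    z_q = codebook[tokens]          # quantized (discretized) features
    return tokens, z_q

# Toy usage: 16 feature vectors, a codebook of 8 codes, dimension 4.
rng = np.random.default_rng(0)
z = rng.normal(size=(16, 4))
codebook = rng.normal(size=(8, 4))
tokens, z_q = vector_quantize(z, codebook)
print(tokens)  # e.g. [3 1 7 ...], i.e. the image's discrete token sequence
```

Codebook collapse, as described above, occurs when some rows of `codebook` are almost never selected by `argmin` and therefore receive little or no gradient during training.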
Published in: IEEE Transactions on Pattern Analysis and Machine Intelligence
Volume PP, pp. 1-18