Some large language models (LLMs) are open source and therefore fully open to scientific study. Many others, however, are proprietary, and their hidden internals hinder the research community's ability to study their behavior under controlled conditions. For instance, the token input embedding specifies the internal vector representation the model uses for each token; if it is hidden, latent semantic information about the token set is unavailable to researchers. This article presents a general and flexible method for prompting an LLM to reveal its token input embedding, even when this information is not published with the model. Moreover, it provides strong theoretical justification, in the form of a mathematical proof for generic LLMs, for why the method should be expected to work: if the LLM can be prompted systematically, and certain benign conditions on the quantity of data collected from the responses are met, the topology of the token embedding is recovered. We demonstrate the method's effectiveness by recovering the token subspace of the Llemma-7B LLM, and its flexibility by performing the recovery three separate times, each time applying the same algorithm to different information collected from the responses. While the prompting itself can be a performance bottleneck, depending on the size and complexity of the LLM, the recovery runs within a few hours on a typical workstation. The results apply not only to LLMs but also to general nonlinear autoregressive processes.
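To make the recovery step concrete, here is a minimal, hypothetical sketch of the kind of geometric recovery the abstract describes. It assumes (purely for illustration; the article does not specify these details) that each token has already been summarized by a feature vector built from model responses, and that those vectors secretly lie on a low-dimensional subspace of a high-dimensional ambient space. A local-PCA estimator then recovers the hidden dimension:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for per-token response statistics: each "token"
# is summarized by a feature vector collected from model responses.
# Here the tokens secretly lie on a 2-D subspace of a 50-D ambient space.
n_tokens, latent_dim, ambient_dim = 300, 2, 50
latent = rng.normal(size=(n_tokens, latent_dim))
embed = rng.normal(size=(latent_dim, ambient_dim))
features = latent @ embed  # observed response-derived vectors

def local_intrinsic_dim(X, k=20, tol=1e-6):
    """Estimate intrinsic dimension via PCA on k-nearest-neighbor patches."""
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    dims = []
    for i in range(len(X)):
        # the k nearest neighbors of point i (excluding i itself)
        nbrs = X[np.argsort(dists[i])[1:k + 1]]
        centered = nbrs - nbrs.mean(axis=0)
        # squared singular values = PCA eigenvalues of the local patch
        eigvals = np.linalg.svd(centered, compute_uv=False) ** 2
        dims.append(int(np.sum(eigvals > tol * eigvals[0])))
    return int(np.median(dims))

print(local_intrinsic_dim(features))  # recovers the hidden dimension: 2
```

This is only a toy: the actual method must first elicit such response statistics by prompting the model, and topology recovery involves more than dimension estimation, but the sketch shows how structural information about a hidden embedding can be read off from response-derived data alone.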