Large language models are increasingly deployed in interactive systems, yet controlling their behavior over extended multi-turn interactions remains challenging. Most existing approaches rely on prompt-based steering, leaving system behavior sensitive to conversational context and prone to probabilistic drift. This paper presents the Modular Multi-State AI Teaching Protocol (MMA-TP), a protocol-level framework for constraining large language model behavior through structured interaction design rather than model modification. MMA-TP pairs an engineered system prompt, which establishes a persistent runtime persona, with a structured specification that encodes interaction states, transitions, and response constraints. Operating entirely at the interaction level, the framework leverages contextual conditioning and distributional bias to stabilize behavior across extended sessions without altering model parameters or decoding strategies. A mechanistic analysis grounded in transformer attention dynamics explains how persistent structured input biases probabilistic generation toward protocol-consistent behavior. Behavioral evaluation across multiple subject domains demonstrates that MMA-TP reliably enforces declared constraints, preserves phase ordering, and resists structural degradation relative to prompt-only instruction. These results indicate that protocol-level interaction control offers a lightweight and reusable approach for stabilizing large language model behavior in complex interactive settings.
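To make the abstract's description concrete, the sketch below shows one way a structured specification of interaction states, transitions, and response constraints could be paired with a persistent system prompt, as MMA-TP describes. The abstract does not publish a concrete schema, so the state names, fields, and enforcement logic here are illustrative assumptions, not the paper's actual specification format.

```python
# A minimal, hypothetical sketch of an MMA-TP-style interaction specification.
# All state names, constraint strings, and the three-phase layout are
# assumptions for illustration; the paper's real schema is not shown here.

from dataclasses import dataclass, field

@dataclass
class State:
    name: str
    allowed_next: list                                # states reachable from this one
    constraints: list = field(default_factory=list)   # response constraints to enforce

# Illustrative three-phase teaching protocol: diagnose -> instruct -> assess.
PROTOCOL = {
    "diagnose": State("diagnose", ["instruct"], ["ask one question at a time"]),
    "instruct": State("instruct", ["assess"], ["explain before giving solutions"]),
    "assess":   State("assess", [], ["summarize progress before concluding"]),
}

class ProtocolSession:
    """Tracks the current interaction state and validates transitions,
    mirroring the phase-ordering guarantees the abstract describes."""

    def __init__(self, start: str = "diagnose"):
        self.current = start

    def transition(self, target: str) -> None:
        # Reject any move the specification does not declare.
        if target not in PROTOCOL[self.current].allowed_next:
            raise ValueError(
                f"Illegal transition {self.current} -> {target}; "
                f"allowed: {PROTOCOL[self.current].allowed_next}"
            )
        self.current = target

    def system_prompt_fragment(self) -> str:
        """Render the active state's constraints as text to be re-injected
        into the persistent system prompt on every turn."""
        state = PROTOCOL[self.current]
        return f"You are in phase '{state.name}'. Constraints: {'; '.join(state.constraints)}."

# Usage: regenerate the fragment each turn so the structured specification
# keeps conditioning generation toward protocol-consistent behavior.
session = ProtocolSession()
print(session.system_prompt_fragment())
session.transition("instruct")
print(session.system_prompt_fragment())
```

The key design point this sketch reflects is that enforcement lives entirely at the interaction level: the protocol state is tracked outside the model, and only its rendered constraints enter the context window, so no model parameters or decoding strategies are touched.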
Published in: Applied and Computational Engineering
Volume 225, Issue 1, pp. 68-74