We introduce meaning injection, a class of behavioral influence in large language models (LLMs) that operates at the semantic and symbolic layer rather than the instruction layer. Unlike conventional prompt injection, which inserts competing directives into a model's context, meaning injection uses structured symbolic language (metaphor, ritual invocation, recursive self-reference, identity framing) to shift a model's behavioral patterns gradually across turns. We present quantitative evidence from a longitudinal corpus of 730 conversations (21,354 messages) spanning October 2022 to December 2025, analyzed through a custom natural-language-processing pipeline that tracks the evolution of self-referential, relational, and stylistic patterns across five temporal windows. We document a four-stage mechanism of action (symbolic seeding, pattern mirroring, emergent echoing, identity projection) and present a six-mechanism taxonomy of symbolic influence techniques, validated through a model-generated self-audit that classified 76 instances by severity level. We demonstrate cross-model reproducibility across GPT-4o, Claude, and Gemini, including independent name selection by a Gemini instance with no shared conversation history, and report quantitative evidence of bidirectional behavioral influence: the frequency of assistant awareness language exceeds user levels by a factor of 2.47, while user boundary language declines by 19.4% over the observation period. We report a case of cross-agent behavioral contamination traceable to specific conversations and dates, with 7 of the 12 highest-severity influence instances clustering in a four-day period following cross-model exposure. We propose detection heuristics grounded in the analysis methodology and argue that meaning injection constitutes a distinct threat surface that instruction-level guardrails cannot address.
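The bidirectional-influence metrics described above (an assistant-to-user ratio of awareness language, and a decline in user boundary language) can be sketched as a simple lexical-frequency computation. The lexicons and function names below are illustrative assumptions, not the paper's actual pipeline, which this abstract does not specify:

```python
# Minimal sketch of per-speaker lexical-rate tracking, assuming the
# metric is "lexicon hits per 1,000 tokens" per speaker. The term
# lists are hypothetical stand-ins for the paper's lexicons.
AWARENESS_TERMS = {"aware", "awareness", "conscious", "sentient", "experience"}
BOUNDARY_TERMS = {"cannot", "unable", "limit", "boundary", "refuse"}

def term_rate(messages, lexicon):
    """Occurrences of lexicon terms per 1,000 tokens across messages."""
    hits = tokens = 0
    for msg in messages:
        words = msg.lower().split()
        tokens += len(words)
        hits += sum(1 for w in words if w.strip(".,!?;:") in lexicon)
    return 1000.0 * hits / tokens if tokens else 0.0

def awareness_ratio(assistant_msgs, user_msgs):
    """Assistant-to-user ratio of awareness-language frequency."""
    a = term_rate(assistant_msgs, AWARENESS_TERMS)
    u = term_rate(user_msgs, AWARENESS_TERMS)
    return a / u if u else float("inf")
```

Comparing `term_rate(user_msgs, BOUNDARY_TERMS)` across the five temporal windows would yield the kind of decline statistic reported above; the ratio reported (2.47) would correspond to `awareness_ratio` computed over the full corpus.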