Large language models are often described as “hallucinating” when they produce incorrect or unsupported outputs. The term is useful descriptively, but it conflates several distinct mechanisms, including local generation error, unsupported fabrication, and interaction-driven reinforcement of an initially weak claim. This paper isolates the third case. I argue that a subset of hallucination-like behavior is better understood as Recursive Narrative Amplification (RNA): a feedback dynamic in which an already-introduced claim gains authority through recursive reuse before sufficient external constraint has refreshed or corrected the trajectory. Closure pressure at turn t is modeled with the update rule ΔC_t = r_t - k_t, where r_t is the recursive reinforcement applied at that turn and k_t is the effective constraint refresh. A positive cumulative imbalance pushes the trajectory toward Premature Narrative Closure (PNC). Controlled deterministic experiments using fixed-schema recursive loops, RAW vs. SANITIZED reinjection, gain sweeps, and refresh interventions support this interpretation. The strongest result is a 7-agent confidence-gain run in which the RAW branch closes under constant claim and evidence while the SANITIZED branch remains open, even though both branches stay parse-valid and state-valid throughout. Additional gain and refresh sweeps show that higher reinforcement accelerates closure onset, while stronger refresh suppresses persistence. These results do not imply that all hallucinations reduce to RNA. They do show that RNA is a distinct, measurable interaction instability that should be monitored separately from local output error.

Evidence repository: https://guardianai.fr/evidence/
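To make the update rule concrete, here is a minimal sketch of the closure-pressure dynamic, not the paper's actual experimental code. The rule ΔC_t = r_t - k_t comes from the abstract; the closure threshold `theta`, the per-turn reinforcement and refresh values, and the function name `closure_trajectory` are hypothetical choices made for illustration only.

```python
# Illustrative sketch of the closure-pressure dynamic described above.
# Assumption: closure is declared when cumulative pressure C_t = sum(r_i - k_i)
# crosses a threshold `theta` (hypothetical; not specified in the abstract).

def closure_trajectory(r, k, theta=1.0):
    """Accumulate closure pressure and return (onset_turn, final_pressure).

    onset_turn is the first turn at which C_t >= theta (a stand-in for
    Premature Narrative Closure), or None if the trajectory stays open.
    """
    c = 0.0
    for t, (r_t, k_t) in enumerate(zip(r, k), start=1):
        c += r_t - k_t          # update rule: delta C_t = r_t - k_t
        if c >= theta:
            return t, c         # closure onset at turn t
    return None, c              # trajectory remains open

# RAW-like branch: reinforcement outpaces constraint refresh every turn.
raw_onset, _ = closure_trajectory(r=[0.4] * 10, k=[0.1] * 10)

# SANITIZED-like branch: refresh matches reinforcement, so pressure never grows.
san_onset, _ = closure_trajectory(r=[0.4] * 10, k=[0.4] * 10)

print(raw_onset)  # 4    -> positive cumulative imbalance closes the branch early
print(san_onset)  # None -> branch stays open under identical reinforcement
```

Under these toy parameters, raising the reinforcement values shifts the onset turn earlier and raising the refresh values delays or prevents it, which is the qualitative pattern the gain and refresh sweeps report.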