Contemporary robotics safety architectures intervene after a system has already formed action decisions, attempting to constrain outputs rather than govern cognition itself. This paper demonstrates that post-reasoning safety is not merely insufficient for learning-enabled embodied systems operating in open environments—it is structurally incapable of guaranteeing safety. We introduce Pre-Reasoning Constitutional Gating (PRCG), a governance architecture that inverts the prevailing paradigm. Instead of filtering actions after reasoning, PRCG governs whether reasoning is constitutionally permitted to begin at all. Four precondition domains—epistemic sufficiency, human authorization, dignity preservation, and escalation availability—must be satisfied before cognitive processes receive computational resources. Integrated with Immaculate Reasoning Atom (IRA) v2.0, PRCG establishes a dual-gate system: no reasoning without constitutional clearance, and no action without constitutionally governed reasoning. The paper provides formal definitions, structural proofs of post-reasoning safety inadequacy, architectural specifications, and implementation mappings for embodied intelligence systems. The central conclusion is architectural and precise: for adaptive robots embedded in human environments, governance cannot follow capability—it must precede it.
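The dual-gate structure described above can be sketched in a few lines of Python. Everything here is illustrative: the class and function names, and the representation of the four precondition domains as booleans, are assumptions made for this sketch, not the paper's actual PRCG or IRA v2.0 interfaces.

```python
from dataclasses import dataclass

# The four precondition domains named in the abstract. Modeling them as
# booleans is a simplification for illustration; the paper's formal
# definitions are richer.
@dataclass
class Preconditions:
    epistemic_sufficiency: bool
    human_authorization: bool
    dignity_preservation: bool
    escalation_availability: bool

    def satisfied(self) -> bool:
        # Every constitutional precondition must hold before any
        # cognitive process receives computational resources.
        return all((self.epistemic_sufficiency,
                    self.human_authorization,
                    self.dignity_preservation,
                    self.escalation_availability))

def gated_reason_and_act(pre: Preconditions, reason, act):
    """Dual gate: no reasoning without constitutional clearance,
    and no action without constitutionally governed reasoning."""
    if not pre.satisfied():      # Gate 1: pre-reasoning constitutional gate
        return None              # reasoning never begins
    plan = reason()              # reasoning is constitutionally permitted
    return act(plan)             # Gate 2: action only from gated reasoning

# A missing human authorization blocks reasoning entirely: the
# reason() callable is never invoked, rather than being filtered after.
blocked = gated_reason_and_act(
    Preconditions(True, False, True, True),
    reason=lambda: "plan",
    act=lambda p: f"executed {p}")
assert blocked is None
```

The key design point the sketch tries to capture is that the constitutional check wraps the *invocation* of reasoning, not its output: when any precondition fails, no plan is ever formed, which is the inversion of the post-reasoning filtering paradigm the paper critiques.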