No existing framework formally explains why the same AI system succeeds on one encoding of a problem and fails on another—or predicts, before execution, which failures are recoverable and which are not. We address this gap by proving that problem difficulty is not intrinsic to tasks but relational: a function D(P,E,S) of the problem P, encoding E, and solver S jointly. We present Representation-Solver Compatibility Theory (RSCT), which certifies whether an encoding is admissible for a given solver before execution. RSCT decomposes representation quality into three structurally independent axes: (i) signal purity α = R/(R+N), grounded in Fano's inequality; (ii) geometric compatibility κ_gate, measuring the fraction of solver potential the encoding unlocks; and (iii) representational turbulence σ, governing dynamical stability. The central result—Theorem 4.1—is that intelligence, precisely defined, is encoding competence: the fraction of a solver's theoretical ceiling that a given encoding policy achieves. Capacity (the solver's ceiling) is a fixed parameter of the deployment context, not a controllable variable. When competence is low, scaling capacity produces bounded, predictable diminishing returns (Corollary 4.1.1). This is a falsifiable, quantitative prediction about the limits of the prevailing scaling paradigm. From this foundation we derive: (i) three structural independence theorems proving that no two of (α, κ_gate, σ) constrain the third; (ii) a weakest-link optimality theorem proving that min-aggregation is the unique adversarially safe composition rule for modal health scores; (iii) a four-gate pre-execution gatekeeper with O(M) complexity that enforces all three axes sequentially; and (iv) a theoretical reframing of learning dynamics as Representation Search under Stability Constraints (RSSC), showing how training can be viewed as search over encoding space. 
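As an illustrative sketch only, the signal-purity formula, the min-aggregation ("weakest-link") composition rule, and the sequential O(M) gatekeeper described above might be expressed as follows. The function names, the conversion of turbulence σ into a stability score, and the thresholds are all assumptions for illustration, not definitions from the paper:

```python
# Hypothetical sketch of RSCT-style scoring and gating.
# Axis names, the (1 - sigma) stability conversion, and all thresholds
# are illustrative assumptions, not values from the paper.

def signal_purity(R: float, N: float) -> float:
    """Signal purity alpha = R / (R + N), as defined in the abstract."""
    return R / (R + N)

def health_score(alpha: float, kappa_gate: float, sigma: float) -> float:
    """Min-aggregation: the overall modal health score is the weakest axis.
    Turbulence sigma is converted to a stability score (assumed 1 - sigma)
    so that higher is better on every axis."""
    stability = 1.0 - sigma
    return min(alpha, kappa_gate, stability)

def gatekeeper(axes: dict[str, float], thresholds: dict[str, float]) -> bool:
    """Sequential pre-execution gate check: each axis is tested in turn,
    and any failing gate marks the encoding inadmissible. One pass over
    M gates gives O(M) complexity."""
    for name, value in axes.items():
        if value < thresholds[name]:
            return False  # encoding inadmissible for this solver
    return True
```

Min-aggregation makes the composed score adversarially safe in the sense the abstract claims: no surplus on a strong axis can mask a deficit on a weak one.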
The operational DGM implementation uses fixed embeddings and a geometric-compatibility check, with no runtime learning.