Current AI governance rests on an implicit assumption: that human-in-the-loop oversight can scale safely alongside generative AI systems. We argue this assumption is structurally untenable. Three constraints hold simultaneously: legal liability remains with humans, human cognitive throughput has a biological ceiling, and economic pressures drive AI output velocity beyond that ceiling. Once output velocity exceeds human processing limits, oversight becomes nominal and humans are reduced to "moral crumple zones" (Elish, 2019). Unlike physical automation, where anomaly criteria are externally defined, generative AI requires supervisors to evaluate cognitive products against internally held standards. Through predictive-error minimization, repeated exposure to AI output recalibrates these standards, degrading anomaly detection even when supervisors remain attentive. This degradation leaves error detection deficient, so the penalty function that should restrain output expansion remains dormant. Risks accumulate invisibly and manifest as threshold shocks rather than gradual corrections. Under these dynamics, expected loss diverges with increasing output velocity, and because the error probability of probabilistic systems cannot be driven to zero, model capability improvements cannot offset this divergence. Preliminary empirical evidence from approximately 25,000 preprint records across twelve academic subfields supports the posited driver of this crisis, showing that output velocity accelerates nonlinearly in information-space domains while plateauing against physical-space constraints (Appendix A). We derive that rate-limiting AI output to within human processing capacity is the only available lever for bounding expected loss, and we propose a flow-design governance paradigm as a principled alternative to supervision-enhancing approaches. The consequence is counterintuitive: as generative AI capability grows, its autonomous use in high-loss domains will contract.
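As a minimal formalization of the divergence argument (the notation below is illustrative and not drawn from the paper): let $v$ denote AI output velocity, $H$ the supervisor's processing capacity, $p \ge p_{\min} > 0$ the irreducible per-item error probability, $d(v)$ the probability that an error is detected, and $\ell$ the loss from an undetected error. Expected loss per unit time is then

\[
\mathbb{E}[L(v)] \;=\; v \, p \,\bigl(1 - d(v)\bigr)\, \ell .
\]

If $d(v)$ decays toward zero once $v$ exceeds $H$ (oversight becomes nominal), the product grows without bound in $v$. Capability gains can push $p$ toward $p_{\min}$ but not below it, so they rescale rather than reverse the growth; under these assumptions, constraining $v \le H$ is the only term a governance regime can hold fixed, which is the sense in which rate-limiting bounds expected loss.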