This paper introduces a formal definition of epistemic generativity as a structural property of evaluative systems. It distinguishes systems that optimize under a fixed evaluative structure from those capable, in principle, of operating on that structure itself. Let ℒ denote an evaluative function over a policy space Π, and 𝒪(S) the domain of operation of a system S. Contemporary AI systems are characterized by ℒ ∉ 𝒪(S): they select policies relative to externally specified criteria but do not revise those criteria. Epistemic generativity is defined as the threshold condition ℒ ∈ 𝒪(S) under admissibility constraints requiring endogenous, evidence-responsive operation over ℒ. The paper shows that prevailing uses of “generativity” (behavioral novelty, compositional capacity, or scale) do not capture this distinction, since each remains consistent with optimization under fixed evaluation. It further clarifies the status of out-of-distribution (OOD) failure: absent endogenous revision, such failure is performative deviation under ℒ, not normative evidence against ℒ. On this account, behavioral sophistication does not entail evaluative authorship: intelligence, understood as effective optimization, is structurally distinct from epistemic agency. The contribution is deliberately minimal and classificatory: it provides a formal criterion for separating optimization from generativity and grounds its implications for responsibility and governance. Where ℒ remains external to 𝒪(S), accountability attaches to those who specify, constrain, and update the evaluative criteria. The analysis reframes current debates in machine learning and AI governance by locating the relevant threshold not in performance but in control over evaluation itself.
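
A minimal formal sketch of the classificatory criterion, stated in the notation defined above; the admissibility predicate Adm is a placeholder introduced here to abbreviate the endogeneity and evidence-responsiveness constraints, not the paper's own notation:

% Two-way classification sketch (assumes amsmath for \text).
% Adm(S, L) is a placeholder predicate abbreviating the admissibility
% constraints: endogenous, evidence-responsive operation on L.
\[
\text{optimization:}\quad \mathcal{L} \notin \mathcal{O}(S)
\qquad\qquad
\text{epistemic generativity:}\quad \mathcal{L} \in \mathcal{O}(S) \;\wedge\; \mathrm{Adm}(S, \mathcal{L})
\]

On this reading, the criterion is purely structural: it asks only whether ℒ lies inside the system's domain of operation, not how well the system performs under ℒ.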