A foundational challenge for artificial intelligence is not whether machines can solve well-defined tasks, but whether they can adapt across novel, open-ended domains. Biological systems achieve such adaptivity by coupling fast sensorimotor control with slower abstraction and memory consolidation across timescales. Despite remarkable progress, contemporary large-scale models remain energy-inefficient at inference, weakly coupled to embodied goal-directed control, and prone to interference without principled consolidation. We propose a dual-loop architecture that couples a fast recurrent perception–action loop with a slow consolidation-and-planning loop that reorganizes experience into compositional memories over a learned relational graph. The fast loop is implemented as a stable excitatory–inhibitory dynamical system with online prediction-error learning, uncertainty-aware state estimation, and asymmetric consequence-driven updating, in which aversive outcomes produce rapid, preferential policy correction and memory consolidation. The slow loop performs attention-based associative retrieval over a spectrally structured memory graph, enabling context-sensitive diffusion-based recall and the composition of long-horizon plans from consolidated fragments. Both loops share a common dynamical substrate and are derived from a single variational objective that unifies learning, action selection, and uncertainty estimation via free energy minimization, enabling metacognitive regulation of computational depth by scaling inference resources with predictive uncertainty. We prove Lyapunov stability for the fast-loop dynamics under quasi-static learning assumptions and establish robustness bounds for inter-loop coupling under timescale separation. The framework yields four falsifiable predictions: improved task switching, reduced catastrophic forgetting, compositional zero-shot transfer, and uncertainty-adaptive compute allocation, providing concrete criteria for evaluation in embodied control, continual learning, and compositional generalization.
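To make the fast-loop description concrete, the following is a minimal, illustrative sketch, not the paper's implementation: a rate-based excitatory–inhibitory system with an online delta-rule prediction-error update and an asymmetric, consequence-driven learning rate. All identifiers (`W_ee`, `W_pred`, `eta_neg`, the aversiveness threshold) and constants are hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions (illustrative): excitatory units, inhibitory units, observation size
n_e, n_i, n_obs = 32, 8, 16

# Random weights; all names here are hypothetical, not taken from the paper
W_ee = 0.1 * rng.standard_normal((n_e, n_e))                # recurrent E->E
W_ei = 0.5 * np.abs(rng.standard_normal((n_e, n_i)))        # inhibition onto E (sign-constrained)
W_ie = 0.5 * np.abs(rng.standard_normal((n_i, n_e)))        # excitation onto I (sign-constrained)
W_in = rng.standard_normal((n_e, n_obs)) / np.sqrt(n_obs)   # sensory input weights
W_pred = rng.standard_normal((n_obs, n_e)) / np.sqrt(n_e)   # readout predicting the observation

x_e = np.zeros(n_e)
x_i = np.zeros(n_i)

dt, tau_e, tau_i = 0.01, 0.02, 0.01   # fast-loop time constants (s)
eta_pos, eta_neg = 1e-3, 5e-3         # asymmetric rates: aversive outcomes update faster

def phi(u):
    """Saturating rate nonlinearity; keeps the E-I dynamics bounded."""
    return np.tanh(np.maximum(u, 0.0))

for t in range(1000):
    obs = np.sin(0.01 * t * np.arange(n_obs))        # toy sensory stream
    # E-I rate dynamics with Dale's-law-style sign constraints on W_ei, W_ie
    x_e += dt / tau_e * (-x_e + phi(W_ee @ x_e - W_ei @ x_i + W_in @ obs))
    x_i += dt / tau_i * (-x_i + phi(W_ie @ x_e))
    # Online prediction-error learning on the readout
    err = obs - W_pred @ x_e                         # prediction error
    aversive = err @ err > 1.0                       # stand-in for a consequence signal
    eta = eta_neg if aversive else eta_pos           # asymmetric, consequence-driven rate
    W_pred += eta * np.outer(err, x_e)               # delta-rule update
```

The bounded nonlinearity and leaky dynamics are what make a Lyapunov argument plausible under the quasi-static learning assumption the abstract mentions, since the weights change slowly relative to the state.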
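Likewise, one plausible reading of the slow loop's context-sensitive, diffusion-based recall is softmax attention over memory keys followed by heat-kernel diffusion on the relational graph's Laplacian. The sketch below assumes that reading; every identifier (`keys`, `recall`, `t_diffuse`) is hypothetical.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)

n_mem, d = 50, 16                          # memory nodes, key/value dimension
keys = rng.standard_normal((n_mem, d))     # memory keys
values = rng.standard_normal((n_mem, d))   # consolidated memory fragments

# Relational graph over memories: sparse symmetric adjacency, combinatorial Laplacian
A = (rng.random((n_mem, n_mem)) < 0.1).astype(float)
A = np.maximum(A, A.T)
np.fill_diagonal(A, 0.0)
L = np.diag(A.sum(1)) - A

def recall(query, t_diffuse=0.5, temp=1.0):
    """Attention-based retrieval followed by diffusion over the memory graph.

    t_diffuse controls how far activation spreads along relational edges;
    larger values recruit a broader, more contextual neighborhood.
    """
    attn = np.exp(keys @ query / temp)
    attn /= attn.sum()                     # softmax attention over memory keys
    spread = expm(-t_diffuse * L) @ attn   # heat-kernel diffusion of attention mass
    spread /= spread.sum()
    return spread @ values                 # context-sensitive readout

cue = rng.standard_normal(d)
print(recall(cue)[:4])
```

Because exp(-tL) is the heat kernel of the graph, its action is determined by the Laplacian's spectrum, which is one concrete sense in which recall over the memory graph can be called spectrally structured.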