Current AI memory is brittle: it forgets past information when learning new data, or it requires computationally expensive retraining. Elastic Associative Memory (EAM) introduces a new paradigm: a memory system that evolves naturally. By decoupling where information is stored from what is stored, EAM acts as a persistent, schema-free substrate that self-organizes, expands on demand, and adapts to new information without erasing its past.

## Limitations of Current Memory Architectures

Standard memory models are rigid. Even accumulate-only models such as Kanerva's Sparse Distributed Memory (SDM) rely on fixed, random indexing, which assumes the world's data distribution never changes. In real-world settings, where data shifts, clusters, and scales unpredictably, these fixed systems misallocate resources, fail under load, and suffer from catastrophic forgetting.

## EAM Architecture

EAM draws inspiration from biological processes such as neurogenesis and synaptic pruning, completing Kanerva's vision by building an adaptable index around an indestructible storage primitive. Its memory is organized into three independently evolving layers:

- **Accumulate-Only Counters (Storage):** Information is stored via additive superposition; new writes never overwrite or destroy previous traces.
- **Self-Organizing Index (Access):** Memory locations dynamically migrate toward dense data, split to create capacity when overloaded, and merge when redundant.
- **Softmax-Sharpened Readout (Retrieval):** A precision readout filters out noise and extracts the clearest signal at query time (β = 5).

## Empirical Results

- **Lifelong Learning Without Replay:** Retains 0.807 cumulative accuracy after 20 consecutive distribution shifts with zero replay; classical methods drop to 0.634.
- **Extreme Elasticity:** Sustains 0.565 reconstruction performance under 50× overload by dynamically growing the location pool 7.6× via splitting, while classical methods fail completely.
- **Real-World Application:** Achieves 2.6× higher retrieval accuracy on GloVe word embeddings.

For full architecture details, formal proofs of counter superposition under index migration, and comprehensive benchmarks on synthetic and real-world datasets, please refer to the uploaded paper.
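To make the three layers concrete, here is a minimal illustrative sketch in Python of an EAM-style store: accumulate-only counters, nearest-address writes with migration and split-on-overload, and a softmax-sharpened readout. The class name, parameter values, split heuristic, and distance-based addressing are assumptions made for illustration and are not the paper's actual implementation; merging of redundant locations is omitted for brevity.

```python
import numpy as np

class ElasticMemory:
    """Illustrative EAM-style store (sketch, not the paper's code).

    Storage:   per-location accumulate-only counters (writes only add).
    Access:    each location has an address; writes go to the nearest
               address, which migrates toward the data and splits when
               its write count exceeds a capacity threshold.
    Retrieval: softmax-sharpened weighted read over location contents.
    """

    def __init__(self, dim, n_locations=8, beta=5.0, capacity=50, lr=0.05, seed=0):
        self.rng = np.random.default_rng(seed)
        self.addresses = self.rng.standard_normal((n_locations, dim))
        self.counters = np.zeros((n_locations, dim))  # accumulate-only traces
        self.writes = np.zeros(n_locations)           # writes per location
        self.beta, self.capacity, self.lr = beta, capacity, lr

    def write(self, x):
        # Route to the nearest address; superpose additively (never overwrite).
        i = int(np.argmin(np.linalg.norm(self.addresses - x, axis=1)))
        self.counters[i] += x
        self.writes[i] += 1
        # Index migration: drift the address toward the incoming data.
        self.addresses[i] += self.lr * (x - self.addresses[i])
        if self.writes[i] > self.capacity:  # overloaded -> split
            self._split(i)

    def _split(self, i):
        # Clone the overloaded location with a small address jitter and
        # redistribute its traces (a simplification of the paper's scheme).
        jitter = 0.01 * self.rng.standard_normal(self.addresses.shape[1])
        self.addresses = np.vstack([self.addresses, self.addresses[i] + jitter])
        self.counters = np.vstack([self.counters, self.counters[i] / 2.0])
        self.counters[i] /= 2.0
        self.writes = np.append(self.writes, self.writes[i] / 2.0)
        self.writes[i] /= 2.0

    def read(self, query):
        # Softmax-sharpened readout (beta controls sharpening).
        scores = -self.beta * np.linalg.norm(self.addresses - query, axis=1)
        w = np.exp(scores - scores.max())
        w /= w.sum()
        # Average stored traces per location before mixing.
        means = self.counters / np.maximum(self.writes, 1.0)[:, None]
        return w @ means
```

Writing a stream of clustered vectors grows the location pool via splitting, and reading with a query near a cluster center recovers a vector close to that cluster's mean, with larger β concentrating the readout on fewer locations.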