Abstract

Biological neural systems have been refined over millions of years of evolutionary optimization to maximize information processing under metabolic and developmental constraints, yielding network topologies with characteristic structural signatures: sparse connectivity, small-world organization, and modular architecture. Whether these evolutionarily derived structural properties constitute transferable inductive biases for artificial learning systems is unknown. Here we test this hypothesis directly by initializing sparse multilayer perceptrons from biologically derived adjacency matrices spanning molecular, structural, functional, and behavioral interaction networks, and comparing their performance against synthetic alternatives matched for sparsity but lacking evolutionary structural organization. Biologically pre-initialized networks consistently outperformed both fully connected baselines and synthetic sparse alternatives across four classification benchmarks, achieving approximately 90% classification accuracy with as little as 25% of the available training data. Systematic comparisons against randomly rewired, degree-preserved, and Watts–Strogatz small-world networks with matched sparsity establish that topology, not connection density, drives these advantages: higher-order structural features encoded by evolutionary optimization, including local clustering, modular organization, and hub connectivity, provide inductive biases unavailable from random sparse graphs. These findings establish evolutionarily optimized network topology as a principled structural prior for artificial neural architectures, with direct implications for neuromorphic computing, edge-deployed machine learning, and the broader program of brain-inspired artificial intelligence.

Significance Statement

Biological nervous systems and gene regulatory networks have been shaped by millions of years of evolution to generalize efficiently from limited experience under tight resource constraints, precisely the challenge that confronts machine learning systems in data-scarce settings. We show that the global wiring topology produced by this evolutionary process can be transplanted directly into artificial classifiers to confer substantial data efficiency: networks pre-wired from biological blueprints achieve approximately 90% classification accuracy using only a fraction of the training data required by conventional architectures. The advantage cannot be explained by sparsity alone; the evolutionarily shaped organization of those connections is the active ingredient. Evolution, it appears, has solved a version of the sparse learning problem that artificial intelligence is still working on.
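To make the pre-initialization idea concrete, the sketch below shows one common way a binary biological adjacency matrix could be imposed on a multilayer perceptron layer: mask the weight matrix so that only evolutionarily specified connections exist and learn. This is an illustrative PyTorch sketch under our own assumptions; the `MaskedLinear` class, the buffer-plus-gradient-hook design, and all names are ours, not the authors' implementation.

```python
# Sketch: fixing a linear layer's connectivity with a biological
# adjacency matrix so only evolutionarily specified edges can learn.
# The adjacency matrix shape (out_features x in_features) and the
# class name are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn


class MaskedLinear(nn.Module):
    """Linear layer whose sparsity pattern is fixed by a binary mask."""

    def __init__(self, adjacency: torch.Tensor):
        super().__init__()
        out_features, in_features = adjacency.shape
        self.linear = nn.Linear(in_features, out_features)
        # Store the 0/1 adjacency pattern as a non-trainable buffer.
        self.register_buffer("mask", (adjacency != 0).float())
        # Zero out weights of absent connections at initialization.
        with torch.no_grad():
            self.linear.weight.mul_(self.mask)
        # Mask gradients too, so pruned edges stay pruned during training.
        self.linear.weight.register_hook(lambda grad: grad * self.mask)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Re-apply the mask defensively in case an optimizer step
        # (e.g. weight decay) perturbed masked entries.
        return nn.functional.linear(
            x, self.linear.weight * self.mask, self.linear.bias
        )


# Hypothetical usage: wire a layer from a 0/1 adjacency matrix A.
# layer = MaskedLinear(torch.tensor(A, dtype=torch.float32))
```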
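The sparsity-matched controls named in the abstract (randomly rewired, degree-preserved, and Watts–Strogatz small-world graphs) can each be generated with standard graph tooling. A minimal networkx sketch follows; the input file name, swap counts, and rewiring probability are illustrative assumptions rather than the study's parameters.

```python
# Sketch: building sparsity-matched control topologies for a
# biological network using networkx. All parameter values here
# are placeholders, not the paper's settings.
import networkx as nx

bio = nx.read_edgelist("bio_network.edgelist")  # hypothetical input file
n, m = bio.number_of_nodes(), bio.number_of_edges()

# (1) Randomly rewired control: same node and edge counts,
# connections placed uniformly at random.
random_ctrl = nx.gnm_random_graph(n, m, seed=0)

# (2) Degree-preserved control: repeated double-edge swaps keep every
# node's degree while scrambling higher-order structure (clustering,
# modules, hub neighborhoods).
degree_ctrl = bio.copy()
nx.double_edge_swap(degree_ctrl, nswap=10 * m, max_tries=100 * m, seed=0)

# (3) Watts–Strogatz small-world control with matched mean degree;
# the rewiring probability p=0.1 is an assumed, conventional choice.
k = max(2, round(2 * m / n))  # ring-lattice neighbors per node
ws_ctrl = nx.watts_strogatz_graph(n, k, p=0.1, seed=0)
```

Comparing a biologically wired network against all three controls at identical sparsity is what isolates topology, rather than connection density, as the source of any performance gap.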