Humans can now emulate language in silicon-based neural networks, but we remain ignorant of how language emerged in carbon-based neural networks in the first place. This gap represents not merely a scientific blind spot but a unique opportunity to revolutionize artificial intelligence (AI) through biomimicry. By reverse-engineering the evolutionary principles that enabled language in the hominid lineage (principles still in operation today in nonhuman great apes, humans' closest living relatives), we can inspire language-based AI models that are radically more efficient and sustainable. Current AI models achieve remarkable performance through brute-force scaling of data and compute, yet they remain orders of magnitude less energy-efficient than the human brain. In contrast, language in ape-like hominid ancestors evolved under stringent energetic and ecological constraints, yielding sophisticated combinatorial systems, rhythmic hierarchies, recursive call structures, and context-dependent vocal motifs from minimal neural and energetic resources. These naturally selected patterns and rules, honed over millions of generations, offer the true 'foundational algorithms' of language and a proven blueprint for sustainable intelligence. Bridging carbon- and silicon-based language systems through biomimicry will not only accelerate truly sustainable AI but also illuminate why language alone, over every conceivable alternative, was selected as the foundational medium and architecture for advanced intelligent behavior.
Published in: Bioinspiration & Biomimetics
Volume 21, Issue 1, article 013001