The rapid integration of artificial intelligence (AI) into society has surfaced systemic risks, particularly deep polarization exacerbated by algorithms optimizing for individual engagement. The dominant alignment paradigm, reliant on unilateral control (e.g., RLHF), is ill-suited to address these emergent collective dynamics. This paper proposes a philosophical and computational shift toward symbiotic alignment (SA), moving beyond top-down constraints to a framework of mutual adaptation and co-evolution. We ground SA in collective predictive coding (CPC), reframing human--AI symbiosis as participation in a symbol emergence system (SES). Mathematically, we formalize this interaction as multi-agent reinforcement learning (MARL) augmented by a collective regularization term, driving agents to minimize collective free energy (CFE) while preserving individual autonomy. Crucially, this formulation reveals that social coherence does not require uniformity; within this framework, we computationally reinterpret "plurality" as a stable multimodal distribution of shared beliefs, where diverse worldviews coexist through co-creative negotiation. We conclude by outlining the research agenda for realizing this vision: designing AI agents capable of co-creative learning and social mechanisms ("gardeners") that foster trust, thereby steering our technological future toward a flourishing plurality.
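To make the MARL formulation above concrete, one plausible instantiation is sketched below. All symbols here are assumptions for illustration, since the abstract does not state the equation: $\pi_i$ is agent $i$'s policy, $r_i$ its individual reward, $q_i$ its posterior belief over latent symbols $z$, $p$ a shared (emergent) generative model, and $\lambda$ a weight on the collective regularization term.

```latex
% Individual MARL return augmented by a collective regularizer
% (a hedged sketch, not the paper's actual formulation):
J_i(\pi_i) \;=\;
  \mathbb{E}_{\pi_i,\,\pi_{-i}}\!\left[\sum_{t} \gamma^{t}\, r_i(s_t, a_t)\right]
  \;-\; \lambda\, \mathcal{F}\big(q_1, \dots, q_N;\, p\big),

% with collective free energy taken, for example, as the total divergence
% of individual beliefs from the shared generative model:
\mathcal{F}\big(q_1, \dots, q_N;\, p\big)
  \;=\; \sum_{i=1}^{N} D_{\mathrm{KL}}\!\big(q_i(z)\,\|\,p(z)\big).
```

Under this reading, $\lambda \to 0$ recovers standard independent MARL (full individual autonomy), while larger $\lambda$ drives belief alignment; the "plurality" claim would then correspond to $p(z)$ settling into a stable multimodal distribution rather than a single consensus mode.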