The proliferation of generative artificial intelligence (AI) as an autonomous recommendation agent fundamentally challenges traditional paradigms of marketing communication. As AI systems increasingly mediate consumer–brand relationships, understanding how artificial agents construct persuasive discourse—distinct from human communicators—becomes critical for developing effective dual-channel marketing strategies. Grounded in Source Credibility Theory and the Computers Are Social Actors (CASA) paradigm, this study investigates the semantic and structural divergence between AI-generated product recommendations and human influencer marketing messages in social commerce contexts. Employing a mixed-methods computational approach integrating term frequency analysis, TF-IDF weighting, Latent Dirichlet Allocation (LDA) topic modeling, and BERT-based contextualized semantic embedding analysis (KR-SBERT), we examined 330 Instagram influencer posts and 541 AI-generated responses concerning inner beauty enzyme products—a hybrid category combining functional health claims with hedonic beauty appeals—in the Korean social commerce market. AI-generated responses were collected through a systematically designed query protocol with empirically grounded prompts derived from actual consumer search behaviors, and analytical robustness was verified through sensitivity analyses across multiple parameter thresholds. Our findings reveal a fundamental divergence in persuasive architecture: human influencers construct experiential narratives exhibiting message characteristics typically associated with peripheral-route cues (sensory descriptions, emotional testimonials, social context), while AI recommendations employ systematic, evidence-based discourse exhibiting message characteristics typically associated with central-route argumentation (functional mechanisms, ingredient specifications, objective criteria). 
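The first two stages of the pipeline described above (term frequency analysis and TF-IDF weighting) can be sketched in plain Python. This is an illustrative toy example on invented English stand-in sentences, not the study's data: the actual corpora are Korean-language Instagram posts and AI responses, and the LDA and KR-SBERT stages are not reproduced here.

```python
# Toy sketch of term-frequency + TF-IDF weighting with smoothed idf.
# All documents below are invented stand-ins, not study data.
import math
from collections import Counter

def tf_idf(docs):
    """Return per-document {term: tf-idf weight} maps using smoothed idf."""
    n = len(docs)
    tokenized = [doc.lower().split() for doc in docs]
    # Document frequency: in how many documents each term appears
    df = Counter(term for tokens in tokenized for term in set(tokens))
    weighted = []
    for tokens in tokenized:
        tf = Counter(tokens)
        weighted.append({
            term: (count / len(tokens)) * math.log((1 + n) / (1 + df[term]))
            for term, count in tf.items()
        })
    return weighted

influencer_posts = [
    "my skin felt glowing after two weeks of this enzyme drink",
    "honestly loved the taste and my friends noticed the glow",
]
ai_responses = [
    "the supplement contains bromelain an enzyme that aids protein digestion",
    "key ingredients include fermented grain enzymes and probiotic strains",
]
weights = tf_idf(influencer_posts + ai_responses)
```

Terms shared across documents (e.g. "the") receive low weight, while source-specific terms (e.g. "glowing" in experiential posts, "bromelain" in AI responses) receive higher weight, which is what lets the downstream analyses separate the two discourse types.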
Topic modeling identified four distinct thematic clusters for each source type: human discourse centers on embodied experience and relational consumption, whereas AI discourse organizes around informational utility and rational decision support. Jensen–Shannon Divergence analysis (JSD = 0.213 bits) confirmed moderate distributional divergence, while chi-square testing (χ² = 847.23, p < 0.001) and Cramér's V (0.312, indicating a medium-to-large effect) demonstrated statistically significant and substantively meaningful differences. These findings extend CASA theory by demonstrating that AI recommendation agents develop a characteristic "AI communication signature" distinguishable from human persuasion patterns. We propose an integrated Dual-Agent Persuasion Proposition, synthesizing CASA, Elaboration Likelihood Model (ELM), and Source Credibility perspectives, which suggests that AI and human recommenders serve complementary functions across different stages of the consumer decision journey; its predictions regarding sequential persuasive effectiveness and consumer processing routes await experimental validation. These findings carry implications for AI content strategy optimization, platform design, and emerging regulatory frameworks for AI-generated content labeling.
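The two headline statistics above can be computed as follows. The topic distributions and table dimensions in this sketch are hypothetical toy values chosen for illustration; they do not reproduce the study's reported JSD = 0.213 bits or Cramér's V = 0.312.

```python
# Hedged sketch: Jensen-Shannon Divergence (in bits) between two discrete
# topic distributions, and Cramér's V from a chi-square statistic.
import math

def jsd_bits(p, q):
    """Jensen-Shannon Divergence in bits between two discrete distributions."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    def kl(a, b):  # KL divergence in bits; zero-probability terms contribute 0
        return sum(ai * math.log2(ai / bi) for ai, bi in zip(a, b) if ai > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def cramers_v(chi2, n, r, c):
    """Cramér's V effect size from a chi-square statistic on an r x c table."""
    return math.sqrt(chi2 / (n * (min(r, c) - 1)))

# Toy four-topic distributions for human vs. AI discourse (not study data)
human = [0.40, 0.30, 0.20, 0.10]
ai = [0.10, 0.20, 0.30, 0.40]
print(round(jsd_bits(human, ai), 3))  # → 0.154 bits for these toy values

# Cramér's V for the reported chi-square, with a hypothetical sample size
# and table shape (the study's n and table dimensions are not given here)
print(round(cramers_v(847.23, 5000, 2, 10), 3))
```

JSD is bounded by 1 bit when computed with base-2 logarithms, which is why a value like 0.213 bits indicates moderate rather than extreme divergence; Cramér's V normalizes the chi-square statistic by sample size and table shape so effect sizes are comparable across analyses.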