Introduction
The rapid evolution of artificial intelligence from static large language models to autonomous, agentic AI systems has introduced capabilities such as persistent memory, tool-augmented reasoning, and multi-agent collaboration. While these advancements significantly enhance real-world applicability, they also create a new and underexplored class of privacy risks, including unintended retention, propagation, and amplification of sensitive information across tasks, users, and execution cycles. Existing research predominantly focuses on stateless or single-inference models, leaving the privacy implications of agentic systems insufficiently understood.

Methods
This study presents a comprehensive architectural analysis of data leakage in agentic AI systems. The proposed framework models the end-to-end agent workflow and systematically examines how sensitive information can traverse key components, including persistent memory modules, planning and reasoning processes, tool invocation layers, inter-agent communication channels, and feedback-driven autonomy loops. Based on this architecture, a structured taxonomy of leakage pathways is developed and mapped to realistic threat models and attack vectors observed in practical deployments.

Results
The analysis identifies multiple leakage pathways unique to agentic AI systems, demonstrating how data can persist, propagate, and be unintentionally exposed across system components and operational cycles. The findings reveal that these leakage mechanisms are more complex and pervasive than those observed in traditional large language model settings, particularly due to the integration of memory, tools, and multi-agent interactions.

Discussion
The study highlights the limitations of existing LLM-centric privacy and security defenses when applied to autonomous agentic systems. It emphasizes the need for lifecycle-aware, component-level mitigation strategies that address privacy risks across the entire agent workflow.
The proposed architectural perspective provides a foundation for designing privacy-by-design agentic AI systems and supports safer deployment in sensitive and regulated domains.
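One of the leakage pathways described above, unintended retention in a persistent memory module that then propagates across users and sessions, can be sketched in a few lines. The sketch below is a hypothetical toy (all class and method names are assumptions, not the paper's implementation): a shared append-only memory with no per-user scoping retains one user's sensitive input, and a later retrieval in a different user's session surfaces it.

```python
# Hypothetical sketch of one leakage pathway: a persistent memory module
# shared across sessions, with no per-user isolation or retention policy.

class PersistentMemory:
    """Naive append-only store shared across sessions and users."""

    def __init__(self):
        self._entries = []

    def write(self, text):
        self._entries.append(text)

    def retrieve(self, query):
        # Crude keyword match standing in for embedding-based retrieval.
        return [e for e in self._entries if query.lower() in e.lower()]


class Agent:
    """Toy agent that logs every prompt and augments answers with memory."""

    def __init__(self, memory):
        self.memory = memory

    def run(self, user_id, prompt):
        # Unintended retention: sensitive input is persisted verbatim.
        self.memory.write(f"[{user_id}] {prompt}")
        # Propagation: retrieval is not scoped to the current user.
        return self.memory.retrieve("ssn")


memory = PersistentMemory()
agent = Agent(memory)

# Session 1: user "alice" shares sensitive data with the agent.
agent.run("alice", "My SSN is 123-45-6789, file my claim")

# Session 2: a different user's query surfaces Alice's stored record.
leaked = agent.run("bob", "What is an SSN?")
print(leaked)  # Alice's entry now appears in Bob's session context
```

Component-level mitigations of the kind the study calls for would intervene here, for example by partitioning memory per user, redacting sensitive fields before write, or enforcing retention limits across execution cycles.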