ABSTRACT

Introduction
Systematic reviews occupy a central position in evidence hierarchies, providing structured syntheses intended to inform clinical decision‐making and health policy. However, the rapid expansion of artificial intelligence (AI) tools in literature searching, screening, data extraction, and manuscript drafting is transforming how these reviews are produced. Concurrently, the number of prospectively registered systematic reviews has grown substantially, with recent increases in PROSPERO registrations highlighting an accelerating output of evidence syntheses. While technological advances promise efficiency and scalability, they also raise concerns regarding methodological rigor, redundancy, and transparency.

Methods
This viewpoint argues that the current reporting and governance frameworks for systematic reviews remain largely anchored in pre‐AI workflows.

Results
Ongoing updates to reporting standards, including PRISMA revisions, have yet to fully address key challenges introduced by AI‐assisted methodologies, such as algorithmic bias, auditability, reproducibility limitations of proprietary models, and the need to document human oversight. The absence of explicit guidance for reporting AI use creates a critical transparency gap, potentially undermining confidence in systematic reviews and increasing the risk of superficial or duplicated syntheses.

Conclusion
We propose that the evidence‐synthesis ecosystem requires urgent adaptation, including the development of a PRISMA‐AI extension, strengthened metadata requirements in registries such as PROSPERO, and updated editorial policies for AI‐assisted reviews. Safeguarding rigor in the age of automated science is essential to maintain the credibility and clinical utility of systematic reviews.