Purpose: This study conducts an integrative literature review on the role of artificial intelligence (AI) in scientific research, examining both its transformative potential and associated risks. The aim is to identify strategic priorities and governance mechanisms necessary to ensure ethical and epistemologically sound use of AI in science.

Design/Methodology/Approach: The review synthesizes findings from thirteen high-impact academic and institutional sources published between 2022 and 2024, including empirical studies, policy briefs, and conceptual analyses. Thematic content analysis was employed to extract core issues related to the epistemological, ethical, and operational dimensions of AI integration in research.

Findings: The results reveal three major areas of concern: (i) the illusion of understanding generated by AI tools; (ii) ethical risks related to bias, fraud, and over-automation; and (iii) governance gaps in publishing and scientific evaluation. Conversely, opportunities include increased efficiency, hypothesis generation, and broader access to knowledge. Five strategic priorities were identified to guide responsible AI integration in science.

Practical Implications: This study provides actionable insights for journal editors, policymakers, and researchers to establish living guidelines, strengthen human-AI collaboration, and prevent epistemic monocultures. Training and governance frameworks are critical to mitigating misuse and fostering innovation.

Originality/Value: By integrating diverse sources, this review contributes to the debate on how to harness AI's potential in science without undermining critical thinking, scientific integrity, or academic diversity.