Introduction: Artificial intelligence is being rapidly adopted in medical devices, where it improves diagnostic accuracy, treatment decisions, and patient monitoring. Yet algorithmic bias, model drift, cybersecurity threats, evolving regulatory frameworks, and ethical concerns remain serious challenges to these technological breakthroughs.

Methods: This review examines existing risk assessment approaches for AI-based medical devices, drawing findings from case studies, safety recalls, and regulatory sources. It builds on existing frameworks to construct a structured, lifecycle-based model for managing AI-related risks in medical devices.

Results: The study identified algorithmic bias, inadequate clinical validation, and cybersecurity vulnerabilities as the primary factors contributing to failures of AI-based medical devices. The proposed framework incorporates these risks into a lifecycle-based model, accompanied by mitigation tools to ensure regulatory compliance.

Discussion: By consolidating existing models and global policy perspectives, our findings compare the new approach to existing regulatory standards and suggest practical tools and strategies for bias mitigation, ethical governance, and cybersecurity readiness in AI-based medical devices (AI-MD), aligned with international standards and best practices.

Conclusion: This evaluation enhances and clarifies current regulatory and technical approaches, proposing a unified framework intended to make AI-based medical devices safer and more reliable. It highlights the importance of continuous risk assessment, ethical monitoring, and international harmonization to ensure that AI-based medical devices remain both innovative and trustworthy in healthcare.
Published in: Recent Advances in Computer Science and Communications
Volume 19