The rapid adoption of artificial intelligence (AI) in healthcare analytics has raised significant concerns regarding patient data privacy, security, and regulatory compliance. This study develops a Privacy-by-Design (PbD) data governance model tailored for AI-driven healthcare analytics in remote research environments. The proposed framework integrates privacy-preserving technologies to enable secure data analysis while preventing the centralization of sensitive health records. The novelty of this research lies in the design of a unified governance architecture that combines federated learning, differential privacy, and blockchain-enabled consent management to support transparent, accountable, and regulation-compliant healthcare AI systems. A simulation-based research approach was adopted using publicly available and synthetically generated healthcare datasets distributed across multiple virtual clients to emulate decentralized clinical data environments. The framework was evaluated based on model performance, privacy guarantees, fairness, and system convergence under heterogeneous data conditions. Experimental results demonstrate that the proposed PbD–federated architecture achieves reliable predictive performance while maintaining strong privacy protection and auditability without direct data sharing. Overall, the findings confirm that integrating privacy-preserving learning with governance mechanisms can enable responsible and scalable healthcare analytics in remote research contexts. The study provides a practical blueprint for developing trustworthy AI systems that align with modern data protection principles and ethical healthcare data management.
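The core mechanism the abstract describes, federated aggregation with differential-privacy protection, can be illustrated with a minimal sketch. This is not the paper's implementation; the function names, clipping norm, and noise scale below are illustrative assumptions showing how per-client updates can be clipped and noised before server-side averaging so that no raw records leave a client.

```python
import random

def clip(update, c):
    # Clip an update vector to L2 norm c, bounding each client's influence
    norm = sum(u * u for u in update) ** 0.5
    scale = min(1.0, c / norm) if norm > 0 else 1.0
    return [u * scale for u in update]

def dp_federated_round(client_updates, clip_norm=1.0, noise_std=0.1, rng=None):
    """One illustrative round of federated averaging with Gaussian noise.

    Each client's local update is clipped, the server averages the clipped
    updates, and calibrated noise is added so the aggregate reveals little
    about any single client's data.
    """
    rng = rng or random.Random(0)
    n = len(client_updates)
    dim = len(client_updates[0])
    clipped = [clip(u, clip_norm) for u in client_updates]
    avg = [sum(c[i] for c in clipped) / n for i in range(dim)]
    return [a + rng.gauss(0.0, noise_std / n) for a in avg]

# Three simulated clients, each holding a local model update (e.g. gradients)
updates = [[0.5, -0.2], [0.7, 0.1], [0.4, -0.3]]
agg = dp_federated_round(updates, clip_norm=1.0, noise_std=0.05)
```

In a full system of the kind the abstract outlines, such rounds would be repeated across many clients, with the privacy budget tracked over rounds and consent events logged to an auditable ledger.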
Published in: Asian Journal of Research in Computer Science
Volume 19, Issue 3, pp. 37-52