ABSTRACT: This paper proposes an efficient, secure, and privacy-preserving framework for building trustworthy machine learning models using Federated Learning (FL) integrated with Privacy Enhancing Technologies (PETs). The aim is to enable collaborative learning across distributed clients without exposing raw data, thereby preserving data confidentiality. The proposed system applies Differential Privacy (DP) to add calibrated noise to model parameters, while Secure Aggregation protects the communication between clients and the server. A client-server architecture is implemented with Python-based machine learning models: training is carried out on edge devices, and only encrypted model parameters are shared with the server. Evaluation shows that the system achieves a favorable trade-off between privacy and accuracy, improving on existing techniques, and that it is robust against data leakage and inference attacks. By combining robust privacy guarantees with efficient distributed learning, this work contributes to the development of trustworthy AI for real-world applications such as healthcare and finance.

KEY WORDS: Federated Learning, Differential Privacy, Secure Aggregation, Privacy Enhancing Technologies, Trustworthy System, Data Security.
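The abstract describes clients adding DP noise to model parameters before a server aggregates them. A minimal sketch of that idea in Python follows; the function names (`clip_and_noise`, `federated_average`), the clipping norm, and the noise multiplier are illustrative assumptions, not the paper's actual implementation, and the (unencrypted) averaging step stands in for the full Secure Aggregation protocol:

```python
import numpy as np


def clip_and_noise(update, clip_norm=1.0, noise_mult=0.5, rng=None):
    """Clip a client's parameter update to clip_norm, then add Gaussian
    noise -- the Gaussian mechanism commonly used for DP in federated
    learning. Hyperparameter values here are illustrative only."""
    if rng is None:
        rng = np.random.default_rng(0)
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    noise = rng.normal(0.0, noise_mult * clip_norm, size=update.shape)
    return clipped + noise


def federated_average(client_updates):
    """Server-side aggregation: average the (noised) client updates.
    A real system would do this under Secure Aggregation so the server
    never sees any individual client's update in the clear."""
    return np.mean(client_updates, axis=0)


# Three simulated clients each produce a local parameter update.
rng = np.random.default_rng(42)
updates = [clip_and_noise(rng.normal(size=4), rng=rng) for _ in range(3)]
global_update = federated_average(updates)
print(global_update.shape)  # (4,)
```

The clipping step bounds each client's influence on the aggregate, which is what lets the added Gaussian noise translate into a formal DP guarantee.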
Published in: INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT
Volume 10, Issue 03, pp. 1-9
DOI: 10.55041/ijsrem58693