Federated Learning for Privacy-Preserving Distributed Model Training

Authors

  • Giulia Bianchi, University of Rome, Italy

Abstract

Federated Learning (FL) has emerged as a promising approach for training machine learning models across decentralized devices without centralizing data. This paper explores the principles, challenges, and advancements in FL, focusing particularly on its role in privacy-preserving distributed model training. We discuss the fundamental concepts of FL, its architecture, and various strategies employed to ensure data privacy while aggregating model updates from multiple edge devices. Key challenges such as communication efficiency, heterogeneous data distributions, and security concerns are addressed alongside state-of-the-art solutions and future research directions.
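The aggregation of model updates described in the abstract is commonly realized with federated averaging (FedAvg), in which a server combines client parameters as a data-size-weighted average so that raw data never leaves the edge devices. The following is a minimal illustrative sketch of that rule; the function and variable names are our own, not drawn from the paper.

```python
# Minimal sketch of federated averaging (FedAvg): the server combines
# client model parameters as a weighted average, with weights given by
# each client's local dataset size. Names are illustrative assumptions.

def fed_avg(client_weights, client_sizes):
    """Data-size-weighted average of client model parameters.

    client_weights: list of parameter vectors, one per client
    client_sizes:   number of local training samples per client
    """
    total = sum(client_sizes)
    dim = len(client_weights[0])
    global_weights = [0.0] * dim
    for w, n in zip(client_weights, client_sizes):
        for i in range(dim):
            # Each client's contribution is proportional to its data share
            global_weights[i] += (n / total) * w[i]
    return global_weights

# Example: two clients; the second holds three times as much data
clients = [[1.0, 2.0], [3.0, 4.0]]
sizes = [1, 3]
print(fed_avg(clients, sizes))  # [2.5, 3.5]
```

In a full FL round, clients would first run local gradient steps on their private data before sending these parameters, and the averaged result would be broadcast back for the next round.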

Published

2024-05-22


How to Cite

Federated Learning for Privacy-Preserving Distributed Model Training. (2024). Innovative Computer Sciences Journal, 10(1), 1–6. https://inscipub.com/ICSJ/article/view/143