Published 2025-05-30

This work is licensed under a Creative Commons Attribution 4.0 International License.
Abstract
Federated Learning (FL) shows great potential for distributed data modeling, enabling cross-device or cross-organization collaborative training while preserving data privacy. However, the heterogeneous (non-IID) data distributions encountered in real applications cause traditional federated optimization methods to suffer from slow convergence, model drift, and degraded performance. In addition, gradient transmission and model updates still carry a risk of privacy leakage, which limits the security and scalability of federated learning. To address these challenges, this study proposes an adaptive gradient scaling (AGS) scheme that improves the convergence of federated learning on non-IID data, and combines it with differential privacy (DP) and secure aggregation to strengthen privacy protection. Experiments on the LEAF federated learning benchmark show that AGS accelerates model convergence, raises final accuracy, and stabilizes training, yielding significant performance improvements when applied to mainstream federated optimization methods such as FedAvg, FedProx, and Scaffold. The study further analyzes the adaptability of AGS under different degrees of data heterogeneity and discusses its potential applications in healthcare, finance, and edge computing. This work provides a new methodology for optimizing federated learning under complex data distributions and promotes its efficient deployment in privacy-sensitive scenarios.
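
The abstract does not specify the AGS update rule. As a rough illustration of the general idea, the sketch below rescales each client's update toward the median update norm before FedAvg-style averaging, and adds clipping plus Gaussian noise as a simple stand-in for the DP component; the function ags_aggregate, the median-norm reference, and the parameters clip and sigma are illustrative assumptions, not the paper's method.

```python
# Hypothetical sketch of adaptive gradient scaling (AGS) on top of FedAvg-style
# aggregation. The paper's exact AGS rule is not given in the abstract, so the
# median-norm scaling rule, clipping bound `clip`, and noise multiplier `sigma`
# below are illustrative assumptions, not the authors' algorithm.
import numpy as np

def ags_aggregate(client_updates, clip=1.0, sigma=0.0, rng=None):
    """Aggregate flattened client model deltas with norm-adaptive scaling
    and optional Gaussian-mechanism noise (a stand-in for DP + secure
    aggregation)."""
    rng = rng or np.random.default_rng()
    norms = np.array([np.linalg.norm(u) for u in client_updates])
    ref = np.median(norms)  # assumed reference scale: median client norm

    scaled = []
    for u, n in zip(client_updates, norms):
        # Adaptive scaling: shrink unusually large updates toward the
        # reference norm so heterogeneous (non-IID) clients do not dominate.
        u = u * min(1.0, ref / (n + 1e-12))
        # DP-style clipping to a fixed L2 bound before aggregation.
        u = u * min(1.0, clip / (np.linalg.norm(u) + 1e-12))
        scaled.append(u)

    agg = np.mean(scaled, axis=0)
    if sigma > 0:  # Gaussian noise calibrated to the clipping bound.
        agg = agg + rng.normal(0.0, sigma * clip / len(scaled), size=agg.shape)
    return agg

# Toy usage: three clients, one with a much larger (drift-prone) update.
updates = [np.ones(4) * s for s in (0.5, 0.7, 5.0)]
print(ags_aggregate(updates, clip=1.0, sigma=0.5))
```

Any norm-based reference (mean, median, or a server-tracked moving average) could serve as the scaling target; the median is used here only because it is robust to a few outlier clients.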