Network Gradient Descent Algorithm for Decentralized Federated Learning

Pages 806-818 | Received 24 Jun 2021, Accepted 02 May 2022, Published online: 06 Jun 2022

Abstract

We study a fully decentralized federated learning algorithm, which is a novel gradient descent algorithm executed on a communication-based network; for convenience, we refer to it as the network gradient descent (NGD) method. In the NGD method, only statistics (e.g., parameter estimates) need to be communicated, which minimizes the risk of privacy leakage. Meanwhile, different clients communicate with each other directly according to a carefully designed network structure, without a central master, which greatly enhances the reliability of the entire algorithm. These appealing properties motivate us to study the NGD method carefully, both theoretically and numerically. Theoretically, we start with a classical linear regression model and find that both the learning rate and the network structure play significant roles in determining the statistical efficiency of the NGD estimator. The resulting NGD estimator can be statistically as efficient as the global estimator if the learning rate is sufficiently small and the network structure is weakly balanced, even when the data are distributed heterogeneously. These findings are then extended to general models and loss functions. Extensive numerical studies are presented to corroborate our theoretical findings, and classical deep learning models are included for illustration purposes.
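To make the communication and update steps concrete, below is a minimal sketch of one possible NGD-style iteration for the linear regression setting described above. It assumes a standard decentralized gradient descent update, in which each client first averages its neighbors' estimates through a row-stochastic weight matrix and then takes a gradient step on its local loss; the function and variable names are illustrative assumptions, not the authors' released implementation (see Code.zip).

```python
# A minimal sketch of a decentralized (network) gradient descent iteration for
# linear regression. The update form, function names, and network weights are
# illustrative assumptions, not the authors' released implementation.
import numpy as np


def ngd_linear_regression(X_list, y_list, W, lr=0.01, n_iter=500):
    """Run a decentralized gradient descent scheme over K clients.

    X_list, y_list : per-client design matrices and response vectors.
    W              : K x K row-stochastic weight matrix; W[k, j] > 0 means
                     client k receives client j's current estimate.
    """
    K = len(X_list)
    p = X_list[0].shape[1]
    theta = np.zeros((K, p))  # one parameter estimate per client
    for _ in range(n_iter):
        # Communication step: each client averages its neighbors' estimates.
        theta_mix = W @ theta
        # Local step: one gradient step on each client's squared-error loss.
        grads = np.stack([
            X_list[k].T @ (X_list[k] @ theta_mix[k] - y_list[k]) / len(y_list[k])
            for k in range(K)
        ])
        theta = theta_mix - lr * grads
    return theta


# Toy usage: five clients on a ring network with self-loops.
rng = np.random.default_rng(0)
beta = rng.normal(size=3)
X_list = [rng.normal(size=(200, 3)) for _ in range(5)]
y_list = [X @ beta + 0.1 * rng.normal(size=200) for X in X_list]
W = np.zeros((5, 5))
for k in range(5):
    W[k, [k, (k - 1) % 5, (k + 1) % 5]] = 1.0 / 3.0  # equal neighbor weights
theta_hat = ngd_linear_regression(X_list, y_list, W)
print(theta_hat.round(3))  # each row should be close to beta
```

In this sketch, the weight matrix W plays the role of the communication network structure, and no central master is involved; per the abstract, the clients' estimates can match the statistical efficiency of the global estimator when the learning rate is sufficiently small and the network structure is weakly balanced.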

Supplementary Materials

Supplementary_Material.pdf: This document provides extensions of the proposed method, proofs of the theoretical results in the main text, and additional simulation results. Appendix A provides technical lemmas that are useful for proving the results in the main text. Appendix B contains detailed proofs of the main theorems and corollaries. Appendix C reports some extensions and discussions of the proposed method.

Code.zip: This file contains the Python code for the proposed method. See the “README.md” file inside for usage instructions.

Additional information

Funding

Danyang Huang’s research is partially supported by the National Natural Science Foundation of China (Nos. 12071477, 71873137), the fund for building world-class universities (disciplines) of Renmin University of China, and the Public Computing Cloud of Renmin University of China. Hansheng Wang’s research is partially supported by the National Natural Science Foundation of China (No. 11831008) and by the Open Research Fund of the Key Laboratory of Advanced Theory and Application in Statistics and Data Science (KLATASDS-MOE-ECNU-KLATASDS2101).
