Research Article

FedG2L: a privacy-preserving federated learning scheme based on “G2L” against poisoning attacks

Article: 2197173 | Received 28 Nov 2022, Accepted 22 Mar 2023, Published online: 06 Apr 2023
 

Abstract

Federated learning (FL) can overcome the limitation of “data islands” while protecting data privacy, and has therefore attracted broad attention. However, centralised FL is vulnerable to single-point failure. Although decentralised, tamper-proof blockchains can cope with this issue, it remains difficult to find a benign benchmark gradient and to eliminate poisoning attacks in the later stages of global model aggregation. To address these problems, we present FedG2L, a “global-to-local” privacy-preserving federated consensus scheme against poisoning attacks, which effectively reduces the influence of poisoning attacks on model accuracy. In the global aggregation stage, we design a gradient-similarity-based secure consensus algorithm (SecPBFT) that eliminates malicious gradients without leaking any data owner's gradient. We then propose an improved ACGAN algorithm that generates local data to further update the model free of poisoning attacks. Finally, we theoretically prove the security and correctness of the scheme. Experimental results demonstrate that model accuracy improves by at least 55% over the no-defence baseline, and that the attack success rate is reduced by more than 60%.
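The abstract only names the gradient-similarity idea behind SecPBFT; the full consensus protocol is given in the paper. As a minimal illustrative sketch, not the authors' algorithm, the following Python snippet (the function name, the cosine-similarity score, and the zero threshold are all our own assumptions) scores each submitted client update by its mean cosine similarity to the other updates and discards outliers such as sign-flipped gradients.

import numpy as np

def filter_gradients_by_similarity(gradients, threshold=0.0):
    # gradients: list of 1-D numpy arrays (flattened client updates);
    # assumes at least two clients. Updates whose mean cosine
    # similarity to the others falls below `threshold` are treated
    # as poisoned and dropped (hypothetical rule, not SecPBFT).
    n = len(gradients)
    normed = [g / (np.linalg.norm(g) + 1e-12) for g in gradients]
    # Pairwise cosine similarities between normalised updates.
    sim = np.array([[float(np.dot(normed[i], normed[j]))
                     for j in range(n)] for i in range(n)])
    # Mean similarity to the other n-1 clients (self-similarity excluded).
    mean_sim = (sim.sum(axis=1) - 1.0) / (n - 1)
    return [g for g, s in zip(gradients, mean_sim) if s > threshold]

# Toy usage: nine benign clients near a common direction, one sign-flipped.
rng = np.random.default_rng(0)
base = rng.normal(size=100)
benign = [base + 0.1 * rng.normal(size=100) for _ in range(9)]
poisoned = [-base]  # sign-flipping poisoning attack
kept = filter_gradients_by_similarity(benign + poisoned)
print(f"kept {len(kept)} of 10 updates")  # expect 9

In this toy run the flipped update has strongly negative similarity to the benign majority, so a simple zero threshold suffices; the paper's scheme additionally embeds the check in a PBFT-style consensus so that no individual gradient is revealed.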

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This work was supported by the National Natural Science Foundation of China (62125205) and the Natural Science Basic Research Program of Shaanxi Province (2022JQ-594).